[
{
"msg_contents": "The use of the --echo-hidden flag in psql is used to show people the way\npsql performs its magic for its backslash commands. None of them has more\nmagic than \"\\d relation\", but it suffers from needing a lot of separate\nqueries to gather all of the information it needs. Unfortunately, those\nqueries can get overwhelming and hard to figure out which one does what,\nespecially for those not already very familiar with the system catalogs.\nAttached is a patch to add a small SQL comment to the top of each SELECT\nquery inside describeOneTableDetail. All other functions use a single\nquery, and thus need no additional context. But \"\\d mytable\" has the\npotential to run over a dozen SQL queries! The new format looks like this:\n\n/******** QUERY *********/\n/* Get information about row-level policies */\nSELECT pol.polname, pol.polpermissive,\n CASE WHEN pol.polroles = '{0}' THEN NULL ELSE\npg_catalog.array_to_string(array(select rolname from pg_catalog.pg_roles\nwhere oid = any (pol.polroles) order by 1),',') END,\n pg_catalog.pg_get_expr(pol.polqual, pol.polrelid),\n pg_catalog.pg_get_expr(pol.polwithcheck, pol.polrelid),\n CASE pol.polcmd\n WHEN 'r' THEN 'SELECT'\n WHEN 'a' THEN 'INSERT'\n WHEN 'w' THEN 'UPDATE'\n WHEN 'd' THEN 'DELETE'\n END AS cmd\nFROM pg_catalog.pg_policy pol\nWHERE pol.polrelid = '134384' ORDER BY 1;\n/************************/\n\nCheers,\nGreg",
"msg_date": "Mon, 11 Dec 2023 16:53:01 -0500",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Adding comments to help understand psql hidden queries"
},
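For anyone who wants to see the queries being discussed, the flag in question can be exercised directly from the command line; the database and table names below are just placeholders:

```sh
# -E is the short form of --echo-hidden: psql prints each internal query
# (now carrying the explanatory comment header) before showing the \d output.
psql -E -d mydb -c '\d mytable'
```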
{
"msg_contents": "On Thu, Feb 1, 2024 at 4:34 PM Greg Sabino Mullane <[email protected]> wrote:\n>\n> The use of the --echo-hidden flag in psql is used to show people the way psql performs its magic for its backslash commands. None of them has more magic than \"\\d relation\", but it suffers from needing a lot of separate queries to gather all of the information it needs. Unfortunately, those queries can get overwhelming and hard to figure out which one does what, especially for those not already very familiar with the system catalogs. Attached is a patch to add a small SQL comment to the top of each SELECT query inside describeOneTableDetail. All other functions use a single query, and thus need no additional context. But \"\\d mytable\" has the potential to run over a dozen SQL queries! The new format looks like this:\n>\n> /******** QUERY *********/\n> /* Get information about row-level policies */\n> SELECT pol.polname, pol.polpermissive,\n> CASE WHEN pol.polroles = '{0}' THEN NULL ELSE pg_catalog.array_to_string(array(select rolname from pg_catalog.pg_roles where oid = any (pol.polroles) order by 1),',') END,\n> pg_catalog.pg_get_expr(pol.polqual, pol.polrelid),\n> pg_catalog.pg_get_expr(pol.polwithcheck, pol.polrelid),\n> CASE pol.polcmd\n> WHEN 'r' THEN 'SELECT'\n> WHEN 'a' THEN 'INSERT'\n> WHEN 'w' THEN 'UPDATE'\n> WHEN 'd' THEN 'DELETE'\n> END AS cmd\n> FROM pg_catalog.pg_policy pol\n> WHERE pol.polrelid = '134384' ORDER BY 1;\n> /************************/\n>\n> Cheers,\n> Greg\n\nThanks, this looks like some helpful information. In the same vein,\nI'm including a patch which adds information about the command that\ngenerates the given query as well (atop your commit). This will\nmodify the query line to include the command itself:\n\n/******** QUERY (\\dRs) *********/\n\nBest,\n\nDavid",
"msg_date": "Thu, 1 Feb 2024 16:39:08 -0600",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
},
{
"msg_contents": "Hi Greg, hi David\n\nOn 01.02.24 23:39, David Christensen wrote:\n> On Thu, Feb 1, 2024 at 4:34 PM Greg Sabino Mullane <[email protected]> wrote:\n>> The use of the --echo-hidden flag in psql is used to show people the way psql performs its magic for its backslash commands. None of them has more magic than \"\\d relation\", but it suffers from needing a lot of separate queries to gather all of the information it needs. Unfortunately, those queries can get overwhelming and hard to figure out which one does what, especially for those not already very familiar with the system catalogs. Attached is a patch to add a small SQL comment to the top of each SELECT query inside describeOneTableDetail. All other functions use a single query, and thus need no additional context. But \"\\d mytable\" has the potential to run over a dozen SQL queries! The new format looks like this:\n>>\n>> /******** QUERY *********/\n>> /* Get information about row-level policies */\n>> SELECT pol.polname, pol.polpermissive,\n>> CASE WHEN pol.polroles = '{0}' THEN NULL ELSE pg_catalog.array_to_string(array(select rolname from pg_catalog.pg_roles where oid = any (pol.polroles) order by 1),',') END,\n>> pg_catalog.pg_get_expr(pol.polqual, pol.polrelid),\n>> pg_catalog.pg_get_expr(pol.polwithcheck, pol.polrelid),\n>> CASE pol.polcmd\n>> WHEN 'r' THEN 'SELECT'\n>> WHEN 'a' THEN 'INSERT'\n>> WHEN 'w' THEN 'UPDATE'\n>> WHEN 'd' THEN 'DELETE'\n>> END AS cmd\n>> FROM pg_catalog.pg_policy pol\n>> WHERE pol.polrelid = '134384' ORDER BY 1;\n>> /************************/\n>>\n>> Cheers,\n>> Greg\n> Thanks, this looks like some helpful information. In the same vein,\n> I'm including a patch which adds information about the command that\n> generates the given query as well (atop your commit). This will\n> modify the query line to include the command itself:\n>\n> /******** QUERY (\\dRs) *********/\n>\n> Best,\n>\n> David\n\nHaving this kind of information in each query would have saved me a lot\nof time in the past :) +1\n\nThere is a tiny little issue in the last patch (qualifiers):\n\ncommand.c:312:16: warning: assignment discards ‘const’ qualifier from\npointer target type [-Wdiscarded-qualifiers]\n 312 | curcmd = cmd;\n\nThanks\n\n-- \nJim\n\n\n\n",
"msg_date": "Fri, 15 Mar 2024 14:21:59 +0100",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
},
{
"msg_contents": "Hi Jim,\n\nThanks for the feedback. Enclosed is a v2 of this series(?) rebased\nand with that warning fixed; @Greg Sabino Mullane I just created a\ncommit on your behalf with the message to hackers. I'm also creating\na commit-fest entry for this thread.\n\nBest,\n\nDavid",
"msg_date": "Thu, 21 Mar 2024 12:31:42 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
},
{
"msg_contents": "Created the CF entry in commitfest 48 but didn't see it was already in 47; closing the CFEntry in 48. (Doesn't appear to be a different status than \"Withdrawn\"...)",
"msg_date": "Thu, 21 Mar 2024 18:14:26 +0000",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
},
{
"msg_contents": "On 21.03.24 18:31, David Christensen wrote:\n> Thanks for the feedback. Enclosed is a v2 of this series(?) rebased\n> and with that warning fixed; @Greg Sabino Mullane I just created a\n> commit on your behalf with the message to hackers. I'm also creating\n> a commit-fest entry for this thread.\n\nI don't think your patch takes into account that the\n\n/**... QUERY ...**/\n...\n/**... ...**/\n\nlines are supposed to align vertically. With your patch, the first line \nwould have variable length depending on the command.\n\n\n",
"msg_date": "Thu, 21 Mar 2024 23:20:05 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
},
{
"msg_contents": "On Thu, Mar 21, 2024 at 6:20 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> lines are supposed to align vertically. With your patch, the first line\n> would have variable length depending on the command.\n>\n\nYes, that is a good point. Aligning those would be quite tricky, what if we\njust kept a standard width for the closing query? Probably the 24 stars we\ncurrently have to match \"QUERY\", which it appears nobody has changed for\ntranslation purposes yet anyway. (If I am reading the code correctly, it\nwould be up to the translators to maintain the vertical alignment).\n\nCheers,\nGreg\n\nOn Thu, Mar 21, 2024 at 6:20 PM Peter Eisentraut <[email protected]> wrote:\nlines are supposed to align vertically. With your patch, the first line \nwould have variable length depending on the command.Yes, that is a good point. Aligning those would be quite tricky, what if we just kept a standard width for the closing query? Probably the 24 stars we currently have to match \"QUERY\", which it appears nobody has changed for translation purposes yet anyway. (If I am reading the code correctly, it would be up to the translators to maintain the vertical alignment).Cheers,Greg",
"msg_date": "Fri, 22 Mar 2024 10:46:48 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 9:47 AM Greg Sabino Mullane <[email protected]> wrote:\n>\n> On Thu, Mar 21, 2024 at 6:20 PM Peter Eisentraut <[email protected]> wrote:\n>>\n>> lines are supposed to align vertically. With your patch, the first line\n>> would have variable length depending on the command.\n>\n>\n> Yes, that is a good point. Aligning those would be quite tricky, what if we just kept a standard width for the closing query? Probably the 24 stars we currently have to match \"QUERY\", which it appears nobody has changed for translation purposes yet anyway. (If I am reading the code correctly, it would be up to the translators to maintain the vertical alignment).\n\nI think it's easier to keep the widths balanced than constant (patch\nversion included here), but if we needed to squeeze the opening string\nto a standard width that would be possible without too much trouble.\nThe internal comment strings seem to be a bit more of a pain if we\nwanted all of the comments to be the same width, as we'd need a table\nor something so we can compute the longest string width, etc; doesn't\nseem worth the convolutions IMHO.\n\nNo changes to Greg's patch, just keeping 'em both so cfbot stays happy.\n\nDavid",
"msg_date": "Fri, 22 Mar 2024 10:39:10 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 11:39 AM David Christensen <[email protected]>\nwrote:\n\n> I think it's easier to keep the widths balanced than constant (patch\n> version included here)\n\n\nYeah, I'm fine with that, especially because nobody is translating it, nor\nare they likely to, to be honest.\n\nCheers,\nGreg\n\nOn Fri, Mar 22, 2024 at 11:39 AM David Christensen <[email protected]> wrote:I think it's easier to keep the widths balanced than constant (patch\nversion included here)Yeah, I'm fine with that, especially because nobody is translating it, nor are they likely to, to be honest.Cheers,Greg",
"msg_date": "Fri, 22 Mar 2024 13:37:48 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
},
{
"msg_contents": "I got Greg's blessing on squashing the commits down, and now including\na v4 with additional improvements on the output formatting front.\nMain changes:\n\n- all generated comments are the same width\n- width has been bumped to 80\n- removed _() functions for consumers of the new output functions\n\nThis patch adds two new helper functions, OutputComment() and\nOutputCommentStars() to output and wrap the comments to the\nappropriate widths. Everything should continue to work just fine if\nwe want to adjust the width by just adjusting the MAX_COMMENT_WIDTH\nsymbol.\n\nI removed _() in the output of the query/stars since there'd be no\nsensible existing translations for the constructed string, which\nincluded the query string itself. If we need it for the \"QUERY\"\nstring, this could be added fairly easily, but the existing piece\nwould have been nonsensical and never used in practice.\n\nThanks,\n\nDavid",
"msg_date": "Wed, 3 Apr 2024 12:16:05 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
},
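For readers trying to picture the alignment fix described above, here is a small standalone illustration. The helper names (OutputComment, OutputCommentStars) and the MAX_COMMENT_WIDTH symbol come from the message, but the signatures and padding logic below are assumptions about the general shape, not the actual patch:

```c
/*
 * Illustrative sketch only, not the v4 patch itself.  Padding every banner
 * to a fixed width keeps the star lines vertically aligned no matter how
 * long the embedded label (e.g. a command name such as \dRs) is.
 */
#include <stdio.h>
#include <string.h>

#define MAX_COMMENT_WIDTH 80

/* Emit a star banner padded to MAX_COMMENT_WIDTH, optionally embedding a label. */
static void
OutputCommentStars(FILE *out, const char *label)
{
	int		inner = MAX_COMMENT_WIDTH - 4;	/* room between the comment delimiters */
	int		labellen = label ? (int) strlen(label) + 2 : 0;	/* label plus two spaces */
	int		left = (inner - labellen) / 2;
	int		right = inner - labellen - left;
	int		i;

	fputs("/*", out);
	for (i = 0; i < left; i++)
		fputc('*', out);
	if (label)
		fprintf(out, " %s ", label);
	for (i = 0; i < right; i++)
		fputc('*', out);
	fputs("*/\n", out);
}

int
main(void)
{
	OutputCommentStars(stdout, "QUERY (\\dRs)");	/* opening banner with the command */
	OutputCommentStars(stdout, NULL);				/* closing banner, same overall width */
	return 0;
}
```

With a fixed overall width, the opening banner (which embeds the command name) and the closing banner line up regardless of the label length, which is the concern Peter raised earlier in the thread.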
{
"msg_contents": "On 03.04.24 19:16, David Christensen wrote:\n> I removed _() in the output of the query/stars since there'd be no\n> sensible existing translations for the constructed string, which\n> included the query string itself. If we need it for the \"QUERY\"\n> string, this could be added fairly easily, but the existing piece\n> would have been nonsensical and never used in practice.\n\n\"QUERY\" is currently translated. Your patch loses that.\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 16:31:58 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 9:32 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 03.04.24 19:16, David Christensen wrote:\n> > I removed _() in the output of the query/stars since there'd be no\n> > sensible existing translations for the constructed string, which\n> > included the query string itself. If we need it for the \"QUERY\"\n> > string, this could be added fairly easily, but the existing piece\n> > would have been nonsensical and never used in practice.\n>\n> \"QUERY\" is currently translated. Your patch loses that.\n\nI see; enclosed is v5 which fixes this.\n\nThe effective diff from the last one is:\n\n- char *label = \"QUERY\";\n+ char *label = _(\"QUERY\");\n\nand\n\n- label = psprintf(\"QUERY (\\\\%s)\", curcmd);\n+ label = psprintf(_(\"QUERY (\\\\%s)\"), curcmd);\n\nBest,\n\nDavid",
"msg_date": "Thu, 4 Apr 2024 11:12:18 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 11:12 AM David Christensen <[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 9:32 AM Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 03.04.24 19:16, David Christensen wrote:\n> > > I removed _() in the output of the query/stars since there'd be no\n> > > sensible existing translations for the constructed string, which\n> > > included the query string itself. If we need it for the \"QUERY\"\n> > > string, this could be added fairly easily, but the existing piece\n> > > would have been nonsensical and never used in practice.\n> >\n> > \"QUERY\" is currently translated. Your patch loses that.\n>\n> I see; enclosed is v5 which fixes this.\n>\n> The effective diff from the last one is:\n>\n> - char *label = \"QUERY\";\n> + char *label = _(\"QUERY\");\n>\n> and\n>\n> - label = psprintf(\"QUERY (\\\\%s)\", curcmd);\n> + label = psprintf(_(\"QUERY (\\\\%s)\"), curcmd);\n\nAny further concerns/issues with this patch that I can address to help\nmove it forward?\n\nDavid\n\n\n",
"msg_date": "Tue, 11 Jun 2024 17:54:34 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding comments to help understand psql hidden queries"
}
]
[
{
"msg_contents": "Not sold on the name, but --check is a combination of --silent-diff and \n--show-diff. I envision --check mostly being used in CI environments. \nI recently came across a situation where this behavior would have been \nuseful. Without --check, you're left to capture the output of \n--show-diff and exit 2 if the output isn't empty by yourself.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Mon, 11 Dec 2023 18:09:21 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add --check option to pgindent"
},
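The manual workaround described above (capturing --show-diff output and exiting non-zero yourself) might look roughly like this in a CI step; the file list variable is a placeholder:

```sh
# Without --check: fail the job by hand when --show-diff reports anything.
diffs=$(src/tools/pgindent/pgindent --show-diff $files)
if [ -n "$diffs" ]; then
    printf '%s\n' "$diffs"
    exit 2
fi
```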
{
"msg_contents": "> On 12 Dec 2023, at 01:09, Tristan Partin <[email protected]> wrote:\n> \n> Not sold on the name, but --check is a combination of --silent-diff and --show-diff. I envision --check mostly being used in CI environments. I recently came across a situation where this behavior would have been useful. Without --check, you're left to capture the output of --show-diff and exit 2 if the output isn't empty by yourself.\n\nI haven’t studied the patch but I can see that being useful.\n\n./daniel\n\n",
"msg_date": "Tue, 12 Dec 2023 01:21:36 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "> On 12 Dec 2023, at 01:09, Tristan Partin <[email protected]> wrote:\n> \n> Not sold on the name, but --check is a combination of --silent-diff and --show-diff. I envision --check mostly being used in CI environments. I recently came across a situation where this behavior would have been useful. Without --check, you're left to capture the output of --show-diff and exit 2 if the output isn't empty by yourself.\n\nI wonder if we should model this around the semantics of git diff to keep it\nsimilar to other CI jobs which often use git diff? git diff --check means \"are\nthere conflicts or issues\" which isn't really comparable to here, git diff\n--exit-code however is pretty much exactly what this is trying to accomplish.\n\nThat would make pgindent --show-diff --exit-code exit with 1 if there were\ndiffs and 0 if there are no diffs.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 12 Dec 2023 11:18:59 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
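For reference, the git idiom being compared against is the standard --exit-code flag:

```sh
git diff --exit-code    # exits 0 when there are no differences, 1 when there are
```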
{
"msg_contents": "Hi,\n\nOn Tue, Dec 12, 2023 at 11:18:59AM +0100, Daniel Gustafsson wrote:\n> > On 12 Dec 2023, at 01:09, Tristan Partin <[email protected]> wrote:\n> > \n> > Not sold on the name, but --check is a combination of --silent-diff\n> > and --show-diff. I envision --check mostly being used in CI\n> > environments. I recently came across a situation where this behavior\n> > would have been useful. Without --check, you're left to capture the\n> > output of --show-diff and exit 2 if the output isn't empty by\n> > yourself.\n> \n> I wonder if we should model this around the semantics of git diff to\n> keep it similar to other CI jobs which often use git diff? git diff\n> --check means \"are there conflicts or issues\" which isn't really\n> comparable to here, git diff --exit-code however is pretty much\n> exactly what this is trying to accomplish.\n> \n> That would make pgindent --show-diff --exit-code exit with 1 if there\n> were diffs and 0 if there are no diffs.\n\nTo be honest, I find that rather convoluted; contrary to \"git diff\", I\nbelieve the primary action of pgident is not to show diffs, so I find\nthe proposed --check option to be entirely reasonable from a UX\nperspective.\n\nOn the other hand, tying a \"does this need re-indenting?\" question to a\n\"--show-diff --exit-code\" option combination is not very obvious (to me,\nat least).\n\n\nCheers,\n\nMichael\n\n\n",
"msg_date": "Tue, 12 Dec 2023 11:28:10 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Tue, Dec 12, 2023, at 7:28 AM, Michael Banck wrote:\n> On Tue, Dec 12, 2023 at 11:18:59AM +0100, Daniel Gustafsson wrote:\n> > > On 12 Dec 2023, at 01:09, Tristan Partin <[email protected]> wrote:\n> > > \n> > > Not sold on the name, but --check is a combination of --silent-diff\n> > > and --show-diff. I envision --check mostly being used in CI\n> > > environments. I recently came across a situation where this behavior\n> > > would have been useful. Without --check, you're left to capture the\n> > > output of --show-diff and exit 2 if the output isn't empty by\n> > > yourself.\n> > \n> > I wonder if we should model this around the semantics of git diff to\n> > keep it similar to other CI jobs which often use git diff? git diff\n> > --check means \"are there conflicts or issues\" which isn't really\n> > comparable to here, git diff --exit-code however is pretty much\n> > exactly what this is trying to accomplish.\n> > \n> > That would make pgindent --show-diff --exit-code exit with 1 if there\n> > were diffs and 0 if there are no diffs.\n> \n> To be honest, I find that rather convoluted; contrary to \"git diff\", I\n> believe the primary action of pgident is not to show diffs, so I find\n> the proposed --check option to be entirely reasonable from a UX\n> perspective.\n> \n> On the other hand, tying a \"does this need re-indenting?\" question to a\n> \"--show-diff --exit-code\" option combination is not very obvious (to me,\n> at least).\n\nMultiple options to accomplish a use case might not be obvious. I'm wondering\nif we can combine it into a unique option.\n\n--show-diff show the changes that would be made\n--silent-diff exit with status 2 if any changes would be made\n+ --check combination of --show-diff and --silent-diff\n\nI mean\n\n--diff=show,silent,check\n\nWhen you add exceptions, it starts to complicate the UI.\n\nusage(\"Cannot have both --silent-diff and --show-diff\")\n if $silent_diff && $show_diff;\n \n+usage(\"Cannot have both --check and --show-diff\")\n+ if $check && $show_diff;\n+\n+usage(\"Cannot have both --check and --silent-diff\")\n+ if $check && $silent_diff;\n+\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Tue, Dec 12, 2023, at 7:28 AM, Michael Banck wrote:On Tue, Dec 12, 2023 at 11:18:59AM +0100, Daniel Gustafsson wrote:> > On 12 Dec 2023, at 01:09, Tristan Partin <[email protected]> wrote:> > > > Not sold on the name, but --check is a combination of --silent-diff> > and --show-diff. I envision --check mostly being used in CI> > environments. I recently came across a situation where this behavior> > would have been useful. Without --check, you're left to capture the> > output of --show-diff and exit 2 if the output isn't empty by> > yourself.> > I wonder if we should model this around the semantics of git diff to> keep it similar to other CI jobs which often use git diff? 
git diff> --check means \"are there conflicts or issues\" which isn't really> comparable to here, git diff --exit-code however is pretty much> exactly what this is trying to accomplish.> > That would make pgindent --show-diff --exit-code exit with 1 if there> were diffs and 0 if there are no diffs.To be honest, I find that rather convoluted; contrary to \"git diff\", Ibelieve the primary action of pgident is not to show diffs, so I findthe proposed --check option to be entirely reasonable from a UXperspective.On the other hand, tying a \"does this need re-indenting?\" question to a\"--show-diff --exit-code\" option combination is not very obvious (to me,at least).Multiple options to accomplish a use case might not be obvious. I'm wonderingif we can combine it into a unique option.--show-diff show the changes that would be made--silent-diff exit with status 2 if any changes would be made+\t--check combination of --show-diff and --silent-diffI mean--diff=show,silent,checkWhen you add exceptions, it starts to complicate the UI.usage(\"Cannot have both --silent-diff and --show-diff\") if $silent_diff && $show_diff; +usage(\"Cannot have both --check and --show-diff\")+ if $check && $show_diff;++usage(\"Cannot have both --check and --silent-diff\")+ if $check && $silent_diff;+--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 12 Dec 2023 08:47:34 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "\"Euler Taveira\" <[email protected]> writes:\n> When you add exceptions, it starts to complicate the UI.\n\nIndeed. It seems like --silent-diff was poorly defined and poorly\nnamed, and we need to rethink that option along the way to adding\nthis behavior. The idea that --show-diff and --silent-diff can\nbe used together is just inherently confusing, because they sound\nlike opposites.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Dec 2023 10:18:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Tue Dec 12, 2023 at 5:47 AM CST, Euler Taveira wrote:\n> On Tue, Dec 12, 2023, at 7:28 AM, Michael Banck wrote:\n> > On Tue, Dec 12, 2023 at 11:18:59AM +0100, Daniel Gustafsson wrote:\n> > > > On 12 Dec 2023, at 01:09, Tristan Partin <[email protected]> wrote:\n> > > > \n> > > > Not sold on the name, but --check is a combination of --silent-diff\n> > > > and --show-diff. I envision --check mostly being used in CI\n> > > > environments. I recently came across a situation where this behavior\n> > > > would have been useful. Without --check, you're left to capture the\n> > > > output of --show-diff and exit 2 if the output isn't empty by\n> > > > yourself.\n> > > \n> > > I wonder if we should model this around the semantics of git diff to\n> > > keep it similar to other CI jobs which often use git diff? git diff\n> > > --check means \"are there conflicts or issues\" which isn't really\n> > > comparable to here, git diff --exit-code however is pretty much\n> > > exactly what this is trying to accomplish.\n> > > \n> > > That would make pgindent --show-diff --exit-code exit with 1 if there\n> > > were diffs and 0 if there are no diffs.\n> > \n> > To be honest, I find that rather convoluted; contrary to \"git diff\", I\n> > believe the primary action of pgident is not to show diffs, so I find\n> > the proposed --check option to be entirely reasonable from a UX\n> > perspective.\n> > \n> > On the other hand, tying a \"does this need re-indenting?\" question to a\n> > \"--show-diff --exit-code\" option combination is not very obvious (to me,\n> > at least).\n>\n> Multiple options to accomplish a use case might not be obvious. I'm wondering\n> if we can combine it into a unique option.\n>\n> --show-diff show the changes that would be made\n> --silent-diff exit with status 2 if any changes would be made\n> + --check combination of --show-diff and --silent-diff\n>\n> I mean\n>\n> --diff=show,silent,check\n>\n> When you add exceptions, it starts to complicate the UI.\n>\n> usage(\"Cannot have both --silent-diff and --show-diff\")\n> if $silent_diff && $show_diff;\n> \n> +usage(\"Cannot have both --check and --show-diff\")\n> + if $check && $show_diff;\n> +\n> +usage(\"Cannot have both --check and --silent-diff\")\n> + if $check && $silent_diff;\n> +\n\nI definitely agree. I just didn't want to remove options, but if people \nare ok with that, I will just change the interface.\n\nI like the git-diff semantics or have --diff and --check, similar to how \nPython's black[0] is.\n\n[0]: https://github.com/psf/black\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 12 Dec 2023 09:26:58 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On 2023-Dec-12, Tom Lane wrote:\n\n> \"Euler Taveira\" <[email protected]> writes:\n> > When you add exceptions, it starts to complicate the UI.\n> \n> Indeed. It seems like --silent-diff was poorly defined and poorly\n> named, and we need to rethink that option along the way to adding\n> this behavior. The idea that --show-diff and --silent-diff can\n> be used together is just inherently confusing, because they sound\n> like opposites.\n\nMaybe it's enough to rename --silent-diff to --check. You can do\n\"--show-diff --check\" and get both the error and the diff printed; or\njust \"--check\" and it'll throw an error without further ado; or\n\"--show-diff\" and it will both apply the diff and print it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I must say, I am absolutely impressed with what pgsql's implementation of\nVALUES allows me to do. It's kind of ridiculous how much \"work\" goes away in\nmy code. Too bad I can't do this at work (Oracle 8/9).\" (Tom Allison)\n http://archives.postgresql.org/pgsql-general/2007-06/msg00016.php\n\n\n",
"msg_date": "Tue, 12 Dec 2023 16:30:01 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "\nOn 2023-12-12 Tu 10:30, Alvaro Herrera wrote:\n> On 2023-Dec-12, Tom Lane wrote:\n>\n>> \"Euler Taveira\" <[email protected]> writes:\n>>> When you add exceptions, it starts to complicate the UI.\n>> Indeed. It seems like --silent-diff was poorly defined and poorly\n>> named, and we need to rethink that option along the way to adding\n>> this behavior. The idea that --show-diff and --silent-diff can\n>> be used together is just inherently confusing, because they sound\n>> like opposites\n> Maybe it's enough to rename --silent-diff to --check. You can do\n> \"--show-diff --check\" and get both the error and the diff printed; or\n> just \"--check\" and it'll throw an error without further ado; or\n> \"--show-diff\" and it will both apply the diff and print it.\n>\n\nThat seems reasonable. These features were fairly substantially debated \nwhen we put them in, but I'm fine with some tweaking. But note: \n--show-diff doesn't apply the diff, it's intentionally non-destructive.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 13 Dec 2023 15:46:08 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
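Under the rename sketched above, and with Andrew's correction that --show-diff never modifies files, the combinations would presumably behave as follows; this is an illustration of the proposal, not the final committed interface:

```sh
pgindent --check file.c               # no diff output; exit status 2 if file.c needs re-indenting
pgindent --check --show-diff file.c   # print the diff, then exit with status 2
pgindent --show-diff file.c           # print the diff only; the file is left untouched
```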
{
"msg_contents": "On Wed Dec 13, 2023 at 2:46 PM CST, Andrew Dunstan wrote:\n>\n> On 2023-12-12 Tu 10:30, Alvaro Herrera wrote:\n> > On 2023-Dec-12, Tom Lane wrote:\n> >\n> >> \"Euler Taveira\" <[email protected]> writes:\n> >>> When you add exceptions, it starts to complicate the UI.\n> >> Indeed. It seems like --silent-diff was poorly defined and poorly\n> >> named, and we need to rethink that option along the way to adding\n> >> this behavior. The idea that --show-diff and --silent-diff can\n> >> be used together is just inherently confusing, because they sound\n> >> like opposites\n> > Maybe it's enough to rename --silent-diff to --check. You can do\n> > \"--show-diff --check\" and get both the error and the diff printed; or\n> > just \"--check\" and it'll throw an error without further ado; or\n> > \"--show-diff\" and it will both apply the diff and print it.\n> >\n>\n> That seems reasonable. These features were fairly substantially debated \n> when we put them in, but I'm fine with some tweaking. But note: \n> --show-diff doesn't apply the diff, it's intentionally non-destructive.\n\nHere is a new patch:\n\n- Renames --silent-diff to --check\n- Renames --show-diff to --diff\n- Allows one to use --check and --diff in the same command\n\nI am not tied to the second patch if people like --show-diff better than \n--diff.\n\nWeirdly enough, my email client doesn't show this as part of the \noriginal thread, but I will respond here anyway.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Thu, 14 Dec 2023 10:53:39 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "This part of the first patch seems incorrect, i.e. same condition in\nthe if as elsif\n\n- if ($silent_diff)\n+ if ($check)\n+ {\n+ print show_diff($source, $source_filename);\n+ exit 2;\n+ }\n+ elsif ($check)\n {\n exit 2;\n }\n\nOn Thu, 14 Dec 2023 at 17:54, Tristan Partin <[email protected]> wrote:\n>\n> On Wed Dec 13, 2023 at 2:46 PM CST, Andrew Dunstan wrote:\n> >\n> > On 2023-12-12 Tu 10:30, Alvaro Herrera wrote:\n> > > On 2023-Dec-12, Tom Lane wrote:\n> > >\n> > >> \"Euler Taveira\" <[email protected]> writes:\n> > >>> When you add exceptions, it starts to complicate the UI.\n> > >> Indeed. It seems like --silent-diff was poorly defined and poorly\n> > >> named, and we need to rethink that option along the way to adding\n> > >> this behavior. The idea that --show-diff and --silent-diff can\n> > >> be used together is just inherently confusing, because they sound\n> > >> like opposites\n> > > Maybe it's enough to rename --silent-diff to --check. You can do\n> > > \"--show-diff --check\" and get both the error and the diff printed; or\n> > > just \"--check\" and it'll throw an error without further ado; or\n> > > \"--show-diff\" and it will both apply the diff and print it.\n> > >\n> >\n> > That seems reasonable. These features were fairly substantially debated\n> > when we put them in, but I'm fine with some tweaking. But note:\n> > --show-diff doesn't apply the diff, it's intentionally non-destructive.\n>\n> Here is a new patch:\n>\n> - Renames --silent-diff to --check\n> - Renames --show-diff to --diff\n> - Allows one to use --check and --diff in the same command\n>\n> I am not tied to the second patch if people like --show-diff better than\n> --diff.\n>\n> Weirdly enough, my email client doesn't show this as part of the\n> original thread, but I will respond here anyway.\n>\n> --\n> Tristan Partin\n> Neon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 15 Dec 2023 15:18:46 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
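One plausible shape of the corrected branch, using the variables from the quoted hunk; the committed v3 may well differ, this just drops the unreachable elsif that Jelte points out:

```perl
# Sketch only: since the second branch tested the same $check variable it
# could never run, so a single branch expresses the intended behaviour.
if ($check)
{
	print show_diff($source, $source_filename);
	exit 2;
}
```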
{
"msg_contents": "On Fri Dec 15, 2023 at 8:18 AM CST, Jelte Fennema-Nio wrote:\n> This part of the first patch seems incorrect, i.e. same condition in\n> the if as elsif\n>\n> - if ($silent_diff)\n> + if ($check)\n> + {\n> + print show_diff($source, $source_filename);\n> + exit 2;\n> + }\n> + elsif ($check)\n> {\n> exit 2;\n> }\n\nWeird how I was able to screw that up so bad :). Thanks! Here is a v3.\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Fri, 15 Dec 2023 09:43:32 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "> On 15 Dec 2023, at 16:43, Tristan Partin <[email protected]> wrote:\n\n> Here is a v3.\n\nI think this is pretty much ready to go, the attached v4 squashes the changes\nand fixes the man-page which also needed an update. The referenced Wiki page\nwill need an edit or two after this goes in, but that's easy enough.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 18 Dec 2023 13:41:59 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon, Dec 18, 2023, at 9:41 AM, Daniel Gustafsson wrote:\n> > On 15 Dec 2023, at 16:43, Tristan Partin <[email protected]> wrote:\n> \n> > Here is a v3.\n> \n> I think this is pretty much ready to go, the attached v4 squashes the changes\n> and fixes the man-page which also needed an update. The referenced Wiki page\n> will need an edit or two after this goes in, but that's easy enough.\n\n... and pgbuildfarm client [1] needs to be changed too.\n\n\n[1] https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/Modules/CheckIndent.pm#L55-L90\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Mon, Dec 18, 2023, at 9:41 AM, Daniel Gustafsson wrote:> On 15 Dec 2023, at 16:43, Tristan Partin <[email protected]> wrote:> Here is a v3.I think this is pretty much ready to go, the attached v4 squashes the changesand fixes the man-page which also needed an update. The referenced Wiki pagewill need an edit or two after this goes in, but that's easy enough.... and pgbuildfarm client [1] needs to be changed too.[1] https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/Modules/CheckIndent.pm#L55-L90--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 18 Dec 2023 10:56:07 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon Dec 18, 2023 at 6:41 AM CST, Daniel Gustafsson wrote:\n> > On 15 Dec 2023, at 16:43, Tristan Partin <[email protected]> wrote:\n>\n> > Here is a v3.\n>\n> I think this is pretty much ready to go, the attached v4 squashes the changes\n> and fixes the man-page which also needed an update. The referenced Wiki page\n> will need an edit or two after this goes in, but that's easy enough.\n\nI have never edited the Wiki before. How can I do that? More than happy \nto do it.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 18 Dec 2023 09:45:05 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon Dec 18, 2023 at 7:56 AM CST, Euler Taveira wrote:\n> On Mon, Dec 18, 2023, at 9:41 AM, Daniel Gustafsson wrote:\n> > > On 15 Dec 2023, at 16:43, Tristan Partin <[email protected]> wrote:\n> > \n> > > Here is a v3.\n> > \n> > I think this is pretty much ready to go, the attached v4 squashes the changes\n> > and fixes the man-page which also needed an update. The referenced Wiki page\n> > will need an edit or two after this goes in, but that's easy enough.\n>\n> ... and pgbuildfarm client [1] needs to be changed too.\n>\n>\n> [1] https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/Modules/CheckIndent.pm#L55-L90\n\nThanks Euler. I am on it.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 18 Dec 2023 09:45:24 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon, 18 Dec 2023 at 16:45, Tristan Partin <[email protected]> wrote:\n>\n> On Mon Dec 18, 2023 at 6:41 AM CST, Daniel Gustafsson wrote:\n> > > On 15 Dec 2023, at 16:43, Tristan Partin <[email protected]> wrote:\n> >\n> > > Here is a v3.\n> >\n> > I think this is pretty much ready to go, the attached v4 squashes the changes\n> > and fixes the man-page which also needed an update. The referenced Wiki page\n> > will need an edit or two after this goes in, but that's easy enough.\n>\n> I have never edited the Wiki before. How can I do that? More than happy\n> to do it.\n\nAs mentioned at the bottom of the main page of the wiki:\n\n NOTE: due to recent spam activity \"editor\" privileges are granted\nmanually for the time being.\n If you just created a new community account or if your current\naccount used to have \"editor\" privileges ask on either the PostgreSQL\n-www Mailinglist or the PostgreSQL IRC Channel (direct your request to\n'pginfra', multiple individuals in the channel highlight on that\nstring) for help. Please include your community account name in those\nrequests.\n\nAfter that, it's just a case of loggin in on the wiki (link top right\ncorner, which uses the community account) and then just go on to your\nprefered page, click edit, and do your modifications. Don't forget to\nsave the modifications - I'm not sure whether the wiki editor's state\nis locally persisted.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 18 Dec 2023 16:59:46 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon, 18 Dec 2023 at 13:42, Daniel Gustafsson <[email protected]> wrote:\n> I think this is pretty much ready to go, the attached v4 squashes the changes\n> and fixes the man-page which also needed an update. The referenced Wiki page\n> will need an edit or two after this goes in, but that's easy enough.\n\nOne thing I'm wondering: When both --check and --diff are passed,\nshould pgindent still early exit with 2 on the first incorrectly\nformatted file? Or should it show diffs for all failing files? I'm\nleaning towards the latter making more sense.\n\nRelated (but not required for this patch): For my pre-commit hook I\nwould very much like it if there was an option to have pgindent write\nthe changes to disk, but still exit non-zero, e.g. a --write flag that\ncould be combined with --check just like --diff and --check can be\ncombined with this patch. Currently my pre-commit hook need two\nseparate calls to pgindent to both abort the commit and write the\nrequired changes to disk.\n\n\n",
"msg_date": "Mon, 18 Dec 2023 17:14:30 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon, 18 Dec 2023 at 17:14, Jelte Fennema-Nio <[email protected]> wrote:\n> One thing I'm wondering: When both --check and --diff are passed,\n> should pgindent still early exit with 2 on the first incorrectly\n> formatted file? Or should it show diffs for all failing files? I'm\n> leaning towards the latter making more sense.\n\nTo be clear, in the latter case it would still exit with 2 at the end\nof the script, but only after showing diffs for all files.\n\n\n",
"msg_date": "Mon, 18 Dec 2023 17:15:59 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon Dec 18, 2023 at 10:14 AM CST, Jelte Fennema-Nio wrote:\n> On Mon, 18 Dec 2023 at 13:42, Daniel Gustafsson <[email protected]> wrote:\n> > I think this is pretty much ready to go, the attached v4 squashes the changes\n> > and fixes the man-page which also needed an update. The referenced Wiki page\n> > will need an edit or two after this goes in, but that's easy enough.\n>\n> One thing I'm wondering: When both --check and --diff are passed,\n> should pgindent still early exit with 2 on the first incorrectly\n> formatted file? Or should it show diffs for all failing files? I'm\n> leaning towards the latter making more sense.\n\nMakes sense. Let me iterate on the squashed version of the patch.\n\n> Related (but not required for this patch): For my pre-commit hook I\n> would very much like it if there was an option to have pgindent write\n> the changes to disk, but still exit non-zero, e.g. a --write flag that\n> could be combined with --check just like --diff and --check can be\n> combined with this patch. Currently my pre-commit hook need two\n> separate calls to pgindent to both abort the commit and write the\n> required changes to disk.\n\nI could propose something. It would help if I had an example of such \na tool that already exists.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 18 Dec 2023 10:50:15 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon, 18 Dec 2023 at 17:50, Tristan Partin <[email protected]> wrote:\n> I could propose something. It would help if I had an example of such\n> a tool that already exists.\n\nBasically the same behaviour as what you're trying to add now for\n--check, only instead of printing the diff it would actually change\nthe files just like running pgindent without a --check flag does. i.e.\nallow pgindent --check to not run in \"dry-run\" mode\nMy pre-commit hook looks like this currently (removed boring cruft around it):\n\n if ! src/tools/pgindent/pgindent --check $files > /dev/null; then\n exit 0\n fi\n echo \"Running pgindent on changed files\"\n src/tools/pgindent/pgindent $files\n echo \"Commit abandoned. Rerun git commit to adopt pgindent changes\"\n exit 1\n\nBut I would like it to look like:\n\n if src/tools/pgindent/pgindent --check --write $files > /dev/null; then\n exit 0\n fi\n echo \"Commit abandoned. Rerun git commit to adopt pgindent changes\"\n exit 1\n\n\n",
"msg_date": "Mon, 18 Dec 2023 18:21:59 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon Dec 18, 2023 at 10:50 AM CST, Tristan Partin wrote:\n> On Mon Dec 18, 2023 at 10:14 AM CST, Jelte Fennema-Nio wrote:\n> > On Mon, 18 Dec 2023 at 13:42, Daniel Gustafsson <[email protected]> wrote:\n> > > I think this is pretty much ready to go, the attached v4 squashes the changes\n> > > and fixes the man-page which also needed an update. The referenced Wiki page\n> > > will need an edit or two after this goes in, but that's easy enough.\n> >\n> > One thing I'm wondering: When both --check and --diff are passed,\n> > should pgindent still early exit with 2 on the first incorrectly\n> > formatted file? Or should it show diffs for all failing files? I'm\n> > leaning towards the latter making more sense.\n>\n> Makes sense. Let me iterate on the squashed version of the patch.\n\nHere is an additional patch which implements the behavior you described. \nThe first patch is just Daniel's squashed version of my patches.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Mon, 18 Dec 2023 15:18:47 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon Dec 18, 2023 at 11:21 AM CST, Jelte Fennema-Nio wrote:\n> On Mon, 18 Dec 2023 at 17:50, Tristan Partin <[email protected]> wrote:\n> > I could propose something. It would help if I had an example of such\n> > a tool that already exists.\n>\n> Basically the same behaviour as what you're trying to add now for\n> --check, only instead of printing the diff it would actually change\n> the files just like running pgindent without a --check flag does. i.e.\n> allow pgindent --check to not run in \"dry-run\" mode\n> My pre-commit hook looks like this currently (removed boring cruft around it):\n>\n> if ! src/tools/pgindent/pgindent --check $files > /dev/null; then\n> exit 0\n> fi\n> echo \"Running pgindent on changed files\"\n> src/tools/pgindent/pgindent $files\n> echo \"Commit abandoned. Rerun git commit to adopt pgindent changes\"\n> exit 1\n>\n> But I would like it to look like:\n>\n> if src/tools/pgindent/pgindent --check --write $files > /dev/null; then\n> exit 0\n> fi\n> echo \"Commit abandoned. Rerun git commit to adopt pgindent changes\"\n> exit 1\n\nTo me, the two options seem at odds, like you either check or write. How \nwould you feel about just capturing the diffs that are printed and \npatch(1)-ing them?\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 18 Dec 2023 15:34:05 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "\nOn 2023-12-18 Mo 11:14, Jelte Fennema-Nio wrote:\n> On Mon, 18 Dec 2023 at 13:42, Daniel Gustafsson <[email protected]> wrote:\n>> I think this is pretty much ready to go, the attached v4 squashes the changes\n>> and fixes the man-page which also needed an update. The referenced Wiki page\n>> will need an edit or two after this goes in, but that's easy enough.\n> One thing I'm wondering: When both --check and --diff are passed,\n> should pgindent still early exit with 2 on the first incorrectly\n> formatted file? Or should it show diffs for all failing files? I'm\n> leaning towards the latter making more sense.\n\n\nIt should show them all.\n\n\n>\n> Related (but not required for this patch): For my pre-commit hook I\n> would very much like it if there was an option to have pgindent write\n> the changes to disk, but still exit non-zero, e.g. a --write flag that\n> could be combined with --check just like --diff and --check can be\n> combined with this patch. Currently my pre-commit hook need two\n> separate calls to pgindent to both abort the commit and write the\n> required changes to disk.\n\n\nNot sold on this. I don't want pgindent applied automatically to my \nuncommitted changes, but I do want a reminder that I forgot to run it. \nIn any case, as you say it's a different topic.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 19 Dec 2023 08:51:41 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "> On 19 Dec 2023, at 14:51, Andrew Dunstan <[email protected]> wrote:\n> \n> On 2023-12-18 Mo 11:14, Jelte Fennema-Nio wrote:\n>> On Mon, 18 Dec 2023 at 13:42, Daniel Gustafsson <[email protected]> wrote:\n>>> I think this is pretty much ready to go, the attached v4 squashes the changes\n>>> and fixes the man-page which also needed an update. The referenced Wiki page\n>>> will need an edit or two after this goes in, but that's easy enough.\n>> One thing I'm wondering: When both --check and --diff are passed,\n>> should pgindent still early exit with 2 on the first incorrectly\n>> formatted file? Or should it show diffs for all failing files? I'm\n>> leaning towards the latter making more sense.\n> \n> It should show them all.\n\nAgreed.\n\n>> Related (but not required for this patch): For my pre-commit hook I\n>> would very much like it if there was an option to have pgindent write\n>> the changes to disk, but still exit non-zero, e.g. a --write flag that\n>> could be combined with --check just like --diff and --check can be\n>> combined with this patch. Currently my pre-commit hook need two\n>> separate calls to pgindent to both abort the commit and write the\n>> required changes to disk.\n> \n> Not sold on this. I don't want pgindent applied automatically to my uncommitted changes, but I do want a reminder that I forgot to run it. In any case, as you say it's a different topic.\n\nI think we risk making the tool confusing if we add too many options which can\nall be combined to suit every need. The posted v5 seems like a good compromise\nI reckon.\n\nAndrew: When applying this, how do we synchronize with the buildfarm to avoid\nfalse negatives due to the BF using the wrong options?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 19 Dec 2023 14:57:58 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon Dec 18, 2023 at 3:18 PM CST, Tristan Partin wrote:\n> On Mon Dec 18, 2023 at 10:50 AM CST, Tristan Partin wrote:\n> > On Mon Dec 18, 2023 at 10:14 AM CST, Jelte Fennema-Nio wrote:\n> > > On Mon, 18 Dec 2023 at 13:42, Daniel Gustafsson <[email protected]> wrote:\n> > > > I think this is pretty much ready to go, the attached v4 squashes the changes\n> > > > and fixes the man-page which also needed an update. The referenced Wiki page\n> > > > will need an edit or two after this goes in, but that's easy enough.\n> > >\n> > > One thing I'm wondering: When both --check and --diff are passed,\n> > > should pgindent still early exit with 2 on the first incorrectly\n> > > formatted file? Or should it show diffs for all failing files? I'm\n> > > leaning towards the latter making more sense.\n> >\n> > Makes sense. Let me iterate on the squashed version of the patch.\n>\n> Here is an additional patch which implements the behavior you described. \n> The first patch is just Daniel's squashed version of my patches.\n\nAssuming all this is good, I now have access to edit the Wiki. The PR \nfor buildfarm client code is up, and hopefully that PR is correct.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 19 Dec 2023 09:33:54 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Mon, 18 Dec 2023 at 22:18, Tristan Partin <[email protected]> wrote:\n> Here is an additional patch which implements the behavior you described.\n> The first patch is just Daniel's squashed version of my patches.\n\nI think we'd still want the early exit behaviour when only --check is\nprovided. No need to spend time checking more files if you're not\ndoing anything else.\n\nOn Mon, 18 Dec 2023 at 22:34, Tristan Partin <[email protected]> wrote:\n> To me, the two options seem at odds, like you either check or write. How\n> would you feel about just capturing the diffs that are printed and\n> patch(1)-ing them?\n\nI tried capturing the diffs and patching them, but simply piping the\npgindent output to patch(1) didn't work because the pipe loses the\nexit code of pgindent. -o pipefail would help with this, but it's not\navailable in all shells. Also then it's suddenly unclear if pgident\nfailed or if patch failed.\n\nAttached are two additional patches (in addition to your unchanged\nprevious ones). One which adds the --write flag and one which early\nexits with --check when neither --write or --diff are provided. The\nadditional code I added for the --write flag is really minimal IMO.\n\nOn Tue, 19 Dec 2023 at 14:51, Andrew Dunstan <[email protected]> wrote:\n> Not sold on this. I don't want pgindent applied automatically to my\n> uncommitted changes, but I do want a reminder that I forgot to run it.\n> In any case, as you say it's a different topic.\n\nTo be clear, with this patch just passing --check won't apply the\nchanges automatically. Only when passing both --write and --check will\nit write the files.\n\nTo clarify my commit workflow is as follows:\ngit add -p # interactively select all the changes that I want in my commit\ngit commit\n# pre-commit hook fails because of pgindent and \"fixes\" my files in\nplace but doesn't add them to the commit yet\ngit add -p\n# I look at all the changes that pgindent make to see if they are\nsensible and accept them, if not I find some other way to fix them\ngit commit\n# now commit succeeded\n\nOn Tue, 19 Dec 2023 at 14:58, Daniel Gustafsson <[email protected]> wrote:\n> I think we risk making the tool confusing if we add too many options which can\n> all be combined to suit every need. The posted v5 seems like a good compromise\n> I reckon.\n\nI personally think these options are completely independent. So it's\nmore confusing to me that they cannot be combined. I updated the help\nmessage in 0003 as well, to describe them as completely independent:\n --diff show the changes that need to be made\n --check exit with status 2 if any changes need to be made\n --write rewrites files that need changes (default\nif neither --check/--diff/--write are provided)\n\n\nPS. prettier (javascript formatter) allows both --check and --write at\nthe same time to do exactly this\nPPS. the help message didn't contain anything about pgindent writing\nfiles by default (i.e. when --check or --diff are not provided)\nPPPS. I attached my new pre-commit hook for reference.",
"msg_date": "Tue, 19 Dec 2023 17:36:12 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Tue Dec 19, 2023 at 10:36 AM CST, Jelte Fennema-Nio wrote:\n> On Mon, 18 Dec 2023 at 22:18, Tristan Partin <[email protected]> wrote:\n> > Here is an additional patch which implements the behavior you described.\n> > The first patch is just Daniel's squashed version of my patches.\n>\n> I think we'd still want the early exit behaviour when only --check is\n> provided. No need to spend time checking more files if you're not\n> doing anything else.\n\nGood point. Patch looks good.\n\n> On Mon, 18 Dec 2023 at 22:34, Tristan Partin <[email protected]> wrote:\n> > To me, the two options seem at odds, like you either check or write. How\n> > would you feel about just capturing the diffs that are printed and\n> > patch(1)-ing them?\n>\n> I tried capturing the diffs and patching them, but simply piping the\n> pgindent output to patch(1) didn't work because the pipe loses the\n> exit code of pgindent. -o pipefail would help with this, but it's not\n> available in all shells. Also then it's suddenly unclear if pgident\n> failed or if patch failed.\n\nI was envisioning something along the lines of:\n\n\tpgindent --check --diff > patches.txt\n\tstatus=$?\n\tpatch <patches.txt # no idea if this works, or if you need a for loop with manual parsing\n\texit $status\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 19 Dec 2023 10:54:36 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add --check option to pgindent"
},
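A sketch of how the capture-then-patch idea above could be wired into a pre-commit hook while preserving pgindent's exit status; the patch(1) options and the file selection here are assumptions, not the hook that ended up on the wiki:

```sh
#!/bin/sh
# Hypothetical pre-commit sketch: abort the commit when pgindent --check finds
# problems, but also apply its suggested changes so they can be re-staged.
files=$(git diff --cached --name-only --diff-filter=ACMR -- '*.c' '*.h')
[ -z "$files" ] && exit 0

src/tools/pgindent/pgindent --check --diff $files > pgindent.diff
status=$?
if [ $status -ne 0 ]; then
    patch -p0 < pgindent.diff            # assumes the diff headers use relative paths
    echo "Commit abandoned. Rerun git commit to adopt pgindent changes"
fi
rm -f pgindent.diff
exit $status
```

Writing the diff to a temporary file instead of piping it keeps pgindent's exit code available without relying on pipefail, which is the problem Jelte described with the piped approach.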
{
"msg_contents": "\nOn 2023-12-19 Tu 08:57, Daniel Gustafsson wrote:\n>> On 19 Dec 2023, at 14:51, Andrew Dunstan <[email protected]> wrote:\n>>\n>> On 2023-12-18 Mo 11:14, Jelte Fennema-Nio wrote:\n>>> On Mon, 18 Dec 2023 at 13:42, Daniel Gustafsson <[email protected]> wrote:\n>>>> I think this is pretty much ready to go, the attached v4 squashes the changes\n>>>> and fixes the man-page which also needed an update. The referenced Wiki page\n>>>> will need an edit or two after this goes in, but that's easy enough.\n>>> One thing I'm wondering: When both --check and --diff are passed,\n>>> should pgindent still early exit with 2 on the first incorrectly\n>>> formatted file? Or should it show diffs for all failing files? I'm\n>>> leaning towards the latter making more sense.\n>> It should show them all.\n> Agreed.\n>\n>>> Related (but not required for this patch): For my pre-commit hook I\n>>> would very much like it if there was an option to have pgindent write\n>>> the changes to disk, but still exit non-zero, e.g. a --write flag that\n>>> could be combined with --check just like --diff and --check can be\n>>> combined with this patch. Currently my pre-commit hook need two\n>>> separate calls to pgindent to both abort the commit and write the\n>>> required changes to disk.\n>> Not sold on this. I don't want pgindent applied automatically to my uncommitted changes, but I do want a reminder that I forgot to run it. In any case, as you say it's a different topic.\n> I think we risk making the tool confusing if we add too many options which can\n> all be combined to suit every need. The posted v5 seems like a good compromise\n> I reckon.\n>\n> Andrew: When applying this, how do we synchronize with the buildfarm to avoid\n> false negatives due to the BF using the wrong options?\n\n\nThe only buildfarm animal involved here is koel, which I run, so the \nsimplest way will be for me to commit the core patch and adjust the \nbuildfarm code at the same time.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 20 Dec 2023 11:05:14 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "\nOn 2023-12-20 We 11:05, Andrew Dunstan wrote:\n>\n> On 2023-12-19 Tu 08:57, Daniel Gustafsson wrote:\n>> The posted v5 seems like a good compromise\n>> I reckon.\n>>\n>> Andrew: When applying this, how do we synchronize with the buildfarm \n>> to avoid\n>> false negatives due to the BF using the wrong options?\n>\n>\n> The only buildfarm animal involved here is koel, which I run, so the \n> simplest way will be for me to commit the core patch and adjust the \n> buildfarm code at the same time.\n>\n>\n\nV5 seems to be the consensus. I went with that, but added in logic to \nexit the loop early for --check if we're not also doing --diff.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 20 Dec 2023 17:38:40 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
},
{
"msg_contents": "On Tue, 19 Dec 2023 at 17:54, Tristan Partin <[email protected]> wrote:\n> I was envisioning something along the lines of:\n>\n> pgindent --check --diff > patches.txt\n> status=$?\n> patch <patches.txt # no idea if this works, or if you need a for loop with manual parsing\n> exit $status\n\nOkay, I got a working version. And I updated the pre-commit hook on\nthe wiki accordingly.\n\n\n",
"msg_date": "Thu, 21 Dec 2023 11:58:19 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add --check option to pgindent"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs the comment of ReorderBufferLargestTXN() says, it's very slow with\nmany subtransactions:\n\n/*\n * Find the largest transaction (toplevel or subxact) to evict (spill to disk).\n *\n * XXX With many subtransactions this might be quite slow, because we'll have\n * to walk through all of them. There are some options how we could improve\n * that: (a) maintain some secondary structure with transactions sorted by\n * amount of changes, (b) not looking for the entirely largest transaction,\n * but e.g. for transaction using at least some fraction of the memory limit,\n * and (c) evicting multiple transactions at once, e.g. to free a given portion\n * of the memory limit (e.g. 50%).\n */\n\nThis is because the reorderbuffer has transaction entries for each\ntop-level and sub transaction, and ReorderBufferLargestTXN() walks\nthrough all transaction entries to pick the transaction to evict.\nI've heard the report internally that replication lag became huge when\ndecoding transactions each consisting of 500k sub transactions. Note\nthat ReorderBufferLargetstTXN() is used only in non-streaming mode.\n\nHere is a test script for a many subtransactions scenario. In my\nenvironment, the logical decoding took over 2min to decode one top\ntransaction having 100k subtransctions.\n\n-----\ncreate table test (c int);\ncreate or replace function testfn (cnt int) returns void as $$\nbegin\n for i in 1..cnt loop\n begin\n insert into test values (i);\n exception when division_by_zero then\n raise notice 'caught error';\n return;\n end;\n end loop;\nend;\n$$\nlanguage plpgsql;\nselect testfn(100000)\nset logical_decoding_work_mem to '4MB';\nselect count(*) from pg_logical_slot_peek_changes('s', null, null)\n----\n\nTo deal with this problem, I initially thought of the idea (a)\nmentioned in the comment; use a binary heap to maintain the\ntransactions sorted by the amount of changes or the size. But it seems\nnot a good idea to try maintaining all transactions by its size since\nthe size of each transaction could be changed frequently.\n\nThe attached patch uses a different approach that consists of three\nstrategies; (1) maintain the list of transactions whose size is larger\nthan 10% of logical_decoding_work_mem, and preferentially evict a\ntransaction from this list. If the list is empty, all transactions are\nsmall enough, (2) so we evict the oldest top-level transaction from\nrb->toplevel_by_lsn list. Evicting older transactions would help in\nfreeing memory blocks in GenerationContext. Finally, if this is also\nempty, (3) we evict a transaction that size is > 0. Here, we need to\nnote the fact that even if a transaction is evicted the\nReorderBufferTXN entry is not removed from rb->by_txn but its size is\n0. In the worst case where all (quite a few) transactions are smaller\nthan 10% of the memory limit, we might end up checking many\ntransactions to find non-zero size transaction entries to evict. So\nthe patch adds a new list to maintain all transactions that have at\nleast one change in memory.\n\nSummarizing the algorithm I've implemented in the patch,\n\n1. pick a transaction from the list of large transactions (larger than\n10% of memory limit).\n2. pick a transaction from the top-level transaction list in LSN order.\n3. 
pick a transaction from the list of transactions that have at least\none change in memory.\n\nWith the patch, the above test case completed within 3 seconds in my\nenvironment.\n\nAs a side note, the idea (c) mentioned in the comment, evicting\nmultiple transactions at once to free a given portion of the memory,\nwould also help in avoiding back and forth the memory threshold. It's\nalso worth considering.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 12 Dec 2023 12:31:03 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improve eviction algorithm in ReorderBuffer"
},
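A minimal C sketch of the three-step selection described in the message above, for readers following along. The rb->large_txns and rb->txns_with_changes lists (and the large_node/mem_node members used to link transactions into them) are hypothetical fields added purely for illustration; only rb->toplevel_by_lsn, txn->node and txn->size exist in ReorderBuffer today, and this is not the code from the posted patch.

#include "postgres.h"
#include "lib/ilist.h"
#include "replication/reorderbuffer.h"

/*
 * Sketch: pick a transaction to evict, preferring (1) anything already
 * above 10% of the memory limit, then (2) the oldest top-level
 * transaction still holding changes, then (3) any transaction that has
 * at least one change in memory.
 */
static ReorderBufferTXN *
ReorderBufferPickTXNToEvict(ReorderBuffer *rb)
{
    dlist_iter  iter;

    /* (1) scan the short list of large transactions for the largest one */
    if (!dlist_is_empty(&rb->large_txns))          /* hypothetical list */
    {
        ReorderBufferTXN *largest = NULL;

        dlist_foreach(iter, &rb->large_txns)
        {
            ReorderBufferTXN *txn = dlist_container(ReorderBufferTXN,
                                                    large_node, iter.cur);

            if (largest == NULL || txn->size > largest->size)
                largest = txn;
        }
        return largest;
    }

    /* (2) oldest top-level transaction that still holds changes in memory */
    dlist_foreach(iter, &rb->toplevel_by_lsn)
    {
        ReorderBufferTXN *txn = dlist_container(ReorderBufferTXN, node,
                                                iter.cur);

        if (txn->size > 0)
            return txn;
    }

    /* (3) fall back to any transaction with at least one change in memory */
    if (!dlist_is_empty(&rb->txns_with_changes))   /* hypothetical list */
        return dlist_head_element(ReorderBufferTXN, mem_node,
                                  &rb->txns_with_changes);

    return NULL;                /* nothing left to evict */
}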
{
"msg_contents": "On Tue, Dec 12, 2023 at 9:01 AM Masahiko Sawada <[email protected]> wrote:\n>\n> Hi all,\n>\n> As the comment of ReorderBufferLargestTXN() says, it's very slow with\n> many subtransactions:\n>\n> /*\n> * Find the largest transaction (toplevel or subxact) to evict (spill to disk).\n> *\n> * XXX With many subtransactions this might be quite slow, because we'll have\n> * to walk through all of them. There are some options how we could improve\n> * that: (a) maintain some secondary structure with transactions sorted by\n> * amount of changes, (b) not looking for the entirely largest transaction,\n> * but e.g. for transaction using at least some fraction of the memory limit,\n> * and (c) evicting multiple transactions at once, e.g. to free a given portion\n> * of the memory limit (e.g. 50%).\n> */\n>\n> This is because the reorderbuffer has transaction entries for each\n> top-level and sub transaction, and ReorderBufferLargestTXN() walks\n> through all transaction entries to pick the transaction to evict.\n> I've heard the report internally that replication lag became huge when\n> decoding transactions each consisting of 500k sub transactions. Note\n> that ReorderBufferLargetstTXN() is used only in non-streaming mode.\n>\n> Here is a test script for a many subtransactions scenario. In my\n> environment, the logical decoding took over 2min to decode one top\n> transaction having 100k subtransctions.\n>\n> -----\n> create table test (c int);\n> create or replace function testfn (cnt int) returns void as $$\n> begin\n> for i in 1..cnt loop\n> begin\n> insert into test values (i);\n> exception when division_by_zero then\n> raise notice 'caught error';\n> return;\n> end;\n> end loop;\n> end;\n> $$\n> language plpgsql;\n> select testfn(100000)\n> set logical_decoding_work_mem to '4MB';\n> select count(*) from pg_logical_slot_peek_changes('s', null, null)\n> ----\n>\n> To deal with this problem, I initially thought of the idea (a)\n> mentioned in the comment; use a binary heap to maintain the\n> transactions sorted by the amount of changes or the size. But it seems\n> not a good idea to try maintaining all transactions by its size since\n> the size of each transaction could be changed frequently.\n>\n> The attached patch uses a different approach that consists of three\n> strategies; (1) maintain the list of transactions whose size is larger\n> than 10% of logical_decoding_work_mem, and preferentially evict a\n> transaction from this list. If the list is empty, all transactions are\n> small enough, (2) so we evict the oldest top-level transaction from\n> rb->toplevel_by_lsn list. Evicting older transactions would help in\n> freeing memory blocks in GenerationContext. Finally, if this is also\n> empty, (3) we evict a transaction that size is > 0. Here, we need to\n> note the fact that even if a transaction is evicted the\n> ReorderBufferTXN entry is not removed from rb->by_txn but its size is\n> 0. In the worst case where all (quite a few) transactions are smaller\n> than 10% of the memory limit, we might end up checking many\n> transactions to find non-zero size transaction entries to evict. So\n> the patch adds a new list to maintain all transactions that have at\n> least one change in memory.\n>\n> Summarizing the algorithm I've implemented in the patch,\n>\n> 1. pick a transaction from the list of large transactions (larger than\n> 10% of memory limit).\n> 2. pick a transaction from the top-level transaction list in LSN order.\n> 3. 
pick a transaction from the list of transactions that have at least\n> one change in memory.\n>\n> With the patch, the above test case completed within 3 seconds in my\n> environment.\n\nThanks for working on this, I think it would be good to test other\nscenarios as well where this might have some negative impact and see\nwhere we stand. I mean\n1) A scenario where suppose you have one very large transaction that\nis consuming ~40% of the memory and 5-6 comparatively smaller\ntransactions that are just above 10% of the memory limit. And now for\ncoming under the memory limit instead of getting 1 large transaction\nevicted out, we are evicting out multiple times.\n2) Another scenario where all the transactions are under 10% of the\nmemory limit but let's say there are some transactions are consuming\naround 8-9% of the memory limit each but those are not very old\ntransactions whereas there are certain old transactions which are\nfairly small and consuming under 1% of memory limit and there are many\nsuch transactions. So how it would affect if we frequently select\nmany of these transactions to come under memory limit instead of\nselecting a couple of large transactions which are consuming 8-9%?\n\n>\n> As a side note, the idea (c) mentioned in the comment, evicting\n> multiple transactions at once to free a given portion of the memory,\n> would also help in avoiding back and forth the memory threshold. It's\n> also worth considering.\n\nYes, I think it is worth considering.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 12 Dec 2023 10:03:05 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Tue, Dec 12, 2023 at 1:33 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Dec 12, 2023 at 9:01 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > Hi all,\n> >\n> > As the comment of ReorderBufferLargestTXN() says, it's very slow with\n> > many subtransactions:\n> >\n> > /*\n> > * Find the largest transaction (toplevel or subxact) to evict (spill to disk).\n> > *\n> > * XXX With many subtransactions this might be quite slow, because we'll have\n> > * to walk through all of them. There are some options how we could improve\n> > * that: (a) maintain some secondary structure with transactions sorted by\n> > * amount of changes, (b) not looking for the entirely largest transaction,\n> > * but e.g. for transaction using at least some fraction of the memory limit,\n> > * and (c) evicting multiple transactions at once, e.g. to free a given portion\n> > * of the memory limit (e.g. 50%).\n> > */\n> >\n> > This is because the reorderbuffer has transaction entries for each\n> > top-level and sub transaction, and ReorderBufferLargestTXN() walks\n> > through all transaction entries to pick the transaction to evict.\n> > I've heard the report internally that replication lag became huge when\n> > decoding transactions each consisting of 500k sub transactions. Note\n> > that ReorderBufferLargetstTXN() is used only in non-streaming mode.\n> >\n> > Here is a test script for a many subtransactions scenario. In my\n> > environment, the logical decoding took over 2min to decode one top\n> > transaction having 100k subtransctions.\n> >\n> > -----\n> > create table test (c int);\n> > create or replace function testfn (cnt int) returns void as $$\n> > begin\n> > for i in 1..cnt loop\n> > begin\n> > insert into test values (i);\n> > exception when division_by_zero then\n> > raise notice 'caught error';\n> > return;\n> > end;\n> > end loop;\n> > end;\n> > $$\n> > language plpgsql;\n> > select testfn(100000)\n> > set logical_decoding_work_mem to '4MB';\n> > select count(*) from pg_logical_slot_peek_changes('s', null, null)\n> > ----\n> >\n> > To deal with this problem, I initially thought of the idea (a)\n> > mentioned in the comment; use a binary heap to maintain the\n> > transactions sorted by the amount of changes or the size. But it seems\n> > not a good idea to try maintaining all transactions by its size since\n> > the size of each transaction could be changed frequently.\n> >\n> > The attached patch uses a different approach that consists of three\n> > strategies; (1) maintain the list of transactions whose size is larger\n> > than 10% of logical_decoding_work_mem, and preferentially evict a\n> > transaction from this list. If the list is empty, all transactions are\n> > small enough, (2) so we evict the oldest top-level transaction from\n> > rb->toplevel_by_lsn list. Evicting older transactions would help in\n> > freeing memory blocks in GenerationContext. Finally, if this is also\n> > empty, (3) we evict a transaction that size is > 0. Here, we need to\n> > note the fact that even if a transaction is evicted the\n> > ReorderBufferTXN entry is not removed from rb->by_txn but its size is\n> > 0. In the worst case where all (quite a few) transactions are smaller\n> > than 10% of the memory limit, we might end up checking many\n> > transactions to find non-zero size transaction entries to evict. 
So\n> > the patch adds a new list to maintain all transactions that have at\n> > least one change in memory.\n> >\n> > Summarizing the algorithm I've implemented in the patch,\n> >\n> > 1. pick a transaction from the list of large transactions (larger than\n> > 10% of memory limit).\n> > 2. pick a transaction from the top-level transaction list in LSN order.\n> > 3. pick a transaction from the list of transactions that have at least\n> > one change in memory.\n> >\n> > With the patch, the above test case completed within 3 seconds in my\n> > environment.\n>\n> Thanks for working on this, I think it would be good to test other\n> scenarios as well where this might have some negative impact and see\n> where we stand.\n\nAgreed.\n\n> 1) A scenario where suppose you have one very large transaction that\n> is consuming ~40% of the memory and 5-6 comparatively smaller\n> transactions that are just above 10% of the memory limit. And now for\n> coming under the memory limit instead of getting 1 large transaction\n> evicted out, we are evicting out multiple times.\n\nGiven the large transaction list will have up to 10 transactions, I\nthink it's cheap to pick the largest transaction among them. It's O(N)\nbut N won't be large.\n\n> 2) Another scenario where all the transactions are under 10% of the\n> memory limit but let's say there are some transactions are consuming\n> around 8-9% of the memory limit each but those are not very old\n> transactions whereas there are certain old transactions which are\n> fairly small and consuming under 1% of memory limit and there are many\n> such transactions. So how it would affect if we frequently select\n> many of these transactions to come under memory limit instead of\n> selecting a couple of large transactions which are consuming 8-9%?\n\nYeah, probably we can do something for small transactions (i.e. small\nand on-memory transactions). One idea is to pick the largest\ntransaction among them by iterating over all of them. Given that the\nmore transactions are evicted, the less transactions the on-memory\ntransaction list has, unlike the current algorithm, we would still\nwin. Or we could even split it into several sub-lists in order to\nreduce the number of transactions to check. For example, splitting it\ninto two lists: transactions consuming 5% < and 5% >= of the memory\nlimit, and checking the 5% >= list preferably. The cost for\nmaintaining these lists could increase, though.\n\nDo you have any ideas?\n\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Dec 2023 09:30:44 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Wed, Dec 13, 2023 at 6:01 AM Masahiko Sawada <[email protected]> wrote:\n>\n> > Thanks for working on this, I think it would be good to test other\n> > scenarios as well where this might have some negative impact and see\n> > where we stand.\n>\n> Agreed.\n>\n> > 1) A scenario where suppose you have one very large transaction that\n> > is consuming ~40% of the memory and 5-6 comparatively smaller\n> > transactions that are just above 10% of the memory limit. And now for\n> > coming under the memory limit instead of getting 1 large transaction\n> > evicted out, we are evicting out multiple times.\n>\n> Given the large transaction list will have up to 10 transactions, I\n> think it's cheap to pick the largest transaction among them. It's O(N)\n> but N won't be large.\n\nYeah, that makes sense.\n\n> > 2) Another scenario where all the transactions are under 10% of the\n> > memory limit but let's say there are some transactions are consuming\n> > around 8-9% of the memory limit each but those are not very old\n> > transactions whereas there are certain old transactions which are\n> > fairly small and consuming under 1% of memory limit and there are many\n> > such transactions. So how it would affect if we frequently select\n> > many of these transactions to come under memory limit instead of\n> > selecting a couple of large transactions which are consuming 8-9%?\n>\n> Yeah, probably we can do something for small transactions (i.e. small\n> and on-memory transactions). One idea is to pick the largest\n> transaction among them by iterating over all of them. Given that the\n> more transactions are evicted, the less transactions the on-memory\n> transaction list has, unlike the current algorithm, we would still\n> win. Or we could even split it into several sub-lists in order to\n> reduce the number of transactions to check. For example, splitting it\n> into two lists: transactions consuming 5% < and 5% >= of the memory\n> limit, and checking the 5% >= list preferably. The cost for\n> maintaining these lists could increase, though.\n>\n> Do you have any ideas?\n\nYeah something like what you mention might be good, we maintain 3 list\nthat says large, medium, and small transactions. In a large\ntransaction, list suppose we allow transactions that consume more than\n10% so there could be at max 10 transactions so we can do a sequence\nsearch and spill the largest of all. Whereas in the medium list\nsuppose we keep transactions ranging from e.g. 3-10% then it's just\nfine to pick from the head because the size differences between the\nlargest and smallest transaction in this list are not very\nsignificant. And remaining in the small transaction list and from the\nsmall transaction list we can choose to spill multiple transactions at\na time.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 13 Dec 2023 11:13:35 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Wed, Dec 13, 2023 at 6:01 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Dec 12, 2023 at 1:33 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Tue, Dec 12, 2023 at 9:01 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > I've heard the report internally that replication lag became huge when\n> > > decoding transactions each consisting of 500k sub transactions. Note\n> > > that ReorderBufferLargetstTXN() is used only in non-streaming mode.\n> > >\n\nCan't you suggest them to use streaming mode to avoid this problem or\ndo you see some problem with that?\n\n> > > Here is a test script for a many subtransactions scenario. In my\n> > > environment, the logical decoding took over 2min to decode one top\n> > > transaction having 100k subtransctions.\n> > >\n> > > -----\n> > > create table test (c int);\n> > > create or replace function testfn (cnt int) returns void as $$\n> > > begin\n> > > for i in 1..cnt loop\n> > > begin\n> > > insert into test values (i);\n> > > exception when division_by_zero then\n> > > raise notice 'caught error';\n> > > return;\n> > > end;\n> > > end loop;\n> > > end;\n> > > $$\n> > > language plpgsql;\n> > > select testfn(100000)\n> > > set logical_decoding_work_mem to '4MB';\n> > > select count(*) from pg_logical_slot_peek_changes('s', null, null)\n> > > ----\n> > >\n> > > To deal with this problem, I initially thought of the idea (a)\n> > > mentioned in the comment; use a binary heap to maintain the\n> > > transactions sorted by the amount of changes or the size. But it seems\n> > > not a good idea to try maintaining all transactions by its size since\n> > > the size of each transaction could be changed frequently.\n> > >\n> > > The attached patch uses a different approach that consists of three\n> > > strategies; (1) maintain the list of transactions whose size is larger\n> > > than 10% of logical_decoding_work_mem, and preferentially evict a\n> > > transaction from this list.\n> > >\n\nIIUC, you are giving preference to multiple list ideas as compared to\n(a) because you don't need to adjust the list each time the\ntransaction size changes, is that right? If so, I think there is a\ncost to keep that data structure up-to-date but it can help in\nreducing the number of times we need to serialize.\n\n If the list is empty, all transactions are\n> > > small enough, (2) so we evict the oldest top-level transaction from\n> > > rb->toplevel_by_lsn list. Evicting older transactions would help in\n> > > freeing memory blocks in GenerationContext. Finally, if this is also\n> > > empty, (3) we evict a transaction that size is > 0. Here, we need to\n> > > note the fact that even if a transaction is evicted the\n> > > ReorderBufferTXN entry is not removed from rb->by_txn but its size is\n> > > 0. In the worst case where all (quite a few) transactions are smaller\n> > > than 10% of the memory limit, we might end up checking many\n> > > transactions to find non-zero size transaction entries to evict. So\n> > > the patch adds a new list to maintain all transactions that have at\n> > > least one change in memory.\n> > >\n> > > Summarizing the algorithm I've implemented in the patch,\n> > >\n> > > 1. pick a transaction from the list of large transactions (larger than\n> > > 10% of memory limit).\n> > > 2. pick a transaction from the top-level transaction list in LSN order.\n> > > 3. 
pick a transaction from the list of transactions that have at least\n> > > one change in memory.\n> > >\n> > > With the patch, the above test case completed within 3 seconds in my\n> > > environment.\n> >\n> > Thanks for working on this, I think it would be good to test other\n> > scenarios as well where this might have some negative impact and see\n> > where we stand.\n>\n> Agreed.\n>\n> > 1) A scenario where suppose you have one very large transaction that\n> > is consuming ~40% of the memory and 5-6 comparatively smaller\n> > transactions that are just above 10% of the memory limit. And now for\n> > coming under the memory limit instead of getting 1 large transaction\n> > evicted out, we are evicting out multiple times.\n>\n> Given the large transaction list will have up to 10 transactions, I\n> think it's cheap to pick the largest transaction among them. It's O(N)\n> but N won't be large.\n>\n> > 2) Another scenario where all the transactions are under 10% of the\n> > memory limit but let's say there are some transactions are consuming\n> > around 8-9% of the memory limit each but those are not very old\n> > transactions whereas there are certain old transactions which are\n> > fairly small and consuming under 1% of memory limit and there are many\n> > such transactions. So how it would affect if we frequently select\n> > many of these transactions to come under memory limit instead of\n> > selecting a couple of large transactions which are consuming 8-9%?\n>\n> Yeah, probably we can do something for small transactions (i.e. small\n> and on-memory transactions). One idea is to pick the largest\n> transaction among them by iterating over all of them. Given that the\n> more transactions are evicted, the less transactions the on-memory\n> transaction list has, unlike the current algorithm, we would still\n> win. Or we could even split it into several sub-lists in order to\n> reduce the number of transactions to check. For example, splitting it\n> into two lists: transactions consuming 5% < and 5% >= of the memory\n> limit, and checking the 5% >= list preferably.\n>\n\nWhich memory limit are you referring to here? Is it logical_decoding_work_mem?\n\n> The cost for\n> maintaining these lists could increase, though.\n>\n\nYeah, can't we maintain a single list of all xacts that are consuming\nequal to or greater than the memory limit? Considering that the memory\nlimit is logical_decoding_work_mem, then I think just picking one\ntransaction to serialize would be sufficient.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 15 Dec 2023 09:06:50 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 12:37 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Dec 13, 2023 at 6:01 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Dec 12, 2023 at 1:33 PM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > On Tue, Dec 12, 2023 at 9:01 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > I've heard the report internally that replication lag became huge when\n> > > > decoding transactions each consisting of 500k sub transactions. Note\n> > > > that ReorderBufferLargetstTXN() is used only in non-streaming mode.\n> > > >\n>\n> Can't you suggest them to use streaming mode to avoid this problem or\n> do you see some problem with that?\n\nYeah, that's one option. But I can suggest\n\n>\n> > > > Here is a test script for a many subtransactions scenario. In my\n> > > > environment, the logical decoding took over 2min to decode one top\n> > > > transaction having 100k subtransctions.\n> > > >\n> > > > -----\n> > > > create table test (c int);\n> > > > create or replace function testfn (cnt int) returns void as $$\n> > > > begin\n> > > > for i in 1..cnt loop\n> > > > begin\n> > > > insert into test values (i);\n> > > > exception when division_by_zero then\n> > > > raise notice 'caught error';\n> > > > return;\n> > > > end;\n> > > > end loop;\n> > > > end;\n> > > > $$\n> > > > language plpgsql;\n> > > > select testfn(100000)\n> > > > set logical_decoding_work_mem to '4MB';\n> > > > select count(*) from pg_logical_slot_peek_changes('s', null, null)\n> > > > ----\n> > > >\n> > > > To deal with this problem, I initially thought of the idea (a)\n> > > > mentioned in the comment; use a binary heap to maintain the\n> > > > transactions sorted by the amount of changes or the size. But it seems\n> > > > not a good idea to try maintaining all transactions by its size since\n> > > > the size of each transaction could be changed frequently.\n> > > >\n> > > > The attached patch uses a different approach that consists of three\n> > > > strategies; (1) maintain the list of transactions whose size is larger\n> > > > than 10% of logical_decoding_work_mem, and preferentially evict a\n> > > > transaction from this list.\n> > > >\n>\n> IIUC, you are giving preference to multiple list ideas as compared to\n> (a) because you don't need to adjust the list each time the\n> transaction size changes, is that right?\n\nRight.\n\n> If so, I think there is a\n> cost to keep that data structure up-to-date but it can help in\n> reducing the number of times we need to serialize.\n\nYes, there is a trade-off.\n\nWhat I don't want to do is to keep all transactions ordered since it's\ntoo costly. The proposed idea uses multiple lists to keep all\ntransactions roughly ordered. The maintenance cost would be cheap\nsince each list is unordered.\n\nIt might be a good idea to have a threshold to switch how to pick the\nlargest transaction based on the number of transactions in the\nreorderbuffer. If there are many transactions, we can use the proposed\nalgorithm to find a possibly-largest transaction, otherwise use the\ncurrent way.\n\n>\n> If the list is empty, all transactions are\n> > > > small enough, (2) so we evict the oldest top-level transaction from\n> > > > rb->toplevel_by_lsn list. Evicting older transactions would help in\n> > > > freeing memory blocks in GenerationContext. Finally, if this is also\n> > > > empty, (3) we evict a transaction that size is > 0. 
Here, we need to\n> > > > note the fact that even if a transaction is evicted the\n> > > > ReorderBufferTXN entry is not removed from rb->by_txn but its size is\n> > > > 0. In the worst case where all (quite a few) transactions are smaller\n> > > > than 10% of the memory limit, we might end up checking many\n> > > > transactions to find non-zero size transaction entries to evict. So\n> > > > the patch adds a new list to maintain all transactions that have at\n> > > > least one change in memory.\n> > > >\n> > > > Summarizing the algorithm I've implemented in the patch,\n> > > >\n> > > > 1. pick a transaction from the list of large transactions (larger than\n> > > > 10% of memory limit).\n> > > > 2. pick a transaction from the top-level transaction list in LSN order.\n> > > > 3. pick a transaction from the list of transactions that have at least\n> > > > one change in memory.\n> > > >\n> > > > With the patch, the above test case completed within 3 seconds in my\n> > > > environment.\n> > >\n> > > Thanks for working on this, I think it would be good to test other\n> > > scenarios as well where this might have some negative impact and see\n> > > where we stand.\n> >\n> > Agreed.\n> >\n> > > 1) A scenario where suppose you have one very large transaction that\n> > > is consuming ~40% of the memory and 5-6 comparatively smaller\n> > > transactions that are just above 10% of the memory limit. And now for\n> > > coming under the memory limit instead of getting 1 large transaction\n> > > evicted out, we are evicting out multiple times.\n> >\n> > Given the large transaction list will have up to 10 transactions, I\n> > think it's cheap to pick the largest transaction among them. It's O(N)\n> > but N won't be large.\n> >\n> > > 2) Another scenario where all the transactions are under 10% of the\n> > > memory limit but let's say there are some transactions are consuming\n> > > around 8-9% of the memory limit each but those are not very old\n> > > transactions whereas there are certain old transactions which are\n> > > fairly small and consuming under 1% of memory limit and there are many\n> > > such transactions. So how it would affect if we frequently select\n> > > many of these transactions to come under memory limit instead of\n> > > selecting a couple of large transactions which are consuming 8-9%?\n> >\n> > Yeah, probably we can do something for small transactions (i.e. small\n> > and on-memory transactions). One idea is to pick the largest\n> > transaction among them by iterating over all of them. Given that the\n> > more transactions are evicted, the less transactions the on-memory\n> > transaction list has, unlike the current algorithm, we would still\n> > win. Or we could even split it into several sub-lists in order to\n> > reduce the number of transactions to check. For example, splitting it\n> > into two lists: transactions consuming 5% < and 5% >= of the memory\n> > limit, and checking the 5% >= list preferably.\n> >\n>\n> Which memory limit are you referring to here? Is it logical_decoding_work_mem?\n\nlogical_decoding_work_mem.\n\n>\n> > The cost for\n> > maintaining these lists could increase, though.\n> >\n>\n> Yeah, can't we maintain a single list of all xacts that are consuming\n> equal to or greater than the memory limit? 
Considering that the memory\n> limit is logical_decoding_work_mem, then I think just picking one\n> transaction to serialize would be sufficient.\n\nIIUC we serialize a transaction when the sum of all transactions'\nmemory usage in the reorderbuffer exceeds logical_decoding_work_mem.\nIn what cases are multiple transactions consuming equal to or greater\nthan the logical_decoding_work_mem?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 15 Dec 2023 14:59:16 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On 2023-Dec-12, Masahiko Sawada wrote:\n\n> To deal with this problem, I initially thought of the idea (a)\n> mentioned in the comment; use a binary heap to maintain the\n> transactions sorted by the amount of changes or the size. But it seems\n> not a good idea to try maintaining all transactions by its size since\n> the size of each transaction could be changed frequently.\n\nHmm, maybe you can just use binaryheap_add_unordered and just let the\nsizes change, and do binaryheap_build() at the point where the eviction\nis needed.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No necesitamos banderas\n No reconocemos fronteras\" (Jorge González)\n\n\n",
"msg_date": "Fri, 15 Dec 2023 11:10:24 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
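For illustration, here is one way to read the suggestion above in C, using the existing Datum-based binaryheap API: keep nothing ordered while transaction sizes change, and only heapify at the point where an eviction is needed. The txns_with_changes list and the mem_node member are the same assumed bookkeeping as in the earlier sketch, and this is illustrative code, not part of any posted patch.

#include "postgres.h"
#include "lib/binaryheap.h"
#include "lib/ilist.h"
#include "replication/reorderbuffer.h"

/* larger transactions sort first, giving a max-heap on txn->size */
static int
txn_size_compare(Datum a, Datum b, void *arg)
{
    ReorderBufferTXN *ta = (ReorderBufferTXN *) DatumGetPointer(a);
    ReorderBufferTXN *tb = (ReorderBufferTXN *) DatumGetPointer(b);

    if (ta->size == tb->size)
        return 0;
    return (ta->size > tb->size) ? 1 : -1;
}

static ReorderBufferTXN *
ReorderBufferLargestTXNLazyHeap(ReorderBuffer *rb, int ntxns)
{
    binaryheap *heap = binaryheap_allocate(ntxns, txn_size_compare, NULL);
    ReorderBufferTXN *largest;
    dlist_iter  iter;

    /* add entries unordered; size changes don't matter until we build */
    dlist_foreach(iter, &rb->txns_with_changes)    /* hypothetical list */
    {
        ReorderBufferTXN *txn = dlist_container(ReorderBufferTXN,
                                                mem_node, iter.cur);

        binaryheap_add_unordered(heap, PointerGetDatum(txn));
    }

    binaryheap_build(heap);     /* O(n), deferred to eviction time */
    largest = binaryheap_empty(heap) ? NULL :
        (ReorderBufferTXN *) DatumGetPointer(binaryheap_first(heap));
    binaryheap_free(heap);

    return largest;
}

Rebuilding a throwaway heap like this sidesteps the question raised in the next message (how to remove a finished transaction from a long-lived heap), at the cost of an O(n) build per eviction.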
{
"msg_contents": "On Fri, Dec 15, 2023 at 7:10 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Dec-12, Masahiko Sawada wrote:\n>\n> > To deal with this problem, I initially thought of the idea (a)\n> > mentioned in the comment; use a binary heap to maintain the\n> > transactions sorted by the amount of changes or the size. But it seems\n> > not a good idea to try maintaining all transactions by its size since\n> > the size of each transaction could be changed frequently.\n>\n> Hmm, maybe you can just use binaryheap_add_unordered and just let the\n> sizes change, and do binaryheap_build() at the point where the eviction\n> is needed.\n\nI assume you mean to add ReorderBufferTXN entries to the binaryheap\nand then build it by comparing their sizes (i.e. txn->size). But\nReorderBufferTXN entries are removed and deallocated once the\ntransaction finished. How can we efficiently remove these entries from\nbinaryheap? I guess it would be O(n) to find the entry among the\nunordered entries, where n is the number of transactions in the\nreorderbuffer.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 15 Dec 2023 21:11:41 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 2:59 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Dec 15, 2023 at 12:37 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Dec 13, 2023 at 6:01 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Tue, Dec 12, 2023 at 1:33 PM Dilip Kumar <[email protected]> wrote:\n> > > >\n> > > > On Tue, Dec 12, 2023 at 9:01 AM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > I've heard the report internally that replication lag became huge when\n> > > > > decoding transactions each consisting of 500k sub transactions. Note\n> > > > > that ReorderBufferLargetstTXN() is used only in non-streaming mode.\n> > > > >\n> >\n> > Can't you suggest them to use streaming mode to avoid this problem or\n> > do you see some problem with that?\n>\n> Yeah, that's one option. But I can suggest\n>\n\nSorry, it was still in the middle of editing.\n\nYeah, that's one option. But since there is a trade-off I cannot\nsuggest using streaming mode for every user. Also, the logical\nreplication client (e.g. third party tool receiving logical change\nset) might not support the streaming mode yet.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 15 Dec 2023 21:18:55 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Dec 15, 2023, at 9:11 AM, Masahiko Sawada wrote:\n> \n> I assume you mean to add ReorderBufferTXN entries to the binaryheap\n> and then build it by comparing their sizes (i.e. txn->size). But\n> ReorderBufferTXN entries are removed and deallocated once the\n> transaction finished. How can we efficiently remove these entries from\n> binaryheap? I guess it would be O(n) to find the entry among the\n> unordered entries, where n is the number of transactions in the\n> reorderbuffer.\n\nO(log n) for both functions: binaryheap_remove_first() and\nbinaryheap_remove_node(). I didn't read your patch but do you really need to\nfree entries one by one? If not, binaryheap_free().\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Fri, Dec 15, 2023, at 9:11 AM, Masahiko Sawada wrote:I assume you mean to add ReorderBufferTXN entries to the binaryheapand then build it by comparing their sizes (i.e. txn->size). ButReorderBufferTXN entries are removed and deallocated once thetransaction finished. How can we efficiently remove these entries frombinaryheap? I guess it would be O(n) to find the entry among theunordered entries, where n is the number of transactions in thereorderbuffer.O(log n) for both functions: binaryheap_remove_first() andbinaryheap_remove_node(). I didn't read your patch but do you really need to free entries one by one? If not, binaryheap_free().--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 15 Dec 2023 13:36:27 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Sat, Dec 16, 2023 at 1:36 AM Euler Taveira <[email protected]> wrote:\n>\n> On Fri, Dec 15, 2023, at 9:11 AM, Masahiko Sawada wrote:\n>\n>\n> I assume you mean to add ReorderBufferTXN entries to the binaryheap\n> and then build it by comparing their sizes (i.e. txn->size). But\n> ReorderBufferTXN entries are removed and deallocated once the\n> transaction finished. How can we efficiently remove these entries from\n> binaryheap? I guess it would be O(n) to find the entry among the\n> unordered entries, where n is the number of transactions in the\n> reorderbuffer.\n>\n>\n> O(log n) for both functions: binaryheap_remove_first() and\n> binaryheap_remove_node().\n\nRight. The binaryheap_binaryheap_remove_first() removes the topmost\nentry in O(log n), but the ReorderBufferTXN being removed is not\nnecessarily the topmost entry, since we remove the entry when the\ntransaction completes (committed or aborted). The\nbinaryheap_remove_node() removes the entry at the given Nth in O(log\nn), but I'm not sure how we can know the indexes of each entry. I\nthink we can remember the index of newly added entry after calling\nbinaryheap_add_unordered() but once we call binaryheap_build() the\nindex is out-of-date. So I think that in the worst case we would need\nto check all entries in order to remove an arbitrary entry in\nbinaryheap. It's O(n). I might be missing something though.\n\n> I didn't read your patch but do you really need to\n> free entries one by one? If not, binaryheap_free().\n\nThe patch doesn't touch on how to free entries. ReorderBufferTXN\nentries are freed one by one after each of which completes (committed\nor aborted).\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 16 Dec 2023 21:13:54 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 11:29 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Dec 15, 2023 at 12:37 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Dec 13, 2023 at 6:01 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> >\n> > IIUC, you are giving preference to multiple list ideas as compared to\n> > (a) because you don't need to adjust the list each time the\n> > transaction size changes, is that right?\n>\n> Right.\n>\n> > If so, I think there is a\n> > cost to keep that data structure up-to-date but it can help in\n> > reducing the number of times we need to serialize.\n>\n> Yes, there is a trade-off.\n>\n> What I don't want to do is to keep all transactions ordered since it's\n> too costly. The proposed idea uses multiple lists to keep all\n> transactions roughly ordered. The maintenance cost would be cheap\n> since each list is unordered.\n>\n> It might be a good idea to have a threshold to switch how to pick the\n> largest transaction based on the number of transactions in the\n> reorderbuffer. If there are many transactions, we can use the proposed\n> algorithm to find a possibly-largest transaction, otherwise use the\n> current way.\n>\n\nYeah, that makes sense.\n\n> >\n> > >\n> > > > 1) A scenario where suppose you have one very large transaction that\n> > > > is consuming ~40% of the memory and 5-6 comparatively smaller\n> > > > transactions that are just above 10% of the memory limit. And now for\n> > > > coming under the memory limit instead of getting 1 large transaction\n> > > > evicted out, we are evicting out multiple times.\n> > >\n> > > Given the large transaction list will have up to 10 transactions, I\n> > > think it's cheap to pick the largest transaction among them. It's O(N)\n> > > but N won't be large.\n> > >\n> > > > 2) Another scenario where all the transactions are under 10% of the\n> > > > memory limit but let's say there are some transactions are consuming\n> > > > around 8-9% of the memory limit each but those are not very old\n> > > > transactions whereas there are certain old transactions which are\n> > > > fairly small and consuming under 1% of memory limit and there are many\n> > > > such transactions. So how it would affect if we frequently select\n> > > > many of these transactions to come under memory limit instead of\n> > > > selecting a couple of large transactions which are consuming 8-9%?\n> > >\n> > > Yeah, probably we can do something for small transactions (i.e. small\n> > > and on-memory transactions). One idea is to pick the largest\n> > > transaction among them by iterating over all of them. Given that the\n> > > more transactions are evicted, the less transactions the on-memory\n> > > transaction list has, unlike the current algorithm, we would still\n> > > win. Or we could even split it into several sub-lists in order to\n> > > reduce the number of transactions to check. For example, splitting it\n> > > into two lists: transactions consuming 5% < and 5% >= of the memory\n> > > limit, and checking the 5% >= list preferably.\n> > >\n> >\n> > Which memory limit are you referring to here? Is it logical_decoding_work_mem?\n>\n> logical_decoding_work_mem.\n>\n> >\n> > > The cost for\n> > > maintaining these lists could increase, though.\n> > >\n> >\n> > Yeah, can't we maintain a single list of all xacts that are consuming\n> > equal to or greater than the memory limit? 
Considering that the memory\n> > limit is logical_decoding_work_mem, then I think just picking one\n> > transaction to serialize would be sufficient.\n>\n> IIUC we serialize a transaction when the sum of all transactions'\n> memory usage in the reorderbuffer exceeds logical_decoding_work_mem.\n> In what cases are multiple transactions consuming equal to or greater\n> than the logical_decoding_work_mem?\n>\n\nThe individual transactions shouldn't cross\n'logical_decoding_work_mem'. I got a bit confused by your proposal to\nmaintain the lists: \"...splitting it into two lists: transactions\nconsuming 5% < and 5% >= of the memory limit, and checking the 5% >=\nlist preferably.\". In the previous sentence, what did you mean by\ntransactions consuming 5% >= of the memory limit? I got the impression\nthat you are saying to maintain them in a separate transaction list\nwhich doesn't seems to be the case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 17 Dec 2023 08:10:12 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Sun, Dec 17, 2023 at 11:40 AM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Dec 15, 2023 at 11:29 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Fri, Dec 15, 2023 at 12:37 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Dec 13, 2023 at 6:01 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > >\n> > > IIUC, you are giving preference to multiple list ideas as compared to\n> > > (a) because you don't need to adjust the list each time the\n> > > transaction size changes, is that right?\n> >\n> > Right.\n> >\n> > > If so, I think there is a\n> > > cost to keep that data structure up-to-date but it can help in\n> > > reducing the number of times we need to serialize.\n> >\n> > Yes, there is a trade-off.\n> >\n> > What I don't want to do is to keep all transactions ordered since it's\n> > too costly. The proposed idea uses multiple lists to keep all\n> > transactions roughly ordered. The maintenance cost would be cheap\n> > since each list is unordered.\n> >\n> > It might be a good idea to have a threshold to switch how to pick the\n> > largest transaction based on the number of transactions in the\n> > reorderbuffer. If there are many transactions, we can use the proposed\n> > algorithm to find a possibly-largest transaction, otherwise use the\n> > current way.\n> >\n>\n> Yeah, that makes sense.\n>\n> > >\n> > > >\n> > > > > 1) A scenario where suppose you have one very large transaction that\n> > > > > is consuming ~40% of the memory and 5-6 comparatively smaller\n> > > > > transactions that are just above 10% of the memory limit. And now for\n> > > > > coming under the memory limit instead of getting 1 large transaction\n> > > > > evicted out, we are evicting out multiple times.\n> > > >\n> > > > Given the large transaction list will have up to 10 transactions, I\n> > > > think it's cheap to pick the largest transaction among them. It's O(N)\n> > > > but N won't be large.\n> > > >\n> > > > > 2) Another scenario where all the transactions are under 10% of the\n> > > > > memory limit but let's say there are some transactions are consuming\n> > > > > around 8-9% of the memory limit each but those are not very old\n> > > > > transactions whereas there are certain old transactions which are\n> > > > > fairly small and consuming under 1% of memory limit and there are many\n> > > > > such transactions. So how it would affect if we frequently select\n> > > > > many of these transactions to come under memory limit instead of\n> > > > > selecting a couple of large transactions which are consuming 8-9%?\n> > > >\n> > > > Yeah, probably we can do something for small transactions (i.e. small\n> > > > and on-memory transactions). One idea is to pick the largest\n> > > > transaction among them by iterating over all of them. Given that the\n> > > > more transactions are evicted, the less transactions the on-memory\n> > > > transaction list has, unlike the current algorithm, we would still\n> > > > win. Or we could even split it into several sub-lists in order to\n> > > > reduce the number of transactions to check. For example, splitting it\n> > > > into two lists: transactions consuming 5% < and 5% >= of the memory\n> > > > limit, and checking the 5% >= list preferably.\n> > > >\n> > >\n> > > Which memory limit are you referring to here? 
Is it logical_decoding_work_mem?\n> >\n> > logical_decoding_work_mem.\n> >\n> > >\n> > > > The cost for\n> > > > maintaining these lists could increase, though.\n> > > >\n> > >\n> > > Yeah, can't we maintain a single list of all xacts that are consuming\n> > > equal to or greater than the memory limit? Considering that the memory\n> > > limit is logical_decoding_work_mem, then I think just picking one\n> > > transaction to serialize would be sufficient.\n> >\n> > IIUC we serialize a transaction when the sum of all transactions'\n> > memory usage in the reorderbuffer exceeds logical_decoding_work_mem.\n> > In what cases are multiple transactions consuming equal to or greater\n> > than the logical_decoding_work_mem?\n> >\n>\n> The individual transactions shouldn't cross\n> 'logical_decoding_work_mem'. I got a bit confused by your proposal to\n> maintain the lists: \"...splitting it into two lists: transactions\n> consuming 5% < and 5% >= of the memory limit, and checking the 5% >=\n> list preferably.\". In the previous sentence, what did you mean by\n> transactions consuming 5% >= of the memory limit? I got the impression\n> that you are saying to maintain them in a separate transaction list\n> which doesn't seems to be the case.\n\nI wanted to mean that there are three lists in total: the first one\nmaintain the transactions consuming more than 10% of\nlogical_decoding_work_mem, the second one maintains other transactions\nconsuming more than or equal to 5% of logical_decoding_work_mem, and\nthe third one maintains other transactions consuming more than 0 and\nless than 5% of logical_decoding_work_mem.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Dec 2023 12:00:22 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 8:31 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Sun, Dec 17, 2023 at 11:40 AM Amit Kapila <[email protected]> wrote:\n> >\n> >\n> > The individual transactions shouldn't cross\n> > 'logical_decoding_work_mem'. I got a bit confused by your proposal to\n> > maintain the lists: \"...splitting it into two lists: transactions\n> > consuming 5% < and 5% >= of the memory limit, and checking the 5% >=\n> > list preferably.\". In the previous sentence, what did you mean by\n> > transactions consuming 5% >= of the memory limit? I got the impression\n> > that you are saying to maintain them in a separate transaction list\n> > which doesn't seems to be the case.\n>\n> I wanted to mean that there are three lists in total: the first one\n> maintain the transactions consuming more than 10% of\n> logical_decoding_work_mem,\n>\n\nHow can we have multiple transactions in the list consuming more than\n10% of logical_decoding_work_mem? Shouldn't we perform serialization\nbefore any xact reaches logical_decoding_work_mem?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 19 Dec 2023 16:32:01 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 8:02 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Dec 19, 2023 at 8:31 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Sun, Dec 17, 2023 at 11:40 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > >\n> > > The individual transactions shouldn't cross\n> > > 'logical_decoding_work_mem'. I got a bit confused by your proposal to\n> > > maintain the lists: \"...splitting it into two lists: transactions\n> > > consuming 5% < and 5% >= of the memory limit, and checking the 5% >=\n> > > list preferably.\". In the previous sentence, what did you mean by\n> > > transactions consuming 5% >= of the memory limit? I got the impression\n> > > that you are saying to maintain them in a separate transaction list\n> > > which doesn't seems to be the case.\n> >\n> > I wanted to mean that there are three lists in total: the first one\n> > maintain the transactions consuming more than 10% of\n> > logical_decoding_work_mem,\n> >\n>\n> How can we have multiple transactions in the list consuming more than\n> 10% of logical_decoding_work_mem? Shouldn't we perform serialization\n> before any xact reaches logical_decoding_work_mem?\n\nWell, suppose logical_decoding_work_mem is set to 64MB, transactions\nconsuming more than 6.4MB are added to the list. So for example, it's\npossible that the list has three transactions each of which are\nconsuming 10MB while the total memory usage in the reorderbuffer is\nstill 30MB (less than logical_decoding_work_mem).\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Dec 2023 10:19:02 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 6:49 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Dec 19, 2023 at 8:02 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Dec 19, 2023 at 8:31 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Sun, Dec 17, 2023 at 11:40 AM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > >\n> > > > The individual transactions shouldn't cross\n> > > > 'logical_decoding_work_mem'. I got a bit confused by your proposal to\n> > > > maintain the lists: \"...splitting it into two lists: transactions\n> > > > consuming 5% < and 5% >= of the memory limit, and checking the 5% >=\n> > > > list preferably.\". In the previous sentence, what did you mean by\n> > > > transactions consuming 5% >= of the memory limit? I got the impression\n> > > > that you are saying to maintain them in a separate transaction list\n> > > > which doesn't seems to be the case.\n> > >\n> > > I wanted to mean that there are three lists in total: the first one\n> > > maintain the transactions consuming more than 10% of\n> > > logical_decoding_work_mem,\n> > >\n> >\n> > How can we have multiple transactions in the list consuming more than\n> > 10% of logical_decoding_work_mem? Shouldn't we perform serialization\n> > before any xact reaches logical_decoding_work_mem?\n>\n> Well, suppose logical_decoding_work_mem is set to 64MB, transactions\n> consuming more than 6.4MB are added to the list. So for example, it's\n> possible that the list has three transactions each of which are\n> consuming 10MB while the total memory usage in the reorderbuffer is\n> still 30MB (less than logical_decoding_work_mem).\n>\n\nThanks for the clarification. I misunderstood the list to have\ntransactions greater than 70.4 MB (64 + 6.4) in your example. But one\nthing to note is that maintaining these lists by default can also have\nsome overhead unless the list of open transactions crosses a certain\nthreshold.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Dec 2023 08:41:12 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nlike there was some CFbot test failures last time it was run [2].\nPlease have a look and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4699/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4699\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 15:49:13 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 12:11 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Dec 20, 2023 at 6:49 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Dec 19, 2023 at 8:02 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Tue, Dec 19, 2023 at 8:31 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Sun, Dec 17, 2023 at 11:40 AM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > >\n> > > > > The individual transactions shouldn't cross\n> > > > > 'logical_decoding_work_mem'. I got a bit confused by your proposal to\n> > > > > maintain the lists: \"...splitting it into two lists: transactions\n> > > > > consuming 5% < and 5% >= of the memory limit, and checking the 5% >=\n> > > > > list preferably.\". In the previous sentence, what did you mean by\n> > > > > transactions consuming 5% >= of the memory limit? I got the impression\n> > > > > that you are saying to maintain them in a separate transaction list\n> > > > > which doesn't seems to be the case.\n> > > >\n> > > > I wanted to mean that there are three lists in total: the first one\n> > > > maintain the transactions consuming more than 10% of\n> > > > logical_decoding_work_mem,\n> > > >\n> > >\n> > > How can we have multiple transactions in the list consuming more than\n> > > 10% of logical_decoding_work_mem? Shouldn't we perform serialization\n> > > before any xact reaches logical_decoding_work_mem?\n> >\n> > Well, suppose logical_decoding_work_mem is set to 64MB, transactions\n> > consuming more than 6.4MB are added to the list. So for example, it's\n> > possible that the list has three transactions each of which are\n> > consuming 10MB while the total memory usage in the reorderbuffer is\n> > still 30MB (less than logical_decoding_work_mem).\n> >\n>\n> Thanks for the clarification. I misunderstood the list to have\n> transactions greater than 70.4 MB (64 + 6.4) in your example. But one\n> thing to note is that maintaining these lists by default can also have\n> some overhead unless the list of open transactions crosses a certain\n> threshold.\n>\n\nOn further analysis, I realized that the approach discussed here might\nnot be the way to go. The idea of dividing transactions into several\nsubgroups is to divide a large number of entries into multiple\nsub-groups so we can reduce the complexity to search for the\nparticular entry. Since we assume that there are no big differences in\nentries' sizes within a sub-group, we can pick the entry to evict in\nO(1). However, what we really need to avoid here is that we end up\nincreasing the number of times to evict entries because serializing an\nentry to the disk is more costly than searching an entry on memory in\ngeneral.\n\nI think that it's no problem in a large-entries subgroup but when it\ncomes to the smallest-entries subgroup, like for entries consuming\nless than 5% of the limit, it could end up evicting many entries. For\nexample, there would be a huge difference between serializing 1 entry\nconsuming 5% of the memory limit and serializing 5000 entries\nconsuming 0.001% of the memory limit. Even if we can select 5000\nentries quickly, I think the latter would be slower in total. The more\nsubgroups we create, the more the algorithm gets complex and the\noverheads could cause. So I think we need to search for the largest\nentry in order to minimize the number of evictions anyway.\n\nLooking for data structures and algorithms, I think binaryheap with\nsome improvements could be promising. 
I mentioned before why we cannot\nuse the current binaryheap[1]. The missing pieces are efficient ways\nto remove the arbitrary entry and to update the arbitrary entry's key.\nThe current binaryheap provides binaryheap_remove_node(), which is\nO(log n), but it requires the entry's position in the binaryheap. We\ncan know the entry's position just after binaryheap_add_unordered()\nbut it might be changed after heapify. Searching the node's position\nis O(n). So the improvement idea is to add a hash table to the\nbinaryheap so that it can track the positions for each entry so that\nwe can remove the arbitrary entry in O(log n) and also update the\narbitrary entry's key in O(log n). This is known as the indexed\npriority queue. I've attached the patch for that (0001 and 0002).\n\nThat way, in terms of reorderbuffer, we can update and remove the\ntransaction's memory usage in O(log n) (in worst case and O(1) in\naverage) and then pick the largest transaction in O(1). Since we might\nneed to call ReorderBufferSerializeTXN() even in non-streaming case,\nwe need to maintain the binaryheap anyway. I've attached the patch for\nthat (0003).\n\nHere are test script for many sub-transactions case:\n\ncreate table test (c int);\ncreate or replace function testfn (cnt int) returns void as $$\nbegin\n for i in 1..cnt loop\n begin\n insert into test values (i);\n exception when division_by_zero then\n raise notice 'caught error';\n return;\n end;\n end loop;\nend;\n$$\nlanguage plpgsql;\nselect pg_create_logical_replication_slot('s', 'test_decoding');\nselect testfn(50000);\nset logical_decoding_work_mem to '4MB';\nselect count(*) from pg_logical_slot_peek_changes('s', null, null)\";\n\nand here are results:\n\n* HEAD: 16877.281 ms\n* HEAD w/ patches (0001 and 0002): 655.154 ms\n\nThere is huge improvement in a many-subtransactions case.\n\nFinally, we need to note that memory counter updates could happen\nfrequently as we update it for each change. So even though we update\nthe binaryheap in O(log n), it could be a huge overhead if it happens\nquite often. One idea is to batch the memory counter updates where\navailable. I've attached the patch for that (0004). I'll benchmark\noverheads for normal cases.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 26 Jan 2024 17:36:36 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
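The "indexed priority queue" described in the message above can be sketched outside the tree in a few dozen lines. The standalone C program below is an illustration only, not the patch's code: the patch keeps the key-to-position mapping in a hash table over the heap's Datum values, while this sketch uses a plain array keyed by a small integer id so that it compiles on its own. The point it demonstrates is the same one the proposal relies on: because every swap also updates the position index, an arbitrary entry's key can be changed in O(log n) and the largest entry is always available at the root in O(1).

/*
 * indexed_heap_sketch.c -- standalone illustration, not PostgreSQL code.
 * A max-heap that also remembers each entry's position, so an arbitrary
 * entry's key can be updated in O(log n) and the largest entry read in
 * O(1).  The patch tracks positions in a hash table over the heap's Datum
 * values; this sketch uses a plain array keyed by a small integer id.
 */
#include <stdio.h>

#define MAX_ENTRIES 16

typedef struct
{
    int     id;
    int     size;               /* the key, e.g. a transaction's memory use */
} Entry;

static Entry    heap[MAX_ENTRIES];
static int      nheap = 0;
static int      pos_of[MAX_ENTRIES];    /* id -> index in heap[] */

/* Swap two heap slots and keep the position index in sync. */
static void
swap_nodes(int a, int b)
{
    Entry   tmp = heap[a];

    heap[a] = heap[b];
    heap[b] = tmp;
    pos_of[heap[a].id] = a;
    pos_of[heap[b].id] = b;
}

static void
sift_up(int i)
{
    while (i > 0 && heap[(i - 1) / 2].size < heap[i].size)
    {
        swap_nodes(i, (i - 1) / 2);
        i = (i - 1) / 2;
    }
}

static void
sift_down(int i)
{
    for (;;)
    {
        int     l = 2 * i + 1;
        int     r = 2 * i + 2;
        int     largest = i;

        if (l < nheap && heap[l].size > heap[largest].size)
            largest = l;
        if (r < nheap && heap[r].size > heap[largest].size)
            largest = r;
        if (largest == i)
            break;
        swap_nodes(i, largest);
        i = largest;
    }
}

static void
heap_add(int id, int size)
{
    int     i = nheap++;

    heap[i].id = id;
    heap[i].size = size;
    pos_of[id] = i;
    sift_up(i);
}

/* Update an arbitrary entry's key in O(log n) via the position index. */
static void
heap_update(int id, int newsize)
{
    int     i = pos_of[id];
    int     oldsize = heap[i].size;

    heap[i].size = newsize;
    if (newsize > oldsize)
        sift_up(i);
    else
        sift_down(i);
}

int
main(void)
{
    heap_add(0, 100);
    heap_add(1, 300);
    heap_add(2, 200);

    heap_update(0, 500);        /* entry 0 grows and becomes the largest */
    printf("largest: id=%d size=%d\n", heap[0].id, heap[0].size);
    return 0;
}

Compiled and run, it prints largest: id=0 size=500: the entry whose key was raised is located and re-sifted directly, without scanning the whole heap or rebuilding it.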
{
"msg_contents": "On Fri, Jan 26, 2024 at 5:36 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Dec 20, 2023 at 12:11 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Dec 20, 2023 at 6:49 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Tue, Dec 19, 2023 at 8:02 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Tue, Dec 19, 2023 at 8:31 AM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > On Sun, Dec 17, 2023 at 11:40 AM Amit Kapila <[email protected]> wrote:\n> > > > > >\n> > > > > >\n> > > > > > The individual transactions shouldn't cross\n> > > > > > 'logical_decoding_work_mem'. I got a bit confused by your proposal to\n> > > > > > maintain the lists: \"...splitting it into two lists: transactions\n> > > > > > consuming 5% < and 5% >= of the memory limit, and checking the 5% >=\n> > > > > > list preferably.\". In the previous sentence, what did you mean by\n> > > > > > transactions consuming 5% >= of the memory limit? I got the impression\n> > > > > > that you are saying to maintain them in a separate transaction list\n> > > > > > which doesn't seems to be the case.\n> > > > >\n> > > > > I wanted to mean that there are three lists in total: the first one\n> > > > > maintain the transactions consuming more than 10% of\n> > > > > logical_decoding_work_mem,\n> > > > >\n> > > >\n> > > > How can we have multiple transactions in the list consuming more than\n> > > > 10% of logical_decoding_work_mem? Shouldn't we perform serialization\n> > > > before any xact reaches logical_decoding_work_mem?\n> > >\n> > > Well, suppose logical_decoding_work_mem is set to 64MB, transactions\n> > > consuming more than 6.4MB are added to the list. So for example, it's\n> > > possible that the list has three transactions each of which are\n> > > consuming 10MB while the total memory usage in the reorderbuffer is\n> > > still 30MB (less than logical_decoding_work_mem).\n> > >\n> >\n> > Thanks for the clarification. I misunderstood the list to have\n> > transactions greater than 70.4 MB (64 + 6.4) in your example. But one\n> > thing to note is that maintaining these lists by default can also have\n> > some overhead unless the list of open transactions crosses a certain\n> > threshold.\n> >\n>\n> On further analysis, I realized that the approach discussed here might\n> not be the way to go. The idea of dividing transactions into several\n> subgroups is to divide a large number of entries into multiple\n> sub-groups so we can reduce the complexity to search for the\n> particular entry. Since we assume that there are no big differences in\n> entries' sizes within a sub-group, we can pick the entry to evict in\n> O(1). However, what we really need to avoid here is that we end up\n> increasing the number of times to evict entries because serializing an\n> entry to the disk is more costly than searching an entry on memory in\n> general.\n>\n> I think that it's no problem in a large-entries subgroup but when it\n> comes to the smallest-entries subgroup, like for entries consuming\n> less than 5% of the limit, it could end up evicting many entries. For\n> example, there would be a huge difference between serializing 1 entry\n> consuming 5% of the memory limit and serializing 5000 entries\n> consuming 0.001% of the memory limit. Even if we can select 5000\n> entries quickly, I think the latter would be slower in total. The more\n> subgroups we create, the more the algorithm gets complex and the\n> overheads could cause. 
So I think we need to search for the largest\n> entry in order to minimize the number of evictions anyway.\n>\n> Looking for data structures and algorithms, I think binaryheap with\n> some improvements could be promising. I mentioned before why we cannot\n> use the current binaryheap[1]. The missing pieces are efficient ways\n> to remove the arbitrary entry and to update the arbitrary entry's key.\n> The current binaryheap provides binaryheap_remove_node(), which is\n> O(log n), but it requires the entry's position in the binaryheap. We\n> can know the entry's position just after binaryheap_add_unordered()\n> but it might be changed after heapify. Searching the node's position\n> is O(n). So the improvement idea is to add a hash table to the\n> binaryheap so that it can track the positions for each entry so that\n> we can remove the arbitrary entry in O(log n) and also update the\n> arbitrary entry's key in O(log n). This is known as the indexed\n> priority queue. I've attached the patch for that (0001 and 0002).\n>\n> That way, in terms of reorderbuffer, we can update and remove the\n> transaction's memory usage in O(log n) (in worst case and O(1) in\n> average) and then pick the largest transaction in O(1). Since we might\n> need to call ReorderBufferSerializeTXN() even in non-streaming case,\n> we need to maintain the binaryheap anyway.\n\nSince if the number of transactions being decoded is small, updating\nmax-heap for each memory counter update could lead to some\nregressions, I've measured it with the case where updating memory\ncounter happens frequently:\n\nsetup script:\ncreate table test (c int);\nselect pg_create_logical_replication_slot('s', 'test_decoding');\ninsert into test select generate_series(1, 8000000);\n\nbenchmark script:\nset work_mem to '3GB';\nset logical_decoding_work_mem to '5GB';\nselect count(*) from pg_logical_slot_peek_changes('s', null, null);\n\nHere are results (the median of five executions):\n\n* HEAD\n5274.765 ms\n\n* HEAD + 0001-0003 patch\n5532.203 ms\n\nThere were approximately 5% performance regressions.\n\nAn improvement idea is that we use two strategies for updating\nmax-heap depending on the number of transactions. That is, if the\nnumber of transactions being decoded is small, we add a transaction to\nmax-heap by binaryheap_add_unordered(), which is O(1), and heapify it\njust before picking the largest transactions, which is O(n). That way,\nwe can minimize the overhead of updating the memory counter. Once the\nnumber of transactions being decoded exceeds the threshold, say 1024,\nwe use another strategy. We call binaryheap_update_up/down() when\nupdating the memory counter to preserve heap property, which is O(log\nn), and pick the largest transaction in O(1). This strategy minimizes\nthe cost of picking the largest transactions instead of paying some\ncosts to update the memory counters.\n\nI've experimented with this idea and run the same tests:\n\n* HEAD + new patches (0001 - 0003)\n5277.524 ms\n\nThe number looks good. I've attached these patches. Feedback is very welcome.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 30 Jan 2024 17:06:34 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
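The two strategies above differ mainly in when the heap property is restored. The short C sketch below models only that control flow; the heap operations themselves are represented by comments, the threshold of 1024 is taken from the message, and heapifying once at the moment of switching strategies is an assumption of the sketch rather than something the patch is known to do.

/*
 * strategy_switch_sketch.c -- control-flow sketch of the two
 * memory-tracking strategies described in the message above.  Only the
 * state switching is modelled; the heap work is left as comments.
 */
#include <stdio.h>
#include <stdbool.h>

#define THRESHOLD 1024

typedef enum
{
    MEM_TRACK_LAZY,             /* few txns: O(1) updates, heapify lazily */
    MEM_TRACK_EAGER             /* many txns: keep heap property on update */
} MemTrackState;

static MemTrackState state = MEM_TRACK_LAZY;
static int      ntxns = 0;
static bool     heap_is_ordered = false;

/* Called whenever a transaction's memory counter changes. */
static void
update_memory_counter(void)
{
    if (state == MEM_TRACK_LAZY)
    {
        /* O(1): nothing is sifted; just note the heap is unordered. */
        heap_is_ordered = false;

        if (ntxns >= THRESHOLD)
        {
            /* Many transactions now: heapify once (O(n), assumption of this
             * sketch) and maintain the heap property on every later update. */
            heap_is_ordered = true;
            state = MEM_TRACK_EAGER;
        }
    }
    else
    {
        /* O(log n): sift the updated entry up or down
         * (binaryheap_update_up()/_down() in the proposed patch). */
    }
}

/* Called when the memory limit is exceeded and a victim must be picked. */
static void
pick_largest_txn(void)
{
    if (!heap_is_ordered)
    {
        /* O(n): heapify (binaryheap_build() in the real binaryheap). */
        heap_is_ordered = true;
    }
    /* O(1): the largest transaction now sits at the heap's root. */
}

int
main(void)
{
    /* Simulate the transaction count growing as changes are decoded. */
    for (ntxns = 1; ntxns <= 2000; ntxns++)
        update_memory_counter();

    pick_largest_txn();
    printf("strategy after 2000 transactions: %s\n",
           state == MEM_TRACK_EAGER ? "eager (O(log n) updates)"
                                    : "lazy (O(1) updates)");
    return 0;
}

The trade-off is visible in the comments: the lazy strategy keeps per-update cost at O(1) at the price of an O(n) heapify before each eviction, while the eager strategy pays O(log n) per update so that eviction can pick the largest transaction immediately.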
{
"msg_contents": "Dear Sawada-san,\r\n\r\nI have started to read your patches. Here are my initial comments.\r\nAt least, all subscription tests were passed on my env.\r\n\r\nA comment for 0001:\r\n\r\n01.\r\n```\r\n+static void\r\n+bh_enlarge_node_array(binaryheap *heap)\r\n+{\r\n+ if (heap->bh_size < heap->bh_space)\r\n+ return;\r\n+\r\n+ heap->bh_space *= 2;\r\n+ heap->bh_nodes = repalloc(heap->bh_nodes,\r\n+ sizeof(bh_node_type) * heap->bh_space);\r\n+}\r\n```\r\n\r\nI'm not sure it is OK to use repalloc() for enlarging bh_nodes. This data\r\nstructure public one and arbitrary codes and extensions can directly refer\r\nbh_nodes. But if the array is repalloc()'d, the pointer would be updated so that\r\ntheir referring would be a dangling pointer.\r\nI think the internal of the structure should be a private one in this case.\r\n\r\nComments for 0002:\r\n\r\n02.\r\n```\r\n+#include \"utils/palloc.h\"\r\n```\r\n\r\nIs it really needed? I'm not sure who referrs it.\r\n\r\n03.\r\n```\r\ntypedef struct bh_nodeidx_entry\r\n{\r\n\tbh_node_type\tkey;\r\n\tchar\t\t\tstatus;\r\n\tint\t\t\t\tidx;\r\n} bh_nodeidx_entry;\r\n```\r\n\r\nSorry if it is a stupid question. Can you tell me how \"status\" is used?\r\nNone of binaryheap and reorderbuffer components refer it. \r\n\r\n04.\r\n```\r\n extern binaryheap *binaryheap_allocate(int capacity,\r\n binaryheap_comparator compare,\r\n- void *arg);\r\n+ bool indexed, void *arg);\r\n```\r\n\r\nI felt pre-existing API should not be changed. How about adding\r\nbinaryheap_allocate_extended() or something which can specify the `bool indexed`?\r\nbinaryheap_allocate() sets heap->bh_indexed to false.\r\n\r\n05.\r\n```\r\n+extern void binaryheap_update_up(binaryheap *heap, bh_node_type d);\r\n+extern void binaryheap_update_down(binaryheap *heap, bh_node_type d);\r\n```\r\n\r\nIIUC, callers must consider whether the node should be shift up/down and use\r\nappropriate function, right? I felt it may not be user-friendly.\r\n\r\nComments for 0003:\r\n\r\n06.\r\n```\r\n This commit changes the eviction algorithm in ReorderBuffer to use\r\n max-heap with transaction size,a nd use two strategies depending on\r\n the number of transactions being decoded.\r\n```\r\n\r\ns/a nd/ and/\r\n\r\n07.\r\n```\r\n It could be too expensive to pudate max-heap while preserving the heap\r\n property each time the transaction's memory counter is updated, as it\r\n could happen very frquently. So when the number of transactions being\r\n decoded is small, we add the transactions to max-heap but don't\r\n preserve the heap property, which is O(1). We heapify the max-heap\r\n just before picking the largest transaction, which is O(n). This\r\n strategy minimizes the overheads of updating the transaction's memory\r\n counter.\r\n```\r\n\r\ns/pudate/update/\r\n\r\n08.\r\nIIUC, if more than 1024 transactions are running but they have small amount of\r\nchanges, the performance may be degraded, right? Do you have a result in sucha\r\na case?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Wed, 31 Jan 2024 05:18:27 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Improve eviction algorithm in ReorderBuffer"
},
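Comment 04 above suggests keeping the existing binaryheap_allocate() signature and adding an _extended() variant for the new behaviour. The toy program below only illustrates that general wrapper pattern with made-up toyheap_* names; it is not PostgreSQL code and does not claim to match either the patch or the suggestion exactly.

/*
 * compat_wrapper_sketch.c -- toy illustration of the API-compatibility idea
 * in comment 04: the old signature stays as a thin wrapper over a new,
 * more general allocator.  All names here are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct toyheap
{
    int     capacity;
    bool    indexed;            /* maintain a key -> position index? */
} toyheap;

/* New, more general allocator. */
static toyheap *
toyheap_allocate_extended(int capacity, bool indexed)
{
    toyheap    *h = malloc(sizeof(toyheap));

    h->capacity = capacity;
    h->indexed = indexed;
    return h;
}

/* Existing signature kept as a thin wrapper, so current callers (including
 * external extensions) need no change and get the old behaviour. */
static toyheap *
toyheap_allocate(int capacity)
{
    return toyheap_allocate_extended(capacity, false);
}

int
main(void)
{
    toyheap    *plain = toyheap_allocate(16);
    toyheap    *indexed = toyheap_allocate_extended(16, true);

    printf("plain: indexed=%d, extended: indexed=%d\n",
           plain->indexed, indexed->indexed);
    free(plain);
    free(indexed);
    return 0;
}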
{
"msg_contents": "On Tue, 30 Jan 2024 at 13:37, Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Jan 26, 2024 at 5:36 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Dec 20, 2023 at 12:11 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Dec 20, 2023 at 6:49 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Tue, Dec 19, 2023 at 8:02 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > On Tue, Dec 19, 2023 at 8:31 AM Masahiko Sawada <[email protected]> wrote:\n> > > > > >\n> > > > > > On Sun, Dec 17, 2023 at 11:40 AM Amit Kapila <[email protected]> wrote:\n> > > > > > >\n> > > > > > >\n> > > > > > > The individual transactions shouldn't cross\n> > > > > > > 'logical_decoding_work_mem'. I got a bit confused by your proposal to\n> > > > > > > maintain the lists: \"...splitting it into two lists: transactions\n> > > > > > > consuming 5% < and 5% >= of the memory limit, and checking the 5% >=\n> > > > > > > list preferably.\". In the previous sentence, what did you mean by\n> > > > > > > transactions consuming 5% >= of the memory limit? I got the impression\n> > > > > > > that you are saying to maintain them in a separate transaction list\n> > > > > > > which doesn't seems to be the case.\n> > > > > >\n> > > > > > I wanted to mean that there are three lists in total: the first one\n> > > > > > maintain the transactions consuming more than 10% of\n> > > > > > logical_decoding_work_mem,\n> > > > > >\n> > > > >\n> > > > > How can we have multiple transactions in the list consuming more than\n> > > > > 10% of logical_decoding_work_mem? Shouldn't we perform serialization\n> > > > > before any xact reaches logical_decoding_work_mem?\n> > > >\n> > > > Well, suppose logical_decoding_work_mem is set to 64MB, transactions\n> > > > consuming more than 6.4MB are added to the list. So for example, it's\n> > > > possible that the list has three transactions each of which are\n> > > > consuming 10MB while the total memory usage in the reorderbuffer is\n> > > > still 30MB (less than logical_decoding_work_mem).\n> > > >\n> > >\n> > > Thanks for the clarification. I misunderstood the list to have\n> > > transactions greater than 70.4 MB (64 + 6.4) in your example. But one\n> > > thing to note is that maintaining these lists by default can also have\n> > > some overhead unless the list of open transactions crosses a certain\n> > > threshold.\n> > >\n> >\n> > On further analysis, I realized that the approach discussed here might\n> > not be the way to go. The idea of dividing transactions into several\n> > subgroups is to divide a large number of entries into multiple\n> > sub-groups so we can reduce the complexity to search for the\n> > particular entry. Since we assume that there are no big differences in\n> > entries' sizes within a sub-group, we can pick the entry to evict in\n> > O(1). However, what we really need to avoid here is that we end up\n> > increasing the number of times to evict entries because serializing an\n> > entry to the disk is more costly than searching an entry on memory in\n> > general.\n> >\n> > I think that it's no problem in a large-entries subgroup but when it\n> > comes to the smallest-entries subgroup, like for entries consuming\n> > less than 5% of the limit, it could end up evicting many entries. For\n> > example, there would be a huge difference between serializing 1 entry\n> > consuming 5% of the memory limit and serializing 5000 entries\n> > consuming 0.001% of the memory limit. 
Even if we can select 5000\n> > entries quickly, I think the latter would be slower in total. The more\n> > subgroups we create, the more the algorithm gets complex and the\n> > overheads could cause. So I think we need to search for the largest\n> > entry in order to minimize the number of evictions anyway.\n> >\n> > Looking for data structures and algorithms, I think binaryheap with\n> > some improvements could be promising. I mentioned before why we cannot\n> > use the current binaryheap[1]. The missing pieces are efficient ways\n> > to remove the arbitrary entry and to update the arbitrary entry's key.\n> > The current binaryheap provides binaryheap_remove_node(), which is\n> > O(log n), but it requires the entry's position in the binaryheap. We\n> > can know the entry's position just after binaryheap_add_unordered()\n> > but it might be changed after heapify. Searching the node's position\n> > is O(n). So the improvement idea is to add a hash table to the\n> > binaryheap so that it can track the positions for each entry so that\n> > we can remove the arbitrary entry in O(log n) and also update the\n> > arbitrary entry's key in O(log n). This is known as the indexed\n> > priority queue. I've attached the patch for that (0001 and 0002).\n> >\n> > That way, in terms of reorderbuffer, we can update and remove the\n> > transaction's memory usage in O(log n) (in worst case and O(1) in\n> > average) and then pick the largest transaction in O(1). Since we might\n> > need to call ReorderBufferSerializeTXN() even in non-streaming case,\n> > we need to maintain the binaryheap anyway.\n>\n> Since if the number of transactions being decoded is small, updating\n> max-heap for each memory counter update could lead to some\n> regressions, I've measured it with the case where updating memory\n> counter happens frequently:\n>\n> setup script:\n> create table test (c int);\n> select pg_create_logical_replication_slot('s', 'test_decoding');\n> insert into test select generate_series(1, 8000000);\n>\n> benchmark script:\n> set work_mem to '3GB';\n> set logical_decoding_work_mem to '5GB';\n> select count(*) from pg_logical_slot_peek_changes('s', null, null);\n>\n> Here are results (the median of five executions):\n>\n> * HEAD\n> 5274.765 ms\n>\n> * HEAD + 0001-0003 patch\n> 5532.203 ms\n>\n> There were approximately 5% performance regressions.\n>\n> An improvement idea is that we use two strategies for updating\n> max-heap depending on the number of transactions. That is, if the\n> number of transactions being decoded is small, we add a transaction to\n> max-heap by binaryheap_add_unordered(), which is O(1), and heapify it\n> just before picking the largest transactions, which is O(n). That way,\n> we can minimize the overhead of updating the memory counter. Once the\n> number of transactions being decoded exceeds the threshold, say 1024,\n> we use another strategy. We call binaryheap_update_up/down() when\n> updating the memory counter to preserve heap property, which is O(log\n> n), and pick the largest transaction in O(1). This strategy minimizes\n> the cost of picking the largest transactions instead of paying some\n> costs to update the memory counters.\n>\n> I've experimented with this idea and run the same tests:\n>\n> * HEAD + new patches (0001 - 0003)\n> 5277.524 ms\n>\n> The number looks good. I've attached these patches. 
Feedback is very welcome.\n\nFew comments:\n1) Here we are changing memtrack_state to\nREORDER_BUFFER_MEM_TRACK_NORMAL immediately once the size is less than\nREORDE_BUFFER_MEM_TRACK_THRESHOLD. In this scenario we will be\nbuilding the heap many times if there are transactions getting added\nand removed. How about we wait for txn_heap to become less than 95% of\nREORDE_BUFFER_MEM_TRACK_THRESHOLD to avoid building the heap many\ntimes in this scenario.\n+ {\n+ Assert(rb->memtrack_state ==\nREORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP);\n+\n+ /*\n+ * If the number of transactions gets lowered than the\nthreshold, switch\n+ * to the state where we heapify the max-heap right\nbefore picking the\n+ * largest transaction while doing nothing for memory\ncounter update.\n+ */\n+ if (binaryheap_size(rb->txn_heap) <\nREORDE_BUFFER_MEM_TRACK_THRESHOLD)\n+ rb->memtrack_state = REORDER_BUFFER_MEM_TRACK_NORMAL;\n }\n\n2) I felt init variable is not needed, we can directly check txn->size\ninstead like it is done in the else case:\n+ bool init = (txn->size == 0);\n+\n txn->size += sz;\n rb->size += sz;\n\n /* Update the total size in the top transaction. */\n toptxn->total_size += sz;\n+\n+ /* Update the transaction in the max-heap */\n+ if (init)\n+ {\n+ /* Add the transaction to the max-heap */\n+ if (rb->memtrack_state ==\nREORDER_BUFFER_MEM_TRACK_NORMAL)\n+ binaryheap_add_unordered(rb->txn_heap,\nPointerGetDatum(txn));\n+ else\n+ binaryheap_add(rb->txn_heap,\nPointerGetDatum(txn));\n+ }\n+ else if (rb->memtrack_state ==\nREORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP)\n+ {\n+ /*\n+ * If we're maintaining max-heap even while\nupdating the memory counter,\n+ * we reflect the updates to the max-heap.\n+ */\n+ binaryheap_update_up(rb->txn_heap,\nPointerGetDatum(txn));\n+ }\n\n3) we can add some comments for this:\n+typedef enum ReorderBufferMemTrackState\n+{\n+ REORDER_BUFFER_MEM_TRACK_NORMAL,\n+ REORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP,\n+} ReorderBufferMemTrackState;\n+\n\n4) This should be added to typedefs.list:\n+typedef enum ReorderBufferMemTrackState\n+{\n+ REORDER_BUFFER_MEM_TRACK_NORMAL,\n+ REORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP,\n+} ReorderBufferMemTrackState;\n+\n\n5)Few typos:\n5.a) enlareable should be enlargeable\n[PATCH v2 1/4] Make binaryheap enlareable.\n\n5.b) subtranasctions should be subtransactions:\nOn the other hand, when the number of transactions being decoded is\nfairly large, such as when a transaction has many subtranasctions,\n\n5.c) evaludate should be evaluate:\nXXX: updating the transaction's memory counter and the max-heap is now\nO(log n), so we need to evaludate it. If there are some regression, we\n\n5.d) pudate should be update:\nIt could be too expensive to pudate max-heap while preserving the heap\nproperty each time the transaction's memory counter is updated, as it\n\n5.e) frquently should be frequently:\ncould happen very frquently. So when the number of transactions being\ndecoded is small, we add the transactions to max-heap but don't\n\n6) This should be added to typedefs.list:\n+/*\n+ * Struct for A hash table element to store the node's index in the bh_nodes\n+ * array.\n+ */\n+typedef struct bh_nodeidx_entry\n+{\n+ bh_node_type key;\n+ char status;\n+ int idx;\n+} bh_nodeidx_entry;\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 31 Jan 2024 14:01:39 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
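Comment 1 above is essentially a request for hysteresis: switch to the heap-maintaining strategy at the threshold, but only switch back once the count has fallen some margin below it, so the strategy does not flip back and forth while the number of transactions hovers around the boundary. The toy program below demonstrates just that switching rule, using the suggested 95% watermark and illustrative names; it is not the patch's code.

/*
 * hysteresis_sketch.c -- toy illustration of the low-watermark idea in
 * comment 1.  Names and numbers are illustrative only.
 */
#include <stdio.h>
#include <stdbool.h>

#define THRESHOLD       1024
#define LOW_WATERMARK   ((THRESHOLD * 95) / 100)

static bool maintain_heap = false;      /* heap-maintaining strategy active? */
static int  nswitches = 0;

static void
track(int ntxns)
{
    if (!maintain_heap && ntxns >= THRESHOLD)
    {
        maintain_heap = true;           /* start maintaining the max-heap */
        nswitches++;
    }
    else if (maintain_heap && ntxns < LOW_WATERMARK)
    {
        maintain_heap = false;          /* heap will be rebuilt lazily later */
        nswitches++;
    }
}

int
main(void)
{
    /* Oscillate just around the threshold. */
    for (int i = 0; i < 100; i++)
        track((i % 2) ? THRESHOLD : THRESHOLD - 1);

    printf("strategy switches: %d\n", nswitches);
    return 0;
}

Without the low watermark (i.e. switching back as soon as the count drops below the threshold) the loop above would flip the strategy on nearly every iteration; with it, the switch happens once.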
{
"msg_contents": "On Fri, Jan 26, 2024 at 2:07 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Dec 20, 2023 at 12:11 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Dec 20, 2023 at 6:49 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Tue, Dec 19, 2023 at 8:02 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Tue, Dec 19, 2023 at 8:31 AM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > On Sun, Dec 17, 2023 at 11:40 AM Amit Kapila <[email protected]> wrote:\n> > > > > >\n> > > > > >\n> > > > > > The individual transactions shouldn't cross\n> > > > > > 'logical_decoding_work_mem'. I got a bit confused by your proposal to\n> > > > > > maintain the lists: \"...splitting it into two lists: transactions\n> > > > > > consuming 5% < and 5% >= of the memory limit, and checking the 5% >=\n> > > > > > list preferably.\". In the previous sentence, what did you mean by\n> > > > > > transactions consuming 5% >= of the memory limit? I got the impression\n> > > > > > that you are saying to maintain them in a separate transaction list\n> > > > > > which doesn't seems to be the case.\n> > > > >\n> > > > > I wanted to mean that there are three lists in total: the first one\n> > > > > maintain the transactions consuming more than 10% of\n> > > > > logical_decoding_work_mem,\n> > > > >\n> > > >\n> > > > How can we have multiple transactions in the list consuming more than\n> > > > 10% of logical_decoding_work_mem? Shouldn't we perform serialization\n> > > > before any xact reaches logical_decoding_work_mem?\n> > >\n> > > Well, suppose logical_decoding_work_mem is set to 64MB, transactions\n> > > consuming more than 6.4MB are added to the list. So for example, it's\n> > > possible that the list has three transactions each of which are\n> > > consuming 10MB while the total memory usage in the reorderbuffer is\n> > > still 30MB (less than logical_decoding_work_mem).\n> > >\n> >\n> > Thanks for the clarification. I misunderstood the list to have\n> > transactions greater than 70.4 MB (64 + 6.4) in your example. But one\n> > thing to note is that maintaining these lists by default can also have\n> > some overhead unless the list of open transactions crosses a certain\n> > threshold.\n> >\n>\n> On further analysis, I realized that the approach discussed here might\n> not be the way to go. The idea of dividing transactions into several\n> subgroups is to divide a large number of entries into multiple\n> sub-groups so we can reduce the complexity to search for the\n> particular entry. Since we assume that there are no big differences in\n> entries' sizes within a sub-group, we can pick the entry to evict in\n> O(1). However, what we really need to avoid here is that we end up\n> increasing the number of times to evict entries because serializing an\n> entry to the disk is more costly than searching an entry on memory in\n> general.\n>\n> I think that it's no problem in a large-entries subgroup but when it\n> comes to the smallest-entries subgroup, like for entries consuming\n> less than 5% of the limit, it could end up evicting many entries. For\n> example, there would be a huge difference between serializing 1 entry\n> consuming 5% of the memory limit and serializing 5000 entries\n> consuming 0.001% of the memory limit. Even if we can select 5000\n> entries quickly, I think the latter would be slower in total. The more\n> subgroups we create, the more the algorithm gets complex and the\n> overheads could cause. 
So I think we need to search for the largest\n> entry in order to minimize the number of evictions anyway.\n>\n> Looking for data structures and algorithms, I think binaryheap with\n> some improvements could be promising. I mentioned before why we cannot\n> use the current binaryheap[1]. The missing pieces are efficient ways\n> to remove the arbitrary entry and to update the arbitrary entry's key.\n> The current binaryheap provides binaryheap_remove_node(), which is\n> O(log n), but it requires the entry's position in the binaryheap. We\n> can know the entry's position just after binaryheap_add_unordered()\n> but it might be changed after heapify. Searching the node's position\n> is O(n). So the improvement idea is to add a hash table to the\n> binaryheap so that it can track the positions for each entry so that\n> we can remove the arbitrary entry in O(log n) and also update the\n> arbitrary entry's key in O(log n). This is known as the indexed\n> priority queue. I've attached the patch for that (0001 and 0002).\n>\n> That way, in terms of reorderbuffer, we can update and remove the\n> transaction's memory usage in O(log n) (in worst case and O(1) in\n> average) and then pick the largest transaction in O(1). Since we might\n> need to call ReorderBufferSerializeTXN() even in non-streaming case,\n> we need to maintain the binaryheap anyway. I've attached the patch for\n> that (0003).\n>\n> Here are test script for many sub-transactions case:\n>\n> create table test (c int);\n> create or replace function testfn (cnt int) returns void as $$\n> begin\n> for i in 1..cnt loop\n> begin\n> insert into test values (i);\n> exception when division_by_zero then\n> raise notice 'caught error';\n> return;\n> end;\n> end loop;\n> end;\n> $$\n> language plpgsql;\n> select pg_create_logical_replication_slot('s', 'test_decoding');\n> select testfn(50000);\n> set logical_decoding_work_mem to '4MB';\n> select count(*) from pg_logical_slot_peek_changes('s', null, null)\";\n>\n> and here are results:\n>\n> * HEAD: 16877.281 ms\n> * HEAD w/ patches (0001 and 0002): 655.154 ms\n>\n> There is huge improvement in a many-subtransactions case.\n\nI have run the same test and found around 12.53x improvement(the\nmedian of five executions):\nHEAD | HEAD+ v2-0001+ v2-0002 + v2-0003 patch\n29197ms | 2329ms\n\nI had also run the regression test that you had shared at [1], there\nwas a very very slight dip in this case around it takes around 0.31x\nmore time:\nHEAD | HEAD + v2-0001+ v2-0002 + v2-0003 patch\n4459ms | 4473ms\n\nThe machine has Total Memory of 755.536 GB, 120 CPUs and RHEL 7\nOperating System. Also find the detailed info of the performance\nmachine attached.\n\n[1] - https://www.postgresql.org/message-id/CAD21AoB-7mPpKnLmBNfzfavG8AiTwEgAdVMuv%3DjzmAp9ex7eyQ%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.",
"msg_date": "Fri, 2 Feb 2024 10:29:01 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Wed, Jan 31, 2024 at 2:18 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Sawada-san,\n>\n> I have started to read your patches. Here are my initial comments.\n> At least, all subscription tests were passed on my env.\n\nThank you for the review comments!\n\n>\n> A comment for 0001:\n>\n> 01.\n> ```\n> +static void\n> +bh_enlarge_node_array(binaryheap *heap)\n> +{\n> + if (heap->bh_size < heap->bh_space)\n> + return;\n> +\n> + heap->bh_space *= 2;\n> + heap->bh_nodes = repalloc(heap->bh_nodes,\n> + sizeof(bh_node_type) * heap->bh_space);\n> +}\n> ```\n>\n> I'm not sure it is OK to use repalloc() for enlarging bh_nodes. This data\n> structure public one and arbitrary codes and extensions can directly refer\n> bh_nodes. But if the array is repalloc()'d, the pointer would be updated so that\n> their referring would be a dangling pointer.\n\nHmm I'm not sure this is the case that we need to really worry about,\nand cannot come up with a good use case where extensions refer to\nbh_nodes directly rather than binaryheap. In PostgreSQL codes, many\nNodes already have pointers and are exposed.\n\n> I think the internal of the structure should be a private one in this case.\n>\n> Comments for 0002:\n>\n> 02.\n> ```\n> +#include \"utils/palloc.h\"\n> ```\n>\n> Is it really needed? I'm not sure who referrs it.\n\nSeems not, will remove.\n\n>\n> 03.\n> ```\n> typedef struct bh_nodeidx_entry\n> {\n> bh_node_type key;\n> char status;\n> int idx;\n> } bh_nodeidx_entry;\n> ```\n>\n> Sorry if it is a stupid question. Can you tell me how \"status\" is used?\n> None of binaryheap and reorderbuffer components refer it.\n\nIt's required by simplehash.h\n\n>\n> 04.\n> ```\n> extern binaryheap *binaryheap_allocate(int capacity,\n> binaryheap_comparator compare,\n> - void *arg);\n> + bool indexed, void *arg);\n> ```\n>\n> I felt pre-existing API should not be changed. How about adding\n> binaryheap_allocate_extended() or something which can specify the `bool indexed`?\n> binaryheap_allocate() sets heap->bh_indexed to false.\n\nI'm really not sure it's worth inventing a\nbinaryheap_allocate_extended() function just for preserving API\ncompatibility. I think it's generally a good idea to have\nxxx_extended() function to increase readability and usability, for\nexample, for the case where the same (kind of default) arguments are\npassed in most cases and the function is called from many places.\nHowever, we have a handful binaryheap_allocate() callers, and I\nbelieve that it would not hurt the existing callers.\n\n>\n> 05.\n> ```\n> +extern void binaryheap_update_up(binaryheap *heap, bh_node_type d);\n> +extern void binaryheap_update_down(binaryheap *heap, bh_node_type d);\n> ```\n>\n> IIUC, callers must consider whether the node should be shift up/down and use\n> appropriate function, right? I felt it may not be user-friendly.\n\nRight, I couldn't come up with a better interface.\n\nAnother idea I've considered was that the caller provides a callback\nfunction where it can compare the old and new keys. 
For example, in\nreorderbuffer case, we call like:\n\nbinaryheap_update(rb->txn_heap, PointerGetDatum(txn),\nReorderBufferTXNUpdateCompare, (void *) &old_size);\n\nThen in ReorderBufferTXNUpdateCompare(),\nReorderBufferTXN *txn = (ReorderBufferTXN *) a;Size old_size = *(Size *) b;\n(compare txn->size to \"b\" ...)\n\nHowever it seems complicated...\n\n>\n> Comments for 0003:\n>\n> 06.\n> ```\n> This commit changes the eviction algorithm in ReorderBuffer to use\n> max-heap with transaction size,a nd use two strategies depending on\n> the number of transactions being decoded.\n> ```\n>\n> s/a nd/ and/\n>\n> 07.\n> ```\n> It could be too expensive to pudate max-heap while preserving the heap\n> property each time the transaction's memory counter is updated, as it\n> could happen very frquently. So when the number of transactions being\n> decoded is small, we add the transactions to max-heap but don't\n> preserve the heap property, which is O(1). We heapify the max-heap\n> just before picking the largest transaction, which is O(n). This\n> strategy minimizes the overheads of updating the transaction's memory\n> counter.\n> ```\n>\n> s/pudate/update/\n\nWill fix them.\n\n>\n> 08.\n> IIUC, if more than 1024 transactions are running but they have small amount of\n> changes, the performance may be degraded, right? Do you have a result in sucha\n> a case?\n\nI've run a benchmark test that I shared before[1]. Here are results of\ndecoding a transaction that has 1M subtransaction each of which has 1\nINSERT:\n\nHEAD:\n1810.192 ms\n\nHEAD w/ patch:\n2001.094 ms\n\nI set a large enough value to logical_decoding_work_mem not to evict\nany transactions. I can see about about 10% performance regression in\nthis case.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoAfKTgrBrLq96GcTv9d6k97zaQcDM-rxfKEt4GSe0qnaQ%40mail.gmail.com\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 2 Feb 2024 15:42:28 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
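The interface question from comment 05, revisited above, is that the caller must know whether the updated entry should sift up or down. Besides the comparator-callback idea mentioned in the reply, another option is a thin wrapper that takes the old key and picks the direction itself. The standalone toy below only shows that dispatch idea with plain int keys and stub sift functions; the real binaryheap works on Datum values, and whether such a wrapper is worth passing the old key around is exactly what is being discussed.

/*
 * update_dispatch_sketch.c -- standalone toy, not the patch's API.  A single
 * entry point compares the old and new key and chooses the sift direction,
 * hiding the update_up/update_down split from the caller.
 */
#include <stdio.h>

/* Stand-ins for binaryheap_update_up()/_down(); they only report which
 * direction would be taken in a max-heap. */
static void
heap_update_up(int key)
{
    printf("sift up:   key grew to %d\n", key);
}

static void
heap_update_down(int key)
{
    printf("sift down: key shrank to %d\n", key);
}

/* Single entry point: decide the direction from old vs. new key. */
static void
heap_update_key(int oldkey, int newkey)
{
    if (newkey > oldkey)
        heap_update_up(newkey);
    else if (newkey < oldkey)
        heap_update_down(newkey);
    /* equal keys: nothing to do */
}

int
main(void)
{
    heap_update_key(100, 250);  /* e.g. a transaction's size increased */
    heap_update_key(250, 50);   /* e.g. its changes were serialized away */
    return 0;
}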
{
"msg_contents": "Hi,\n\nOn Wed, Jan 31, 2024 at 5:32 PM vignesh C <[email protected]> wrote:\n>\n> On Tue, 30 Jan 2024 at 13:37, Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Fri, Jan 26, 2024 at 5:36 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Wed, Dec 20, 2023 at 12:11 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Wed, Dec 20, 2023 at 6:49 AM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > On Tue, Dec 19, 2023 at 8:02 PM Amit Kapila <[email protected]> wrote:\n> > > > > >\n> > > > > > On Tue, Dec 19, 2023 at 8:31 AM Masahiko Sawada <[email protected]> wrote:\n> > > > > > >\n> > > > > > > On Sun, Dec 17, 2023 at 11:40 AM Amit Kapila <[email protected]> wrote:\n> > > > > > > >\n> > > > > > > >\n> > > > > > > > The individual transactions shouldn't cross\n> > > > > > > > 'logical_decoding_work_mem'. I got a bit confused by your proposal to\n> > > > > > > > maintain the lists: \"...splitting it into two lists: transactions\n> > > > > > > > consuming 5% < and 5% >= of the memory limit, and checking the 5% >=\n> > > > > > > > list preferably.\". In the previous sentence, what did you mean by\n> > > > > > > > transactions consuming 5% >= of the memory limit? I got the impression\n> > > > > > > > that you are saying to maintain them in a separate transaction list\n> > > > > > > > which doesn't seems to be the case.\n> > > > > > >\n> > > > > > > I wanted to mean that there are three lists in total: the first one\n> > > > > > > maintain the transactions consuming more than 10% of\n> > > > > > > logical_decoding_work_mem,\n> > > > > > >\n> > > > > >\n> > > > > > How can we have multiple transactions in the list consuming more than\n> > > > > > 10% of logical_decoding_work_mem? Shouldn't we perform serialization\n> > > > > > before any xact reaches logical_decoding_work_mem?\n> > > > >\n> > > > > Well, suppose logical_decoding_work_mem is set to 64MB, transactions\n> > > > > consuming more than 6.4MB are added to the list. So for example, it's\n> > > > > possible that the list has three transactions each of which are\n> > > > > consuming 10MB while the total memory usage in the reorderbuffer is\n> > > > > still 30MB (less than logical_decoding_work_mem).\n> > > > >\n> > > >\n> > > > Thanks for the clarification. I misunderstood the list to have\n> > > > transactions greater than 70.4 MB (64 + 6.4) in your example. But one\n> > > > thing to note is that maintaining these lists by default can also have\n> > > > some overhead unless the list of open transactions crosses a certain\n> > > > threshold.\n> > > >\n> > >\n> > > On further analysis, I realized that the approach discussed here might\n> > > not be the way to go. The idea of dividing transactions into several\n> > > subgroups is to divide a large number of entries into multiple\n> > > sub-groups so we can reduce the complexity to search for the\n> > > particular entry. Since we assume that there are no big differences in\n> > > entries' sizes within a sub-group, we can pick the entry to evict in\n> > > O(1). However, what we really need to avoid here is that we end up\n> > > increasing the number of times to evict entries because serializing an\n> > > entry to the disk is more costly than searching an entry on memory in\n> > > general.\n> > >\n> > > I think that it's no problem in a large-entries subgroup but when it\n> > > comes to the smallest-entries subgroup, like for entries consuming\n> > > less than 5% of the limit, it could end up evicting many entries. 
For\n> > > example, there would be a huge difference between serializing 1 entry\n> > > consuming 5% of the memory limit and serializing 5000 entries\n> > > consuming 0.001% of the memory limit. Even if we can select 5000\n> > > entries quickly, I think the latter would be slower in total. The more\n> > > subgroups we create, the more the algorithm gets complex and the\n> > > overheads could cause. So I think we need to search for the largest\n> > > entry in order to minimize the number of evictions anyway.\n> > >\n> > > Looking for data structures and algorithms, I think binaryheap with\n> > > some improvements could be promising. I mentioned before why we cannot\n> > > use the current binaryheap[1]. The missing pieces are efficient ways\n> > > to remove the arbitrary entry and to update the arbitrary entry's key.\n> > > The current binaryheap provides binaryheap_remove_node(), which is\n> > > O(log n), but it requires the entry's position in the binaryheap. We\n> > > can know the entry's position just after binaryheap_add_unordered()\n> > > but it might be changed after heapify. Searching the node's position\n> > > is O(n). So the improvement idea is to add a hash table to the\n> > > binaryheap so that it can track the positions for each entry so that\n> > > we can remove the arbitrary entry in O(log n) and also update the\n> > > arbitrary entry's key in O(log n). This is known as the indexed\n> > > priority queue. I've attached the patch for that (0001 and 0002).\n> > >\n> > > That way, in terms of reorderbuffer, we can update and remove the\n> > > transaction's memory usage in O(log n) (in worst case and O(1) in\n> > > average) and then pick the largest transaction in O(1). Since we might\n> > > need to call ReorderBufferSerializeTXN() even in non-streaming case,\n> > > we need to maintain the binaryheap anyway.\n> >\n> > Since if the number of transactions being decoded is small, updating\n> > max-heap for each memory counter update could lead to some\n> > regressions, I've measured it with the case where updating memory\n> > counter happens frequently:\n> >\n> > setup script:\n> > create table test (c int);\n> > select pg_create_logical_replication_slot('s', 'test_decoding');\n> > insert into test select generate_series(1, 8000000);\n> >\n> > benchmark script:\n> > set work_mem to '3GB';\n> > set logical_decoding_work_mem to '5GB';\n> > select count(*) from pg_logical_slot_peek_changes('s', null, null);\n> >\n> > Here are results (the median of five executions):\n> >\n> > * HEAD\n> > 5274.765 ms\n> >\n> > * HEAD + 0001-0003 patch\n> > 5532.203 ms\n> >\n> > There were approximately 5% performance regressions.\n> >\n> > An improvement idea is that we use two strategies for updating\n> > max-heap depending on the number of transactions. That is, if the\n> > number of transactions being decoded is small, we add a transaction to\n> > max-heap by binaryheap_add_unordered(), which is O(1), and heapify it\n> > just before picking the largest transactions, which is O(n). That way,\n> > we can minimize the overhead of updating the memory counter. Once the\n> > number of transactions being decoded exceeds the threshold, say 1024,\n> > we use another strategy. We call binaryheap_update_up/down() when\n> > updating the memory counter to preserve heap property, which is O(log\n> > n), and pick the largest transaction in O(1). 
This strategy minimizes\n> > the cost of picking the largest transactions instead of paying some\n> > costs to update the memory counters.\n> >\n> > I've experimented with this idea and run the same tests:\n> >\n> > * HEAD + new patches (0001 - 0003)\n> > 5277.524 ms\n> >\n> > The number looks good. I've attached these patches. Feedback is very welcome.\n>\n> Few comments:\n\nThank you for the review comments!\n\n> 1) Here we are changing memtrack_state to\n> REORDER_BUFFER_MEM_TRACK_NORMAL immediately once the size is less than\n> REORDE_BUFFER_MEM_TRACK_THRESHOLD. In this scenario we will be\n> building the heap many times if there are transactions getting added\n> and removed. How about we wait for txn_heap to become less than 95% of\n> REORDE_BUFFER_MEM_TRACK_THRESHOLD to avoid building the heap many\n> times in this scenario.\n\nBut until we call ReorderBufferLargestTXN() next time, we will have\ndecoded some changes, added or removed transactions, which modifies\nthe transaction size. Is it okay to do the memory counter updates in\nO(log n) during that? I guess your idea works well in the case where\nuntil we call ReorderBufferLargestTXN() next time, only a few\ntransactions' memory counters have been updated.\n\nI realized that the state could never switch to NORMAL in cases where\nthe number of transactions got lower than the threshold but the total\nmemory usage doesn't exceed the limit. I'll fix it.\n\n>\n> 2) I felt init variable is not needed, we can directly check txn->size\n> instead like it is done in the else case:\n> + bool init = (txn->size == 0);\n> +\n> txn->size += sz;\n> rb->size += sz;\n>\n> /* Update the total size in the top transaction. */\n> toptxn->total_size += sz;\n> +\n> + /* Update the transaction in the max-heap */\n> + if (init)\n> + {\n> + /* Add the transaction to the max-heap */\n> + if (rb->memtrack_state ==\n> REORDER_BUFFER_MEM_TRACK_NORMAL)\n> + binaryheap_add_unordered(rb->txn_heap,\n> PointerGetDatum(txn));\n> + else\n> + binaryheap_add(rb->txn_heap,\n> PointerGetDatum(txn));\n> + }\n> + else if (rb->memtrack_state ==\n> REORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP)\n> + {\n> + /*\n> + * If we're maintaining max-heap even while\n> updating the memory counter,\n> + * we reflect the updates to the max-heap.\n> + */\n> + binaryheap_update_up(rb->txn_heap,\n> PointerGetDatum(txn));\n> + }\n\nOkay, we can replace it with \"(txn->size - sz) == 0\".\n\n>\n> 3) we can add some comments for this:\n> +typedef enum ReorderBufferMemTrackState\n> +{\n> + REORDER_BUFFER_MEM_TRACK_NORMAL,\n> + REORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP,\n> +} ReorderBufferMemTrackState;\n> +\n>\n> 4) This should be added to typedefs.list:\n> +typedef enum ReorderBufferMemTrackState\n> +{\n> + REORDER_BUFFER_MEM_TRACK_NORMAL,\n> + REORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP,\n> +} ReorderBufferMemTrackState;\n> +\n\nWill add them.\n\n>\n> 5)Few typos:\n> 5.a) enlareable should be enlargeable\n> [PATCH v2 1/4] Make binaryheap enlareable.\n>\n> 5.b) subtranasctions should be subtransactions:\n> On the other hand, when the number of transactions being decoded is\n> fairly large, such as when a transaction has many subtranasctions,\n>\n> 5.c) evaludate should be evaluate:\n> XXX: updating the transaction's memory counter and the max-heap is now\n> O(log n), so we need to evaludate it. 
If there are some regression, we\n>\n> 5.d) pudate should be update:\n> It could be too expensive to pudate max-heap while preserving the heap\n> property each time the transaction's memory counter is updated, as it\n>\n> 5.e) frquently should be frequently:\n> could happen very frquently. So when the number of transactions being\n> decoded is small, we add the transactions to max-heap but don't\n>\n\nThanks, will fix them.\n\n> 6) This should be added to typedefs.list:\n> +/*\n> + * Struct for A hash table element to store the node's index in the bh_nodes\n> + * array.\n> + */\n> +typedef struct bh_nodeidx_entry\n> +{\n> + bh_node_type key;\n> + char status;\n> + int idx;\n> +} bh_nodeidx_entry;\n\nRight, will add it.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 2 Feb 2024 16:53:37 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Feb 2, 2024 at 1:59 PM Shubham Khanna\n<[email protected]> wrote:\n>\n> On Fri, Jan 26, 2024 at 2:07 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Dec 20, 2023 at 12:11 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Dec 20, 2023 at 6:49 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Tue, Dec 19, 2023 at 8:02 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > On Tue, Dec 19, 2023 at 8:31 AM Masahiko Sawada <[email protected]> wrote:\n> > > > > >\n> > > > > > On Sun, Dec 17, 2023 at 11:40 AM Amit Kapila <[email protected]> wrote:\n> > > > > > >\n> > > > > > >\n> > > > > > > The individual transactions shouldn't cross\n> > > > > > > 'logical_decoding_work_mem'. I got a bit confused by your proposal to\n> > > > > > > maintain the lists: \"...splitting it into two lists: transactions\n> > > > > > > consuming 5% < and 5% >= of the memory limit, and checking the 5% >=\n> > > > > > > list preferably.\". In the previous sentence, what did you mean by\n> > > > > > > transactions consuming 5% >= of the memory limit? I got the impression\n> > > > > > > that you are saying to maintain them in a separate transaction list\n> > > > > > > which doesn't seems to be the case.\n> > > > > >\n> > > > > > I wanted to mean that there are three lists in total: the first one\n> > > > > > maintain the transactions consuming more than 10% of\n> > > > > > logical_decoding_work_mem,\n> > > > > >\n> > > > >\n> > > > > How can we have multiple transactions in the list consuming more than\n> > > > > 10% of logical_decoding_work_mem? Shouldn't we perform serialization\n> > > > > before any xact reaches logical_decoding_work_mem?\n> > > >\n> > > > Well, suppose logical_decoding_work_mem is set to 64MB, transactions\n> > > > consuming more than 6.4MB are added to the list. So for example, it's\n> > > > possible that the list has three transactions each of which are\n> > > > consuming 10MB while the total memory usage in the reorderbuffer is\n> > > > still 30MB (less than logical_decoding_work_mem).\n> > > >\n> > >\n> > > Thanks for the clarification. I misunderstood the list to have\n> > > transactions greater than 70.4 MB (64 + 6.4) in your example. But one\n> > > thing to note is that maintaining these lists by default can also have\n> > > some overhead unless the list of open transactions crosses a certain\n> > > threshold.\n> > >\n> >\n> > On further analysis, I realized that the approach discussed here might\n> > not be the way to go. The idea of dividing transactions into several\n> > subgroups is to divide a large number of entries into multiple\n> > sub-groups so we can reduce the complexity to search for the\n> > particular entry. Since we assume that there are no big differences in\n> > entries' sizes within a sub-group, we can pick the entry to evict in\n> > O(1). However, what we really need to avoid here is that we end up\n> > increasing the number of times to evict entries because serializing an\n> > entry to the disk is more costly than searching an entry on memory in\n> > general.\n> >\n> > I think that it's no problem in a large-entries subgroup but when it\n> > comes to the smallest-entries subgroup, like for entries consuming\n> > less than 5% of the limit, it could end up evicting many entries. For\n> > example, there would be a huge difference between serializing 1 entry\n> > consuming 5% of the memory limit and serializing 5000 entries\n> > consuming 0.001% of the memory limit. 
Even if we can select 5000\n> > entries quickly, I think the latter would be slower in total. The more\n> > subgroups we create, the more the algorithm gets complex and the\n> > overheads could cause. So I think we need to search for the largest\n> > entry in order to minimize the number of evictions anyway.\n> >\n> > Looking for data structures and algorithms, I think binaryheap with\n> > some improvements could be promising. I mentioned before why we cannot\n> > use the current binaryheap[1]. The missing pieces are efficient ways\n> > to remove the arbitrary entry and to update the arbitrary entry's key.\n> > The current binaryheap provides binaryheap_remove_node(), which is\n> > O(log n), but it requires the entry's position in the binaryheap. We\n> > can know the entry's position just after binaryheap_add_unordered()\n> > but it might be changed after heapify. Searching the node's position\n> > is O(n). So the improvement idea is to add a hash table to the\n> > binaryheap so that it can track the positions for each entry so that\n> > we can remove the arbitrary entry in O(log n) and also update the\n> > arbitrary entry's key in O(log n). This is known as the indexed\n> > priority queue. I've attached the patch for that (0001 and 0002).\n> >\n> > That way, in terms of reorderbuffer, we can update and remove the\n> > transaction's memory usage in O(log n) (in worst case and O(1) in\n> > average) and then pick the largest transaction in O(1). Since we might\n> > need to call ReorderBufferSerializeTXN() even in non-streaming case,\n> > we need to maintain the binaryheap anyway. I've attached the patch for\n> > that (0003).\n> >\n> > Here are test script for many sub-transactions case:\n> >\n> > create table test (c int);\n> > create or replace function testfn (cnt int) returns void as $$\n> > begin\n> > for i in 1..cnt loop\n> > begin\n> > insert into test values (i);\n> > exception when division_by_zero then\n> > raise notice 'caught error';\n> > return;\n> > end;\n> > end loop;\n> > end;\n> > $$\n> > language plpgsql;\n> > select pg_create_logical_replication_slot('s', 'test_decoding');\n> > select testfn(50000);\n> > set logical_decoding_work_mem to '4MB';\n> > select count(*) from pg_logical_slot_peek_changes('s', null, null)\";\n> >\n> > and here are results:\n> >\n> > * HEAD: 16877.281 ms\n> > * HEAD w/ patches (0001 and 0002): 655.154 ms\n> >\n> > There is huge improvement in a many-subtransactions case.\n>\n> I have run the same test and found around 12.53x improvement(the\n> median of five executions):\n> HEAD | HEAD+ v2-0001+ v2-0002 + v2-0003 patch\n> 29197ms | 2329ms\n>\n> I had also run the regression test that you had shared at [1], there\n> was a very very slight dip in this case around it takes around 0.31x\n> more time:\n> HEAD | HEAD + v2-0001+ v2-0002 + v2-0003 patch\n> 4459ms | 4473ms\n\nThank you for doing a benchmark test with the latest patches!\n\nI'm going to submit the new version patches next week.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 2 Feb 2024 17:16:11 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "Dear Sawada-san,\r\n\r\n> Thank you for the review comments!\r\n> \r\n> >\r\n> > A comment for 0001:\r\n> >\r\n> > 01.\r\n> > ```\r\n> > +static void\r\n> > +bh_enlarge_node_array(binaryheap *heap)\r\n> > +{\r\n> > + if (heap->bh_size < heap->bh_space)\r\n> > + return;\r\n> > +\r\n> > + heap->bh_space *= 2;\r\n> > + heap->bh_nodes = repalloc(heap->bh_nodes,\r\n> > + sizeof(bh_node_type) * heap->bh_space);\r\n> > +}\r\n> > ```\r\n> >\r\n> > I'm not sure it is OK to use repalloc() for enlarging bh_nodes. This data\r\n> > structure public one and arbitrary codes and extensions can directly refer\r\n> > bh_nodes. But if the array is repalloc()'d, the pointer would be updated so that\r\n> > their referring would be a dangling pointer.\r\n> \r\n> Hmm I'm not sure this is the case that we need to really worry about,\r\n> and cannot come up with a good use case where extensions refer to\r\n> bh_nodes directly rather than binaryheap. In PostgreSQL codes, many\r\n> Nodes already have pointers and are exposed.\r\n\r\nActually, me neither. I could not come up with the use-case - I just said the possibility.\r\nIf it is not a real issue, we can ignore.\r\n\r\n> \r\n> >\r\n> > 04.\r\n> > ```\r\n> > extern binaryheap *binaryheap_allocate(int capacity,\r\n> > binaryheap_comparator compare,\r\n> > - void *arg);\r\n> > + bool indexed, void *arg);\r\n> > ```\r\n> >\r\n> > I felt pre-existing API should not be changed. How about adding\r\n> > binaryheap_allocate_extended() or something which can specify the `bool\r\n> indexed`?\r\n> > binaryheap_allocate() sets heap->bh_indexed to false.\r\n> \r\n> I'm really not sure it's worth inventing a\r\n> binaryheap_allocate_extended() function just for preserving API\r\n> compatibility. I think it's generally a good idea to have\r\n> xxx_extended() function to increase readability and usability, for\r\n> example, for the case where the same (kind of default) arguments are\r\n> passed in most cases and the function is called from many places.\r\n> However, we have a handful binaryheap_allocate() callers, and I\r\n> believe that it would not hurt the existing callers.\r\n\r\nI kept (external) extensions which uses binaryheap APIs in my mind.\r\nI thought we could avoid to raise costs for updating their codes. But I could\r\nunderstand the change is small, so ... up to you.\r\n\r\n> > 05.\r\n> > ```\r\n> > +extern void binaryheap_update_up(binaryheap *heap, bh_node_type d);\r\n> > +extern void binaryheap_update_down(binaryheap *heap, bh_node_type d);\r\n> > ```\r\n> >\r\n> > IIUC, callers must consider whether the node should be shift up/down and use\r\n> > appropriate function, right? I felt it may not be user-friendly.\r\n> \r\n> Right, I couldn't come up with a better interface.\r\n> \r\n> Another idea I've considered was that the caller provides a callback\r\n> function where it can compare the old and new keys. For example, in\r\n> reorderbuffer case, we call like:\r\n> \r\n> binaryheap_update(rb->txn_heap, PointerGetDatum(txn),\r\n> ReorderBufferTXNUpdateCompare, (void *) &old_size);\r\n> \r\n> Then in ReorderBufferTXNUpdateCompare(),\r\n> ReorderBufferTXN *txn = (ReorderBufferTXN *) a;Size old_size = *(Size *) b;\r\n> (compare txn->size to \"b\" ...)\r\n> \r\n> However it seems complicated...\r\n> \r\n\r\nI considered similar approach: accept old node as an argument of a compare function.\r\nBut it requires further memory allocation. 
Does someone have a better idea?\r\n\r\n> \r\n> >\r\n> > 08.\r\n> > IIUC, if more than 1024 transactions are running but they have small amount of\r\n> > changes, the performance may be degraded, right? Do you have a result in\r\n> sucha\r\n> > a case?\r\n> \r\n> I've run a benchmark test that I shared before[1]. Here are results of\r\n> decoding a transaction that has 1M subtransaction each of which has 1\r\n> INSERT:\r\n> \r\n> HEAD:\r\n> 1810.192 ms\r\n> \r\n> HEAD w/ patch:\r\n> 2001.094 ms\r\n> \r\n> I set a large enough value to logical_decoding_work_mem not to evict\r\n> any transactions. I can see about about 10% performance regression in\r\n> this case.\r\n\r\nThanks for running it. I think this workload is a worst-case and extreme one that\r\nwould not occur on a real system (such a system should be fixed), so we\r\ncan say that the regression is at most about 10%. I feel it could be negligible, but what\r\ndo others think?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Tue, 6 Feb 2024 05:45:07 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Feb 2, 2024 at 5:16 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Feb 2, 2024 at 1:59 PM Shubham Khanna\n> <[email protected]> wrote:\n> >\n> > On Fri, Jan 26, 2024 at 2:07 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Wed, Dec 20, 2023 at 12:11 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Wed, Dec 20, 2023 at 6:49 AM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > On Tue, Dec 19, 2023 at 8:02 PM Amit Kapila <[email protected]> wrote:\n> > > > > >\n> > > > > > On Tue, Dec 19, 2023 at 8:31 AM Masahiko Sawada <[email protected]> wrote:\n> > > > > > >\n> > > > > > > On Sun, Dec 17, 2023 at 11:40 AM Amit Kapila <[email protected]> wrote:\n> > > > > > > >\n> > > > > > > >\n> > > > > > > > The individual transactions shouldn't cross\n> > > > > > > > 'logical_decoding_work_mem'. I got a bit confused by your proposal to\n> > > > > > > > maintain the lists: \"...splitting it into two lists: transactions\n> > > > > > > > consuming 5% < and 5% >= of the memory limit, and checking the 5% >=\n> > > > > > > > list preferably.\". In the previous sentence, what did you mean by\n> > > > > > > > transactions consuming 5% >= of the memory limit? I got the impression\n> > > > > > > > that you are saying to maintain them in a separate transaction list\n> > > > > > > > which doesn't seems to be the case.\n> > > > > > >\n> > > > > > > I wanted to mean that there are three lists in total: the first one\n> > > > > > > maintain the transactions consuming more than 10% of\n> > > > > > > logical_decoding_work_mem,\n> > > > > > >\n> > > > > >\n> > > > > > How can we have multiple transactions in the list consuming more than\n> > > > > > 10% of logical_decoding_work_mem? Shouldn't we perform serialization\n> > > > > > before any xact reaches logical_decoding_work_mem?\n> > > > >\n> > > > > Well, suppose logical_decoding_work_mem is set to 64MB, transactions\n> > > > > consuming more than 6.4MB are added to the list. So for example, it's\n> > > > > possible that the list has three transactions each of which are\n> > > > > consuming 10MB while the total memory usage in the reorderbuffer is\n> > > > > still 30MB (less than logical_decoding_work_mem).\n> > > > >\n> > > >\n> > > > Thanks for the clarification. I misunderstood the list to have\n> > > > transactions greater than 70.4 MB (64 + 6.4) in your example. But one\n> > > > thing to note is that maintaining these lists by default can also have\n> > > > some overhead unless the list of open transactions crosses a certain\n> > > > threshold.\n> > > >\n> > >\n> > > On further analysis, I realized that the approach discussed here might\n> > > not be the way to go. The idea of dividing transactions into several\n> > > subgroups is to divide a large number of entries into multiple\n> > > sub-groups so we can reduce the complexity to search for the\n> > > particular entry. Since we assume that there are no big differences in\n> > > entries' sizes within a sub-group, we can pick the entry to evict in\n> > > O(1). However, what we really need to avoid here is that we end up\n> > > increasing the number of times to evict entries because serializing an\n> > > entry to the disk is more costly than searching an entry on memory in\n> > > general.\n> > >\n> > > I think that it's no problem in a large-entries subgroup but when it\n> > > comes to the smallest-entries subgroup, like for entries consuming\n> > > less than 5% of the limit, it could end up evicting many entries. 
For\n> > > example, there would be a huge difference between serializing 1 entry\n> > > consuming 5% of the memory limit and serializing 5000 entries\n> > > consuming 0.001% of the memory limit. Even if we can select 5000\n> > > entries quickly, I think the latter would be slower in total. The more\n> > > subgroups we create, the more the algorithm gets complex and the\n> > > overheads could cause. So I think we need to search for the largest\n> > > entry in order to minimize the number of evictions anyway.\n> > >\n> > > Looking for data structures and algorithms, I think binaryheap with\n> > > some improvements could be promising. I mentioned before why we cannot\n> > > use the current binaryheap[1]. The missing pieces are efficient ways\n> > > to remove the arbitrary entry and to update the arbitrary entry's key.\n> > > The current binaryheap provides binaryheap_remove_node(), which is\n> > > O(log n), but it requires the entry's position in the binaryheap. We\n> > > can know the entry's position just after binaryheap_add_unordered()\n> > > but it might be changed after heapify. Searching the node's position\n> > > is O(n). So the improvement idea is to add a hash table to the\n> > > binaryheap so that it can track the positions for each entry so that\n> > > we can remove the arbitrary entry in O(log n) and also update the\n> > > arbitrary entry's key in O(log n). This is known as the indexed\n> > > priority queue. I've attached the patch for that (0001 and 0002).\n> > >\n> > > That way, in terms of reorderbuffer, we can update and remove the\n> > > transaction's memory usage in O(log n) (in worst case and O(1) in\n> > > average) and then pick the largest transaction in O(1). Since we might\n> > > need to call ReorderBufferSerializeTXN() even in non-streaming case,\n> > > we need to maintain the binaryheap anyway. I've attached the patch for\n> > > that (0003).\n> > >\n> > > Here are test script for many sub-transactions case:\n> > >\n> > > create table test (c int);\n> > > create or replace function testfn (cnt int) returns void as $$\n> > > begin\n> > > for i in 1..cnt loop\n> > > begin\n> > > insert into test values (i);\n> > > exception when division_by_zero then\n> > > raise notice 'caught error';\n> > > return;\n> > > end;\n> > > end loop;\n> > > end;\n> > > $$\n> > > language plpgsql;\n> > > select pg_create_logical_replication_slot('s', 'test_decoding');\n> > > select testfn(50000);\n> > > set logical_decoding_work_mem to '4MB';\n> > > select count(*) from pg_logical_slot_peek_changes('s', null, null)\";\n> > >\n> > > and here are results:\n> > >\n> > > * HEAD: 16877.281 ms\n> > > * HEAD w/ patches (0001 and 0002): 655.154 ms\n> > >\n> > > There is huge improvement in a many-subtransactions case.\n> >\n> > I have run the same test and found around 12.53x improvement(the\n> > median of five executions):\n> > HEAD | HEAD+ v2-0001+ v2-0002 + v2-0003 patch\n> > 29197ms | 2329ms\n> >\n> > I had also run the regression test that you had shared at [1], there\n> > was a very very slight dip in this case around it takes around 0.31x\n> > more time:\n> > HEAD | HEAD + v2-0001+ v2-0002 + v2-0003 patch\n> > 4459ms | 4473ms\n>\n> Thank you for doing a benchmark test with the latest patches!\n>\n> I'm going to submit the new version patches next week.\n>\n\nI've attached the new version patch set.\n\nRegards,\n\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 6 Feb 2024 15:05:22 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
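The "indexed" binary heap described in the message above (a max-heap plus a lookup structure that tracks each node's current position, so arbitrary removals and key updates become O(log n)) can be sketched in a few dozen lines. This is an illustrative stand-alone example, not the patch itself: the patch keys a hash table by the node value, while here a plain array indexed by a small integer id stands in for it, and all names are made up for the example.

#include <stdio.h>

#define MAX_ITEMS 16

typedef struct HeapItem
{
    int  id;    /* stable identifier (stand-in for a ReorderBufferTXN pointer) */
    long size;  /* key: memory usage of the transaction */
} HeapItem;

static HeapItem heap[MAX_ITEMS];
static int      pos_of[MAX_ITEMS];  /* id -> current position in heap[] */
static int      nitems = 0;

static void
swap_nodes(int a, int b)
{
    HeapItem tmp = heap[a];

    heap[a] = heap[b];
    heap[b] = tmp;
    pos_of[heap[a].id] = a;     /* keep the index in sync on every move */
    pos_of[heap[b].id] = b;
}

static void
sift_up(int i)
{
    while (i > 0 && heap[(i - 1) / 2].size < heap[i].size)
    {
        swap_nodes(i, (i - 1) / 2);
        i = (i - 1) / 2;
    }
}

static void
sift_down(int i)
{
    for (;;)
    {
        int largest = i;
        int l = 2 * i + 1;
        int r = 2 * i + 2;

        if (l < nitems && heap[l].size > heap[largest].size)
            largest = l;
        if (r < nitems && heap[r].size > heap[largest].size)
            largest = r;
        if (largest == i)
            break;
        swap_nodes(i, largest);
        i = largest;
    }
}

static void
heap_add(int id, long size)
{
    heap[nitems].id = id;
    heap[nitems].size = size;
    pos_of[id] = nitems;
    sift_up(nitems);
    nitems++;
}

/* Update the key of an arbitrary element in O(log n), found via the index. */
static void
heap_update(int id, long new_size)
{
    int  i = pos_of[id];
    long old_size = heap[i].size;

    heap[i].size = new_size;
    if (new_size > old_size)
        sift_up(i);             /* corresponds to binaryheap_update_up() */
    else
        sift_down(i);           /* corresponds to binaryheap_update_down() */
}

int
main(void)
{
    heap_add(0, 100);
    heap_add(1, 500);
    heap_add(2, 300);
    heap_update(2, 900);        /* transaction 2 grew */
    printf("largest: id=%d size=%ld\n", heap[0].id, heap[0].size);
    return 0;
}

Removing an arbitrary element works the same way: look its position up in the index, swap it with the last element, shrink the heap, and sift the swapped element up or down as needed.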
{
"msg_contents": "Dear Sawada-san,\r\n\r\nThanks for making v3 patchset. I have also benchmarked the case [1].\r\nBelow results are the average of 5th, there are almost the same result\r\neven when median is used for the comparison. On my env, the regression\r\ncannot be seen.\r\n\r\nHEAD (1e285a5)\tHEAD + v3 patches\tdifference\r\n10910.722 ms\t\t10714.540 ms\t\taround 1.8%\r\n\r\nAlso, here are mino comments for v3 set.\r\n\r\n01.\r\nbh_nodeidx_entry and ReorderBufferMemTrackState is missing in typedefs.list.\r\n\r\n02. ReorderBufferTXNSizeCompare\r\nShould we assert {ta, tb} are not NULL?\r\n\r\n[1]: https://www.postgresql.org/message-id/CAD21AoB-7mPpKnLmBNfzfavG8AiTwEgAdVMuv%3DjzmAp9ex7eyQ%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n\r\n",
"msg_date": "Thu, 8 Feb 2024 09:33:08 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Tue, Feb 6, 2024 at 5:06 PM Masahiko Sawada <[email protected]>\nwrote:\n\n>\n> I've attached the new version patch set.\n>\n> Regards,\n>\n>\n> --\n> Masahiko Sawada\n> Amazon Web Services: https://aws.amazon.com\n\n\nThanks for the patch. I reviewed that patch and did minimal testing and it\nseems to show the speed up as claimed. Some minor comments:\npatch 0001:\n\n+static void\n+bh_enlarge_node_array(binaryheap *heap)\n+{\n+ if (heap->bh_size < heap->bh_space)\n+ return;\n\nwhy not check \"if (heap->bh_size >= heap->bh_space)\" outside this function\nto avoid calling this function when not necessary? This check was there in\ncode before the patch.\n\npatch 0003:\n\n+/*\n+ * The threshold of the number of transactions in the max-heap\n(rb->txn_heap)\n+ * to switch the state.\n+ */\n+#define REORDE_BUFFER_MEM_TRACK_THRESHOLD 1024\n\nTypo: I think you meant REORDER_ and not REORDE_\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Tue, Feb 6, 2024 at 5:06 PM Masahiko Sawada <[email protected]> wrote:\n\nI've attached the new version patch set.\n\nRegards,\n\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.comThanks for the patch. I reviewed that patch and did minimal testing and it seems to show the speed up as claimed. Some minor comments:patch 0001:+static void+bh_enlarge_node_array(binaryheap *heap)+{+\tif (heap->bh_size < heap->bh_space)+\t\treturn;why not check \"if (heap->bh_size >= heap->bh_space)\" outside this function to avoid calling this function when not necessary? This check was there in code before the patch.patch 0003:+/*+ * The threshold of the number of transactions in the max-heap (rb->txn_heap)+ * to switch the state.+ */+#define REORDE_BUFFER_MEM_TRACK_THRESHOLD 1024Typo: I think you meant REORDER_ and not REORDE_regards,Ajin CherianFujitsu Australia",
"msg_date": "Fri, 9 Feb 2024 21:35:42 +1100",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
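For illustration, here is a minimal stand-alone sketch of the growth mechanism being reviewed above, using plain malloc/realloc in place of palloc/repalloc: once the node array is a separately allocated pointer (rather than an array embedded at the end of the struct), it can be doubled in place without moving the heap struct itself. The guard keeps the early-return style of the hunk quoted above; hoisting the size check into the caller, as suggested, would simply move that first if statement. Only the bh_* field names come from the thread; the other names are made up, and error handling is omitted.

#include <stdlib.h>

typedef int bh_node_type;

typedef struct sketch_heap
{
    int           bh_size;      /* elements currently stored */
    int           bh_space;     /* allocated capacity */
    bh_node_type *bh_nodes;     /* separately allocated, so it can be grown */
} sketch_heap;

static sketch_heap *
sketch_allocate(int capacity)
{
    sketch_heap *heap = malloc(sizeof(sketch_heap));

    heap->bh_size = 0;
    heap->bh_space = capacity;
    heap->bh_nodes = malloc(sizeof(bh_node_type) * capacity);
    return heap;
}

static void
sketch_enlarge_node_array(sketch_heap *heap)
{
    if (heap->bh_size < heap->bh_space)
        return;                 /* still room; nothing to do */

    heap->bh_space *= 2;
    heap->bh_nodes = realloc(heap->bh_nodes,
                             sizeof(bh_node_type) * heap->bh_space);
}

int
main(void)
{
    sketch_heap *heap = sketch_allocate(2);

    for (int i = 0; i < 100; i++)
    {
        sketch_enlarge_node_array(heap);
        heap->bh_nodes[heap->bh_size++] = i;    /* growth only; no heap order kept */
    }
    free(heap->bh_nodes);
    free(heap);
    return 0;
}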
{
"msg_contents": "On Thu, Feb 8, 2024 at 6:33 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Sawada-san,\n>\n> Thanks for making v3 patchset. I have also benchmarked the case [1].\n> Below results are the average of 5th, there are almost the same result\n> even when median is used for the comparison. On my env, the regression\n> cannot be seen.\n>\n> HEAD (1e285a5) HEAD + v3 patches difference\n> 10910.722 ms 10714.540 ms around 1.8%\n>\n\nThank you for doing the performance test!\n\n> Also, here are mino comments for v3 set.\n>\n> 01.\n> bh_nodeidx_entry and ReorderBufferMemTrackState is missing in typedefs.list.\n\nWill add them.\n\n>\n> 02. ReorderBufferTXNSizeCompare\n> Should we assert {ta, tb} are not NULL?\n\nNot sure we really need it as other binaryheap users don't have such checks.\n\nOn Tue, Feb 6, 2024 at 2:45 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > I've run a benchmark test that I shared before[1]. Here are results of\n> > decoding a transaction that has 1M subtransaction each of which has 1\n> > INSERT:\n> >\n> > HEAD:\n> > 1810.192 ms\n> >\n> > HEAD w/ patch:\n> > 2001.094 ms\n> >\n> > I set a large enough value to logical_decoding_work_mem not to evict\n> > any transactions. I can see about about 10% performance regression in\n> > this case.\n>\n> Thanks for running. I think this workload is the worst and an extreme case which\n> would not be occurred on the real system (Such a system should be fixed), so we\n> can say that the regression is up to -10%. I felt it could be negligible but how\n> do other think?\n\nI think this performance regression is not acceptable. In this\nworkload, one transaction has 10k subtransactions and the logical\ndecoding becomes quite slow if logical_decoding_work_mem is not big\nenough. Therefore, it's a legitimate and common approach to increase\nlogical_decoding_work_mem to speedup the decoding. However, with thie\npatch, the decoding becomes slower than today. It's a bad idea in\ngeneral to optimize an extreme case while sacrificing the normal (or\nmore common) cases.\n\nTherefore, I've improved the algorithm so that we don't touch the\nmax-heap at all if the number of transactions is small enough. I've\nrun benchmark test with two workloads:\n\nworkload-1, decode single transaction with 800k tuples (normal.sql):\n\n* without spill\nHEAD: 13235.136 ms\nv3 patch: 14320.082 ms\nv4 patch: 13300.665 ms\n\n* with spill\nHEAD: 22970.204 ms\nv3 patch: 23625.649 ms\nv4 patch: 23304.366\n\nworkload-2, decode one transaction with 100k subtransaction (many-subtxn.sql):\n\n* without spill\nHEAD: 345.718 ms\nv3 patch: 409.686 ms\nv4 patch: 353.026 ms\n\n* with spill\nHEAD: 136718.313 ms\nv3 patch: 2675.539 ms\nv4 patch: 2734.981 ms\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 10 Feb 2024 00:20:57 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Feb 9, 2024 at 7:35 PM Ajin Cherian <[email protected]> wrote:\n>\n>\n>\n> On Tue, Feb 6, 2024 at 5:06 PM Masahiko Sawada <[email protected]> wrote:\n>>\n>>\n>> I've attached the new version patch set.\n>>\n>> Regards,\n>>\n>>\n>> --\n>> Masahiko Sawada\n>> Amazon Web Services: https://aws.amazon.com\n>\n>\n> Thanks for the patch. I reviewed that patch and did minimal testing and it seems to show the speed up as claimed. Some minor comments:\n\nThank you for the comments!\n\n> patch 0001:\n>\n> +static void\n> +bh_enlarge_node_array(binaryheap *heap)\n> +{\n> + if (heap->bh_size < heap->bh_space)\n> + return;\n>\n> why not check \"if (heap->bh_size >= heap->bh_space)\" outside this function to avoid calling this function when not necessary? This check was there in code before the patch.\n\nAgreed.\n\n>\n> patch 0003:\n>\n> +/*\n> + * The threshold of the number of transactions in the max-heap (rb->txn_heap)\n> + * to switch the state.\n> + */\n> +#define REORDE_BUFFER_MEM_TRACK_THRESHOLD 1024\n>\n> Typo: I think you meant REORDER_ and not REORDE_\n\nFixed.\n\nThese comments are addressed in the v4 patch set I just shared[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoDhuybyryVkmVkgPY8uVrjGLYchL8EY8-rBm1hbZJpwLw%40mail.gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 10 Feb 2024 00:22:37 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Sat, Feb 10, 2024 at 2:23 AM Masahiko Sawada <[email protected]>\nwrote:\n\n> On Fri, Feb 9, 2024 at 7:35 PM Ajin Cherian <[email protected]> wrote:\n> >\n> >\n> >\n> > On Tue, Feb 6, 2024 at 5:06 PM Masahiko Sawada <[email protected]>\n> wrote:\n> >>\n> >>\n> >> I've attached the new version patch set.\n> >>\n> >> Regards,\n> >>\n> >>\n> >> --\n> >> Masahiko Sawada\n> >> Amazon Web Services: https://aws.amazon.com\n> >\n> >\n> > Thanks for the patch. I reviewed that patch and did minimal testing and\n> it seems to show the speed up as claimed. Some minor comments:\n>\n> Thank you for the comments!\n>\n> > patch 0001:\n> >\n> > +static void\n> > +bh_enlarge_node_array(binaryheap *heap)\n> > +{\n> > + if (heap->bh_size < heap->bh_space)\n> > + return;\n> >\n> > why not check \"if (heap->bh_size >= heap->bh_space)\" outside this\n> function to avoid calling this function when not necessary? This check was\n> there in code before the patch.\n>\n> Agreed.\n>\n> >\n> > patch 0003:\n> >\n> > +/*\n> > + * The threshold of the number of transactions in the max-heap\n> (rb->txn_heap)\n> > + * to switch the state.\n> > + */\n> > +#define REORDE_BUFFER_MEM_TRACK_THRESHOLD 1024\n> >\n> > Typo: I think you meant REORDER_ and not REORDE_\n>\n> Fixed.\n>\n> These comments are addressed in the v4 patch set I just shared[1].\n>\n>\n> These changes look good to me. I've done some tests with a few varying\nlevels of subtransaction and I could see that the patch was at least 5%\nbetter in all of them.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Sat, Feb 10, 2024 at 2:23 AM Masahiko Sawada <[email protected]> wrote:On Fri, Feb 9, 2024 at 7:35 PM Ajin Cherian <[email protected]> wrote:\n>\n>\n>\n> On Tue, Feb 6, 2024 at 5:06 PM Masahiko Sawada <[email protected]> wrote:\n>>\n>>\n>> I've attached the new version patch set.\n>>\n>> Regards,\n>>\n>>\n>> --\n>> Masahiko Sawada\n>> Amazon Web Services: https://aws.amazon.com\n>\n>\n> Thanks for the patch. I reviewed that patch and did minimal testing and it seems to show the speed up as claimed. Some minor comments:\n\nThank you for the comments!\n\n> patch 0001:\n>\n> +static void\n> +bh_enlarge_node_array(binaryheap *heap)\n> +{\n> + if (heap->bh_size < heap->bh_space)\n> + return;\n>\n> why not check \"if (heap->bh_size >= heap->bh_space)\" outside this function to avoid calling this function when not necessary? This check was there in code before the patch.\n\nAgreed.\n\n>\n> patch 0003:\n>\n> +/*\n> + * The threshold of the number of transactions in the max-heap (rb->txn_heap)\n> + * to switch the state.\n> + */\n> +#define REORDE_BUFFER_MEM_TRACK_THRESHOLD 1024\n>\n> Typo: I think you meant REORDER_ and not REORDE_\n\nFixed.\n\nThese comments are addressed in the v4 patch set I just shared[1].\nThese changes look good to me. I've done some tests with a few varying levels of subtransaction and I could see that the patch was at least 5% better in all of them. regards,Ajin CherianFujitsu Australia",
"msg_date": "Wed, 14 Feb 2024 21:55:32 +1100",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, 9 Feb 2024 at 20:51, Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Feb 8, 2024 at 6:33 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Dear Sawada-san,\n> >\n> > Thanks for making v3 patchset. I have also benchmarked the case [1].\n> > Below results are the average of 5th, there are almost the same result\n> > even when median is used for the comparison. On my env, the regression\n> > cannot be seen.\n> >\n> > HEAD (1e285a5) HEAD + v3 patches difference\n> > 10910.722 ms 10714.540 ms around 1.8%\n> >\n>\n> Thank you for doing the performance test!\n>\n> > Also, here are mino comments for v3 set.\n> >\n> > 01.\n> > bh_nodeidx_entry and ReorderBufferMemTrackState is missing in typedefs.list.\n>\n> Will add them.\n>\n> >\n> > 02. ReorderBufferTXNSizeCompare\n> > Should we assert {ta, tb} are not NULL?\n>\n> Not sure we really need it as other binaryheap users don't have such checks.\n>\n> On Tue, Feb 6, 2024 at 2:45 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > > I've run a benchmark test that I shared before[1]. Here are results of\n> > > decoding a transaction that has 1M subtransaction each of which has 1\n> > > INSERT:\n> > >\n> > > HEAD:\n> > > 1810.192 ms\n> > >\n> > > HEAD w/ patch:\n> > > 2001.094 ms\n> > >\n> > > I set a large enough value to logical_decoding_work_mem not to evict\n> > > any transactions. I can see about about 10% performance regression in\n> > > this case.\n> >\n> > Thanks for running. I think this workload is the worst and an extreme case which\n> > would not be occurred on the real system (Such a system should be fixed), so we\n> > can say that the regression is up to -10%. I felt it could be negligible but how\n> > do other think?\n>\n> I think this performance regression is not acceptable. In this\n> workload, one transaction has 10k subtransactions and the logical\n> decoding becomes quite slow if logical_decoding_work_mem is not big\n> enough. Therefore, it's a legitimate and common approach to increase\n> logical_decoding_work_mem to speedup the decoding. However, with thie\n> patch, the decoding becomes slower than today. It's a bad idea in\n> general to optimize an extreme case while sacrificing the normal (or\n> more common) cases.\n>\n\nSince this same function is used by pg_dump sorting TopoSort functions\nalso, we can just verify once if there is no performance impact with\nlarge number of objects during dump sorting:\n binaryheap *\n binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)\n {\n- int sz;\n binaryheap *heap;\n\n- sz = offsetof(binaryheap, bh_nodes) + sizeof(bh_node_type) * capacity;\n- heap = (binaryheap *) palloc(sz);\n+ heap = (binaryheap *) palloc(sizeof(binaryheap));\n heap->bh_space = capacity;\n heap->bh_compare = compare;\n heap->bh_arg = arg;\n\n heap->bh_size = 0;\n heap->bh_has_heap_property = true;\n+ heap->bh_nodes = (bh_node_type *) palloc(sizeof(bh_node_type)\n* capacity);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 23 Feb 2024 14:54:05 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "Hi,\n\nI did a basic review and testing of this patch today. Overall I think\nthe patch is in very good shape - I agree with the tradeoffs it makes,\nand I like the approach in general. I do have a couple minor comments\nabout the code, and then maybe a couple thoughts about the approach.\n\n\nFirst, some comments - I'll put them here, but I also kept them in\n\"review\" commits, because that makes it easier to show the exact place\nin the code the comment is about.\n\n1) binaryheap_allocate got a new \"indexed\" argument, but the comment is\nnot updated to document it\n\n2) I think it's preferable to use descriptive argument names for\nbh_set_node. I don't think there's a good reason to keep it short.\n\n3) In a couple places we have code like this:\n\n if (heap->bh_indexed)\n bh_nodeidx_delete(heap->bh_nodeidx, result);\n\nMaybe it'd be better to have the if condition in bh_nodeidx_delete, so\nthat it can be called without it.\n\n4) Could we check the \"found\" flag in bh_set_node, somehow? I mean, we\neither expect to find the node (update of already tracked transaction)\nor not (when inserting it). The life cycle may be non-trivial (node\nadded, updated and removed, ...), would be useful assert I think.\n\n5) Do we actually need the various mem_freed local variables in a couple\nplaces, when we expect the value to be equal to txn->size (there's even\nassert enforcing that)?\n\n6) ReorderBufferCleanupTXN has a comment about maybe not using the same\nthreshold both to enable & disable usage of the binaryheap. I agree with\nthat, otherwise we could easily end up \"trashing\" if we add/remove\ntransactions right around the threshold. I think 90-95% for disabling\nthe heap would work fine.\n\n7) The code disabling binaryheap (based on the threshold) is copied in a\ncouple places, perhaps it should be a separate function called from\nthose places.\n\n8) Similarly to (3), maybe ReorderBufferTXNMemoryUpdate should do the\nmemory size check internally, to make the calls simpler.\n\n9) The ReorderBufferChangeMemoryUpdate / ReorderBufferTXNMemoryUpdate\nsplit maybe not very clear. It's not clear to me why it's divided like\nthis, or why we can't simply call ReorderBufferTXNMemoryUpdate directly.\n\n\nperformance\n-----------\n\nI did some benchmarks, to see the behavior in simple good/bad cases (see\nthe attached scripts.tgz). \"large\" is one large transaction inserting 1M\nrows, small is 64k single-row inserts, and subxacts is the original case\nwith ~100k subxacts. Finally, subxacts-small is many transactions with\n128 subxacts each (the main transactions are concurrent).\n\nThe results are pretty good, I think:\n\n test master patched\n -----------------------------------------------------\n large 2587 2459 95%\n small 956 856 89%\n subxacts 138915 2911 2%\n subxacts-small 13632 13187 97%\n\nThis is timing (ms) with logical_work_mem=4MB. I also tried with 64MB,\nwhere the subxact timing goes way down, but the overall conclusions do\nnot change.\n\nI was a bit surprised I haven't seen any clear regression, but in the\nend that's a good thing, right? There's a couple results in this thread\nshowing ~10% regression, but I've been unable to reproduce those.\nPerhaps the newer patch versions fix that, I guess.\n\nAnyway, I think that at some point we'd have to accept that some cases\nmay have slight regression. 
I think that's inherent for almost any\nheuristics - there's always going to be some rare case that defeats it.\nWhat's important is that the case needs to be rare and/or the impact\nvery limited. And I think that's true here.\n\n\noverall design\n--------------\n\nAs for the design, I agree with the approach of using a binaryheap to\ntrack transactions by size. When going over the thread history,\ndescribing the initial approach with only keeping \"large\" transactions\nabove some threshold (e.g. 10%), I was really concerned that'll either\nlead to abrupt changes in behavior (when transactions move just around\nthe 10%), or won't help with many common cases (with most transactions\nbeing below the limit).\n\nI was going to suggest some sort of \"binning\" - keeping lists for\ntransactions of similar size (e.g. <1kB, 1-2kB, 2-4kB, 4-8kB, ...) and\nevicting transactions from a list, i.e. based on approximate size. But\nif the indexed binary heap seems to be cheap enough, I think it's a\nbetter solution.\n\nThe one thing I'm a bit concerned about is the threshold used to start\nusing binary heap - these thresholds with binary decisions may easily\nlead to a \"cliff\" and robustness issues, i.e. abrupt change in behavior\nwith significant runtime change (e.g. you add/remove one transaction and\nthe code takes a much more expensive path). The value (1024) seems\nrather arbitrary, I wonder if there's something to justify that choice.\n\nIn any case, I agree it'd be good to have some dampening factor, to\nreduce the risk of trashing because of adding/removing a single\ntransaction to the decoding.\n\n\nrelated stuff / GenerationContext\n---------------------------------\n\nIt's not the fault of this patch, but this reminds me I have some doubts\nabout how the eviction interferes with using the GenerationContext for\nsome of the data. I suspect we can easily get into a situation where we\nevict the largest transaction, but that doesn't actually reduce the\nmemory usage at all, because the memory context blocks are shared with\nsome other transactions and don't get 100% empty (so we can't release\nthem). But it's actually worse, because GenerationContext does not even\nreuse this memory. So do we even gain anything by the eviction?\n\nWhen the earlier patch versions also considered age of the transaction,\nto try evicting the older ones first, I think that was interesting. I\nthink we may want to do something like this even with the binary heap.\n\n\nrelated stuff / increase of logical_decoding_work_mem\n-----------------------------------------------------\n\nIn the thread, one of the \"alternatives to spilling\" suggested in the\nthread was to enable streaming, but I think there's often a much more\nefficient alternative - increase the amount of memory, so that we don't\nactually need to spill.\n\nFor example, a system may be doing a lot of eviction / spilling with\nlogical_decoding_work_mem=64MB, but setting 128MB may completely\neliminate that. Of course, if there are large transactions, this may not\nbe possible (the GUC would have to exceed RAM). But I don't think that's\nvery common, the incidents that I've observed were often resolved by\nbumping the logical_decoding_work_mem by a little bit.\n\nI wonder if there's something we might do to help users to tune this. 
We\nshould be able to measure the \"peak\" memory usage (how much memory we'd\nneed to not spill), so maybe we could log that as a WARNING, similarly\nto checkpoints - there we only log \"checkpoints too frequent, tune WAL\nlimits\", but perhaps we might do more here? Or maybe we could add the\nwatermark to the system catalog?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 23 Feb 2024 17:29:38 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
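To make the "binning" alternative mentioned in the review above concrete (it was ultimately not pursued in favour of the indexed max-heap), its core is just a size-class mapping: transactions are kept on per-class lists and eviction takes any member of the largest non-empty class, so the evicted size is only approximate. A hypothetical stand-alone sketch of that mapping, with made-up names:

#include <stdio.h>

#define NBINS 32

/* bin 0: < 1kB, bin 1: 1-2kB, bin 2: 2-4kB, bin 3: 4-8kB, ... */
static int
size_to_bin(long size_bytes)
{
    long kb = size_bytes / 1024;
    int  bin = 0;

    while (kb > 0 && bin < NBINS - 1)
    {
        kb /= 2;
        bin++;
    }
    return bin;
}

int
main(void)
{
    long sizes[] = {500, 1500, 3000, 6000, 70000};

    for (int i = 0; i < 5; i++)
        printf("%ld bytes -> bin %d\n", sizes[i], size_to_bin(sizes[i]));
    return 0;
}

The concern raised in the thread is visible here: within one class every member looks interchangeable even though real sizes differ by up to 2x, so freeing a fixed amount of memory from the smallest class can mean serializing a very large number of tiny transactions.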
{
"msg_contents": "On Sat, Feb 24, 2024 at 1:29 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I did a basic review and testing of this patch today. Overall I think\n> the patch is in very good shape - I agree with the tradeoffs it makes,\n> and I like the approach in general. I do have a couple minor comments\n> about the code, and then maybe a couple thoughts about the approach.\n\nThank you for the review comments and tests!\n\n>\n>\n> First, some comments - I'll put them here, but I also kept them in\n> \"review\" commits, because that makes it easier to show the exact place\n> in the code the comment is about.\n>\n> 1) binaryheap_allocate got a new \"indexed\" argument, but the comment is\n> not updated to document it\n\nFixed.\n\n>\n> 2) I think it's preferable to use descriptive argument names for\n> bh_set_node. I don't think there's a good reason to keep it short.\n\nAgreed.\n\n>\n> 3) In a couple places we have code like this:\n>\n> if (heap->bh_indexed)\n> bh_nodeidx_delete(heap->bh_nodeidx, result);\n>\n> Maybe it'd be better to have the if condition in bh_nodeidx_delete, so\n> that it can be called without it.\n\nFixed.\n\n>\n> 4) Could we check the \"found\" flag in bh_set_node, somehow? I mean, we\n> either expect to find the node (update of already tracked transaction)\n> or not (when inserting it). The life cycle may be non-trivial (node\n> added, updated and removed, ...), would be useful assert I think.\n\nAgreed.\n\n>\n> 5) Do we actually need the various mem_freed local variables in a couple\n> places, when we expect the value to be equal to txn->size (there's even\n> assert enforcing that)?\n\nYou're right.\n\n>\n> 6) ReorderBufferCleanupTXN has a comment about maybe not using the same\n> threshold both to enable & disable usage of the binaryheap. I agree with\n> that, otherwise we could easily end up \"trashing\" if we add/remove\n> transactions right around the threshold. I think 90-95% for disabling\n> the heap would work fine.\n\nAgreeehd.\n\n>\n> 7) The code disabling binaryheap (based on the threshold) is copied in a\n> couple places, perhaps it should be a separate function called from\n> those places.\n\nFixed.\n\n>\n> 8) Similarly to (3), maybe ReorderBufferTXNMemoryUpdate should do the\n> memory size check internally, to make the calls simpler.\n\nAgreed.\n\n>\n> 9) The ReorderBufferChangeMemoryUpdate / ReorderBufferTXNMemoryUpdate\n> split maybe not very clear. It's not clear to me why it's divided like\n> this, or why we can't simply call ReorderBufferTXNMemoryUpdate directly.\n\nI think that now we have two use cases: updating the memory counter\nafter freeing individual change, and updating the memory counter after\nfreeing all changes of the transaction (i.e., making the counter to\n0). In the former case, we need to check if the change is\nREORDER_BUFFER_CHANGE_INTERNAL_TUPLECID but we don't need to pass the\ntransaction as the change has its transaction. On the other hand, in\nthe latter case, we don't need the change but need to pass the\ntransaction. If we do both things in one function, the function would\nhave two arguments: change and txn, and the callers set either one\nthat they know. 
I've updated the patch accordingly.\n\nBTW it might be worth considering to create a separate patch for the\nupdates around ReorderBufferChangeMemoryUpdate() that batches the\nmemory counter updates as it seems an independent change from the\nmax-heap stuff.\n\n>\n> performance\n> -----------\n>\n> I did some benchmarks, to see the behavior in simple good/bad cases (see\n> the attached scripts.tgz). \"large\" is one large transaction inserting 1M\n> rows, small is 64k single-row inserts, and subxacts is the original case\n> with ~100k subxacts. Finally, subxacts-small is many transactions with\n> 128 subxacts each (the main transactions are concurrent).\n>\n> The results are pretty good, I think:\n>\n> test master patched\n> -----------------------------------------------------\n> large 2587 2459 95%\n> small 956 856 89%\n> subxacts 138915 2911 2%\n> subxacts-small 13632 13187 97%\n\nThank you for doing the performance test. I ran the same script you\nshared on my machine just in case and got similar results:\n\n master patched\nlarge: 2831 2827\nsmall: 1226 1222\nsubxacts: 134076 2744\nsubxacts-small: 23384 23127\n\nIn my case, the differences seem to be within a noise range.\n\n>\n> This is timing (ms) with logical_work_mem=4MB. I also tried with 64MB,\n> where the subxact timing goes way down, but the overall conclusions do\n> not change.\n>\n> I was a bit surprised I haven't seen any clear regression, but in the\n> end that's a good thing, right? There's a couple results in this thread\n> showing ~10% regression, but I've been unable to reproduce those.\n> Perhaps the newer patch versions fix that, I guess.\n\nYes, the 10% regression is fixed in the v4 patch. We don't update the\nmax-heap at all until the number of transactions reaches the threshold\nso I think there is mostly 0 overhead in normal cases.\n\n>\n> Anyway, I think that at some point we'd have to accept that some cases\n> may have slight regression. I think that's inherent for almost any\n> heuristics - there's always going to be some rare case that defeats it.\n> What's important is that the case needs to be rare and/or the impact\n> very limited. And I think that's true here.\n\nAgreed.\n\n>\n>\n> overall design\n> --------------\n>\n> As for the design, I agree with the approach of using a binaryheap to\n> track transactions by size. When going over the thread history,\n> describing the initial approach with only keeping \"large\" transactions\n> above some threshold (e.g. 10%), I was really concerned that'll either\n> lead to abrupt changes in behavior (when transactions move just around\n> the 10%), or won't help with many common cases (with most transactions\n> being below the limit).\n>\n> I was going to suggest some sort of \"binning\" - keeping lists for\n> transactions of similar size (e.g. <1kB, 1-2kB, 2-4kB, 4-8kB, ...) and\n> evicting transactions from a list, i.e. based on approximate size. But\n> if the indexed binary heap seems to be cheap enough, I think it's a\n> better solution.\n\nI've also considered the binning idea. But it was not clear to me how\nit works well in a case where all transactions belong to the\nparticular class. For example, if we need to free up 1MB memory, we\ncould end up evicting 2000 transactions consuming 50 bytes instead of\n100 transactions consuming 1000 bytes, resulting in that we end up\nserializing more transactions. 
Also, I'm concerned about the cost of\nmaintaining the binning lists.\n\n>\n> The one thing I'm a bit concerned about is the threshold used to start\n> using binary heap - these thresholds with binary decisions may easily\n> lead to a \"cliff\" and robustness issues, i.e. abrupt change in behavior\n> with significant runtime change (e.g. you add/remove one transaction and\n> the code takes a much more expensive path). The value (1024) seems\n> rather arbitrary, I wonder if there's something to justify that choice.\n\nTrue. 1024 seems small to me. In my environment, I started to see a\nbig difference from around 40000 transactions. But it varies depending\non environments and workloads.\n\nI think that this performance problem we're addressing doesn't\nnormally happen as long as all transactions being decoded are\ntop-level transactions. Otherwise, we also need to improve\nReorderBufferLargestStreamableTopTXN(). Given this fact, I think\nmax_connections = 1024 is a possible value in some systems, and I've\nobserved such systems sometimes. On the other hand, I've observed >\n5000 in just a few cases, and having more than 5000 transactions in\nReorderBuffer seems unlikely to happen without subtransactions. I\nthink we can say it's an extreme case, the number is still an\narbitrary number though.\n\nOr probably we can compute the threshold based on max_connections,\ne.g., max_connections * 10. That way, we can ensure that users won't\nincur the max-heap maintenance costs as long as they don't use\nsubtransactions.\n\n>\n> In any case, I agree it'd be good to have some dampening factor, to\n> reduce the risk of trashing because of adding/removing a single\n> transaction to the decoding.\n>\n>\n> related stuff / GenerationContext\n> ---------------------------------\n>\n> It's not the fault of this patch, but this reminds me I have some doubts\n> about how the eviction interferes with using the GenerationContext for\n> some of the data. I suspect we can easily get into a situation where we\n> evict the largest transaction, but that doesn't actually reduce the\n> memory usage at all, because the memory context blocks are shared with\n> some other transactions and don't get 100% empty (so we can't release\n> them). But it's actually worse, because GenerationContext does not even\n> reuse this memory. So do we even gain anything by the eviction?\n>\n> When the earlier patch versions also considered age of the transaction,\n> to try evicting the older ones first, I think that was interesting. I\n> think we may want to do something like this even with the binary heap.\n\nThank you for raising this issue. This is one of the highest priority\nitems in my backlog. We've seen cases where the logical decoding uses\nmuch more memory than logical_decoding_work_mem value[1][2] (e.g. it\nused 4GB memory even though the logical_decoding_work_mem was 256kB).\nI think that the problem would still happen even with this improvement\non the eviction.\n\nI believe these are separate problems we can address, and evicting\nlarge transactions first would still be the right strategy. We might\nwant to improve how we store changes in memory contexts. For example,\nit might be worth having per-transaction memory context so that we can\nactually free memory blocks by the eviction. 
We can discuss it in a\nseparate thread.\n\n>\n>\n> related stuff / increase of logical_decoding_work_mem\n> -----------------------------------------------------\n>\n> In the thread, one of the \"alternatives to spilling\" suggested in the\n> thread was to enable streaming, but I think there's often a much more\n> efficient alternative - increase the amount of memory, so that we don't\n> actually need to spill.\n\nAgreed.\n\n>\n> For example, a system may be doing a lot of eviction / spilling with\n> logical_decoding_work_mem=64MB, but setting 128MB may completely\n> eliminate that. Of course, if there are large transactions, this may not\n> be possible (the GUC would have to exceed RAM). But I don't think that's\n> very common, the incidents that I've observed were often resolved by\n> bumping the logical_decoding_work_mem by a little bit.\n>\n> I wonder if there's something we might do to help users to tune this. We\n> should be able to measure the \"peak\" memory usage (how much memory we'd\n> need to not spill), so maybe we could log that as a WARNING, similarly\n> to checkpoints - there we only log \"checkpoints too frequent, tune WAL\n> limits\", but perhaps we might do more here? Or maybe we could add the\n> watermark to the system catalog?\n\nInteresting ideas.\n\nThe statistics such as spill_count shown in pg_stat_replication_slots\nview could already give hints to users to increase the\nlogical_decoding_work_mem. In addition to that, it's an interesting\nidea to have the high water mark in the view.\n\n\nI've attached updated patches.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAMnUB3oYugXCBLSkih%2BqNsWQPciEwos6g_AMbnz_peNoxfHwyw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/17974-f8c9d353a62f414d%40postgresql.org\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 26 Feb 2024 15:46:53 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
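A minimal stand-alone sketch of the two-state tracking discussed above: the max-heap is only built once the number of tracked transactions crosses a threshold, and is dropped again only once the count falls well below it, so the common case pays no maintenance cost and the switch does not thrash right at the boundary. The threshold value, the 90% factor and all names here are illustrative (the thread weighs 1024 against something derived from max_connections), and the heap calls are stubs.

#include <stdio.h>

#define TRACK_THRESHOLD     1024
#define UNTRACK_THRESHOLD   (TRACK_THRESHOLD * 9 / 10)  /* hysteresis */

typedef enum
{
    MEM_TRACK_NO_MAXHEAP,       /* few txns: find the largest by linear scan */
    MEM_TRACK_MAINTAIN_MAXHEAP  /* many txns: keep them in an indexed max-heap */
} MemTrackState;

static MemTrackState state = MEM_TRACK_NO_MAXHEAP;
static int           ntxns = 0;

/* stubs standing in for building/resetting the real max-heap */
static void build_heap_from_all_txns(void) {}
static void reset_heap(void) {}

static void
txn_added(void)
{
    ntxns++;
    if (state == MEM_TRACK_NO_MAXHEAP && ntxns >= TRACK_THRESHOLD)
    {
        build_heap_from_all_txns();
        state = MEM_TRACK_MAINTAIN_MAXHEAP;
    }
}

static void
txn_removed(void)
{
    ntxns--;
    /* switch back only well below the enable threshold, to avoid thrashing */
    if (state == MEM_TRACK_MAINTAIN_MAXHEAP && ntxns < UNTRACK_THRESHOLD)
    {
        reset_heap();
        state = MEM_TRACK_NO_MAXHEAP;
    }
}

int
main(void)
{
    int i;

    for (i = 0; i < 1100; i++)
        txn_added();
    printf("after adds: state=%d ntxns=%d\n", state, ntxns);
    for (i = 0; i < 300; i++)
        txn_removed();
    printf("after removes: state=%d ntxns=%d\n", state, ntxns);
    return 0;
}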
{
"msg_contents": "On Fri, Feb 23, 2024 at 6:24 PM vignesh C <[email protected]> wrote:\n>\n> On Fri, 9 Feb 2024 at 20:51, Masahiko Sawada <[email protected]> wrote:\n> >\n> >\n> > I think this performance regression is not acceptable. In this\n> > workload, one transaction has 10k subtransactions and the logical\n> > decoding becomes quite slow if logical_decoding_work_mem is not big\n> > enough. Therefore, it's a legitimate and common approach to increase\n> > logical_decoding_work_mem to speedup the decoding. However, with thie\n> > patch, the decoding becomes slower than today. It's a bad idea in\n> > general to optimize an extreme case while sacrificing the normal (or\n> > more common) cases.\n> >\n>\n> Since this same function is used by pg_dump sorting TopoSort functions\n> also, we can just verify once if there is no performance impact with\n> large number of objects during dump sorting:\n\nOkay. I've run the pg_dump regression tests with --timer flag (note\nthat pg_dump doesn't use indexed binary heap):\n\nmaster:\n[16:00:25] t/001_basic.pl ................ ok 151 ms ( 0.00 usr\n0.00 sys + 0.09 cusr 0.06 csys = 0.15 CPU)\n[16:00:25] t/002_pg_dump.pl .............. ok 10157 ms ( 0.23 usr\n0.01 sys + 1.48 cusr 0.37 csys = 2.09 CPU)\n[16:00:36] t/003_pg_dump_with_server.pl .. ok 504 ms ( 0.00 usr\n0.01 sys + 0.10 cusr 0.07 csys = 0.18 CPU)\n[16:00:36] t/004_pg_dump_parallel.pl ..... ok 1044 ms ( 0.00 usr\n0.00 sys + 0.12 cusr 0.08 csys = 0.20 CPU)\n[16:00:37] t/005_pg_dump_filterfile.pl ... ok 2390 ms ( 0.00 usr\n0.00 sys + 0.34 cusr 0.19 csys = 0.53 CPU)\n[16:00:40] t/010_dump_connstr.pl ......... ok 4813 ms ( 0.01 usr\n0.00 sys + 2.13 cusr 0.45 csys = 2.59 CPU)\n\npatched:\n[15:59:47] t/001_basic.pl ................ ok 150 ms ( 0.00 usr\n0.00 sys + 0.08 cusr 0.07 csys = 0.15 CPU)\n[15:59:47] t/002_pg_dump.pl .............. ok 10057 ms ( 0.23 usr\n0.02 sys + 1.49 cusr 0.36 csys = 2.10 CPU)\n[15:59:57] t/003_pg_dump_with_server.pl .. ok 509 ms ( 0.00 usr\n0.00 sys + 0.09 cusr 0.08 csys = 0.17 CPU)\n[15:59:58] t/004_pg_dump_parallel.pl ..... ok 1048 ms ( 0.01 usr\n0.00 sys + 0.11 cusr 0.11 csys = 0.23 CPU)\n[15:59:59] t/005_pg_dump_filterfile.pl ... ok 2398 ms ( 0.00 usr\n0.00 sys + 0.34 cusr 0.20 csys = 0.54 CPU)\n[16:00:01] t/010_dump_connstr.pl ......... ok 4762 ms ( 0.01 usr\n0.00 sys + 2.15 cusr 0.42 csys = 2.58 CPU)\n\nThere is no noticeable difference between the two results.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Feb 2024 16:02:26 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "\n\nOn 2/26/24 07:46, Masahiko Sawada wrote:\n> On Sat, Feb 24, 2024 at 1:29 AM Tomas Vondra\n> <[email protected]> wrote:\n>>...\n>>\n>> overall design\n>> --------------\n>>\n>> As for the design, I agree with the approach of using a binaryheap to\n>> track transactions by size. When going over the thread history,\n>> describing the initial approach with only keeping \"large\" transactions\n>> above some threshold (e.g. 10%), I was really concerned that'll either\n>> lead to abrupt changes in behavior (when transactions move just around\n>> the 10%), or won't help with many common cases (with most transactions\n>> being below the limit).\n>>\n>> I was going to suggest some sort of \"binning\" - keeping lists for\n>> transactions of similar size (e.g. <1kB, 1-2kB, 2-4kB, 4-8kB, ...) and\n>> evicting transactions from a list, i.e. based on approximate size. But\n>> if the indexed binary heap seems to be cheap enough, I think it's a\n>> better solution.\n> \n> I've also considered the binning idea. But it was not clear to me how\n> it works well in a case where all transactions belong to the\n> particular class. For example, if we need to free up 1MB memory, we\n> could end up evicting 2000 transactions consuming 50 bytes instead of\n> 100 transactions consuming 1000 bytes, resulting in that we end up\n> serializing more transactions. Also, I'm concerned about the cost of\n> maintaining the binning lists.\n> \n\nI don't think the list maintenance would be very costly - in particular,\nthe lists would not need to be sorted by size. You're right in some\nextreme cases we might evict the smallest transactions in the list. I\nthink on average we'd evict transactions with average size, which seems\nOK for this use case.\n\nAnyway, I don't think we need to be distracted with this. I mentioned it\nmerely to show it was considered, but the heap seems to work well\nenough, and in the end is even simpler because the complexity is hidden\noutside reorderbuffer.\n\n>>\n>> The one thing I'm a bit concerned about is the threshold used to start\n>> using binary heap - these thresholds with binary decisions may easily\n>> lead to a \"cliff\" and robustness issues, i.e. abrupt change in behavior\n>> with significant runtime change (e.g. you add/remove one transaction and\n>> the code takes a much more expensive path). The value (1024) seems\n>> rather arbitrary, I wonder if there's something to justify that choice.\n> \n> True. 1024 seems small to me. In my environment, I started to see a\n> big difference from around 40000 transactions. But it varies depending\n> on environments and workloads.\n> \n> I think that this performance problem we're addressing doesn't\n> normally happen as long as all transactions being decoded are\n> top-level transactions. Otherwise, we also need to improve\n> ReorderBufferLargestStreamableTopTXN(). Given this fact, I think\n> max_connections = 1024 is a possible value in some systems, and I've\n> observed such systems sometimes. On the other hand, I've observed >\n> 5000 in just a few cases, and having more than 5000 transactions in\n> ReorderBuffer seems unlikely to happen without subtransactions. I\n> think we can say it's an extreme case, the number is still an\n> arbitrary number though.\n> \n> Or probably we can compute the threshold based on max_connections,\n> e.g., max_connections * 10. 
That way, we can ensure that users won't\n> incur the max-heap maintenance costs as long as they don't use\n> subtransactions.\n> \n\nTying this to max_connections seems like an interesting option. It'd\nmake this adaptive to a system. I haven't thought about the exact value\n(m_c * 10), but it seems better than arbitrary hard-coded values.\n\n>>\n>> In any case, I agree it'd be good to have some dampening factor, to\n>> reduce the risk of trashing because of adding/removing a single\n>> transaction to the decoding.\n>>\n>>\n>> related stuff / GenerationContext\n>> ---------------------------------\n>>\n>> It's not the fault of this patch, but this reminds me I have some doubts\n>> about how the eviction interferes with using the GenerationContext for\n>> some of the data. I suspect we can easily get into a situation where we\n>> evict the largest transaction, but that doesn't actually reduce the\n>> memory usage at all, because the memory context blocks are shared with\n>> some other transactions and don't get 100% empty (so we can't release\n>> them). But it's actually worse, because GenerationContext does not even\n>> reuse this memory. So do we even gain anything by the eviction?\n>>\n>> When the earlier patch versions also considered age of the transaction,\n>> to try evicting the older ones first, I think that was interesting. I\n>> think we may want to do something like this even with the binary heap.\n> \n> Thank you for raising this issue. This is one of the highest priority\n> items in my backlog. We've seen cases where the logical decoding uses\n> much more memory than logical_decoding_work_mem value[1][2] (e.g. it\n> used 4GB memory even though the logical_decoding_work_mem was 256kB).\n> I think that the problem would still happen even with this improvement\n> on the eviction.\n> \n> I believe these are separate problems we can address, and evicting\n> large transactions first would still be the right strategy. We might\n> want to improve how we store changes in memory contexts. For example,\n> it might be worth having per-transaction memory context so that we can\n> actually free memory blocks by the eviction. We can discuss it in a\n> separate thread.\n> \n\nYes, I think using per-transaction context for large transactions might\nwork. I don't think we want too many contexts, so we'd start with the\nshared context, and then at some point (when the transaction exceeds say\n5% of the memory limit) we'd switch it to a separate one.\n\nBut that's a matter for a separate patch, so let's discuss elsewhere.\n\n>>\n>> For example, a system may be doing a lot of eviction / spilling with\n>> logical_decoding_work_mem=64MB, but setting 128MB may completely\n>> eliminate that. Of course, if there are large transactions, this may not\n>> be possible (the GUC would have to exceed RAM). But I don't think that's\n>> very common, the incidents that I've observed were often resolved by\n>> bumping the logical_decoding_work_mem by a little bit.\n>>\n>> I wonder if there's something we might do to help users to tune this. We\n>> should be able to measure the \"peak\" memory usage (how much memory we'd\n>> need to not spill), so maybe we could log that as a WARNING, similarly\n>> to checkpoints - there we only log \"checkpoints too frequent, tune WAL\n>> limits\", but perhaps we might do more here? 
Or maybe we could add the\n>> watermark to the system catalog?\n> \n> Interesting ideas.\n> \n> The statistics such as spill_count shown in pg_stat_replication_slots\n> view could already give hints to users to increase the\n> logical_decoding_work_mem. In addition to that, it's an interesting\n> idea to have the high water mark in the view.\n> \n\nThe spill statistics are useful, but I'm not sure it can answer the main\nquestion:\n\n How high would the memory limit need to be to not spill?\n\nMaybe there's something we can measure / log to help with this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 26 Feb 2024 10:43:31 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 6:43 PM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 2/26/24 07:46, Masahiko Sawada wrote:\n> > On Sat, Feb 24, 2024 at 1:29 AM Tomas Vondra\n> > <[email protected]> wrote:\n> >>...\n> >>\n> >> overall design\n> >> --------------\n> >>\n> >> As for the design, I agree with the approach of using a binaryheap to\n> >> track transactions by size. When going over the thread history,\n> >> describing the initial approach with only keeping \"large\" transactions\n> >> above some threshold (e.g. 10%), I was really concerned that'll either\n> >> lead to abrupt changes in behavior (when transactions move just around\n> >> the 10%), or won't help with many common cases (with most transactions\n> >> being below the limit).\n> >>\n> >> I was going to suggest some sort of \"binning\" - keeping lists for\n> >> transactions of similar size (e.g. <1kB, 1-2kB, 2-4kB, 4-8kB, ...) and\n> >> evicting transactions from a list, i.e. based on approximate size. But\n> >> if the indexed binary heap seems to be cheap enough, I think it's a\n> >> better solution.\n> >\n> > I've also considered the binning idea. But it was not clear to me how\n> > it works well in a case where all transactions belong to the\n> > particular class. For example, if we need to free up 1MB memory, we\n> > could end up evicting 2000 transactions consuming 50 bytes instead of\n> > 100 transactions consuming 1000 bytes, resulting in that we end up\n> > serializing more transactions. Also, I'm concerned about the cost of\n> > maintaining the binning lists.\n> >\n>\n> I don't think the list maintenance would be very costly - in particular,\n> the lists would not need to be sorted by size. You're right in some\n> extreme cases we might evict the smallest transactions in the list. I\n> think on average we'd evict transactions with average size, which seems\n> OK for this use case.\n>\n> Anyway, I don't think we need to be distracted with this. I mentioned it\n> merely to show it was considered, but the heap seems to work well\n> enough, and in the end is even simpler because the complexity is hidden\n> outside reorderbuffer.\n>\n> >>\n> >> The one thing I'm a bit concerned about is the threshold used to start\n> >> using binary heap - these thresholds with binary decisions may easily\n> >> lead to a \"cliff\" and robustness issues, i.e. abrupt change in behavior\n> >> with significant runtime change (e.g. you add/remove one transaction and\n> >> the code takes a much more expensive path). The value (1024) seems\n> >> rather arbitrary, I wonder if there's something to justify that choice.\n> >\n> > True. 1024 seems small to me. In my environment, I started to see a\n> > big difference from around 40000 transactions. But it varies depending\n> > on environments and workloads.\n> >\n> > I think that this performance problem we're addressing doesn't\n> > normally happen as long as all transactions being decoded are\n> > top-level transactions. Otherwise, we also need to improve\n> > ReorderBufferLargestStreamableTopTXN(). Given this fact, I think\n> > max_connections = 1024 is a possible value in some systems, and I've\n> > observed such systems sometimes. On the other hand, I've observed >\n> > 5000 in just a few cases, and having more than 5000 transactions in\n> > ReorderBuffer seems unlikely to happen without subtransactions. 
I\n> > think we can say it's an extreme case, the number is still an\n> > arbitrary number though.\n> >\n> > Or probably we can compute the threshold based on max_connections,\n> > e.g., max_connections * 10. That way, we can ensure that users won't\n> > incur the max-heap maintenance costs as long as they don't use\n> > subtransactions.\n> >\n>\n> Tying this to max_connections seems like an interesting option. It'd\n> make this adaptive to a system. I haven't thought about the exact value\n> (m_c * 10), but it seems better than arbitrary hard-coded values.\n\nI've updated the patch accordingly, using MaxConnections for now. I've\nalso updated some comments and commit messages and added typedef.list\nchanges.\n\n>\n> >>\n> >> In any case, I agree it'd be good to have some dampening factor, to\n> >> reduce the risk of trashing because of adding/removing a single\n> >> transaction to the decoding.\n> >>\n> >>\n> >> related stuff / GenerationContext\n> >> ---------------------------------\n> >>\n> >> It's not the fault of this patch, but this reminds me I have some doubts\n> >> about how the eviction interferes with using the GenerationContext for\n> >> some of the data. I suspect we can easily get into a situation where we\n> >> evict the largest transaction, but that doesn't actually reduce the\n> >> memory usage at all, because the memory context blocks are shared with\n> >> some other transactions and don't get 100% empty (so we can't release\n> >> them). But it's actually worse, because GenerationContext does not even\n> >> reuse this memory. So do we even gain anything by the eviction?\n> >>\n> >> When the earlier patch versions also considered age of the transaction,\n> >> to try evicting the older ones first, I think that was interesting. I\n> >> think we may want to do something like this even with the binary heap.\n> >\n> > Thank you for raising this issue. This is one of the highest priority\n> > items in my backlog. We've seen cases where the logical decoding uses\n> > much more memory than logical_decoding_work_mem value[1][2] (e.g. it\n> > used 4GB memory even though the logical_decoding_work_mem was 256kB).\n> > I think that the problem would still happen even with this improvement\n> > on the eviction.\n> >\n> > I believe these are separate problems we can address, and evicting\n> > large transactions first would still be the right strategy. We might\n> > want to improve how we store changes in memory contexts. For example,\n> > it might be worth having per-transaction memory context so that we can\n> > actually free memory blocks by the eviction. We can discuss it in a\n> > separate thread.\n> >\n>\n> Yes, I think using per-transaction context for large transactions might\n> work. I don't think we want too many contexts, so we'd start with the\n> shared context, and then at some point (when the transaction exceeds say\n> 5% of the memory limit) we'd switch it to a separate one.\n>\n> But that's a matter for a separate patch, so let's discuss elsewhere.\n\n+1\n\n>\n> >>\n> >> For example, a system may be doing a lot of eviction / spilling with\n> >> logical_decoding_work_mem=64MB, but setting 128MB may completely\n> >> eliminate that. Of course, if there are large transactions, this may not\n> >> be possible (the GUC would have to exceed RAM). 
But I don't think that's\n> >> very common, the incidents that I've observed were often resolved by\n> >> bumping the logical_decoding_work_mem by a little bit.\n> >>\n> >> I wonder if there's something we might do to help users to tune this. We\n> >> should be able to measure the \"peak\" memory usage (how much memory we'd\n> >> need to not spill), so maybe we could log that as a WARNING, similarly\n> >> to checkpoints - there we only log \"checkpoints too frequent, tune WAL\n> >> limits\", but perhaps we might do more here? Or maybe we could add the\n> >> watermark to the system catalog?\n> >\n> > Interesting ideas.\n> >\n> > The statistics such as spill_count shown in pg_stat_replication_slots\n> > view could already give hints to users to increase the\n> > logical_decoding_work_mem. In addition to that, it's an interesting\n> > idea to have the high water mark in the view.\n> >\n>\n> The spill statistics are useful, but I'm not sure it can answer the main\n> question:\n>\n> How high would the memory limit need to be to not spill?\n>\n> Maybe there's something we can measure / log to help with this.\n\nRight. I like the idea of the high watermark. The\npg_stat_replication_slots would be the place to store such\ninformation. Since the reorder buffer evicts or streams transactions\nanyway based on the logical_decoding_work_mem, probably we need to\ncompute the maximum amount of data in the reorder buffer at one point\nin time while assuming no transactions were evicted and streamed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 26 Feb 2024 23:23:42 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
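To make the "binning" alternative discussed in the message above concrete, here is a minimal, self-contained C sketch of that idea. Every name in it (TxnNode, bins, size_to_bin, and so on) is made up purely for illustration and none of it comes from the patch: transactions are filed into power-of-two size classes, and eviction pops from the largest non-empty class, which is O(1) per operation but only approximately picks the largest transaction, which is exactly the trade-off noted in the thread.

#include <stdio.h>
#include <stdlib.h>

#define NUM_BINS 32

typedef struct TxnNode
{
    size_t          size;       /* accumulated change size of the txn */
    struct TxnNode *next;       /* singly-linked list within a bin */
} TxnNode;

/* bins[i] holds transactions whose size falls in [2^i, 2^(i+1)) */
static TxnNode *bins[NUM_BINS];

/* Map a transaction size to its size-class bucket (floor(log2(size))). */
static int
size_to_bin(size_t size)
{
    int         bin = 0;

    while (size >>= 1)
        bin++;
    return bin < NUM_BINS ? bin : NUM_BINS - 1;
}

/* Insert a transaction; O(1), no ordering maintained inside a bin. */
static void
bin_insert(TxnNode *txn)
{
    int         bin = size_to_bin(txn->size);

    txn->next = bins[bin];
    bins[bin] = txn;
}

/* Pick a victim from the largest non-empty bin (only approximately the
 * largest transaction overall). */
static TxnNode *
bin_pick_victim(void)
{
    for (int bin = NUM_BINS - 1; bin >= 0; bin--)
    {
        if (bins[bin] != NULL)
        {
            TxnNode    *victim = bins[bin];

            bins[bin] = victim->next;
            return victim;
        }
    }
    return NULL;
}

int
main(void)
{
    size_t      sizes[] = {50, 1000, 4096, 200000};

    for (int i = 0; i < 4; i++)
    {
        TxnNode    *t = malloc(sizeof(TxnNode));

        t->size = sizes[i];
        bin_insert(t);
    }
    printf("victim size: %zu\n", bin_pick_victim()->size);  /* prints 200000 */
    return 0;
}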
{
"msg_contents": "On Mon, 26 Feb 2024 at 12:33, Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Feb 23, 2024 at 6:24 PM vignesh C <[email protected]> wrote:\n> >\n> > On Fri, 9 Feb 2024 at 20:51, Masahiko Sawada <[email protected]> wrote:\n> > >\n> > >\n> > > I think this performance regression is not acceptable. In this\n> > > workload, one transaction has 10k subtransactions and the logical\n> > > decoding becomes quite slow if logical_decoding_work_mem is not big\n> > > enough. Therefore, it's a legitimate and common approach to increase\n> > > logical_decoding_work_mem to speedup the decoding. However, with thie\n> > > patch, the decoding becomes slower than today. It's a bad idea in\n> > > general to optimize an extreme case while sacrificing the normal (or\n> > > more common) cases.\n> > >\n> >\n> > Since this same function is used by pg_dump sorting TopoSort functions\n> > also, we can just verify once if there is no performance impact with\n> > large number of objects during dump sorting:\n>\n> Okay. I've run the pg_dump regression tests with --timer flag (note\n> that pg_dump doesn't use indexed binary heap):\n>\n> master:\n> [16:00:25] t/001_basic.pl ................ ok 151 ms ( 0.00 usr\n> 0.00 sys + 0.09 cusr 0.06 csys = 0.15 CPU)\n> [16:00:25] t/002_pg_dump.pl .............. ok 10157 ms ( 0.23 usr\n> 0.01 sys + 1.48 cusr 0.37 csys = 2.09 CPU)\n> [16:00:36] t/003_pg_dump_with_server.pl .. ok 504 ms ( 0.00 usr\n> 0.01 sys + 0.10 cusr 0.07 csys = 0.18 CPU)\n> [16:00:36] t/004_pg_dump_parallel.pl ..... ok 1044 ms ( 0.00 usr\n> 0.00 sys + 0.12 cusr 0.08 csys = 0.20 CPU)\n> [16:00:37] t/005_pg_dump_filterfile.pl ... ok 2390 ms ( 0.00 usr\n> 0.00 sys + 0.34 cusr 0.19 csys = 0.53 CPU)\n> [16:00:40] t/010_dump_connstr.pl ......... ok 4813 ms ( 0.01 usr\n> 0.00 sys + 2.13 cusr 0.45 csys = 2.59 CPU)\n>\n> patched:\n> [15:59:47] t/001_basic.pl ................ ok 150 ms ( 0.00 usr\n> 0.00 sys + 0.08 cusr 0.07 csys = 0.15 CPU)\n> [15:59:47] t/002_pg_dump.pl .............. ok 10057 ms ( 0.23 usr\n> 0.02 sys + 1.49 cusr 0.36 csys = 2.10 CPU)\n> [15:59:57] t/003_pg_dump_with_server.pl .. ok 509 ms ( 0.00 usr\n> 0.00 sys + 0.09 cusr 0.08 csys = 0.17 CPU)\n> [15:59:58] t/004_pg_dump_parallel.pl ..... ok 1048 ms ( 0.01 usr\n> 0.00 sys + 0.11 cusr 0.11 csys = 0.23 CPU)\n> [15:59:59] t/005_pg_dump_filterfile.pl ... ok 2398 ms ( 0.00 usr\n> 0.00 sys + 0.34 cusr 0.20 csys = 0.54 CPU)\n> [16:00:01] t/010_dump_connstr.pl ......... ok 4762 ms ( 0.01 usr\n> 0.00 sys + 2.15 cusr 0.42 csys = 2.58 CPU)\n>\n> There is no noticeable difference between the two results.\n\nThanks for verifying it, I have also run in my environment and found\nno noticeable difference between them:\nHead:\n[07:29:41] t/001_basic.pl ................ ok 332 ms\n[07:29:41] t/002_pg_dump.pl .............. ok 11029 ms\n[07:29:52] t/003_pg_dump_with_server.pl .. ok 705 ms\n[07:29:53] t/004_pg_dump_parallel.pl ..... ok 1198 ms\n[07:29:54] t/005_pg_dump_filterfile.pl ... ok 2822 ms\n[07:29:57] t/010_dump_connstr.pl ......... ok 5582 ms\n\nWith Patch:\n[07:42:16] t/001_basic.pl ................ ok 328 ms\n[07:42:17] t/002_pg_dump.pl .............. ok 11044 ms\n[07:42:28] t/003_pg_dump_with_server.pl .. ok 719 ms\n[07:42:29] t/004_pg_dump_parallel.pl ..... ok 1188 ms\n[07:42:30] t/005_pg_dump_filterfile.pl ... ok 2816 ms\n[07:42:33] t/010_dump_connstr.pl ......... ok 5609 ms\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 27 Feb 2024 20:25:58 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 7:54 PM Masahiko Sawada <[email protected]> wrote:\n>\n\nA few comments on 0003:\n===================\n1.\n+/*\n+ * Threshold of the total number of top-level and sub transactions\nthat controls\n+ * whether we switch the memory track state. While the MAINTAIN_HEAP state is\n+ * effective when there are many transactions being decoded, in many systems\n+ * there is generally no need to use it as long as all transactions\nbeing decoded\n+ * are top-level transactions. Therefore, we use MaxConnections as\nthe threshold\n+ * so we can prevent switch to the state unless we use subtransactions.\n+ */\n+#define REORDER_BUFFER_MEM_TRACK_THRESHOLD MaxConnections\n\nThe comment seems to imply that MAINTAIN_HEAP is useful for large\nnumber of transactions but ReorderBufferLargestTXN() switches to this\nstate even when there is one transaction. So, basically we use the\nbinary_heap technique to get the largest even when we have one\ntransaction but we don't maintain that heap unless we have\nREORDER_BUFFER_MEM_TRACK_THRESHOLD number of transactions are\nin-progress. This means there is some additional work when (build and\nreset heap each time when we pick largest xact) we have fewer\ntransactions in the system but that may not be impacting us because of\nother costs involved like serializing all the changes. I think once we\ncan try to stress test this by setting\ndebug_logical_replication_streaming to 'immediate' to see if the new\nmechanism has any overhead.\n\n2. Can we somehow measure the additional memory that will be consumed\nby each backend/walsender to maintain transactions? Because I think\nthis also won't be accounted for logical_decoding_work_mem, so if this\nis large, there could be a chance of more complaints on us for not\nhonoring logical_decoding_work_mem.\n\n3.\n@@ -3707,11 +3817,14 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn)\n\n ReorderBufferSerializeChange(rb, txn, fd, change);\n dlist_delete(&change->node);\n- ReorderBufferReturnChange(rb, change, true);\n+ ReorderBufferReturnChange(rb, change, false);\n\n spilled++;\n }\n\n+ /* Update the memory counter */\n+ ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n\nIn ReorderBufferSerializeTXN(), we already use a size variable for\ntxn->size, we can probably use that for the sake of consistency.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 28 Feb 2024 11:39:55 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 3:10 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Feb 26, 2024 at 7:54 PM Masahiko Sawada <[email protected]> wrote:\n> >\n>\n\nThank you for the comments!\n\n> A few comments on 0003:\n> ===================\n> 1.\n> +/*\n> + * Threshold of the total number of top-level and sub transactions\n> that controls\n> + * whether we switch the memory track state. While the MAINTAIN_HEAP state is\n> + * effective when there are many transactions being decoded, in many systems\n> + * there is generally no need to use it as long as all transactions\n> being decoded\n> + * are top-level transactions. Therefore, we use MaxConnections as\n> the threshold\n> + * so we can prevent switch to the state unless we use subtransactions.\n> + */\n> +#define REORDER_BUFFER_MEM_TRACK_THRESHOLD MaxConnections\n>\n> The comment seems to imply that MAINTAIN_HEAP is useful for large\n> number of transactions but ReorderBufferLargestTXN() switches to this\n> state even when there is one transaction. So, basically we use the\n> binary_heap technique to get the largest even when we have one\n> transaction but we don't maintain that heap unless we have\n> REORDER_BUFFER_MEM_TRACK_THRESHOLD number of transactions are\n> in-progress.\n\nRight.\n\n> This means there is some additional work when (build and\n> reset heap each time when we pick largest xact) we have fewer\n> transactions in the system but that may not be impacting us because of\n> other costs involved like serializing all the changes. I think once we\n> can try to stress test this by setting\n> debug_logical_replication_streaming to 'immediate' to see if the new\n> mechanism has any overhead.\n\nAgreed.\n\nI've done performance tests that decodes 10k small transactions\n(pgbench transactions) with debug_logical_replication_streaming =\n'immediate':\n\nmaster: 6263.022 ms\npatched: 6403.873 ms\n\nI don't see noticeable regressions.\n\n>\n> 2. Can we somehow measure the additional memory that will be consumed\n> by each backend/walsender to maintain transactions? Because I think\n> this also won't be accounted for logical_decoding_work_mem, so if this\n> is large, there could be a chance of more complaints on us for not\n> honoring logical_decoding_work_mem.\n\nGood point.\n\nWe initialize the binaryheap with MaxConnections * 2 entries and the\nbinaryheap entries are pointers. So we use additional (8 * 100 * 2)\nbytes with the default max_connections setting even when there is one\ntransaction, and could use more memory when adding more transactions.\n\nI think there is still room for considering how to determine the\nthreshold and the number of initial entries. Using MaxConnections\nseems to work but it always uses the current MaxConnections value\ninstead of the value that was set at a time when WAL records were\nwritten. As for the initial number of entries in binaryheap, I think\nwe can the threshold value as the initial number of entries instead of\n(threshold * 2). 
Or we might want to use the same value, 1000, as the\none we use for buffer->by_txn hash table.\n\n>\n> 3.\n> @@ -3707,11 +3817,14 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb,\n> ReorderBufferTXN *txn)\n>\n> ReorderBufferSerializeChange(rb, txn, fd, change);\n> dlist_delete(&change->node);\n> - ReorderBufferReturnChange(rb, change, true);\n> + ReorderBufferReturnChange(rb, change, false);\n>\n> spilled++;\n> }\n>\n> + /* Update the memory counter */\n> + ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n>\n> In ReorderBufferSerializeTXN(), we already use a size variable for\n> txn->size, we can probably use that for the sake of consistency.\n\nAgreed, will fix it.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Feb 2024 10:24:20 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
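As a rough illustration of the arithmetic in the message above, the following tiny program works out the up-front cost of the pre-allocated node array. The assumptions are pointer-sized heap nodes, an initial capacity of MaxConnections * 2, and the default max_connections of 100; the node-index hash table adds further overhead that is not counted here.

#include <stdio.h>

int
main(void)
{
    int         max_connections = 100;          /* default setting */
    size_t      node_size = sizeof(void *);     /* heap stores pointer-sized nodes */
    size_t      initial_entries = (size_t) max_connections * 2;

    /* 8 * 100 * 2 = 1600 bytes on a 64-bit platform, before any growth */
    printf("initial node array: %zu bytes\n", node_size * initial_entries);
    return 0;
}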
{
"msg_contents": "Hi, Here are some review comments for v7-0001\n\n1.\n/*\n * binaryheap_free\n *\n * Releases memory used by the given binaryheap.\n */\nvoid\nbinaryheap_free(binaryheap *heap)\n{\npfree(heap);\n}\n\n\nShouldn't the above function (not modified by the patch) also firstly\nfree the memory allocated for the heap->bh_nodes?\n\n~~~\n\n2.\n+/*\n+ * Make sure there is enough space for nodes.\n+ */\n+static void\n+bh_enlarge_node_array(binaryheap *heap)\n+{\n+ heap->bh_space *= 2;\n+ heap->bh_nodes = repalloc(heap->bh_nodes,\n+ sizeof(bh_node_type) * heap->bh_space);\n+}\n\nStrictly speaking, this function doesn't really \"Make sure\" of\nanything because the caller does the check whether we need more space.\nAll that happens here is allocating more space. Maybe this function\ncomment should say something like \"Double the space allocated for\nnodes.\"\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 5 Mar 2024 13:25:00 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
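To make the two review points above concrete, here is a sketch of what the discussed fixes could look like once bh_nodes is allocated separately from the heap struct. This is illustrative rather than committed code, and it assumes the field layout shown in the quoted patch (bh_nodes and bh_space as ordinary members of binaryheap).

void
binaryheap_free(binaryheap *heap)
{
    /* the node array is no longer part of the heap allocation, so free it too */
    pfree(heap->bh_nodes);
    pfree(heap);
}

/*
 * Double the space allocated for nodes.
 */
static void
bh_enlarge_node_array(binaryheap *heap)
{
    heap->bh_space *= 2;
    heap->bh_nodes = repalloc(heap->bh_nodes,
                              sizeof(bh_node_type) * heap->bh_space);
}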
{
"msg_contents": "On Wed, 28 Feb 2024 at 11:40, Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Feb 26, 2024 at 7:54 PM Masahiko Sawada <[email protected]> wrote:\n> >\n>\n> A few comments on 0003:\n> ===================\n> 1.\n> +/*\n> + * Threshold of the total number of top-level and sub transactions\n> that controls\n> + * whether we switch the memory track state. While the MAINTAIN_HEAP state is\n> + * effective when there are many transactions being decoded, in many systems\n> + * there is generally no need to use it as long as all transactions\n> being decoded\n> + * are top-level transactions. Therefore, we use MaxConnections as\n> the threshold\n> + * so we can prevent switch to the state unless we use subtransactions.\n> + */\n> +#define REORDER_BUFFER_MEM_TRACK_THRESHOLD MaxConnections\n>\n> The comment seems to imply that MAINTAIN_HEAP is useful for large\n> number of transactions but ReorderBufferLargestTXN() switches to this\n> state even when there is one transaction. So, basically we use the\n> binary_heap technique to get the largest even when we have one\n> transaction but we don't maintain that heap unless we have\n> REORDER_BUFFER_MEM_TRACK_THRESHOLD number of transactions are\n> in-progress. This means there is some additional work when (build and\n> reset heap each time when we pick largest xact) we have fewer\n> transactions in the system but that may not be impacting us because of\n> other costs involved like serializing all the changes. I think once we\n> can try to stress test this by setting\n> debug_logical_replication_streaming to 'immediate' to see if the new\n> mechanism has any overhead.\n\nI ran the test with a transaction having many inserts:\n\n | 5000 | 10000 | 20000 | 100000 | 1000000 | 10000000\n------- |-----------|------------|------------|--------------|----------------|----------------\nHead | 26.31 | 48.84 | 93.65 | 480.05 | 4808.29 | 47020.16\nPatch | 26.35 | 50.8 | 97.99 | 484.8 | 4856.95 | 48108.89\n\nThe same test with debug_logical_replication_streaming= 'immediate'\n\n | 5000 | 10000 | 20000 | 100000 | 1000000 | 10000000\n------- |-----------|------------|------------|--------------|----------------|----------------\nHead | 59.29 | 115.84 | 227.21 | 1156.08 | 11367.42 | 113986.14\nPatch | 62.45 | 120.48 | 240.56 | 1185.12 | 11855.37 | 119921.81\n\nThe execution time is in milliseconds. The column header indicates the\nnumber of inserts in the transaction.\nIn this case I noticed that the test execution with patch was taking\nslightly more time.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 5 Mar 2024 08:50:38 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "Hi, here are some review comments for v7-0002\n\n======\nCommit Message\n\n1.\nThis commit adds a hash table to binaryheap in order to track of\npositions of each nodes in the binaryheap. That way, by using newly\nadded functions such as binaryheap_update_up() etc., both updating a\nkey and removing a node can be done in O(1) on an average and O(log n)\nin worst case. This is known as the indexed binary heap. The caller\ncan specify to use the indexed binaryheap by passing indexed = true.\n\n~\n\n/to track of positions of each nodes/to track the position of each node/\n\n~~~\n\n2.\nThere is no user of it but it will be used by a upcoming patch.\n\n~\n\nThe current code does not use the new indexing logic, but it will be\nused by an upcoming patch.\n\n======\nsrc/common/binaryheap.c\n\n3.\n+/*\n+ * Define parameters for hash table code generation. The interface is *also*\"\n+ * declared in binaryheaph.h (to generate the types, which are externally\n+ * visible).\n+ */\n\nTypo: *also*\"\n\n~~~\n\n4.\n+#define SH_PREFIX bh_nodeidx\n+#define SH_ELEMENT_TYPE bh_nodeidx_entry\n+#define SH_KEY_TYPE bh_node_type\n+#define SH_KEY key\n+#define SH_HASH_KEY(tb, key) \\\n+ hash_bytes((const unsigned char *) &key, sizeof(bh_node_type))\n+#define SH_EQUAL(tb, a, b) (memcmp(&a, &b, sizeof(bh_node_type)) == 0)\n+#define SH_SCOPE extern\n+#ifdef FRONTEND\n+#define SH_RAW_ALLOCATOR pg_malloc0\n+#endif\n+#define SH_DEFINE\n+#include \"lib/simplehash.h\"\n\n4a.\nThe comment in simplehash.h says\n * The following parameters are only relevant when SH_DEFINE is defined:\n * - SH_KEY - ...\n * - SH_EQUAL(table, a, b) - ...\n * - SH_HASH_KEY(table, key) - ...\n * - SH_STORE_HASH - ...\n * - SH_GET_HASH(tb, a) - ...\n\nSo maybe it is nicer to reorder the #defines in that same order?\n\nSUGGESTION:\n+#define SH_PREFIX bh_nodeidx\n+#define SH_ELEMENT_TYPE bh_nodeidx_entry\n+#define SH_KEY_TYPE bh_node_type\n+#define SH_SCOPE extern\n+#ifdef FRONTEND\n+#define SH_RAW_ALLOCATOR pg_malloc0\n+#endif\n\n+#define SH_DEFINE\n+#define SH_KEY key\n+#define SH_EQUAL(tb, a, b) (memcmp(&a, &b, sizeof(bh_node_type)) == 0)\n+#define SH_HASH_KEY(tb, key) \\\n+ hash_bytes((const unsigned char *) &key, sizeof(bh_node_type))\n+#include \"lib/simplehash.h\"\n\n~~\n\n4b.\nThe comment in simplehash.h says that \"it's preferable, if possible,\nto store the element's hash in the element's data type\", so should\nSH_STORE_HASH and SH_GET_HASH also be defined here?\n\n~~~\n\n5.\n+ *\n+ * If 'indexed' is true, we create a hash table to track of each node's\n+ * index in the heap, enabling to perform some operations such as removing\n+ * the node from the heap.\n */\n binaryheap *\n-binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)\n+binaryheap_allocate(int capacity, binaryheap_comparator compare,\n+ bool indexed, void *arg)\n\nBEFORE\n... enabling to perform some operations such as removing the node from the heap.\n\nSUGGESTION\n... 
to help make operations such as removing nodes more efficient.\n\n~~~\n\n6.\n+ heap->bh_indexed = indexed;\n+ if (heap->bh_indexed)\n+ {\n+#ifdef FRONTEND\n+ heap->bh_nodeidx = bh_nodeidx_create(capacity, NULL);\n+#else\n+ heap->bh_nodeidx = bh_nodeidx_create(CurrentMemoryContext, capacity,\n+ NULL);\n+#endif\n+ }\n+\n\nThe heap allocation just uses palloc instead of palloc0 so it might be\nbetter to assign \"heap->bh_nodeidx = NULL;\" up-front, just so you will\nnever get a situation where bh_indexed is false but bh_nodeidx has\nsome (garbage) value.\n\n~~~\n\n7.\n+/*\n+ * Set the given node at the 'index', and updates its position accordingly.\n+ *\n+ * Return true if the node's index is already tracked.\n+ */\n+static bool\n+bh_set_node(binaryheap *heap, bh_node_type node, int index)\n\n7a.\nI felt the 1st sentence should be more like:\n\nSUGGESTION\nSet the given node at the 'index' and track it if required.\n\n~\n\n7b.\nIMO the parameters would be better the other way around (e.g. 'index'\nbefore the 'node') because that's what the assignments look like:\n\n\nheap->bh_nodes[heap->bh_size] = d;\n\nbecomes:\nbh_set_node(heap, heap->bh_size, d);\n\n~~~\n\n8.\n+static bool\n+bh_set_node(binaryheap *heap, bh_node_type node, int index)\n+{\n+ bh_nodeidx_entry *ent;\n+ bool found = false;\n+\n+ /* Set the node to the nodes array */\n+ heap->bh_nodes[index] = node;\n+\n+ if (heap->bh_indexed)\n+ {\n+ /* Remember its index in the nodes array */\n+ ent = bh_nodeidx_insert(heap->bh_nodeidx, node, &found);\n+ ent->idx = index;\n+ }\n+\n+ return found;\n+}\n\n8a.\nThat 'ent' declaration can be moved to the inner block scope, so it is\ncloser to where it is needed.\n\n~\n\n8b.\n+ /* Remember its index in the nodes array */\n\nThe comment is worded a bit ambiguously. IMO a simpler comment would\nbe: \"/* Keep track of the node index. */\"\n\n~~~\n\n9.\n+static void\n+bh_delete_nodeidx(binaryheap *heap, bh_node_type node)\n+{\n+ if (!heap->bh_indexed)\n+ return;\n+\n+ (void) bh_nodeidx_delete(heap->bh_nodeidx, node);\n+}\n\nSince there is only 1 statement IMO it is simpler to write this\nfunction like below:\n\nif (heap->bh_indexed)\n (void) bh_nodeidx_delete(heap->bh_nodeidx, node);\n\n~~~\n\n10.\n+/*\n+ * Replace the node at 'idx' with the given node 'replaced_by'. Also\n+ * update their positions accordingly.\n+ */\n+static void\n+bh_replace_node(binaryheap *heap, int idx, bh_node_type replaced_by)\n\n10a.\nWould 'node' or 'new_node' or 'replacement' be a better name than 'replaced_by'?\n\n~\n\n10b.\nI noticed that the index param is called 'idx' here but in other\nfunctions, it is called 'index'. I think either is good (I prefer\n'idx') but at least everywhere should use the same name for\nconsistency.\n\n~~~\n\n11.\n+static void\n+bh_replace_node(binaryheap *heap, int idx, bh_node_type replaced_by)\n+{\n+ /* Remove overwritten node's index */\n+ bh_delete_nodeidx(heap, heap->bh_nodes[idx]);\n+\n+ /* Replace it with the given new node */\n+ if (idx < heap->bh_size)\n+ {\n+ bool found PG_USED_FOR_ASSERTS_ONLY;\n+\n+ found = bh_set_node(heap, replaced_by, idx);\n+\n+ /* The overwritten node's index must already be tracked */\n+ Assert(!heap->bh_indexed || found);\n+ }\n+}\n\nI did not understand the condition.\ne.g. Can you explain when is idx NOT less than heap->bh_size?\ne.g. 
If this condition failed then nothing gets replaced (??)\n\n~~~\n\n======\nsrc/include/lib/binaryheap.h\n\n12.\n+/*\n+ * Struct for A hash table element to store the node's index in the bh_nodes\n+ * array.\n+ */\n+typedef struct bh_nodeidx_entry\n\n/for A hash table/for a hash table/\n\n~~~\n\n13.\n+/* define parameters necessary to generate the hash table interface */\n\nSuggest uppercase \"Define\" and add a period.\n\n~~~\n\n14.\n+\n+ /*\n+ * If bh_indexed is true, the bh_nodeidx is used to track of each node's\n+ * index in bh_nodes. This enables the caller to perform\n+ * binaryheap_remove_node_ptr(), binaryheap_update_up/down in O(log n).\n+ */\n+ bool bh_indexed;\n+ bh_nodeidx_hash *bh_nodeidx;\n } binaryheap;\n\nI'm wondering why the separate 'bh_indexed' is necessary at all. Can't\nyou just use the bh_nodeidx value? E.g. If bh_nodeidx == NULL then it\nmeans there is no index tracking, otherwise there is.\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 5 Mar 2024 17:28:00 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
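To illustrate what the node-index hash table buys, here is a rough sketch of how an O(log n) "remove by value" can be layered on top of it. The names follow the patch excerpts quoted in this thread (bh_nodeidx_lookup is the lookup function simplehash generates for the bh_nodeidx prefix), and the exact member name holding the stored array position may differ in the final patch.

void
binaryheap_remove_node_ptr(binaryheap *heap, bh_node_type d)
{
    bh_nodeidx_entry *ent;

    Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);
    Assert(binaryheap_indexed(heap));

    /* O(1): find where this node currently lives in bh_nodes */
    ent = bh_nodeidx_lookup(heap->bh_nodeidx, d);
    Assert(ent != NULL);

    /* O(log n): remove by position, which re-heapifies locally */
    binaryheap_remove_node(heap, ent->index);
}

The idea is that every place that moves a node goes through helpers such as set_node() and replace_node(), which keep bh_nodeidx up to date, so the position looked up here is never stale.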
{
"msg_contents": "On Tue, Mar 5, 2024 at 11:25 AM Peter Smith <[email protected]> wrote:\n>\n> Hi, Here are some review comments for v7-0001\n\nThank you for reviewing the patch.\n\n>\n> 1.\n> /*\n> * binaryheap_free\n> *\n> * Releases memory used by the given binaryheap.\n> */\n> void\n> binaryheap_free(binaryheap *heap)\n> {\n> pfree(heap);\n> }\n>\n>\n> Shouldn't the above function (not modified by the patch) also firstly\n> free the memory allocated for the heap->bh_nodes?\n>\n> ~~~\n>\n> 2.\n> +/*\n> + * Make sure there is enough space for nodes.\n> + */\n> +static void\n> +bh_enlarge_node_array(binaryheap *heap)\n> +{\n> + heap->bh_space *= 2;\n> + heap->bh_nodes = repalloc(heap->bh_nodes,\n> + sizeof(bh_node_type) * heap->bh_space);\n> +}\n>\n> Strictly speaking, this function doesn't really \"Make sure\" of\n> anything because the caller does the check whether we need more space.\n> All that happens here is allocating more space. Maybe this function\n> comment should say something like \"Double the space allocated for\n> nodes.\"\n\nAgreed with the above two points. I'll fix them in the next version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Mar 2024 09:22:58 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 12:20 PM vignesh C <[email protected]> wrote:\n>\n> On Wed, 28 Feb 2024 at 11:40, Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Feb 26, 2024 at 7:54 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> >\n> > A few comments on 0003:\n> > ===================\n> > 1.\n> > +/*\n> > + * Threshold of the total number of top-level and sub transactions\n> > that controls\n> > + * whether we switch the memory track state. While the MAINTAIN_HEAP state is\n> > + * effective when there are many transactions being decoded, in many systems\n> > + * there is generally no need to use it as long as all transactions\n> > being decoded\n> > + * are top-level transactions. Therefore, we use MaxConnections as\n> > the threshold\n> > + * so we can prevent switch to the state unless we use subtransactions.\n> > + */\n> > +#define REORDER_BUFFER_MEM_TRACK_THRESHOLD MaxConnections\n> >\n> > The comment seems to imply that MAINTAIN_HEAP is useful for large\n> > number of transactions but ReorderBufferLargestTXN() switches to this\n> > state even when there is one transaction. So, basically we use the\n> > binary_heap technique to get the largest even when we have one\n> > transaction but we don't maintain that heap unless we have\n> > REORDER_BUFFER_MEM_TRACK_THRESHOLD number of transactions are\n> > in-progress. This means there is some additional work when (build and\n> > reset heap each time when we pick largest xact) we have fewer\n> > transactions in the system but that may not be impacting us because of\n> > other costs involved like serializing all the changes. I think once we\n> > can try to stress test this by setting\n> > debug_logical_replication_streaming to 'immediate' to see if the new\n> > mechanism has any overhead.\n>\n> I ran the test with a transaction having many inserts:\n>\n> | 5000 | 10000 | 20000 | 100000 | 1000000 | 10000000\n> ------- |-----------|------------|------------|--------------|----------------|----------------\n> Head | 26.31 | 48.84 | 93.65 | 480.05 | 4808.29 | 47020.16\n> Patch | 26.35 | 50.8 | 97.99 | 484.8 | 4856.95 | 48108.89\n>\n> The same test with debug_logical_replication_streaming= 'immediate'\n>\n> | 5000 | 10000 | 20000 | 100000 | 1000000 | 10000000\n> ------- |-----------|------------|------------|--------------|----------------|----------------\n> Head | 59.29 | 115.84 | 227.21 | 1156.08 | 11367.42 | 113986.14\n> Patch | 62.45 | 120.48 | 240.56 | 1185.12 | 11855.37 | 119921.81\n>\n> The execution time is in milliseconds. The column header indicates the\n> number of inserts in the transaction.\n> In this case I noticed that the test execution with patch was taking\n> slightly more time.\n>\n\nThank you for testing! With 10M records, I can see 2% regression in\nthe 'buffered' case and 5% regression in the 'immediate' case.\n\nI think that in general it makes sense to postpone using a max-heap\nuntil the number of transactions is higher than the threshold. I've\nimplemented this idea and here are the results on my environment (with\n10M records and debug_logical_replication_streaming = 'immediate'):\n\nHEAD:\n68937.887 ms\n69450.174 ms\n68808.248 ms\n\nv7 patch:\n71280.783 ms\n71673.101 ms\n71330.898 ms\n\nv8 patch:\n68918.259 ms\n68822.330 ms\n68972.452 ms\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 7 Mar 2024 12:14:26 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 3:28 PM Peter Smith <[email protected]> wrote:\n>\n> Hi, here are some review comments for v7-0002\n>\n> ======\n> Commit Message\n>\n> 1.\n> This commit adds a hash table to binaryheap in order to track of\n> positions of each nodes in the binaryheap. That way, by using newly\n> added functions such as binaryheap_update_up() etc., both updating a\n> key and removing a node can be done in O(1) on an average and O(log n)\n> in worst case. This is known as the indexed binary heap. The caller\n> can specify to use the indexed binaryheap by passing indexed = true.\n>\n> ~\n>\n> /to track of positions of each nodes/to track the position of each node/\n>\n> ~~~\n>\n> 2.\n> There is no user of it but it will be used by a upcoming patch.\n>\n> ~\n>\n> The current code does not use the new indexing logic, but it will be\n> used by an upcoming patch.\n\nFixed.\n\n>\n> ======\n> src/common/binaryheap.c\n>\n> 3.\n> +/*\n> + * Define parameters for hash table code generation. The interface is *also*\"\n> + * declared in binaryheaph.h (to generate the types, which are externally\n> + * visible).\n> + */\n>\n> Typo: *also*\"\n\nFixed.\n\n>\n> ~~~\n>\n> 4.\n> +#define SH_PREFIX bh_nodeidx\n> +#define SH_ELEMENT_TYPE bh_nodeidx_entry\n> +#define SH_KEY_TYPE bh_node_type\n> +#define SH_KEY key\n> +#define SH_HASH_KEY(tb, key) \\\n> + hash_bytes((const unsigned char *) &key, sizeof(bh_node_type))\n> +#define SH_EQUAL(tb, a, b) (memcmp(&a, &b, sizeof(bh_node_type)) == 0)\n> +#define SH_SCOPE extern\n> +#ifdef FRONTEND\n> +#define SH_RAW_ALLOCATOR pg_malloc0\n> +#endif\n> +#define SH_DEFINE\n> +#include \"lib/simplehash.h\"\n>\n> 4a.\n> The comment in simplehash.h says\n> * The following parameters are only relevant when SH_DEFINE is defined:\n> * - SH_KEY - ...\n> * - SH_EQUAL(table, a, b) - ...\n> * - SH_HASH_KEY(table, key) - ...\n> * - SH_STORE_HASH - ...\n> * - SH_GET_HASH(tb, a) - ...\n>\n> So maybe it is nicer to reorder the #defines in that same order?\n>\n> SUGGESTION:\n> +#define SH_PREFIX bh_nodeidx\n> +#define SH_ELEMENT_TYPE bh_nodeidx_entry\n> +#define SH_KEY_TYPE bh_node_type\n> +#define SH_SCOPE extern\n> +#ifdef FRONTEND\n> +#define SH_RAW_ALLOCATOR pg_malloc0\n> +#endif\n>\n> +#define SH_DEFINE\n> +#define SH_KEY key\n> +#define SH_EQUAL(tb, a, b) (memcmp(&a, &b, sizeof(bh_node_type)) == 0)\n> +#define SH_HASH_KEY(tb, key) \\\n> + hash_bytes((const unsigned char *) &key, sizeof(bh_node_type))\n> +#include \"lib/simplehash.h\"\n\nI'm really not sure it helps increase readability. For instance, for\nme it's readable if SH_DEFINE and SH_DECLARE come to the last before\n#include since it's more obvious whether we want to declare, define or\nboth. Other simplehash.h users also do so.\n\n>\n> ~~\n>\n> 4b.\n> The comment in simplehash.h says that \"it's preferable, if possible,\n> to store the element's hash in the element's data type\", so should\n> SH_STORE_HASH and SH_GET_HASH also be defined here?\n\nGood catch. I've used these macros.\n\n>\n> ~~~\n>\n> 5.\n> + *\n> + * If 'indexed' is true, we create a hash table to track of each node's\n> + * index in the heap, enabling to perform some operations such as removing\n> + * the node from the heap.\n> */\n> binaryheap *\n> -binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)\n> +binaryheap_allocate(int capacity, binaryheap_comparator compare,\n> + bool indexed, void *arg)\n>\n> BEFORE\n> ... 
enabling to perform some operations such as removing the node from the heap.\n>\n> SUGGESTION\n> ... to help make operations such as removing nodes more efficient.\n>\n\nBut these operations literally require the indexed binary heap as we\nhave an assertion:\n\nvoid\nbinaryheap_remove_node_ptr(binaryheap *heap, bh_node_type d)\n{\n bh_nodeidx_entry *ent;\n\n Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n Assert(heap->bh_indexed);\n\n> ~~~\n>\n> 6.\n> + heap->bh_indexed = indexed;\n> + if (heap->bh_indexed)\n> + {\n> +#ifdef FRONTEND\n> + heap->bh_nodeidx = bh_nodeidx_create(capacity, NULL);\n> +#else\n> + heap->bh_nodeidx = bh_nodeidx_create(CurrentMemoryContext, capacity,\n> + NULL);\n> +#endif\n> + }\n> +\n>\n> The heap allocation just uses palloc instead of palloc0 so it might be\n> better to assign \"heap->bh_nodeidx = NULL;\" up-front, just so you will\n> never get a situation where bh_indexed is false but bh_nodeidx has\n> some (garbage) value.\n\nFixed.\n\n>\n> ~~~\n>\n> 7.\n> +/*\n> + * Set the given node at the 'index', and updates its position accordingly.\n> + *\n> + * Return true if the node's index is already tracked.\n> + */\n> +static bool\n> +bh_set_node(binaryheap *heap, bh_node_type node, int index)\n>\n> 7a.\n> I felt the 1st sentence should be more like:\n>\n> SUGGESTION\n> Set the given node at the 'index' and track it if required.\n\nFixed.\n\n>\n> ~\n>\n> 7b.\n> IMO the parameters would be better the other way around (e.g. 'index'\n> before the 'node') because that's what the assignments look like:\n>\n>\n> heap->bh_nodes[heap->bh_size] = d;\n>\n> becomes:\n> bh_set_node(heap, heap->bh_size, d);\n>\n\nI think it assumes heap->bh_nodes is an array. But if we change it in\nthe future, it will no longer make sense. I think it would make more\nsense if we define the parameters in an order like \"we set the 'node'\nat 'index'\". What do you think?\n\n> ~~~\n>\n> 8.\n> +static bool\n> +bh_set_node(binaryheap *heap, bh_node_type node, int index)\n> +{\n> + bh_nodeidx_entry *ent;\n> + bool found = false;\n> +\n> + /* Set the node to the nodes array */\n> + heap->bh_nodes[index] = node;\n> +\n> + if (heap->bh_indexed)\n> + {\n> + /* Remember its index in the nodes array */\n> + ent = bh_nodeidx_insert(heap->bh_nodeidx, node, &found);\n> + ent->idx = index;\n> + }\n> +\n> + return found;\n> +}\n>\n> 8a.\n> That 'ent' declaration can be moved to the inner block scope, so it is\n> closer to where it is needed.\n>\n> ~\n>\n> 8b.\n> + /* Remember its index in the nodes array */\n>\n> The comment is worded a bit ambiguously. IMO a simpler comment would\n> be: \"/* Keep track of the node index. */\"\n>\n> ~~~\n\nFixed.\n\n>\n> 9.\n> +static void\n> +bh_delete_nodeidx(binaryheap *heap, bh_node_type node)\n> +{\n> + if (!heap->bh_indexed)\n> + return;\n> +\n> + (void) bh_nodeidx_delete(heap->bh_nodeidx, node);\n> +}\n>\n> Since there is only 1 statement IMO it is simpler to write this\n> function like below:\n>\n> if (heap->bh_indexed)\n> (void) bh_nodeidx_delete(heap->bh_nodeidx, node);\n\nFixed.\n\n>\n> ~~~\n>\n> 10.\n> +/*\n> + * Replace the node at 'idx' with the given node 'replaced_by'. Also\n> + * update their positions accordingly.\n> + */\n> +static void\n> +bh_replace_node(binaryheap *heap, int idx, bh_node_type replaced_by)\n>\n> 10a.\n> Would 'node' or 'new_node' or 'replacement' be a better name than 'replaced_by'?\n\nFixed.\n\n>\n> ~\n>\n> 10b.\n> I noticed that the index param is called 'idx' here but in other\n> functions, it is called 'index'. 
I think either is good (I prefer\n> 'idx') but at least everywhere should use the same name for\n> consistency.\n\nFixed.\n\n>\n> ~~~\n>\n> 11.\n> +static void\n> +bh_replace_node(binaryheap *heap, int idx, bh_node_type replaced_by)\n> +{\n> + /* Remove overwritten node's index */\n> + bh_delete_nodeidx(heap, heap->bh_nodes[idx]);\n> +\n> + /* Replace it with the given new node */\n> + if (idx < heap->bh_size)\n> + {\n> + bool found PG_USED_FOR_ASSERTS_ONLY;\n> +\n> + found = bh_set_node(heap, replaced_by, idx);\n> +\n> + /* The overwritten node's index must already be tracked */\n> + Assert(!heap->bh_indexed || found);\n> + }\n> +}\n>\n> I did not understand the condition.\n> e.g. Can you explain when is idx NOT less than heap->bh_size?\n> e.g. If this condition failed then nothing gets replaced (??)\n\nIt was for a case like where we call binaryheap_remote_node(heap, 0)\nwhere the heap has only one entry, resulting in setting the root node\nagain. I updated the bh_replace_node() to return if the node doesn't\nnot need to be moved.\n\n>\n> ~~~\n>\n> ======\n> src/include/lib/binaryheap.h\n>\n> 12.\n> +/*\n> + * Struct for A hash table element to store the node's index in the bh_nodes\n> + * array.\n> + */\n> +typedef struct bh_nodeidx_entry\n>\n> /for A hash table/for a hash table/\n>\n> ~~~\n>\n> 13.\n> +/* define parameters necessary to generate the hash table interface */\n>\n> Suggest uppercase \"Define\" and add a period.\n\nFixed.\n\n>\n> ~~~\n>\n> 14.\n> +\n> + /*\n> + * If bh_indexed is true, the bh_nodeidx is used to track of each node's\n> + * index in bh_nodes. This enables the caller to perform\n> + * binaryheap_remove_node_ptr(), binaryheap_update_up/down in O(log n).\n> + */\n> + bool bh_indexed;\n> + bh_nodeidx_hash *bh_nodeidx;\n> } binaryheap;\n>\n> I'm wondering why the separate 'bh_indexed' is necessary at all. Can't\n> you just use the bh_nodeidx value? E.g. If bh_nodeidx == NULL then it\n> means there is no index tracking, otherwise there is.\n>\n\nGood point. I added a macro binaryheap_indexed() to check it for\nbetter readability.\n\nThe above comments are incorporated into the latest v8 patch set that\nI've just submitted[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoBYjJmz7q_%3DZ%2BeXJgm0FScyu3_iGFshPAvnq78B2KL3qQ%40mail.gmail.com\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Mar 2024 12:16:08 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
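For reference, a sketch of what the generation parameters might look like once SH_STORE_HASH / SH_GET_HASH are adopted as mentioned above. The element layout is illustrative only: simplehash additionally requires a status byte in the element, and the exact member names may differ in the posted patch.

typedef struct bh_nodeidx_entry
{
    bh_node_type key;       /* heap node (pointer-sized Datum) */
    int          index;     /* position of the node in bh_nodes */
    char         status;    /* hash entry status, required by simplehash */
    uint32       hash;      /* stored hash value, returned by SH_GET_HASH */
} bh_nodeidx_entry;

#define SH_PREFIX bh_nodeidx
#define SH_ELEMENT_TYPE bh_nodeidx_entry
#define SH_KEY_TYPE bh_node_type
#define SH_KEY key
#define SH_HASH_KEY(tb, key) \
    hash_bytes((const unsigned char *) &key, sizeof(bh_node_type))
#define SH_EQUAL(tb, a, b) (memcmp(&a, &b, sizeof(bh_node_type)) == 0)
#define SH_STORE_HASH
#define SH_GET_HASH(tb, a) a->hash
#define SH_SCOPE extern
#ifdef FRONTEND
#define SH_RAW_ALLOCATOR pg_malloc0
#endif
#define SH_DEFINE
#include "lib/simplehash.h"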
{
"msg_contents": "On Thu, Mar 7, 2024 at 2:16 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Mar 5, 2024 at 3:28 PM Peter Smith <[email protected]> wrote:\n> >\n\n> > 4a.\n> > The comment in simplehash.h says\n> > * The following parameters are only relevant when SH_DEFINE is defined:\n> > * - SH_KEY - ...\n> > * - SH_EQUAL(table, a, b) - ...\n> > * - SH_HASH_KEY(table, key) - ...\n> > * - SH_STORE_HASH - ...\n> > * - SH_GET_HASH(tb, a) - ...\n> >\n> > So maybe it is nicer to reorder the #defines in that same order?\n> >\n> > SUGGESTION:\n> > +#define SH_PREFIX bh_nodeidx\n> > +#define SH_ELEMENT_TYPE bh_nodeidx_entry\n> > +#define SH_KEY_TYPE bh_node_type\n> > +#define SH_SCOPE extern\n> > +#ifdef FRONTEND\n> > +#define SH_RAW_ALLOCATOR pg_malloc0\n> > +#endif\n> >\n> > +#define SH_DEFINE\n> > +#define SH_KEY key\n> > +#define SH_EQUAL(tb, a, b) (memcmp(&a, &b, sizeof(bh_node_type)) == 0)\n> > +#define SH_HASH_KEY(tb, key) \\\n> > + hash_bytes((const unsigned char *) &key, sizeof(bh_node_type))\n> > +#include \"lib/simplehash.h\"\n>\n> I'm really not sure it helps increase readability. For instance, for\n> me it's readable if SH_DEFINE and SH_DECLARE come to the last before\n> #include since it's more obvious whether we want to declare, define or\n> both. Other simplehash.h users also do so.\n>\n\nOK.\n\n> > 5.\n> > + *\n> > + * If 'indexed' is true, we create a hash table to track of each node's\n> > + * index in the heap, enabling to perform some operations such as removing\n> > + * the node from the heap.\n> > */\n> > binaryheap *\n> > -binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)\n> > +binaryheap_allocate(int capacity, binaryheap_comparator compare,\n> > + bool indexed, void *arg)\n> >\n> > BEFORE\n> > ... enabling to perform some operations such as removing the node from the heap.\n> >\n> > SUGGESTION\n> > ... to help make operations such as removing nodes more efficient.\n> >\n>\n> But these operations literally require the indexed binary heap as we\n> have an assertion:\n>\n> void\n> binaryheap_remove_node_ptr(binaryheap *heap, bh_node_type d)\n> {\n> bh_nodeidx_entry *ent;\n>\n> Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n> Assert(heap->bh_indexed);\n>\n\nI didn’t quite understand -- the operations mentioned are \"operations\nsuch as removing the node\", but binaryheap_remove_node() also removes\na node from the heap. So I still felt the comment wording of the patch\nis not quite correct.\n\nNow, if the removal of a node from an indexed heap can *only* be done\nusing binaryheap_remove_node_ptr() then:\n- the other removal functions (binaryheap_remove_*) probably need some\ncomments to make sure nobody is tempted to call them directly for an\nindexed heap.\n- maybe some refactoring and assertions are needed to ensure those\n*cannot* be called directly for an indexed heap.\n\n> > 7b.\n> > IMO the parameters would be better the other way around (e.g. 'index'\n> > before the 'node') because that's what the assignments look like:\n> >\n> >\n> > heap->bh_nodes[heap->bh_size] = d;\n> >\n> > becomes:\n> > bh_set_node(heap, heap->bh_size, d);\n> >\n>\n> I think it assumes heap->bh_nodes is an array. But if we change it in\n> the future, it will no longer make sense. I think it would make more\n> sense if we define the parameters in an order like \"we set the 'node'\n> at 'index'\". What do you think?\n\nYMMV. 
The patch code is also OK by me if you prefer it.\n\n//////////\n\nAnd, here are some review comments for v8-0002.\n\n======\n1. delete_nodeidx\n\n+/*\n+ * Remove the node's index from the hash table if the heap is indexed.\n+ */\n+static bool\n+delete_nodeidx(binaryheap *heap, bh_node_type node)\n+{\n+ if (!binaryheap_indexed(heap))\n+ return false;\n+\n+ return bh_nodeidx_delete(heap->bh_nodeidx, node);\n+}\n\n1a.\nIn v8 this function was changed to now return bool, so, I think the\nfunction comment should explain the meaning of that return value.\n\n~\n\n1b.\nI felt the function body is better expressed positively: \"If this then\ndo that\", instead of \"If not this then do nothing otherwise do that\"\n\nSUGGESTION\nif (binaryheap_indexed(heap))\n return bh_nodeidx_delete(heap->bh_nodeidx, node);\n\nreturn false;\n\n~~~\n\n2.\n+static void\n+replace_node(binaryheap *heap, int index, bh_node_type new_node)\n+{\n+ bool found PG_USED_FOR_ASSERTS_ONLY;\n+\n+ /* Quick return if not necessary to move */\n+ if (heap->bh_nodes[index] == new_node)\n+ return;\n+\n+ /*\n+ * Remove overwritten node's index. The overwritten node's position must\n+ * have been tracked, if enabled.\n+ */\n+ found = delete_nodeidx(heap, heap->bh_nodes[index]);\n+ Assert(!binaryheap_indexed(heap) || found);\n+\n+ /*\n+ * Replace it with the given new node. This node's position must also be\n+ * tracked as we assume to replace the node by the existing node.\n+ */\n+ found = set_node(heap, new_node, index);\n+ Assert(!binaryheap_indexed(heap) || found);\n+}\n\n2a.\n/Remove overwritten/Remove the overwritten/\n/replace the node by the existing node/replace the node with the existing node/\n\n~\n\n2b.\nIt might be helpful to declare another local var...\nbh_node_type cur_node = heap->bh_nodes[index];\n\n... because I think it will be more readable to say:\n+ if (cur_node == new_node)\n+ return;\n\nand\n\n+ found = delete_nodeidx(heap, cur_node);\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 8 Mar 2024 14:58:07 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
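Putting review comments 2a and 2b above together, the helper might end up looking roughly like this. It is a sketch against the v8 code quoted in that review, assuming delete_nodeidx keeps its bool return value; it is not a final version.

static void
replace_node(binaryheap *heap, int index, bh_node_type new_node)
{
    bh_node_type cur_node = heap->bh_nodes[index];
    bool         found PG_USED_FOR_ASSERTS_ONLY;

    /* Quick return if no move is necessary */
    if (cur_node == new_node)
        return;

    /*
     * Remove the overwritten node's index.  Its position must have been
     * tracked, if tracking is enabled.
     */
    found = delete_nodeidx(heap, cur_node);
    Assert(!binaryheap_indexed(heap) || found);

    /*
     * Replace it with the given new node.  The new node's position must
     * also be tracked, as we only replace the node with an existing node.
     */
    found = set_node(heap, new_node, index);
    Assert(!binaryheap_indexed(heap) || found);
}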
{
"msg_contents": "Here are some review comments for v8-0003\n\n======\n0. GENERAL -- why the state enum?\n\nThis patch introduced a new ReorderBufferMemTrackState with 2 states\n(REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP,\nREORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP)\n\nIt's the same as having a boolean flag OFF/ON, so I didn't see any\nbenefit of the enum instead of a simple boolean flag like\n'track_txn_sizes'.\n\nNOTE: Below in this post (see #11) I would like to propose another\nidea, which can simplify much further, eliminating the need for the\nstate boolean. If adopted that will impact lots of these other review\ncomments.\n\n======\nCommit Message\n\n1.\nPreviously, when selecting the transaction to evict during logical\ndecoding, we check all transactions to find the largest\ntransaction. Which could lead to a significant replication lag\nespecially in case where there are many subtransactions.\n\n~\n\n/Which could/This could/\n\n/in case/in the case/\n\n======\n.../replication/logical/reorderbuffer.c\n\n2.\n * We use a max-heap with transaction size as the key to efficiently find\n * the largest transaction. The max-heap state is managed in two states:\n * REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP and\nREORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP.\n\n/The max-heap state is managed in two states:/The max-heap is managed\nin two states:/\n\n~~~\n\n3.\n+/*\n+ * Threshold of the total number of top-level and sub transactions\nthat controls\n+ * whether we switch the memory track state. While using max-heap to select\n+ * the largest transaction is effective when there are many transactions being\n+ * decoded, in many systems there is generally no need to use it as long as all\n+ * transactions being decoded are top-level transactions. Therefore, we use\n+ * MaxConnections as the threshold* so we can prevent switch to the\nstate unless\n+ * we use subtransactions.\n+ */\n+#define REORDER_BUFFER_MEM_TRACK_THRESHOLD MaxConnections\n\n3a.\n/memory track state./memory tracking state./\n\n/While using max-heap/Although using max-heap/\n\n\"in many systems\" (are these words adding anything?)\n\n/threshold*/threshold/\n\n/so we can prevent switch/so we can prevent switching/\n\n~\n\n3b.\nThere's nothing really in this name to indicate the units of the\nthreshold. Consider if there is some more informative name for this\nmacro: e.g.\nMAXHEAP_TX_COUNT_THRESHOLD (?)\n\n~~~\n\n4.\n+ /*\n+ * Don't start with a lower number than\n+ * REORDER_BUFFER_MEM_TRACK_THRESHOLD, since we add at least\n+ * REORDER_BUFFER_MEM_TRACK_THRESHOLD entries at once.\n+ */\n+ buffer->memtrack_state = REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP;\n+ buffer->txn_heap = binaryheap_allocate(REORDER_BUFFER_MEM_TRACK_THRESHOLD * 2,\n+ ReorderBufferTXNSizeCompare,\n+ true, NULL);\n+\n\nIIUC the comment intends to say:\n\nAllocate the initial heap size greater than THRESHOLD because the\ntxn_heap will not be used until the threshold is exceeded.\n\nAlso, maybe the comment should make a point of saying \"Note: the\nbinary heap is INDEXED for faster manipulations\". or something\nsimilar.\n\n~~~\n\n5.\n static void\n ReorderBufferChangeMemoryUpdate(ReorderBuffer *rb,\n ReorderBufferChange *change,\n+ ReorderBufferTXN *txn,\n bool addition, Size sz)\n {\n- ReorderBufferTXN *txn;\n ReorderBufferTXN *toptxn;\n\n- Assert(change->txn);\n-\n\nThere seems some trick now where the passed 'change' could be NULL,\nwhich was not possible before. e.g., when change is NULL then 'txn' is\nnot NULL, and vice versa. 
Some explanation about this logic and the\nmeaning of these parameters should be written in this function\ncomment.\n\n~\n\n6.\n+ txn = txn != NULL ? txn : change->txn;\n\nIMO it's more natural to code the ternary using the same order as the\nparameters:\n\ne.g. txn = change ? change->txn : txn;\n\n~~~\n\n7.\n/*\n * Build the max-heap and switch the state. We will run a heap assembly step\n * at the end, which is more efficient.\n */\nstatic void\nReorderBufferBuildMaxHeap(ReorderBuffer *rb)\n\n/We will run a heap assembly step at the end, which is more\nefficient./The heap assembly step is deferred until the end, for\nefficiency./\n\n~~~\n\n8. ReorderBufferLargestTXN\n\n+ if (hash_get_num_entries(rb->by_txn) < REORDER_BUFFER_MEM_TRACK_THRESHOLD)\n+ {\n+ HASH_SEQ_STATUS hash_seq;\n+ ReorderBufferTXNByIdEnt *ent;\n+\n+ hash_seq_init(&hash_seq, rb->by_txn);\n+ while ((ent = hash_seq_search(&hash_seq)) != NULL)\n+ {\n+ ReorderBufferTXN *txn = ent->txn;\n+\n+ /* if the current transaction is larger, remember it */\n+ if ((!largest) || (txn->size > largest->size))\n+ largest = txn;\n+ }\n+\n+ Assert(largest);\n+ }\n\nThat Assert(largest) seems redundant because there is anyway another\nAssert(largest) immediately after this code.\n\n~~~\n\n9.\n+ /* Get the largest transaction from the max-heap */\n+ if (rb->memtrack_state == REORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP)\n+ {\n+ Assert(binaryheap_size(rb->txn_heap) > 0);\n+ largest = (ReorderBufferTXN *)\n+ DatumGetPointer(binaryheap_first(rb->txn_heap));\n }\nAssert(binaryheap_size(rb->txn_heap) > 0); seemed like slightly less\nreadable way of saying:\n\nAssert(!binaryheap_empty(rb->txn_heap));\n\n~~~\n\n10.\n+\n+/*\n+ * Compare between sizes of two transactions. This is for a binary heap\n+ * comparison function.\n+ */\n+static int\n+ReorderBufferTXNSizeCompare(Datum a, Datum b, void *arg)\n\n10a.\n/Compare between sizes of two transactions./Compare two transactions by size./\n\n~~~\n\n10b.\nIMO this comparator function belongs just before the\nReorderBufferAllocate() function since that is the only place where it\nis used.\n\n======\nsrc/include/replication/reorderbuffer.h\n\n11.\n+/* State of how to track the memory usage of each transaction being decoded */\n+typedef enum ReorderBufferMemTrackState\n+{\n+ /*\n+ * We don't update max-heap while updating the memory counter. The\n+ * max-heap is built before use.\n+ */\n+ REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP,\n+\n+ /*\n+ * We also update the max-heap when updating the memory counter so the\n+ * heap property is always preserved.\n+ */\n+ REORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP,\n+} ReorderBufferMemTrackState;\n+\n\nIn my GENERAL review comment #0, I suggested the removal of this\nentire enum. e.g. It could be replaced with a boolean field\n'track_txn_sizes'\n\nTBH, I think there is a better way to handle this \"state\". IIUC\n- the txn_heap is always allocated up-front.\n- you only \"build\" it when > threshold and\n- when it drops < 0.9 x threshold you reset it.\n\nTherefore, AFAICT you do not need to maintain any “switch states” at\nall; you simply need to check binaryheap_empty(txn_heap), right?\n* If the heap is empty…. It means you are NOT tracking, so don’t use it\n* If the heap is NOT empty …. It means you ARE tracking, so use it.\n\n~\n\nUsing my idea to remove the state flag will have the side effect of\nsimplifying many other parts of this patch. 
For example\n\nBEFORE\n+static void\n+ReorderBufferMaybeChangeNoMaxHeap(ReorderBuffer *rb)\n+{\n+ if (rb->memtrack_state == REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP)\n+ return;\n+\n...\n+ if (binaryheap_size(rb->txn_heap) < REORDER_BUFFER_MEM_TRACK_THRESHOLD * 0.9)\n+ {\n+ rb->memtrack_state = REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP;\n+ binaryheap_reset(rb->txn_heap);\n+ }\n+}\n\nAFTER\n+static void\n+ReorderBufferMaybeChangeNoMaxHeap(ReorderBuffer *rb)\n+{\n+ if (binaryheap_empty(rb->txn_heap))\n+ return;\n+\n...\n+ if (binaryheap_size(rb->txn_heap) < REORDER_BUFFER_MEM_TRACK_THRESHOLD * 0.9)\n+ binaryheap_reset(rb->txn_heap);\n+}\n\n~~~\n\n12. struct ReorderBuffer\n\n+ /* Max-heap for sizes of all top-level and sub transactions */\n+ ReorderBufferMemTrackState memtrack_state;\n+ binaryheap *txn_heap;\n+\n\n12a.\nWhy is this being referred to in the commit message and code comments\nas \"max-heap\" when the field is not called by that same name? Won't it\nbe better to give the field a better name -- e.g. \"txn_maxheap\" or\nsimilar?\n\n~\n\n12b.\nThis comment should also say that the heap is ordered by tx size --\n(e.g. the comparator is ReorderBufferTXNSizeCompare)\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 11 Mar 2024 17:04:13 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
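A sketch of the selection logic following suggestion #11 above: there is no explicit state flag, an empty heap simply means "not tracking yet". The pieces are assembled from the patch excerpts quoted in this thread (the linear scan, ReorderBufferBuildMaxHeap, the threshold macro), so names are illustrative rather than committed code.

static ReorderBufferTXN *
ReorderBufferLargestTXN(ReorderBuffer *rb)
{
    ReorderBufferTXN *largest = NULL;

    if (hash_get_num_entries(rb->by_txn) < REORDER_BUFFER_MEM_TRACK_THRESHOLD)
    {
        /* Few transactions: a linear scan over the by_txn hash is cheap. */
        HASH_SEQ_STATUS hash_seq;
        ReorderBufferTXNByIdEnt *ent;

        hash_seq_init(&hash_seq, rb->by_txn);
        while ((ent = hash_seq_search(&hash_seq)) != NULL)
        {
            ReorderBufferTXN *txn = ent->txn;

            /* if the current transaction is larger, remember it */
            if (largest == NULL || txn->size > largest->size)
                largest = txn;
        }
    }
    else
    {
        /* Many transactions: (re)build the max-heap if needed and peek. */
        if (binaryheap_empty(rb->txn_heap))
            ReorderBufferBuildMaxHeap(rb);

        largest = (ReorderBufferTXN *)
            DatumGetPointer(binaryheap_first(rb->txn_heap));
    }

    Assert(largest != NULL);
    return largest;
}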
{
"msg_contents": "On Fri, Mar 8, 2024 at 12:58 PM Peter Smith <[email protected]> wrote:\n>\n> On Thu, Mar 7, 2024 at 2:16 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Mar 5, 2024 at 3:28 PM Peter Smith <[email protected]> wrote:\n> > >\n>\n> > > 4a.\n> > > The comment in simplehash.h says\n> > > * The following parameters are only relevant when SH_DEFINE is defined:\n> > > * - SH_KEY - ...\n> > > * - SH_EQUAL(table, a, b) - ...\n> > > * - SH_HASH_KEY(table, key) - ...\n> > > * - SH_STORE_HASH - ...\n> > > * - SH_GET_HASH(tb, a) - ...\n> > >\n> > > So maybe it is nicer to reorder the #defines in that same order?\n> > >\n> > > SUGGESTION:\n> > > +#define SH_PREFIX bh_nodeidx\n> > > +#define SH_ELEMENT_TYPE bh_nodeidx_entry\n> > > +#define SH_KEY_TYPE bh_node_type\n> > > +#define SH_SCOPE extern\n> > > +#ifdef FRONTEND\n> > > +#define SH_RAW_ALLOCATOR pg_malloc0\n> > > +#endif\n> > >\n> > > +#define SH_DEFINE\n> > > +#define SH_KEY key\n> > > +#define SH_EQUAL(tb, a, b) (memcmp(&a, &b, sizeof(bh_node_type)) == 0)\n> > > +#define SH_HASH_KEY(tb, key) \\\n> > > + hash_bytes((const unsigned char *) &key, sizeof(bh_node_type))\n> > > +#include \"lib/simplehash.h\"\n> >\n> > I'm really not sure it helps increase readability. For instance, for\n> > me it's readable if SH_DEFINE and SH_DECLARE come to the last before\n> > #include since it's more obvious whether we want to declare, define or\n> > both. Other simplehash.h users also do so.\n> >\n>\n> OK.\n>\n> > > 5.\n> > > + *\n> > > + * If 'indexed' is true, we create a hash table to track of each node's\n> > > + * index in the heap, enabling to perform some operations such as removing\n> > > + * the node from the heap.\n> > > */\n> > > binaryheap *\n> > > -binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)\n> > > +binaryheap_allocate(int capacity, binaryheap_comparator compare,\n> > > + bool indexed, void *arg)\n> > >\n> > > BEFORE\n> > > ... enabling to perform some operations such as removing the node from the heap.\n> > >\n> > > SUGGESTION\n> > > ... to help make operations such as removing nodes more efficient.\n> > >\n> >\n> > But these operations literally require the indexed binary heap as we\n> > have an assertion:\n> >\n> > void\n> > binaryheap_remove_node_ptr(binaryheap *heap, bh_node_type d)\n> > {\n> > bh_nodeidx_entry *ent;\n> >\n> > Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n> > Assert(heap->bh_indexed);\n> >\n>\n> I didn’t quite understand -- the operations mentioned are \"operations\n> such as removing the node\", but binaryheap_remove_node() also removes\n> a node from the heap. So I still felt the comment wording of the patch\n> is not quite correct.\n\nNow I understand your point. That's a valid point.\n\n>\n> Now, if the removal of a node from an indexed heap can *only* be done\n> using binaryheap_remove_node_ptr() then:\n> - the other removal functions (binaryheap_remove_*) probably need some\n> comments to make sure nobody is tempted to call them directly for an\n> indexed heap.\n> - maybe some refactoring and assertions are needed to ensure those\n> *cannot* be called directly for an indexed heap.\n>\n\nIf the 'index' is true, the caller can not only use the existing\nfunctions but also newly added functions such as\nbinaryheap_remove_node_ptr() and binaryheap_update_up() etc. 
How about\nsomething like below?\n\n * If 'indexed' is true, we create a hash table to track each node's\n * index in the heap, enabling to perform some operations such as\n * binaryheap_remove_node_ptr() etc.\n\n>\n> And, here are some review comments for v8-0002.\n>\n> ======\n> 1. delete_nodeidx\n>\n> +/*\n> + * Remove the node's index from the hash table if the heap is indexed.\n> + */\n> +static bool\n> +delete_nodeidx(binaryheap *heap, bh_node_type node)\n> +{\n> + if (!binaryheap_indexed(heap))\n> + return false;\n> +\n> + return bh_nodeidx_delete(heap->bh_nodeidx, node);\n> +}\n>\n> 1a.\n> In v8 this function was changed to now return bool, so, I think the\n> function comment should explain the meaning of that return value.\n>\n> ~\n>\n> 1b.\n> I felt the function body is better expressed positively: \"If this then\n> do that\", instead of \"If not this then do nothing otherwise do that\"\n>\n> SUGGESTION\n> if (binaryheap_indexed(heap))\n> return bh_nodeidx_delete(heap->bh_nodeidx, node);\n>\n> return false;\n>\n> ~~~\n>\n> 2.\n> +static void\n> +replace_node(binaryheap *heap, int index, bh_node_type new_node)\n> +{\n> + bool found PG_USED_FOR_ASSERTS_ONLY;\n> +\n> + /* Quick return if not necessary to move */\n> + if (heap->bh_nodes[index] == new_node)\n> + return;\n> +\n> + /*\n> + * Remove overwritten node's index. The overwritten node's position must\n> + * have been tracked, if enabled.\n> + */\n> + found = delete_nodeidx(heap, heap->bh_nodes[index]);\n> + Assert(!binaryheap_indexed(heap) || found);\n> +\n> + /*\n> + * Replace it with the given new node. This node's position must also be\n> + * tracked as we assume to replace the node by the existing node.\n> + */\n> + found = set_node(heap, new_node, index);\n> + Assert(!binaryheap_indexed(heap) || found);\n> +}\n>\n> 2a.\n> /Remove overwritten/Remove the overwritten/\n> /replace the node by the existing node/replace the node with the existing node/\n>\n> ~\n>\n> 2b.\n> It might be helpful to declare another local var...\n> bh_node_type cur_node = heap->bh_nodes[index];\n>\n> ... because I think it will be more readable to say:\n> + if (cur_node == new_node)\n> + return;\n>\n> and\n>\n> + found = delete_nodeidx(heap, cur_node);\n\nAs for changes around delete_nodeidx(), I've changed the\ndelete_nodeidx() to return nothing as it would not be helpful much and\nseems confusing. I've simplified replace_node() logic accordingly.\n\nI'll update 0003 patch to address your comment and submit the updated\nversion patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 12 Mar 2024 14:23:08 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
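For readers not familiar with simplehash.h, the following is a minimal sketch of how the bh_nodeidx hash table discussed above is instantiated, mirroring the SH_* settings quoted in the review. It is an illustration only: the exact entry-struct fields are assumptions, and in the actual patch the SH_DECLARE half lives in the header while the .c file does the SH_DEFINE.

#include "common/hashfn.h"              /* for hash_bytes() */

/* bh_node_type is a Datum in binaryheap.h */
typedef struct bh_nodeidx_entry
{
    bh_node_type key;                   /* the heap node, used as the hash key */
    char         status;                /* entry status, required by simplehash */
    int          index;                 /* the node's slot in heap->bh_nodes[] */
} bh_nodeidx_entry;

#define SH_PREFIX bh_nodeidx
#define SH_ELEMENT_TYPE bh_nodeidx_entry
#define SH_KEY_TYPE bh_node_type
#define SH_KEY key
#define SH_HASH_KEY(tb, key) \
    hash_bytes((const unsigned char *) &key, sizeof(bh_node_type))
#define SH_EQUAL(tb, a, b) (memcmp(&a, &b, sizeof(bh_node_type)) == 0)
#define SH_SCOPE extern
#ifdef FRONTEND
#define SH_RAW_ALLOCATOR pg_malloc0
#endif
#define SH_DECLARE
#define SH_DEFINE
#include "lib/simplehash.h"

This generates bh_nodeidx_insert(), bh_nodeidx_lookup() and bh_nodeidx_delete(), the functions the rest of the thread refers to.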
{
"msg_contents": "On Tue, Mar 12, 2024 at 4:23 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Mar 8, 2024 at 12:58 PM Peter Smith <[email protected]> wrote:\n> >\n...\n> > > > 5.\n> > > > + *\n> > > > + * If 'indexed' is true, we create a hash table to track of each node's\n> > > > + * index in the heap, enabling to perform some operations such as removing\n> > > > + * the node from the heap.\n> > > > */\n> > > > binaryheap *\n> > > > -binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)\n> > > > +binaryheap_allocate(int capacity, binaryheap_comparator compare,\n> > > > + bool indexed, void *arg)\n> > > >\n> > > > BEFORE\n> > > > ... enabling to perform some operations such as removing the node from the heap.\n> > > >\n> > > > SUGGESTION\n> > > > ... to help make operations such as removing nodes more efficient.\n> > > >\n> > >\n> > > But these operations literally require the indexed binary heap as we\n> > > have an assertion:\n> > >\n> > > void\n> > > binaryheap_remove_node_ptr(binaryheap *heap, bh_node_type d)\n> > > {\n> > > bh_nodeidx_entry *ent;\n> > >\n> > > Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n> > > Assert(heap->bh_indexed);\n> > >\n> >\n> > I didn’t quite understand -- the operations mentioned are \"operations\n> > such as removing the node\", but binaryheap_remove_node() also removes\n> > a node from the heap. So I still felt the comment wording of the patch\n> > is not quite correct.\n>\n> Now I understand your point. That's a valid point.\n>\n> >\n> > Now, if the removal of a node from an indexed heap can *only* be done\n> > using binaryheap_remove_node_ptr() then:\n> > - the other removal functions (binaryheap_remove_*) probably need some\n> > comments to make sure nobody is tempted to call them directly for an\n> > indexed heap.\n> > - maybe some refactoring and assertions are needed to ensure those\n> > *cannot* be called directly for an indexed heap.\n> >\n>\n> If the 'index' is true, the caller can not only use the existing\n> functions but also newly added functions such as\n> binaryheap_remove_node_ptr() and binaryheap_update_up() etc. How about\n> something like below?\n>\n\nYou said: \"can not only use the existing functions but also...\"\n\nHmm. Is that right? IIUC those existing \"remove\" functions should NOT\nbe called directly if the heap was \"indexed\" because they'll delete\nthe node from the heap OK, but any corresponding index for that\ndeleted node will be left lying around -- i.e. everything gets out of\nsync. This was the reason for my original concern.\n\n> * If 'indexed' is true, we create a hash table to track each node's\n> * index in the heap, enabling to perform some operations such as\n> * binaryheap_remove_node_ptr() etc.\n>\n\nYeah, something like that... I'll wait for the next patch version\nbefore commenting further.\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 13 Mar 2024 12:15:06 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 10:15 AM Peter Smith <[email protected]> wrote:\n>\n> On Tue, Mar 12, 2024 at 4:23 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Fri, Mar 8, 2024 at 12:58 PM Peter Smith <[email protected]> wrote:\n> > >\n> ...\n> > > > > 5.\n> > > > > + *\n> > > > > + * If 'indexed' is true, we create a hash table to track of each node's\n> > > > > + * index in the heap, enabling to perform some operations such as removing\n> > > > > + * the node from the heap.\n> > > > > */\n> > > > > binaryheap *\n> > > > > -binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)\n> > > > > +binaryheap_allocate(int capacity, binaryheap_comparator compare,\n> > > > > + bool indexed, void *arg)\n> > > > >\n> > > > > BEFORE\n> > > > > ... enabling to perform some operations such as removing the node from the heap.\n> > > > >\n> > > > > SUGGESTION\n> > > > > ... to help make operations such as removing nodes more efficient.\n> > > > >\n> > > >\n> > > > But these operations literally require the indexed binary heap as we\n> > > > have an assertion:\n> > > >\n> > > > void\n> > > > binaryheap_remove_node_ptr(binaryheap *heap, bh_node_type d)\n> > > > {\n> > > > bh_nodeidx_entry *ent;\n> > > >\n> > > > Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n> > > > Assert(heap->bh_indexed);\n> > > >\n> > >\n> > > I didn’t quite understand -- the operations mentioned are \"operations\n> > > such as removing the node\", but binaryheap_remove_node() also removes\n> > > a node from the heap. So I still felt the comment wording of the patch\n> > > is not quite correct.\n> >\n> > Now I understand your point. That's a valid point.\n> >\n> > >\n> > > Now, if the removal of a node from an indexed heap can *only* be done\n> > > using binaryheap_remove_node_ptr() then:\n> > > - the other removal functions (binaryheap_remove_*) probably need some\n> > > comments to make sure nobody is tempted to call them directly for an\n> > > indexed heap.\n> > > - maybe some refactoring and assertions are needed to ensure those\n> > > *cannot* be called directly for an indexed heap.\n> > >\n> >\n> > If the 'index' is true, the caller can not only use the existing\n> > functions but also newly added functions such as\n> > binaryheap_remove_node_ptr() and binaryheap_update_up() etc. How about\n> > something like below?\n> >\n>\n> You said: \"can not only use the existing functions but also...\"\n>\n> Hmm. Is that right? IIUC those existing \"remove\" functions should NOT\n> be called directly if the heap was \"indexed\" because they'll delete\n> the node from the heap OK, but any corresponding index for that\n> deleted node will be left lying around -- i.e. everything gets out of\n> sync. This was the reason for my original concern.\n>\n\nAll existing binaryheap functions should be available even if the\nbinaryheap is 'indexed'. 
For instance, with the patch,\nbinaryheap_remote_node() is:\n\nvoid\nbinaryheap_remove_node(binaryheap *heap, int n)\n{\n int cmp;\n\n Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n Assert(n >= 0 && n < heap->bh_size);\n\n /* compare last node to the one that is being removed */\n cmp = heap->bh_compare(heap->bh_nodes[--heap->bh_size],\n heap->bh_nodes[n],\n heap->bh_arg);\n\n /* remove the last node, placing it in the vacated entry */\n replace_node(heap, n, heap->bh_nodes[heap->bh_size]);\n\n /* sift as needed to preserve the heap property */\n if (cmp > 0)\n sift_up(heap, n);\n else if (cmp < 0)\n sift_down(heap, n);\n}\n\nThe replace_node(), sift_up() and sift_down() update node's index as\nwell if the binaryheap is indexed. When deleting the node from the\nbinaryheap, it will also delete its index from the hash table.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Mar 2024 10:47:33 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
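To make the mechanism described above concrete, here is a simplified sketch of how set_node()/replace_node() can keep the index hash table in sync, so that the pre-existing removal functions stay safe on an indexed heap. It illustrates the idea rather than reproducing the patch; the bh_nodeidx_* calls are the simplehash-generated functions and the field names follow the excerpts quoted in this thread.

/* Store 'node' at position 'index' and record that position in the hash. */
static void
set_node(binaryheap *heap, bh_node_type node, int index)
{
    heap->bh_nodes[index] = node;

    if (binaryheap_indexed(heap))
    {
        bh_nodeidx_entry *ent;
        bool              found;

        ent = bh_nodeidx_insert(heap->bh_nodeidx, node, &found);
        ent->index = index;
    }
}

/* Overwrite the node at 'index' with 'new_node', keeping the hash in sync. */
static void
replace_node(binaryheap *heap, int index, bh_node_type new_node)
{
    if (heap->bh_nodes[index] == new_node)
        return;                         /* nothing moves */

    /* forget the overwritten node's position ... */
    if (binaryheap_indexed(heap))
        bh_nodeidx_delete(heap->bh_nodeidx, heap->bh_nodes[index]);

    /* ... and record where the replacement now lives */
    set_node(heap, new_node, index);
}

Because sift_up() and sift_down() also move elements through set_node(), every reposition updates the tracked index, which is why binaryheap_remove_node() above needs no special handling for indexed heaps.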
{
"msg_contents": "On Wed, Mar 13, 2024 at 12:48 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Mar 13, 2024 at 10:15 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Tue, Mar 12, 2024 at 4:23 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Fri, Mar 8, 2024 at 12:58 PM Peter Smith <[email protected]> wrote:\n> > > >\n> > ...\n> > > > > > 5.\n> > > > > > + *\n> > > > > > + * If 'indexed' is true, we create a hash table to track of each node's\n> > > > > > + * index in the heap, enabling to perform some operations such as removing\n> > > > > > + * the node from the heap.\n> > > > > > */\n> > > > > > binaryheap *\n> > > > > > -binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)\n> > > > > > +binaryheap_allocate(int capacity, binaryheap_comparator compare,\n> > > > > > + bool indexed, void *arg)\n> > > > > >\n> > > > > > BEFORE\n> > > > > > ... enabling to perform some operations such as removing the node from the heap.\n> > > > > >\n> > > > > > SUGGESTION\n> > > > > > ... to help make operations such as removing nodes more efficient.\n> > > > > >\n> > > > >\n> > > > > But these operations literally require the indexed binary heap as we\n> > > > > have an assertion:\n> > > > >\n> > > > > void\n> > > > > binaryheap_remove_node_ptr(binaryheap *heap, bh_node_type d)\n> > > > > {\n> > > > > bh_nodeidx_entry *ent;\n> > > > >\n> > > > > Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n> > > > > Assert(heap->bh_indexed);\n> > > > >\n> > > >\n> > > > I didn’t quite understand -- the operations mentioned are \"operations\n> > > > such as removing the node\", but binaryheap_remove_node() also removes\n> > > > a node from the heap. So I still felt the comment wording of the patch\n> > > > is not quite correct.\n> > >\n> > > Now I understand your point. That's a valid point.\n> > >\n> > > >\n> > > > Now, if the removal of a node from an indexed heap can *only* be done\n> > > > using binaryheap_remove_node_ptr() then:\n> > > > - the other removal functions (binaryheap_remove_*) probably need some\n> > > > comments to make sure nobody is tempted to call them directly for an\n> > > > indexed heap.\n> > > > - maybe some refactoring and assertions are needed to ensure those\n> > > > *cannot* be called directly for an indexed heap.\n> > > >\n> > >\n> > > If the 'index' is true, the caller can not only use the existing\n> > > functions but also newly added functions such as\n> > > binaryheap_remove_node_ptr() and binaryheap_update_up() etc. How about\n> > > something like below?\n> > >\n> >\n> > You said: \"can not only use the existing functions but also...\"\n> >\n> > Hmm. Is that right? IIUC those existing \"remove\" functions should NOT\n> > be called directly if the heap was \"indexed\" because they'll delete\n> > the node from the heap OK, but any corresponding index for that\n> > deleted node will be left lying around -- i.e. everything gets out of\n> > sync. This was the reason for my original concern.\n> >\n>\n> All existing binaryheap functions should be available even if the\n> binaryheap is 'indexed'. 
For instance, with the patch,\n> binaryheap_remote_node() is:\n>\n> void\n> binaryheap_remove_node(binaryheap *heap, int n)\n> {\n> int cmp;\n>\n> Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n> Assert(n >= 0 && n < heap->bh_size);\n>\n> /* compare last node to the one that is being removed */\n> cmp = heap->bh_compare(heap->bh_nodes[--heap->bh_size],\n> heap->bh_nodes[n],\n> heap->bh_arg);\n>\n> /* remove the last node, placing it in the vacated entry */\n> replace_node(heap, n, heap->bh_nodes[heap->bh_size]);\n>\n> /* sift as needed to preserve the heap property */\n> if (cmp > 0)\n> sift_up(heap, n);\n> else if (cmp < 0)\n> sift_down(heap, n);\n> }\n>\n> The replace_node(), sift_up() and sift_down() update node's index as\n> well if the binaryheap is indexed. When deleting the node from the\n> binaryheap, it will also delete its index from the hash table.\n>\n\nI see now. Thanks for the information.\n\n~~~\n\nSome more review comments for v8-0002\n\n======\n\n1.\n+/*\n+ * Remove the node's index from the hash table if the heap is indexed.\n+ */\n+static bool\n+delete_nodeidx(binaryheap *heap, bh_node_type node)\n+{\n+ if (!binaryheap_indexed(heap))\n+ return false;\n+\n+ return bh_nodeidx_delete(heap->bh_nodeidx, node);\n+}\n\nI wasn't sure if having this function was a good idea. Yes, it makes\ncode more readable, but I felt the heap code ought to be as efficient\nas possible so maybe it is better for the index check to be done at\nthe caller, instead of incurring any overhead of function calls that\nmight do nothing.\n\nSUGGESTION\nif (binaryheap_indexed(heap))\n found = bh_nodeidx_delete(heap->bh_nodeidx, node);\n\n~~~\n\n2.\n+/*\n+ * binaryheap_update_up\n+ *\n+ * Sift the given node up after the node's key is updated. The caller must\n+ * ensure that the given node is in the heap. O(log n) worst case.\n+ *\n+ * This function can be used only if the heap is indexed.\n+ */\n+void\n+binaryheap_update_up(binaryheap *heap, bh_node_type d)\n+{\n+ bh_nodeidx_entry *ent;\n+\n+ Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n+ Assert(binaryheap_indexed(heap));\n+\n+ ent = bh_nodeidx_lookup(heap->bh_nodeidx, d);\n+ Assert(ent);\n+ Assert(ent->index >= 0 && ent->index < heap->bh_size);\n+\n+ sift_up(heap, ent->index);\n+}\n+\n+/*\n+ * binaryheap_update_down\n+ *\n+ * Sift the given node down after the node's key is updated. The caller must\n+ * ensure that the given node is in the heap. O(log n) worst case.\n+ *\n+ * This function can be used only if the heap is indexed.\n+ */\n+void\n+binaryheap_update_down(binaryheap *heap, bh_node_type d)\n+{\n+ bh_nodeidx_entry *ent;\n+\n+ Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n+ Assert(binaryheap_indexed(heap));\n+\n+ ent = bh_nodeidx_lookup(heap->bh_nodeidx, d);\n+ Assert(ent);\n+ Assert(ent->index >= 0 && ent->index < heap->bh_size);\n+\n+ sift_down(heap, ent->index);\n+}\n\nSince those functions are almost identical, wouldn't it be better to\ncombine them, passing the sift direction?\n\nSUGGESTION\nbinaryheap_resift(binaryheap *heap, bh_node_type d, bool sift_dir_up)\n{\n ...\n\n if (sift_dir_up)\n sift_up(heap, ent->index);\n else\n sift_down(heap, ent->index);\n}\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 13 Mar 2024 13:22:59 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 3:04 PM Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for v8-0003\n>\n> ======\n> 0. GENERAL -- why the state enum?\n>\n> This patch introduced a new ReorderBufferMemTrackState with 2 states\n> (REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP,\n> REORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP)\n>\n> It's the same as having a boolean flag OFF/ON, so I didn't see any\n> benefit of the enum instead of a simple boolean flag like\n> 'track_txn_sizes'.\n>\n> NOTE: Below in this post (see #11) I would like to propose another\n> idea, which can simplify much further, eliminating the need for the\n> state boolean. If adopted that will impact lots of these other review\n> comments.\n\nGood point! We used to use three states in the earlier version patch\nbut now that we have only two we don't necessarily need to use an\nenum. I've used your idea.\n\n>\n> ======\n> Commit Message\n>\n> 1.\n> Previously, when selecting the transaction to evict during logical\n> decoding, we check all transactions to find the largest\n> transaction. Which could lead to a significant replication lag\n> especially in case where there are many subtransactions.\n>\n> ~\n>\n> /Which could/This could/\n>\n> /in case/in the case/\n\nFixed.\n\n>\n> ======\n> .../replication/logical/reorderbuffer.c\n>\n> 2.\n> * We use a max-heap with transaction size as the key to efficiently find\n> * the largest transaction. The max-heap state is managed in two states:\n> * REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP and\n> REORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP.\n>\n> /The max-heap state is managed in two states:/The max-heap is managed\n> in two states:/\n\nThis part is removed.\n\n>\n> ~~~\n>\n> 3.\n> +/*\n> + * Threshold of the total number of top-level and sub transactions\n> that controls\n> + * whether we switch the memory track state. While using max-heap to select\n> + * the largest transaction is effective when there are many transactions being\n> + * decoded, in many systems there is generally no need to use it as long as all\n> + * transactions being decoded are top-level transactions. Therefore, we use\n> + * MaxConnections as the threshold* so we can prevent switch to the\n> state unless\n> + * we use subtransactions.\n> + */\n> +#define REORDER_BUFFER_MEM_TRACK_THRESHOLD MaxConnections\n>\n> 3a.\n> /memory track state./memory tracking state./\n>\n> /While using max-heap/Although using max-heap/\n>\n> \"in many systems\" (are these words adding anything?)\n>\n> /threshold*/threshold/\n>\n> /so we can prevent switch/so we can prevent switching/\n>\n\nFixed.\n\n> ~\n>\n> 3b.\n> There's nothing really in this name to indicate the units of the\n> threshold. Consider if there is some more informative name for this\n> macro: e.g.\n> MAXHEAP_TX_COUNT_THRESHOLD (?)\n\nFixed.\n\n>\n> ~~~\n>\n> 4.\n> + /*\n> + * Don't start with a lower number than\n> + * REORDER_BUFFER_MEM_TRACK_THRESHOLD, since we add at least\n> + * REORDER_BUFFER_MEM_TRACK_THRESHOLD entries at once.\n> + */\n> + buffer->memtrack_state = REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP;\n> + buffer->txn_heap = binaryheap_allocate(REORDER_BUFFER_MEM_TRACK_THRESHOLD * 2,\n> + ReorderBufferTXNSizeCompare,\n> + true, NULL);\n> +\n>\n> IIUC the comment intends to say:\n>\n> Allocate the initial heap size greater than THRESHOLD because the\n> txn_heap will not be used until the threshold is exceeded.\n>\n> Also, maybe the comment should make a point of saying \"Note: the\n> binary heap is INDEXED for faster manipulations\". 
or something\n> similar.\n>\n\nFixed.\n\n> ~~~\n>\n> 5.\n> static void\n> ReorderBufferChangeMemoryUpdate(ReorderBuffer *rb,\n> ReorderBufferChange *change,\n> + ReorderBufferTXN *txn,\n> bool addition, Size sz)\n> {\n> - ReorderBufferTXN *txn;\n> ReorderBufferTXN *toptxn;\n>\n> - Assert(change->txn);\n> -\n>\n> There seems some trick now where the passed 'change' could be NULL,\n> which was not possible before. e.g., when change is NULL then 'txn' is\n> not NULL, and vice versa. Some explanation about this logic and the\n> meaning of these parameters should be written in this function\n> comment.\n\nAdded comments.\n\n>\n> ~\n>\n> 6.\n> + txn = txn != NULL ? txn : change->txn;\n>\n> IMO it's more natural to code the ternary using the same order as the\n> parameters:\n>\n> e.g. txn = change ? change->txn : txn;\n\nI see your point. I changed it to:\n\n if (txn == NULL)\n txn = change->txn;\n\nso we don't change txn if it's not NULL.\n\n>\n> ~~~\n>\n> 7.\n> /*\n> * Build the max-heap and switch the state. We will run a heap assembly step\n> * at the end, which is more efficient.\n> */\n> static void\n> ReorderBufferBuildMaxHeap(ReorderBuffer *rb)\n>\n> /We will run a heap assembly step at the end, which is more\n> efficient./The heap assembly step is deferred until the end, for\n> efficiency./\n\nFixed.\n\n>\n> ~~~\n>\n> 8. ReorderBufferLargestTXN\n>\n> + if (hash_get_num_entries(rb->by_txn) < REORDER_BUFFER_MEM_TRACK_THRESHOLD)\n> + {\n> + HASH_SEQ_STATUS hash_seq;\n> + ReorderBufferTXNByIdEnt *ent;\n> +\n> + hash_seq_init(&hash_seq, rb->by_txn);\n> + while ((ent = hash_seq_search(&hash_seq)) != NULL)\n> + {\n> + ReorderBufferTXN *txn = ent->txn;\n> +\n> + /* if the current transaction is larger, remember it */\n> + if ((!largest) || (txn->size > largest->size))\n> + largest = txn;\n> + }\n> +\n> + Assert(largest);\n> + }\n>\n> That Assert(largest) seems redundant because there is anyway another\n> Assert(largest) immediately after this code.\n\nRemoved.\n\n>\n> ~~~\n>\n> 9.\n> + /* Get the largest transaction from the max-heap */\n> + if (rb->memtrack_state == REORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP)\n> + {\n> + Assert(binaryheap_size(rb->txn_heap) > 0);\n> + largest = (ReorderBufferTXN *)\n> + DatumGetPointer(binaryheap_first(rb->txn_heap));\n> }\n> Assert(binaryheap_size(rb->txn_heap) > 0); seemed like slightly less\n> readable way of saying:\n>\n> Assert(!binaryheap_empty(rb->txn_heap));\n\nFixed.\n\n>\n> ~~~\n>\n> 10.\n> +\n> +/*\n> + * Compare between sizes of two transactions. This is for a binary heap\n> + * comparison function.\n> + */\n> +static int\n> +ReorderBufferTXNSizeCompare(Datum a, Datum b, void *arg)\n>\n> 10a.\n> /Compare between sizes of two transactions./Compare two transactions by size./\n\nFixed.\n\n>\n> ~~~\n>\n> 10b.\n> IMO this comparator function belongs just before the\n> ReorderBufferAllocate() function since that is the only place where it\n> is used.\n\nI think it's better to move close to new max-heap related functions.\n\n>\n> ======\n> src/include/replication/reorderbuffer.h\n>\n> 11.\n> +/* State of how to track the memory usage of each transaction being decoded */\n> +typedef enum ReorderBufferMemTrackState\n> +{\n> + /*\n> + * We don't update max-heap while updating the memory counter. 
The\n> + * max-heap is built before use.\n> + */\n> + REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP,\n> +\n> + /*\n> + * We also update the max-heap when updating the memory counter so the\n> + * heap property is always preserved.\n> + */\n> + REORDER_BUFFER_MEM_TRACK_MAINTAIN_MAXHEAP,\n> +} ReorderBufferMemTrackState;\n> +\n>\n> In my GENERAL review comment #0, I suggested the removal of this\n> entire enum. e.g. It could be replaced with a boolean field\n> 'track_txn_sizes'\n>\n> TBH, I think there is a better way to handle this \"state\". IIUC\n> - the txn_heap is always allocated up-front.\n> - you only \"build\" it when > threshold and\n> - when it drops < 0.9 x threshold you reset it.\n>\n> Therefore, AFAICT you do not need to maintain any “switch states” at\n> all; you simply need to check binaryheap_empty(txn_heap), right?\n> * If the heap is empty…. It means you are NOT tracking, so don’t use it\n> * If the heap is NOT empty …. It means you ARE tracking, so use it.\n>\n> ~\n>\n> Using my idea to remove the state flag will have the side effect of\n> simplifying many other parts of this patch. For example\n>\n> BEFORE\n> +static void\n> +ReorderBufferMaybeChangeNoMaxHeap(ReorderBuffer *rb)\n> +{\n> + if (rb->memtrack_state == REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP)\n> + return;\n> +\n> ...\n> + if (binaryheap_size(rb->txn_heap) < REORDER_BUFFER_MEM_TRACK_THRESHOLD * 0.9)\n> + {\n> + rb->memtrack_state = REORDER_BUFFER_MEM_TRACK_NO_MAXHEAP;\n> + binaryheap_reset(rb->txn_heap);\n> + }\n> +}\n>\n> AFTER\n> +static void\n> +ReorderBufferMaybeChangeNoMaxHeap(ReorderBuffer *rb)\n> +{\n> + if (binaryheap_empty(rb->txn_heap))\n> + return;\n> +\n> ...\n> + if (binaryheap_size(rb->txn_heap) < REORDER_BUFFER_MEM_TRACK_THRESHOLD * 0.9)\n> + binaryheap_reset(rb->txn_heap);\n> +}\n\nAgreed. I removed the enum and changed the logic.\n\n>\n> ~~~\n>\n> 12. struct ReorderBuffer\n>\n> + /* Max-heap for sizes of all top-level and sub transactions */\n> + ReorderBufferMemTrackState memtrack_state;\n> + binaryheap *txn_heap;\n> +\n>\n> 12a.\n> Why is this being referred to in the commit message and code comments\n> as \"max-heap\" when the field is not called by that same name? Won't it\n> be better to give the field a better name -- e.g. \"txn_maxheap\" or\n> similar?\n\nNot sure it helps increase readability. Other codes where we use\nbinaryheap use neither max nor min in the field name.\n\n>\n> ~\n>\n> 12b.\n> This comment should also say that the heap is ordered by tx size --\n> (e.g. the comparator is ReorderBufferTXNSizeCompare)\n\nIt seems to me the comment \"/* Max-heap for sizes of all top-level and\nsub transactions */\" already mentions that, no? I'm not sure we need\nto refer to the actual function name here.\n\nI've attached new version patches.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 14 Mar 2024 12:02:39 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
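As a reading aid for the review comments above, the selection logic under discussion can be summarised by the following simplified sketch of ReorderBufferLargestTXN(). The names (MAX_HEAP_TXN_COUNT_THRESHOLD, rb->txn_heap, ReorderBufferBuildMaxHeap) follow the excerpts in this thread, but the body is a paraphrase, not the committed code.

static ReorderBufferTXN *
ReorderBufferLargestTXN(ReorderBuffer *rb)
{
    ReorderBufferTXN *largest = NULL;

    if (hash_get_num_entries(rb->by_txn) < MAX_HEAP_TXN_COUNT_THRESHOLD)
    {
        /* Few transactions: a linear scan over by_txn is cheap enough. */
        HASH_SEQ_STATUS hash_seq;
        ReorderBufferTXNByIdEnt *ent;

        hash_seq_init(&hash_seq, rb->by_txn);
        while ((ent = hash_seq_search(&hash_seq)) != NULL)
        {
            if (largest == NULL || ent->txn->size > largest->size)
                largest = ent->txn;
        }
    }
    else
    {
        /*
         * Many transactions: build the indexed max-heap once (if it is not
         * already being maintained) and then peek at the top in O(1).
         */
        if (binaryheap_empty(rb->txn_heap))
            ReorderBufferBuildMaxHeap(rb);

        largest = (ReorderBufferTXN *)
            DatumGetPointer(binaryheap_first(rb->txn_heap));
    }

    Assert(largest != NULL);
    return largest;
}

This also shows why Peter's suggestion works: "is the heap empty?" doubles as the tracking-state flag, so no separate enum or boolean is needed.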
{
"msg_contents": "On Wed, Mar 13, 2024 at 11:23 AM Peter Smith <[email protected]> wrote:\n>\n> On Wed, Mar 13, 2024 at 12:48 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Mar 13, 2024 at 10:15 AM Peter Smith <[email protected]> wrote:\n> > >\n> > > On Tue, Mar 12, 2024 at 4:23 PM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Fri, Mar 8, 2024 at 12:58 PM Peter Smith <[email protected]> wrote:\n> > > > >\n> > > ...\n> > > > > > > 5.\n> > > > > > > + *\n> > > > > > > + * If 'indexed' is true, we create a hash table to track of each node's\n> > > > > > > + * index in the heap, enabling to perform some operations such as removing\n> > > > > > > + * the node from the heap.\n> > > > > > > */\n> > > > > > > binaryheap *\n> > > > > > > -binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)\n> > > > > > > +binaryheap_allocate(int capacity, binaryheap_comparator compare,\n> > > > > > > + bool indexed, void *arg)\n> > > > > > >\n> > > > > > > BEFORE\n> > > > > > > ... enabling to perform some operations such as removing the node from the heap.\n> > > > > > >\n> > > > > > > SUGGESTION\n> > > > > > > ... to help make operations such as removing nodes more efficient.\n> > > > > > >\n> > > > > >\n> > > > > > But these operations literally require the indexed binary heap as we\n> > > > > > have an assertion:\n> > > > > >\n> > > > > > void\n> > > > > > binaryheap_remove_node_ptr(binaryheap *heap, bh_node_type d)\n> > > > > > {\n> > > > > > bh_nodeidx_entry *ent;\n> > > > > >\n> > > > > > Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n> > > > > > Assert(heap->bh_indexed);\n> > > > > >\n> > > > >\n> > > > > I didn’t quite understand -- the operations mentioned are \"operations\n> > > > > such as removing the node\", but binaryheap_remove_node() also removes\n> > > > > a node from the heap. So I still felt the comment wording of the patch\n> > > > > is not quite correct.\n> > > >\n> > > > Now I understand your point. That's a valid point.\n> > > >\n> > > > >\n> > > > > Now, if the removal of a node from an indexed heap can *only* be done\n> > > > > using binaryheap_remove_node_ptr() then:\n> > > > > - the other removal functions (binaryheap_remove_*) probably need some\n> > > > > comments to make sure nobody is tempted to call them directly for an\n> > > > > indexed heap.\n> > > > > - maybe some refactoring and assertions are needed to ensure those\n> > > > > *cannot* be called directly for an indexed heap.\n> > > > >\n> > > >\n> > > > If the 'index' is true, the caller can not only use the existing\n> > > > functions but also newly added functions such as\n> > > > binaryheap_remove_node_ptr() and binaryheap_update_up() etc. How about\n> > > > something like below?\n> > > >\n> > >\n> > > You said: \"can not only use the existing functions but also...\"\n> > >\n> > > Hmm. Is that right? IIUC those existing \"remove\" functions should NOT\n> > > be called directly if the heap was \"indexed\" because they'll delete\n> > > the node from the heap OK, but any corresponding index for that\n> > > deleted node will be left lying around -- i.e. everything gets out of\n> > > sync. This was the reason for my original concern.\n> > >\n> >\n> > All existing binaryheap functions should be available even if the\n> > binaryheap is 'indexed'. 
For instance, with the patch,\n> > binaryheap_remote_node() is:\n> >\n> > void\n> > binaryheap_remove_node(binaryheap *heap, int n)\n> > {\n> > int cmp;\n> >\n> > Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n> > Assert(n >= 0 && n < heap->bh_size);\n> >\n> > /* compare last node to the one that is being removed */\n> > cmp = heap->bh_compare(heap->bh_nodes[--heap->bh_size],\n> > heap->bh_nodes[n],\n> > heap->bh_arg);\n> >\n> > /* remove the last node, placing it in the vacated entry */\n> > replace_node(heap, n, heap->bh_nodes[heap->bh_size]);\n> >\n> > /* sift as needed to preserve the heap property */\n> > if (cmp > 0)\n> > sift_up(heap, n);\n> > else if (cmp < 0)\n> > sift_down(heap, n);\n> > }\n> >\n> > The replace_node(), sift_up() and sift_down() update node's index as\n> > well if the binaryheap is indexed. When deleting the node from the\n> > binaryheap, it will also delete its index from the hash table.\n> >\n>\n> I see now. Thanks for the information.\n>\n> ~~~\n>\n> Some more review comments for v8-0002\n>\n> ======\n>\n> 1.\n> +/*\n> + * Remove the node's index from the hash table if the heap is indexed.\n> + */\n> +static bool\n> +delete_nodeidx(binaryheap *heap, bh_node_type node)\n> +{\n> + if (!binaryheap_indexed(heap))\n> + return false;\n> +\n> + return bh_nodeidx_delete(heap->bh_nodeidx, node);\n> +}\n>\n> I wasn't sure if having this function was a good idea. Yes, it makes\n> code more readable, but I felt the heap code ought to be as efficient\n> as possible so maybe it is better for the index check to be done at\n> the caller, instead of incurring any overhead of function calls that\n> might do nothing.\n>\n> SUGGESTION\n> if (binaryheap_indexed(heap))\n> found = bh_nodeidx_delete(heap->bh_nodeidx, node);\n\nI think we can have the function inlined, instead of doing the same\nthings in multiple places. I've changed it in the v9 patch.\n\n> ~~~\n>\n> 2.\n> +/*\n> + * binaryheap_update_up\n> + *\n> + * Sift the given node up after the node's key is updated. The caller must\n> + * ensure that the given node is in the heap. O(log n) worst case.\n> + *\n> + * This function can be used only if the heap is indexed.\n> + */\n> +void\n> +binaryheap_update_up(binaryheap *heap, bh_node_type d)\n> +{\n> + bh_nodeidx_entry *ent;\n> +\n> + Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n> + Assert(binaryheap_indexed(heap));\n> +\n> + ent = bh_nodeidx_lookup(heap->bh_nodeidx, d);\n> + Assert(ent);\n> + Assert(ent->index >= 0 && ent->index < heap->bh_size);\n> +\n> + sift_up(heap, ent->index);\n> +}\n> +\n> +/*\n> + * binaryheap_update_down\n> + *\n> + * Sift the given node down after the node's key is updated. The caller must\n> + * ensure that the given node is in the heap. 
O(log n) worst case.\n> + *\n> + * This function can be used only if the heap is indexed.\n> + */\n> +void\n> +binaryheap_update_down(binaryheap *heap, bh_node_type d)\n> +{\n> + bh_nodeidx_entry *ent;\n> +\n> + Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);\n> + Assert(binaryheap_indexed(heap));\n> +\n> + ent = bh_nodeidx_lookup(heap->bh_nodeidx, d);\n> + Assert(ent);\n> + Assert(ent->index >= 0 && ent->index < heap->bh_size);\n> +\n> + sift_down(heap, ent->index);\n> +}\n>\n> Since those functions are almost identical, wouldn't it be better to\n> combine them, passing the sift direction?\n>\n> SUGGESTION\n> binaryheap_resift(binaryheap *heap, bh_node_type d, bool sift_dir_up)\n> {\n> ...\n>\n> if (sift_dir_up)\n> sift_up(heap, ent->index);\n> else\n> sift_down(heap, ent->index);\n> }\n\nI'm not really sure binaryheap_resift() is a better API than\nbinaryheap_update_up() and _down(). Having different APIs for\ndifferent behavior makes sense to me. On the other hand, I see your\npoint that these two functions have duplicated codes, so I created a\ncommon function for them to remove the duplication.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 12:06:59 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 12:02 PM Masahiko Sawada <[email protected]> wrote:\n>\n>\n> I've attached new version patches.\n\nSince the previous patch conflicts with the current HEAD, I've\nattached the rebased patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 26 Mar 2024 13:34:29 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 8:50 AM vignesh C <[email protected]> wrote:\n>\n> On Wed, 28 Feb 2024 at 11:40, Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Feb 26, 2024 at 7:54 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> >\n> > A few comments on 0003:\n> > ===================\n> > 1.\n> > +/*\n> > + * Threshold of the total number of top-level and sub transactions\n> > that controls\n> > + * whether we switch the memory track state. While the MAINTAIN_HEAP state is\n> > + * effective when there are many transactions being decoded, in many systems\n> > + * there is generally no need to use it as long as all transactions\n> > being decoded\n> > + * are top-level transactions. Therefore, we use MaxConnections as\n> > the threshold\n> > + * so we can prevent switch to the state unless we use subtransactions.\n> > + */\n> > +#define REORDER_BUFFER_MEM_TRACK_THRESHOLD MaxConnections\n> >\n> > The comment seems to imply that MAINTAIN_HEAP is useful for large\n> > number of transactions but ReorderBufferLargestTXN() switches to this\n> > state even when there is one transaction. So, basically we use the\n> > binary_heap technique to get the largest even when we have one\n> > transaction but we don't maintain that heap unless we have\n> > REORDER_BUFFER_MEM_TRACK_THRESHOLD number of transactions are\n> > in-progress. This means there is some additional work when (build and\n> > reset heap each time when we pick largest xact) we have fewer\n> > transactions in the system but that may not be impacting us because of\n> > other costs involved like serializing all the changes. I think once we\n> > can try to stress test this by setting\n> > debug_logical_replication_streaming to 'immediate' to see if the new\n> > mechanism has any overhead.\n>\n> I ran the test with a transaction having many inserts:\n>\n> | 5000 | 10000 | 20000 | 100000 | 1000000 | 10000000\n> ------- |-----------|------------|------------|--------------|----------------|----------------\n> Head | 26.31 | 48.84 | 93.65 | 480.05 | 4808.29 | 47020.16\n> Patch | 26.35 | 50.8 | 97.99 | 484.8 | 4856.95 | 48108.89\n>\n> The same test with debug_logical_replication_streaming= 'immediate'\n>\n> | 5000 | 10000 | 20000 | 100000 | 1000000 | 10000000\n> ------- |-----------|------------|------------|--------------|----------------|----------------\n> Head | 59.29 | 115.84 | 227.21 | 1156.08 | 11367.42 | 113986.14\n> Patch | 62.45 | 120.48 | 240.56 | 1185.12 | 11855.37 | 119921.81\n>\n> The execution time is in milliseconds. 
The column header indicates the\n> number of inserts in the transaction.\n> In this case I noticed that the test execution with patch was taking\n> slightly more time.\n\nI have ran the tests that Vignesh had reported a issue, the test\nresults with the latest patch is given below:\n\nWithout debug_logical_replication_streaming= 'immediate'\nRecord|10000000 |1000000 |100000 | 20000 | 10000 | 5000\n----------|---------------|-------------|-----------|----------|----------|----------\nHead |47563.759| 4917.057|478.923|97.28 |50.368 |25.917\nPatch |47445.733| 4722.874|472.817|95.15 |48.801 |26.168\n%imp |0.248 | 03.949 |01.274 |02.189|03.111 |-0.968\n\nWith debug_logical_replication_streaming= 'immediate'\nRecord| 10000000 | 1000000 | 100000 | 20000 | 10000 | 5000\n----------|----------------|--------------|-------------|-----------|-----------|----------\nHead |106281.236|10669.992|1073.815|214.287|107.62 |54.947\nPatch |103108.673|10603.139|1064.98 |210.229|106.321|54.218\n%imp | 02.985 | 0.626 |0.822 |01.893 |01.207 |01.326\n\nThe execution time is in milliseconds. The column header indicates the\nnumber of inserts in the transaction. I can notice with the test\nresult that the issue has been resolved with the new patch.\n\nThanks and Regards,\nShubham Khanna.\n\n\n",
"msg_date": "Wed, 27 Mar 2024 13:14:44 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Tue, 26 Mar 2024 at 10:05, Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Mar 14, 2024 at 12:02 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> >\n> > I've attached new version patches.\n>\n> Since the previous patch conflicts with the current HEAD, I've\n> attached the rebased patches.\n\nThanks for the updated patch.\nOne comment:\nI felt we can mention the improvement where we update memory\naccounting info at transaction level instead of per change level which\nis done in ReorderBufferCleanupTXN, ReorderBufferTruncateTXN, and\nReorderBufferSerializeTXN also in the commit message:\n@@ -1527,7 +1573,7 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn)\n /* Check we're not mixing changes from different\ntransactions. */\n Assert(change->txn == txn);\n\n- ReorderBufferReturnChange(rb, change, true);\n+ ReorderBufferReturnChange(rb, change, false);\n }\n\n /*\n@@ -1586,8 +1632,13 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn)\n if (rbtxn_is_serialized(txn))\n ReorderBufferRestoreCleanup(rb, txn);\n\n+ /* Update the memory counter */\n+ ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 29 Mar 2024 10:38:57 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 2:09 PM vignesh C <[email protected]> wrote:\n>\n> On Tue, 26 Mar 2024 at 10:05, Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Thu, Mar 14, 2024 at 12:02 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > >\n> > > I've attached new version patches.\n> >\n> > Since the previous patch conflicts with the current HEAD, I've\n> > attached the rebased patches.\n>\n> Thanks for the updated patch.\n> One comment:\n> I felt we can mention the improvement where we update memory\n> accounting info at transaction level instead of per change level which\n> is done in ReorderBufferCleanupTXN, ReorderBufferTruncateTXN, and\n> ReorderBufferSerializeTXN also in the commit message:\n\nAgreed.\n\nI think the patch is in good shape. I'll push the patch with the\nsuggestion next week, barring any objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 15:43:04 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 12:13 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Mar 29, 2024 at 2:09 PM vignesh C <[email protected]> wrote:\n> >\n> > On Tue, 26 Mar 2024 at 10:05, Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Thu, Mar 14, 2024 at 12:02 PM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > >\n> > > > I've attached new version patches.\n> > >\n> > > Since the previous patch conflicts with the current HEAD, I've\n> > > attached the rebased patches.\n> >\n> > Thanks for the updated patch.\n> > One comment:\n> > I felt we can mention the improvement where we update memory\n> > accounting info at transaction level instead of per change level which\n> > is done in ReorderBufferCleanupTXN, ReorderBufferTruncateTXN, and\n> > ReorderBufferSerializeTXN also in the commit message:\n>\n> Agreed.\n>\n> I think the patch is in good shape. I'll push the patch with the\n> suggestion next week, barring any objections.\n>\n\nFew minor comments:\n1.\n@@ -3636,6 +3801,8 @@ ReorderBufferCheckMemoryLimit(ReorderBuffer *rb)\n Assert(txn->nentries_mem == 0);\n }\n\n+ ReorderBufferMaybeResetMaxHeap(rb);\n+\n\nCan we write a comment about why this reset is required here?\nOtherwise, the reason is not apparent.\n\n2.\nAlthough using max-heap to select the largest\n+ * transaction is effective when there are many transactions being decoded,\n+ * there is generally no need to use it as long as all transactions being\n+ * decoded are top-level transactions. Therefore, we use MaxConnections as the\n+ * threshold so we can prevent switching to the state unless we use\n+ * subtransactions.\n+ */\n+#define MAX_HEAP_TXN_COUNT_THRESHOLD MaxConnections\n\nIsn't using max-heap equally effective in finding the largest\ntransaction whether there are top-level or top-level plus\nsubtransactions? This comment indicates it is only effective when\nthere are subtransactions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 29 Mar 2024 16:07:22 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
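For context on comment 1 above, here is a sketch of ReorderBufferMaybeResetMaxHeap() based on the earlier excerpts, including the kind of explanatory comment being requested. The 0.9 factor and the function and macro names come from the quoted patch text; the comment wording itself is only illustrative.

static void
ReorderBufferMaybeResetMaxHeap(ReorderBuffer *rb)
{
    /* Nothing to do unless the max-heap is currently being maintained. */
    if (binaryheap_empty(rb->txn_heap))
        return;

    /*
     * Stop maintaining the max-heap once the number of tracked transactions
     * falls well below the build threshold: updating the heap on every
     * memory-counter change is wasted work when a linear scan is cheap, and
     * the 10% slack avoids repeatedly building and resetting the heap when
     * the transaction count hovers around the threshold.
     */
    if (binaryheap_size(rb->txn_heap) < MAX_HEAP_TXN_COUNT_THRESHOLD * 0.9)
        binaryheap_reset(rb->txn_heap);
}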
{
"msg_contents": "Dear Sawada-san,\r\n\r\n> Agreed.\r\n> \r\n> I think the patch is in good shape. I'll push the patch with the\r\n> suggestion next week, barring any objections.\r\n\r\nThanks for working on this. Agreed it is committable.\r\nFew minor comments:\r\n\r\n```\r\n+ * Either txn or change must be non-NULL at least. We update the memory\r\n+ * counter of txn if it's non-NULL, otherwise change->txn.\r\n```\r\n\r\nIIUC no one checks the restriction. Should we add Assert() for it, e.g,:\r\nAssert(txn || change)? \r\n\r\n```\r\n+ /* make sure enough space for a new node */\r\n...\r\n+ /* make sure enough space for a new node */\r\n```\r\n\r\nShould be started with upper case?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Fri, 29 Mar 2024 11:48:29 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 7:37 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Mar 29, 2024 at 12:13 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Fri, Mar 29, 2024 at 2:09 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Tue, 26 Mar 2024 at 10:05, Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Thu, Mar 14, 2024 at 12:02 PM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > >\n> > > > > I've attached new version patches.\n> > > >\n> > > > Since the previous patch conflicts with the current HEAD, I've\n> > > > attached the rebased patches.\n> > >\n> > > Thanks for the updated patch.\n> > > One comment:\n> > > I felt we can mention the improvement where we update memory\n> > > accounting info at transaction level instead of per change level which\n> > > is done in ReorderBufferCleanupTXN, ReorderBufferTruncateTXN, and\n> > > ReorderBufferSerializeTXN also in the commit message:\n> >\n> > Agreed.\n> >\n> > I think the patch is in good shape. I'll push the patch with the\n> > suggestion next week, barring any objections.\n> >\n>\n> Few minor comments:\n> 1.\n> @@ -3636,6 +3801,8 @@ ReorderBufferCheckMemoryLimit(ReorderBuffer *rb)\n> Assert(txn->nentries_mem == 0);\n> }\n>\n> + ReorderBufferMaybeResetMaxHeap(rb);\n> +\n>\n> Can we write a comment about why this reset is required here?\n> Otherwise, the reason is not apparent.\n\nYes, added.\n\n>\n> 2.\n> Although using max-heap to select the largest\n> + * transaction is effective when there are many transactions being decoded,\n> + * there is generally no need to use it as long as all transactions being\n> + * decoded are top-level transactions. Therefore, we use MaxConnections as the\n> + * threshold so we can prevent switching to the state unless we use\n> + * subtransactions.\n> + */\n> +#define MAX_HEAP_TXN_COUNT_THRESHOLD MaxConnections\n>\n> Isn't using max-heap equally effective in finding the largest\n> transaction whether there are top-level or top-level plus\n> subtransactions? This comment indicates it is only effective when\n> there are subtransactions.\n\nYou're right. Updated the comment.\n\nI've attached the updated patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 1 Apr 2024 11:26:20 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 8:48 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Sawada-san,\n>\n> > Agreed.\n> >\n> > I think the patch is in good shape. I'll push the patch with the\n> > suggestion next week, barring any objections.\n>\n> Thanks for working on this. Agreed it is committable.\n> Few minor comments:\n\nThank you for the comments!\n\n>\n> ```\n> + * Either txn or change must be non-NULL at least. We update the memory\n> + * counter of txn if it's non-NULL, otherwise change->txn.\n> ```\n>\n> IIUC no one checks the restriction. Should we add Assert() for it, e.g,:\n> Assert(txn || change)?\n\nAgreed to add it.\n\n>\n> ```\n> + /* make sure enough space for a new node */\n> ...\n> + /* make sure enough space for a new node */\n> ```\n>\n> Should be started with upper case?\n\nI don't think we need to change it. There are other comments in the\nsame file that are one line and start with lowercase.\n\nI've just submitted the updated patches[1]\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoA6%3D%2BtL%3DbtB_s9N%2BcZK7tKz1W%3DPQyNq72nzjUcdyE%2BwZw%40mail.gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 1 Apr 2024 11:28:12 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Mon, Apr 1, 2024 at 11:26 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Mar 29, 2024 at 7:37 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Mar 29, 2024 at 12:13 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Fri, Mar 29, 2024 at 2:09 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > On Tue, 26 Mar 2024 at 10:05, Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > On Thu, Mar 14, 2024 at 12:02 PM Masahiko Sawada <[email protected]> wrote:\n> > > > > >\n> > > > > >\n> > > > > > I've attached new version patches.\n> > > > >\n> > > > > Since the previous patch conflicts with the current HEAD, I've\n> > > > > attached the rebased patches.\n> > > >\n> > > > Thanks for the updated patch.\n> > > > One comment:\n> > > > I felt we can mention the improvement where we update memory\n> > > > accounting info at transaction level instead of per change level which\n> > > > is done in ReorderBufferCleanupTXN, ReorderBufferTruncateTXN, and\n> > > > ReorderBufferSerializeTXN also in the commit message:\n> > >\n> > > Agreed.\n> > >\n> > > I think the patch is in good shape. I'll push the patch with the\n> > > suggestion next week, barring any objections.\n> > >\n> >\n> > Few minor comments:\n> > 1.\n> > @@ -3636,6 +3801,8 @@ ReorderBufferCheckMemoryLimit(ReorderBuffer *rb)\n> > Assert(txn->nentries_mem == 0);\n> > }\n> >\n> > + ReorderBufferMaybeResetMaxHeap(rb);\n> > +\n> >\n> > Can we write a comment about why this reset is required here?\n> > Otherwise, the reason is not apparent.\n>\n> Yes, added.\n>\n> >\n> > 2.\n> > Although using max-heap to select the largest\n> > + * transaction is effective when there are many transactions being decoded,\n> > + * there is generally no need to use it as long as all transactions being\n> > + * decoded are top-level transactions. Therefore, we use MaxConnections as the\n> > + * threshold so we can prevent switching to the state unless we use\n> > + * subtransactions.\n> > + */\n> > +#define MAX_HEAP_TXN_COUNT_THRESHOLD MaxConnections\n> >\n> > Isn't using max-heap equally effective in finding the largest\n> > transaction whether there are top-level or top-level plus\n> > subtransactions? This comment indicates it is only effective when\n> > there are subtransactions.\n>\n> You're right. Updated the comment.\n>\n> I've attached the updated patches.\n>\n\nWhile reviewing the patches, I realized the comment of\nbinearyheap_allocate() should also be updated. So I've attached the\nnew patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 1 Apr 2024 12:42:21 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Mon, 2024-04-01 at 12:42 +0900, Masahiko Sawada wrote:\n> While reviewing the patches, I realized the comment of\n> binearyheap_allocate() should also be updated. So I've attached the\n> new patches.\n\nIn sift_{up|down}, each loop iteration calls set_node(), and each call\nto set_node does a hash lookup. I didn't measure it, but that feels\nwasteful.\n\nI don't even think you really need the hash table. The key to the hash\ntable is a pointer, so it's not really doing anything that couldn't be\ndone more efficiently by just following the pointer.\n\nI suggest that you add a \"heap_index\" field to ReorderBufferTXN that\nwould point to the index into the heap's array (the same as\nbh_nodeidx_entry.index in your patch). Each time an element moves\nwithin the heap array, just follow the pointer to the ReorderBufferTXN\nobject and update the heap_index -- no hash lookup required.\n\nThat's not easy to do with the current binaryheap API. But a binary\nheap is not a terribly complex structure, so you can just do an inline\nimplementation of it where sift_{up|down} know to update the heap_index\nfield of the ReorderBufferTXN.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 03 Apr 2024 01:45:55 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
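A minimal sketch of what the suggestion above amounts to, assuming the heap were inlined into reorderbuffer.c with ReorderBufferTXN * elements and a hypothetical heap_index field added to ReorderBufferTXN (neither exists in the committed code):

/* Place 'txn' at slot 'i' and remember the slot inside the txn itself. */
static inline void
txn_heap_set(ReorderBufferTXN **nodes, int i, ReorderBufferTXN *txn)
{
    nodes[i] = txn;
    txn->heap_index = i;
}

/* Max-heap sift-up by transaction size; no hash lookups anywhere. */
static void
txn_heap_sift_up(ReorderBufferTXN **nodes, int i)
{
    while (i > 0)
    {
        int         parent = (i - 1) / 2;
        ReorderBufferTXN *tmp;

        if (nodes[parent]->size >= nodes[i]->size)
            break;                  /* heap property already holds */

        tmp = nodes[parent];
        txn_heap_set(nodes, parent, nodes[i]);
        txn_heap_set(nodes, i, tmp);
        i = parent;
    }
}

Every move writes the element's new position straight into the transaction it already points at, which is the "follow the pointer" point being made above.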
{
"msg_contents": "On Wed, 2024-04-03 at 01:45 -0700, Jeff Davis wrote:\n> I suggest that you add a \"heap_index\" field to ReorderBufferTXN that\n> would point to the index into the heap's array (the same as\n> bh_nodeidx_entry.index in your patch). Each time an element moves\n> within the heap array, just follow the pointer to the\n> ReorderBufferTXN\n> object and update the heap_index -- no hash lookup required.\n\nIt looks like my email was slightly too late, as the work was already\ncommitted.\n\nMy suggestion is not required for 17, and so it's fine if this waits\nuntil the next CF. If it turns out to be a win we can consider\nbackporting to 17 just to keep the code consistent, otherwise it can go\nin 18.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 03 Apr 2024 10:32:29 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "Hi,\n\nOn Thu, Apr 4, 2024 at 2:32 AM Jeff Davis <[email protected]> wrote:\n>\n> On Wed, 2024-04-03 at 01:45 -0700, Jeff Davis wrote:\n> > I suggest that you add a \"heap_index\" field to ReorderBufferTXN that\n> > would point to the index into the heap's array (the same as\n> > bh_nodeidx_entry.index in your patch). Each time an element moves\n> > within the heap array, just follow the pointer to the\n> > ReorderBufferTXN\n> > object and update the heap_index -- no hash lookup required.\n>\n> It looks like my email was slightly too late, as the work was already\n> committed.\n\nThank you for the suggestions! I should have informed it earlier.\n\n>\n> My suggestion is not required for 17, and so it's fine if this waits\n> until the next CF. If it turns out to be a win we can consider\n> backporting to 17 just to keep the code consistent, otherwise it can go\n> in 18.\n\nIIUC, with your suggestion, sift_{up|down} needs to update the\nheap_index field as well. Does it mean that the caller needs to pass\nthe address of heap_index down to sift_{up|down}?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 Apr 2024 09:31:11 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Thu, 2024-04-04 at 09:31 +0900, Masahiko Sawada wrote:\n> IIUC, with your suggestion, sift_{up|down} needs to update the\n> heap_index field as well. Does it mean that the caller needs to pass\n> the address of heap_index down to sift_{up|down}?\n\nI'm not sure quite how binaryheap should be changed. Bringing the heap\nimplementation into reorderbuffer.c would obviously work, but that\nwould be more code. Another option might be to make the API of\nbinaryheap look a little more like simplehash, where some #defines\ncontrol optional behavior and can tell the implementation where to find\nfields in the structure.\n\nPerhaps it's not worth the effort though, if performance is already\ngood enough?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 03 Apr 2024 21:54:48 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
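One possible shape of the "simplehash-like" configuration mentioned above, purely as an illustration and reusing the hypothetical heap_index field from the earlier sketch -- none of these BH_* macros or the include file exist today:

/* Hypothetical template-style binary heap, configured like simplehash.h. */
#define BH_PREFIX txnheap
#define BH_ELEMENT_TYPE ReorderBufferTXN *
#define BH_COMPARE(a, b) \
    (((a)->size < (b)->size) ? -1 : ((a)->size == (b)->size) ? 0 : 1)
#define BH_UPDATE_INDEX(elem, idx) ((elem)->heap_index = (idx))
#define BH_SCOPE static inline
#define BH_DECLARE
#define BH_DEFINE
#include "lib/binaryheap_templ.h"       /* hypothetical */

/*
 * This would generate txnheap_allocate(), txnheap_add(), txnheap_first(),
 * txnheap_remove_index(), txnheap_resift_index() and so on, with every
 * element move reported through BH_UPDATE_INDEX so the caller's structure
 * always knows its own position.
 */

Whether that effort is justified depends on the overhead measurement discussed next.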
{
"msg_contents": "On Thu, Apr 4, 2024 at 1:54 PM Jeff Davis <[email protected]> wrote:\n>\n> On Thu, 2024-04-04 at 09:31 +0900, Masahiko Sawada wrote:\n> > IIUC, with your suggestion, sift_{up|down} needs to update the\n> > heap_index field as well. Does it mean that the caller needs to pass\n> > the address of heap_index down to sift_{up|down}?\n>\n> I'm not sure quite how binaryheap should be changed. Bringing the heap\n> implementation into reorderbuffer.c would obviously work, but that\n> would be more code.\n\nRight.\n\n> Another option might be to make the API of\n> binaryheap look a little more like simplehash, where some #defines\n> control optional behavior and can tell the implementation where to find\n> fields in the structure.\n\nInteresting idea.\n\n>\n> Perhaps it's not worth the effort though, if performance is already\n> good enough?\n\nYeah, it would be better to measure the overhead first. I'll do that.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 Apr 2024 17:28:27 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Thu, 2024-04-04 at 17:28 +0900, Masahiko Sawada wrote:\n> > Perhaps it's not worth the effort though, if performance is already\n> > good enough?\n> \n> Yeah, it would be better to measure the overhead first. I'll do that.\n\nI have some further comments and I believe changes are required for\nv17.\n\nAn indexed binary heap API requires both a comparator and a hash\nfunction to be specified, and has two different kinds of keys: the heap\nkey (mutable) and the hash key (immutable). It provides heap methods\nand hashtable methods, and keep the two internal structures (heap and\nHT) in sync.\n\nThe implementation in b840508644 uses the bh_node_type as the hash key,\nwhich is just a Datum, and it just hashes the bytes. I believe the\nimplicit assumption is that the Datum is a pointer -- I'm not sure how\none would use that API if the Datum were a value. Hashing a pointer\nseems strange to me and, while I see why you did it that way, I think\nit reflects that the API boundaries are not quite right.\n\nOne consequence of using the pointer as the hash key is that you need\nto find the pointer first: you can't change or remove elements based on\nthe transaction ID, you have to get the ReorderBufferTXN pointer by\nfinding it in another structure, first. Currently, that's being done by\nsearching ReorderBuffer->by_txn. So we actually have two hash tables\nfor essentially the same purpose: one with xid as the key, and the\nother with the pointer as the key. That makes no sense -- let's have a\nproper indexed binary heap to look things up by xid (the internal HT)\nor by transaction size (using the internal heap).\n\nI suggest:\n\n * Make a proper indexed binary heap API that accepts a hash function\nand provides both heap methods and HT methods that operate based on\nvalues (transaction size and transaction ID, respectively).\n * Get rid of ReorderBuffer->by_txn and use the indexed binary heap\ninstead.\n\nThis will be a net simplification in reorderbuffer.c, which is good,\nbecause that file makes use of a *lot* of data strucutres.\n\nRegards\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 04 Apr 2024 10:55:53 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
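Read literally, the proposal above suggests an API along these lines; it is only one possible reading of the idea, and none of these functions or types exist:

/*
 * Hypothetical indexed binary heap owning both orderings: a comparator on
 * the mutable heap key (transaction size) and hash/equality callbacks on an
 * immutable hash key (the xid).
 */
typedef struct indexed_heap indexed_heap;   /* opaque */

extern indexed_heap *iheap_allocate(int capacity,
                                    binaryheap_comparator heap_cmp,
                                    uint32 (*key_hash) (Datum key),
                                    bool (*key_match) (Datum elem, Datum key),
                                    void *arg);
extern void  iheap_add(indexed_heap *h, Datum elem);
extern Datum iheap_first(indexed_heap *h);              /* largest element */
extern Datum iheap_find(indexed_heap *h, Datum key);    /* lookup by xid */
extern void  iheap_remove(indexed_heap *h, Datum key);
extern void  iheap_resift(indexed_heap *h, Datum key);  /* after size change */

In reorderbuffer.c terms, iheap_find() would take over the role of the by_txn hash table (xid -> ReorderBufferTXN) and iheap_first() would serve eviction, which is the simplification being argued for.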
{
"msg_contents": "On Thu, 2024-04-04 at 10:55 -0700, Jeff Davis wrote:\n> * Make a proper indexed binary heap API that accepts a hash\n> function\n> and provides both heap methods and HT methods that operate based on\n> values (transaction size and transaction ID, respectively).\n> * Get rid of ReorderBuffer->by_txn and use the indexed binary heap\n> instead.\n\nAn alternative idea:\n\n* remove the hash table from binaryheap.c\n\n* supply a new callback to the binary heap with type like:\n\n typedef void (*binaryheap_update_index)(\n bh_node_type node,\n int new_element_index);\n\n* make the remove, update_up, and update_down methods take the element\nindex rather than the pointer\n\nreorderbuffer.c would then do something like:\n\n void\n txn_update_heap_index(ReorderBufferTXN *txn, int new_element_index)\n {\n txn->heap_element_index = new_element_index;\n }\n\n ...\n\n txn_heap = binaryheap_allocate(..., txn_update_heap_index, ...);\n\nand then binaryheap.c would effectively maintain txn-\n>heap_element_index, so reorderbuffer.c can pass that to the APIs that\nrequire the element index.\n\n\nAnother alternative is to keep the hash table in binaryheap.c, and\nsupply a hash function that hashes the xid. That leaves us with two\nhash tables still, but it would be cleaner than hashing the pointer.\nThat might be best for right now, and we can consider these other ideas\nlater.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 04 Apr 2024 11:55:51 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 2:55 AM Jeff Davis <[email protected]> wrote:\n>\n> On Thu, 2024-04-04 at 17:28 +0900, Masahiko Sawada wrote:\n> > > Perhaps it's not worth the effort though, if performance is already\n> > > good enough?\n> >\n> > Yeah, it would be better to measure the overhead first. I'll do that.\n>\n> I have some further comments and I believe changes are required for\n> v17.\n>\n> An indexed binary heap API requires both a comparator and a hash\n> function to be specified, and has two different kinds of keys: the heap\n> key (mutable) and the hash key (immutable). It provides heap methods\n> and hashtable methods, and keep the two internal structures (heap and\n> HT) in sync.\n\nIIUC for example in ReorderBuffer, the heap key is transaction size\nand the hash key is xid.\n\n>\n> The implementation in b840508644 uses the bh_node_type as the hash key,\n> which is just a Datum, and it just hashes the bytes. I believe the\n> implicit assumption is that the Datum is a pointer -- I'm not sure how\n> one would use that API if the Datum were a value. Hashing a pointer\n> seems strange to me and, while I see why you did it that way, I think\n> it reflects that the API boundaries are not quite right.\n\nI see your point. It assumes that the bh_node_type is a pointer or at\nleast unique. So it cannot work with Datum being a value.\n\n>\n> One consequence of using the pointer as the hash key is that you need\n> to find the pointer first: you can't change or remove elements based on\n> the transaction ID, you have to get the ReorderBufferTXN pointer by\n> finding it in another structure, first. Currently, that's being done by\n> searching ReorderBuffer->by_txn. So we actually have two hash tables\n> for essentially the same purpose: one with xid as the key, and the\n> other with the pointer as the key. That makes no sense -- let's have a\n> proper indexed binary heap to look things up by xid (the internal HT)\n> or by transaction size (using the internal heap).\n>\n> I suggest:\n>\n> * Make a proper indexed binary heap API that accepts a hash function\n> and provides both heap methods and HT methods that operate based on\n> values (transaction size and transaction ID, respectively).\n> * Get rid of ReorderBuffer->by_txn and use the indexed binary heap\n> instead.\n>\n> This will be a net simplification in reorderbuffer.c, which is good,\n> because that file makes use of a *lot* of data strucutres.\n>\n\nIt sounds like a data structure that mixes the hash table and the\nbinary heap and we use it as the main storage (e.g. for\nReorderBufferTXN) instead of using the binary heap as the secondary\ndata structure. IIUC with your idea, the indexed binary heap has a\nhash table to store elements each of which has its index within the\nheap node array. I guess it's better to create it as a new data\nstructure rather than extending the existing binaryheap, since APIs\ncould be very different. 
I might be missing something, though.\n\nOn Fri, Apr 5, 2024 at 3:55 AM Jeff Davis <[email protected]> wrote:\n>\n> On Thu, 2024-04-04 at 10:55 -0700, Jeff Davis wrote:\n> > * Make a proper indexed binary heap API that accepts a hash\n> > function\n> > and provides both heap methods and HT methods that operate based on\n> > values (transaction size and transaction ID, respectively).\n> > * Get rid of ReorderBuffer->by_txn and use the indexed binary heap\n> > instead.\n>\n> An alternative idea:\n>\n> * remove the hash table from binaryheap.c\n>\n> * supply a new callback to the binary heap with type like:\n>\n> typedef void (*binaryheap_update_index)(\n> bh_node_type node,\n> int new_element_index);\n>\n> * make the remove, update_up, and update_down methods take the element\n> index rather than the pointer\n>\n> reorderbuffer.c would then do something like:\n>\n> void\n> txn_update_heap_index(ReorderBufferTXN *txn, int new_element_index)\n> {\n> txn->heap_element_index = new_element_index;\n> }\n>\n> ...\n>\n> txn_heap = binaryheap_allocate(..., txn_update_heap_index, ...);\n>\n> and then binaryheap.c would effectively maintain txn-\n> >heap_element_index, so reorderbuffer.c can pass that to the APIs that\n> require the element index.\n\nThank you for the idea. I was thinking the same idea when considering\nyour previous comment. With this idea, we still use the binaryheap for\nReorderBuffer as the second data structure. Since we can implement\nthis idea with relatively small changes to the current binaryheap,\nI've implemented it and measured performances.\n\nI've attached a patch that adds an extension for benchmarking\nbinaryheap implementations. binaryheap_bench.c is the main test\nmodule. To make the comparison between different binaryheap\nimplementations, the extension includes two different binaryheap\nimplementations. Therefore, binaryheap_bench.c uses three different\nbinaryheap implementation in total as the comment on top of the file\nsays:\n\n/*\n * This benchmark tool uses three binary heap implementations.\n *\n * \"binaryheap\" is the current binaryheap implementation in PostgreSQL. That\n * is, it internally has a hash table to track each node index within the\n * node array.\n *\n * \"xx_binaryheap\" is based on \"binaryheap\" but remove the hash table.\n * Instead, it has each element have its index with in the node array. The\n * element's index is updated by the callback function,\nxx_binaryheap_update_index_fn\n * specified when xx_binaryheap_allocate().\n *\n * \"old_binaryheap\" is the binaryheap implementation before the \"indexed\" binary\n * heap changes are made. It neither has a hash table internally nor\ntracks nodes'\n * indexes.\n */\n\nThat is, xx_binaryheap is the binaryheap implementation suggested above.\n\nThe bench_load() function measures the time for adding elements (i.e.\nusing binaryheap_add() and similar). 
Here are results:\n\npostgres(1:3882886)=# select * from generate_series(1,3) x(x), lateral\n(select * from bench_load(true, 10000000 * (1+x-x)));\n x | cnt | load_ms | xx_load_ms | old_load_ms\n---+----------+---------+------------+-------------\n 1 | 10000000 | 4372 | 582 | 429\n 2 | 10000000 | 4371 | 582 | 429\n 3 | 10000000 | 4373 | 582 | 429\n(3 rows)\n\nThis shows that the current indexed binaryheap is much slower than the\nother two implementations, and the xx_binaryheap has a good number in\nspite of also being indexed.\n\nHere are another run that disables indexing on the current binaryheap:\n\npostgres(1:3882886)=# select * from generate_series(1,3) x(x), lateral\n(select * from bench_load(false, 10000000 * (1+x-x)));\n x | cnt | load_ms | xx_load_ms | old_load_ms\n---+----------+---------+------------+-------------\n 1 | 10000000 | 697 | 579 | 430\n 2 | 10000000 | 704 | 582 | 430\n 3 | 10000000 | 698 | 581 | 429\n(3 rows)\n\nThis shows that there is still performance regression in the current\nbinaryheap even if the indexing is disabled. xx_binaryheap also has\nsome regressions. I haven't investigated the root cause yet though.\n\nOverall, we can say there is a large room to improve the current\nbinaryheap performance, as you pointed out. When it comes to\nimplementing the above idea (i.e. changing binaryheap to\nxx_binaryheap), it was simple since we just replace the code where we\nupdate the hash table with the code where we call the callback, if we\nget the consensus on API change.\n\n> Another alternative is to keep the hash table in binaryheap.c, and\n> supply a hash function that hashes the xid. That leaves us with two\n> hash tables still, but it would be cleaner than hashing the pointer.\n> That might be best for right now, and we can consider these other ideas\n> later.\n\nThe fact that we use simplehash for the internal hash table might make\nthis idea complex. If I understand your suggestion correctly, the\ncaller needs to tell the hash table the hash function when creating a\nbinaryheap but the hash function needs to be specified at a compile\ntime. We can use a dynahash instead but it would make the binaryheap\nslow further.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 5 Apr 2024 16:58:15 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, 2024-04-05 at 16:58 +0900, Masahiko Sawada wrote:\n> IIUC for example in ReorderBuffer, the heap key is transaction size\n> and the hash key is xid.\n\nYes.\n\n> \n> I see your point. It assumes that the bh_node_type is a pointer or at\n> least unique. So it cannot work with Datum being a value.\n\nRight. One option might just be to add some comments explaining the API\nand limitations, but in general I feel it's confusing to hash a pointer\nwithout a good reason.\n\n> \n> It sounds like a data structure that mixes the hash table and the\n> binary heap and we use it as the main storage (e.g. for\n> ReorderBufferTXN) instead of using the binary heap as the secondary\n> data structure. IIUC with your idea, the indexed binary heap has a\n> hash table to store elements each of which has its index within the\n> heap node array. I guess it's better to create it as a new data\n> structure rather than extending the existing binaryheap, since APIs\n> could be very different. I might be missing something, though.\n\nYou are right that this approach starts to feel like a new data\nstructure and is not v17 material.\n\nI am interested in this for v18 though -- we could make the API more\nlike simplehash to be more flexible when using values (rather than\npointers) and to be able to inline the comparator.\n\n> > * remove the hash table from binaryheap.c\n> > \n> > * supply a new callback to the binary heap with type like:\n> > \n> > typedef void (*binaryheap_update_index)(\n> > bh_node_type node,\n> > int new_element_index);\n> > \n> > * make the remove, update_up, and update_down methods take the\n> > element\n> > index rather than the pointer\n\n...\n\n> This shows that the current indexed binaryheap is much slower than\n> the\n> other two implementations, and the xx_binaryheap has a good number in\n> spite of also being indexed.\n\nxx_binaryheap isn't indexed though, right?\n\n> When it comes to\n> implementing the above idea (i.e. changing binaryheap to\n> xx_binaryheap), it was simple since we just replace the code where we\n> update the hash table with the code where we call the callback, if we\n> get the consensus on API change.\n\nThat seems reasonable to me.\n\n> The fact that we use simplehash for the internal hash table might\n> make\n> this idea complex. If I understand your suggestion correctly, the\n> caller needs to tell the hash table the hash function when creating a\n> binaryheap but the hash function needs to be specified at a compile\n> time. We can use a dynahash instead but it would make the binaryheap\n> slow further.\n\nsimplehash.h supports private_data, which makes it easier to track a\ncallback.\n\nIn binaryheap.c, that would look something like:\n\n static inline uint32\n binaryheap_hash(bh_nodeidx_hash *tab, uint32 key)\n {\n binaryheap_hashfunc hashfunc = tab->private_data;\n return hashfunc(key);\n }\n\n ...\n #define SH_HASH_KEY(tb, key) binaryheap_hash(tb, key)\n ...\n\n binaryheap_allocate(int num_nodes, binaryheap_comparator compare,\n void *arg, binaryheap_hashfunc hashfunc)\n {\n ...\n if (hashfunc != NULL)\n {\n /* could have a new structure, but we only need to\n * store one pointer, so don't bother with palloc/pfree */\n void *private_data = (void *)hashfunc;\n heap->bh_nodeidx = bh_nodeidx_create(..., private_data);\n ...\n\n\nAnd in reorderbuffer.c, define the callback like:\n\n static uint32\n reorderbuffer_xid_hash(TransactionId xid)\n {\n /* fasthash32 is 'static inline' so may\n * be faster than hash_bytes()? 
*/\n return fasthash32(&xid, sizeof(TransactionId), 0);\n }\n\n\n\nIn summary, there are two viable approaches for addressing the concerns\nin v17:\n\n1. Provide callback to update ReorderBufferTXN->heap_element_index, and\nuse that index (rather than the pointer) for updating the heap key\n(transaction size) or removing elements from the heap.\n\n2. Provide callback for hashing, so that binaryheap.c can hash the xid\nvalue rather than the pointer.\n\nI don't have a strong opinion about which one to use. I prefer\nsomething closer to #1 for v18, but for v17 I suggest whichever one\ncomes out simpler.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 05 Apr 2024 13:43:59 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Sat, Apr 6, 2024 at 5:44 AM Jeff Davis <[email protected]> wrote:\n>\n> >\n> > It sounds like a data structure that mixes the hash table and the\n> > binary heap and we use it as the main storage (e.g. for\n> > ReorderBufferTXN) instead of using the binary heap as the secondary\n> > data structure. IIUC with your idea, the indexed binary heap has a\n> > hash table to store elements each of which has its index within the\n> > heap node array. I guess it's better to create it as a new data\n> > structure rather than extending the existing binaryheap, since APIs\n> > could be very different. I might be missing something, though.\n>\n> You are right that this approach starts to feel like a new data\n> structure and is not v17 material.\n>\n> I am interested in this for v18 though -- we could make the API more\n> like simplehash to be more flexible when using values (rather than\n> pointers) and to be able to inline the comparator.\n\nInteresting project. It would be great if we could support increasing\nand decreasing the key as APIs. The current\nbinaryheap_update_{up|down} APIs are not very user-friendly.\n\n>\n> > > * remove the hash table from binaryheap.c\n> > >\n> > > * supply a new callback to the binary heap with type like:\n> > >\n> > > typedef void (*binaryheap_update_index)(\n> > > bh_node_type node,\n> > > int new_element_index);\n> > >\n> > > * make the remove, update_up, and update_down methods take the\n> > > element\n> > > index rather than the pointer\n>\n> ...\n>\n> > This shows that the current indexed binaryheap is much slower than\n> > the\n> > other two implementations, and the xx_binaryheap has a good number in\n> > spite of also being indexed.\n>\n> xx_binaryheap isn't indexed though, right?\n\nWell, yes. To be xact, xx_binaryheap isn't indexed but the element\nindexes are stored in the element itself (see test_elem struct) so the\ncaller still can update the key using xx_binaryheap_update_{up|down}.\n\n>\n> > When it comes to\n> > implementing the above idea (i.e. changing binaryheap to\n> > xx_binaryheap), it was simple since we just replace the code where we\n> > update the hash table with the code where we call the callback, if we\n> > get the consensus on API change.\n>\n> That seems reasonable to me.\n>\n> > The fact that we use simplehash for the internal hash table might\n> > make\n> > this idea complex. If I understand your suggestion correctly, the\n> > caller needs to tell the hash table the hash function when creating a\n> > binaryheap but the hash function needs to be specified at a compile\n> > time. 
We can use a dynahash instead but it would make the binaryheap\n> > slow further.\n>\n> simplehash.h supports private_data, which makes it easier to track a\n> callback.\n>\n> In binaryheap.c, that would look something like:\n>\n> static inline uint32\n> binaryheap_hash(bh_nodeidx_hash *tab, uint32 key)\n> {\n> binaryheap_hashfunc hashfunc = tab->private_data;\n> return hashfunc(key);\n> }\n>\n> ...\n> #define SH_HASH_KEY(tb, key) binaryheap_hash(tb, key)\n> ...\n>\n> binaryheap_allocate(int num_nodes, binaryheap_comparator compare,\n> void *arg, binaryheap_hashfunc hashfunc)\n> {\n> ...\n> if (hashfunc != NULL)\n> {\n> /* could have a new structure, but we only need to\n> * store one pointer, so don't bother with palloc/pfree */\n> void *private_data = (void *)hashfunc;\n> heap->bh_nodeidx = bh_nodeidx_create(..., private_data);\n> ...\n>\n>\n> And in reorderbuffer.c, define the callback like:\n>\n> static uint32\n> reorderbuffer_xid_hash(TransactionId xid)\n> {\n> /* fasthash32 is 'static inline' so may\n> * be faster than hash_bytes()? */\n> return fasthash32(&xid, sizeof(TransactionId), 0);\n> }\n>\n\nThanks, that's a good idea.\n\n>\n>\n> In summary, there are two viable approaches for addressing the concerns\n> in v17:\n>\n> 1. Provide callback to update ReorderBufferTXN->heap_element_index, and\n> use that index (rather than the pointer) for updating the heap key\n> (transaction size) or removing elements from the heap.\n>\n> 2. Provide callback for hashing, so that binaryheap.c can hash the xid\n> value rather than the pointer.\n>\n> I don't have a strong opinion about which one to use. I prefer\n> something closer to #1 for v18, but for v17 I suggest whichever one\n> comes out simpler.\n\nI've implemented prototypes of both ideas, and attached the draft patches.\n\nI agree with you that something closer to #1 is for v18. Probably we\ncan implement the #1 idea while making binaryheap codes template like\nsimplehash.h. For v17, changes for #2 are smaller, but I'm concerned\nthat the new API that requires a hash function to be able to use\nbinaryheap_update_{up|down} might not be user friendly. In terms of\nAPIs, I prefer #1 idea. And changes for #1 can make the binaryheap\ncode simple, although it requires adding a variable in\nReorderBufferTXN instead. But overall, it can remove the hash table\nand some functions so it looks better to me.\n\nWhen it comes to performance overhead, I mentioned that there is some\nregression in the current binaryheap even without indexing. Since\nfunction calling contributed to the regression, inlining some\nfunctions reduced some overheads. For example, inlining set_node() and\nreplace_node(), the same benchmark test I used showed:\n\npostgres(1:88476)=# select * from generate_series(1,3) x(x), lateral\n(select * from bench_load(false, 10000000 * (1+x-x)));\n x | cnt | load_ms | xx_load_ms | old_load_ms\n---+----------+---------+------------+-------------\n 1 | 10000000 | 502 | 624 | 427\n 2 | 10000000 | 503 | 622 | 428\n 3 | 10000000 | 502 | 621 | 427\n(3 rows)\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 8 Apr 2024 21:29:59 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Mon, 2024-04-08 at 21:29 +0900, Masahiko Sawada wrote:\n> For v17, changes for #2 are smaller, but I'm concerned\n> that the new API that requires a hash function to be able to use\n> binaryheap_update_{up|down} might not be user friendly.\n\nThe only API change in 02 is accepting a hash callback rather than a\nboolean in binaryheap_allocate(), so I don't see that as worse than\nwhat's there now. It also directly fixes my complaint (hashing the\npointer) and does nothing more, so I think it's the right fix for now.\n\nI do think that the API can be better (templated like simplehash), but\nI don't think right now is a great time to change it.\n\n> When it comes to performance overhead, I mentioned that there is some\n> regression in the current binaryheap even without indexing.\n\nAs far as I can tell, you are just adding a single branch in that path,\nand I would expect it to be a predictable branch, right?\n\nThank you for testing, but small differences in a microbenchmark aren't\nterribly worrying for me. If other call sites are that sensitive to\nbinaryheap performance, the right answer is to have a templated version\nthat would not only avoid this unnecessary branch, but also inline the\ncomparator (which probably matters more).\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 08 Apr 2024 11:55:04 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Fri, 2024-04-05 at 16:58 +0900, Masahiko Sawada wrote:\n> > I have some further comments and I believe changes are required for\n> > v17.\n\nI also noticed that the simplehash is declared in binaryheap.h with\n\"SH_SCOPE extern\", which seems wrong. Attached is a rough patch to\nbring the declarations into binaryheap.c.\n\nNote that the attached patch uses \"SH_SCOPE static\", which makes sense\nto me in this case, but causes a bunch of warnings in gcc. I will post\nseparately about silencing that warning, but for now you can either\nuse:\n\n SH_SCOPE static inline\n\nwhich is probably fine, but will encourage the compiler to inline more\ncode, when not all callers even use the hash table. Alternatively, you\ncan do:\n\n SH_SCOPE static pg_attribute_unused()\n\nwhich looks a bit wrong to me, but seems to silence the warnings, and\nlets the compiler decide whether or not to inline.\n\nAlso probably needs comment updates, etc.\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 09 Apr 2024 11:04:49 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
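For reference, a minimal sketch of the arrangement described in the preceding message — instantiating the node-index table privately inside binaryheap.c with "SH_SCOPE static inline" — could look like the following; the entry layout, hash, and equality definitions here are illustrative assumptions, not the committed code:

#include "common/hashfn.h"		/* for hash_bytes() */

typedef struct bh_nodeidx_entry
{
	bh_node_type key;		/* the heap element itself (assumed unique) */
	int			index;		/* element's position in the heap's node array */
	char		status;		/* entry status, required by simplehash */
} bh_nodeidx_entry;

#define SH_PREFIX		bh_nodeidx
#define SH_ELEMENT_TYPE	bh_nodeidx_entry
#define SH_KEY_TYPE		bh_node_type
#define SH_KEY			key
#define SH_HASH_KEY(tb, key)	hash_bytes((const unsigned char *) &key, sizeof(bh_node_type))
#define SH_EQUAL(tb, a, b)		((a) == (b))
#define SH_SCOPE		static inline
#define SH_DECLARE
#define SH_DEFINE
#include "lib/simplehash.h"

Because both SH_DECLARE and SH_DEFINE are emitted in the same translation unit, nothing about the table needs to leak into binaryheap.h.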
{
"msg_contents": "On 09/04/2024 21:04, Jeff Davis wrote:\n> On Fri, 2024-04-05 at 16:58 +0900, Masahiko Sawada wrote:\n>>> I have some further comments and I believe changes are required for\n>>> v17.\n> \n> I also noticed that the simplehash is declared in binaryheap.h with\n> \"SH_SCOPE extern\", which seems wrong. Attached is a rough patch to\n> bring the declarations into binaryheap.c.\n> \n> Note that the attached patch uses \"SH_SCOPE static\", which makes sense\n> to me in this case, but causes a bunch of warnings in gcc. I will post\n> separately about silencing that warning, but for now you can either\n> use:\n> \n> SH_SCOPE static inline\n> \n> which is probably fine, but will encourage the compiler to inline more\n> code, when not all callers even use the hash table. Alternatively, you\n> can do:\n> \n> SH_SCOPE static pg_attribute_unused()\n> \n> which looks a bit wrong to me, but seems to silence the warnings, and\n> lets the compiler decide whether or not to inline.\n> \n> Also probably needs comment updates, etc.\n\nSorry I'm late to the party, I didn't pay attention to this thread \nearlier. But I wonder why this doesn't use the existing pairing heap \nimplementation? I would assume that to be at least as good as the binary \nheap + hash table. And it's cheap to to insert to (O(1)), so we could \nprobably remove the MAX_HEAP_TXN_COUNT_THRESHOLD, and always keep the \nheap up-to-date.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 9 Apr 2024 23:49:58 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Tue, 2024-04-09 at 23:49 +0300, Heikki Linnakangas wrote:\n> \n> I wonder why this doesn't use the existing pairing heap \n> implementation? I would assume that to be at least as good as the\n> binary \n> heap + hash table\n\nI agree that an additional hash table is not needed -- there's already\na hash table to do a lookup based on xid in reorderbuffer.c.\n\nI had suggested that the heap could track the element indexes for\nefficient update/removal, but that would be a change to the\nbinaryheap.h API, which would require some discussion (and possibly not\nbe acceptable post-freeze).\n\nBut I think you're right: a pairing heap already solves the problem\nwithout modification. (Note: our pairing heap API doesn't explicitly\nsupport updating a key, so I think it would need to be done with\nremove/add.) So we might as well just do that right now rather than\ntrying to fix the way the hash table is being used or trying to extend\nthe binaryheap API.\n\nOf course, we should measure to be sure there aren't bad cases around\nupdating/removing a key, but I don't see a fundamental reason that it\nshould be worse.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 09 Apr 2024 18:24:43 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
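A rough C sketch of the remove/re-add approach described in the preceding message, assuming ReorderBufferTXN embeds a pairingheap_node field named txn_node and ReorderBuffer holds a pairingheap *txn_heap; ReorderBufferTxnSizeChanged is a hypothetical helper name used only for illustration:

/* Order transactions by total size; pairingheap_first() then yields the largest. */
static int
ReorderBufferTXNSizeCompare(const pairingheap_node *a, const pairingheap_node *b, void *arg)
{
	const ReorderBufferTXN *ta = pairingheap_const_container(ReorderBufferTXN, txn_node, a);
	const ReorderBufferTXN *tb = pairingheap_const_container(ReorderBufferTXN, txn_node, b);

	if (ta->size < tb->size)
		return -1;
	if (ta->size > tb->size)
		return 1;
	return 0;
}

/*
 * The pairingheap API has no explicit "update key" operation, so a
 * transaction is re-positioned by removing and re-adding it whenever its
 * size changes (hypothetical helper, not committed code).
 */
static void
ReorderBufferTxnSizeChanged(ReorderBuffer *rb, ReorderBufferTXN *txn, Size oldsize)
{
	if (oldsize != 0)
		pairingheap_remove(rb->txn_heap, &txn->txn_node);
	if (txn->size != 0)
		pairingheap_add(rb->txn_heap, &txn->txn_node);
}

/* Picking the largest transaction to evict becomes a peek at the heap's root. */
static ReorderBufferTXN *
ReorderBufferLargestTXN(ReorderBuffer *rb)
{
	Assert(!pairingheap_is_empty(rb->txn_heap));
	return pairingheap_container(ReorderBufferTXN, txn_node,
								 pairingheap_first(rb->txn_heap));
}

With this shape, insertion is O(1) and the remove/add update is O(log n) amortized, and no extra hash table keyed by the pointer is needed; lookups by xid keep going through the existing ReorderBuffer->by_txn table.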
{
"msg_contents": "On Tue, Apr 09, 2024 at 06:24:43PM -0700, Jeff Davis wrote:\n> I had suggested that the heap could track the element indexes for\n> efficient update/removal, but that would be a change to the\n> binaryheap.h API, which would require some discussion (and possibly not\n> be acceptable post-freeze).\n> \n> But I think you're right: a pairing heap already solves the problem\n> without modification. (Note: our pairing heap API doesn't explicitly\n> support updating a key, so I think it would need to be done with\n> remove/add.) So we might as well just do that right now rather than\n> trying to fix the way the hash table is being used or trying to extend\n> the binaryheap API.\n> \n> Of course, we should measure to be sure there aren't bad cases around\n> updating/removing a key, but I don't see a fundamental reason that it\n> should be worse.\n\nThis is going to require a rewrite of 5bec1d6bc5e3 with a new\nperformance study, which strikes me as something that we'd better not\ndo after feature freeze. Wouldn't the best way forward be to revert\n5bec1d6bc5e3 and revisit the whole in v18?\n\nI have added an open item, for now.\n--\nMichael",
"msg_date": "Wed, 10 Apr 2024 12:13:40 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Wed, 2024-04-10 at 12:13 +0900, Michael Paquier wrote:\n\n> Wouldn't the best way forward be to revert\n> 5bec1d6bc5e3 and revisit the whole in v18?\n\nThat's a reasonable conclusion. Also consider commits b840508644 and\nbcb14f4abc.\n\nI had tried to come up with a narrower fix, and I think it's already\nbeen implemented here in approach 2:\n\nhttps://www.postgresql.org/message-id/CAD21AoAtf12e9Z9NLBuaO1GjHMMo16_8R-yBu9Q9jrk2QLqMEA%40mail.gmail.com\n\nbut it does feel wrong to introduce an unnecessary hash table in 17\nwhen we know it's not the right solution.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 09 Apr 2024 21:16:53 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Tue, Apr 09, 2024 at 09:16:53PM -0700, Jeff Davis wrote:\n> On Wed, 2024-04-10 at 12:13 +0900, Michael Paquier wrote:\n>> Wouldn't the best way forward be to revert\n>> 5bec1d6bc5e3 and revisit the whole in v18?\n> \n> Also consider commits b840508644 and bcb14f4abc.\n\nIndeed. These are also linked.\n--\nMichael",
"msg_date": "Wed, 10 Apr 2024 13:45:40 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On 10/04/2024 07:45, Michael Paquier wrote:\n> On Tue, Apr 09, 2024 at 09:16:53PM -0700, Jeff Davis wrote:\n>> On Wed, 2024-04-10 at 12:13 +0900, Michael Paquier wrote:\n>>> Wouldn't the best way forward be to revert\n>>> 5bec1d6bc5e3 and revisit the whole in v18?\n>>\n>> Also consider commits b840508644 and bcb14f4abc.\n> \n> Indeed. These are also linked.\n\nI don't feel the urge to revert this:\n\n- It's not broken as such, we're just discussing better ways to \nimplement it. We could also do nothing, and revisit this in v18. The \nonly must-fix issue is some compiler warnings IIUC.\n\n- It's a pretty localized change in reorderbuffer.c, so it's not in the \nway of other patches or reverts. Nothing else depends on the binaryheap \nchanges yet either.\n\n- It seems straightforward to repeat the performance tests with whatever \nalternative implementations we want to consider.\n\nMy #1 choice would be to write a patch to switch the pairing heap, \nperformance test that, and revert the binary heap changes.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 10 Apr 2024 08:30:22 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 11:00 AM Heikki Linnakangas <[email protected]> wrote:\n>\n> On 10/04/2024 07:45, Michael Paquier wrote:\n> > On Tue, Apr 09, 2024 at 09:16:53PM -0700, Jeff Davis wrote:\n> >> On Wed, 2024-04-10 at 12:13 +0900, Michael Paquier wrote:\n> >>> Wouldn't the best way forward be to revert\n> >>> 5bec1d6bc5e3 and revisit the whole in v18?\n> >>\n> >> Also consider commits b840508644 and bcb14f4abc.\n> >\n> > Indeed. These are also linked.\n>\n> I don't feel the urge to revert this:\n>\n> - It's not broken as such, we're just discussing better ways to\n> implement it. We could also do nothing, and revisit this in v18. The\n> only must-fix issue is some compiler warnings IIUC.\n>\n> - It's a pretty localized change in reorderbuffer.c, so it's not in the\n> way of other patches or reverts. Nothing else depends on the binaryheap\n> changes yet either.\n>\n> - It seems straightforward to repeat the performance tests with whatever\n> alternative implementations we want to consider.\n>\n> My #1 choice would be to write a patch to switch the pairing heap,\n> performance test that, and revert the binary heap changes.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 10 Apr 2024 11:01:53 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Wed, 2024-04-10 at 08:30 +0300, Heikki Linnakangas wrote:\n> My #1 choice would be to write a patch to switch the pairing heap, \n> performance test that, and revert the binary heap changes.\n\nSounds good to me. I would expect it to perform better than the extra\nhash table, if anything.\n\nIt also has the advantage that we don't change the API for binaryheap\nin 17.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 10 Apr 2024 10:14:16 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On 10/04/2024 08:31, Amit Kapila wrote:\n> On Wed, Apr 10, 2024 at 11:00 AM Heikki Linnakangas <[email protected]> wrote:\n>>\n>> On 10/04/2024 07:45, Michael Paquier wrote:\n>>> On Tue, Apr 09, 2024 at 09:16:53PM -0700, Jeff Davis wrote:\n>>>> On Wed, 2024-04-10 at 12:13 +0900, Michael Paquier wrote:\n>>>>> Wouldn't the best way forward be to revert\n>>>>> 5bec1d6bc5e3 and revisit the whole in v18?\n>>>>\n>>>> Also consider commits b840508644 and bcb14f4abc.\n>>>\n>>> Indeed. These are also linked.\n>>\n>> I don't feel the urge to revert this:\n>>\n>> - It's not broken as such, we're just discussing better ways to\n>> implement it. We could also do nothing, and revisit this in v18. The\n>> only must-fix issue is some compiler warnings IIUC.\n>>\n>> - It's a pretty localized change in reorderbuffer.c, so it's not in the\n>> way of other patches or reverts. Nothing else depends on the binaryheap\n>> changes yet either.\n>>\n>> - It seems straightforward to repeat the performance tests with whatever\n>> alternative implementations we want to consider.\n>>\n>> My #1 choice would be to write a patch to switch the pairing heap,\n>> performance test that, and revert the binary heap changes.\n>>\n> \n> +1.\n\nTo move this forward, here's a patch to switch to a pairing heap. In my \nvery quick testing, with the performance test cases posted earlier in \nthis thread [1] [2], I'm seeing no meaningful performance difference \nbetween this and what's in master currently.\n\nSawada-san, what do you think of this? To be sure, if you could also \nrepeat the performance tests you performed earlier, that'd be great. If \nyou agree with this direction, and you're happy with this patch, feel \nfree take it from here and commit this, and also revert commits \nb840508644 and bcb14f4abc.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 11 Apr 2024 00:20:55 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 12:20:55AM +0300, Heikki Linnakangas wrote:\n> To move this forward, here's a patch to switch to a pairing heap. In my very\n> quick testing, with the performance test cases posted earlier in this thread\n> [1] [2], I'm seeing no meaningful performance difference between this and\n> what's in master currently.\n\nReading through the patch, that's a nice cleanup. It cuts quite some\ncode.\n\n+++ b/src/include/replication/reorderbuffer.h\n@@ -12,6 +12,7 @@\n #include \"access/htup_details.h\"\n #include \"lib/binaryheap.h\"\n #include \"lib/ilist.h\"\n+#include \"lib/pairingheap.h\"\n\nI'm slightly annoyed by the extra amount of information that gets\nadded to reorderbuffer.h for stuff that's only local to\nreorderbuffer.c, but that's not something new in this area, so..\n--\nMichael",
"msg_date": "Thu, 11 Apr 2024 07:37:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On 11/04/2024 01:37, Michael Paquier wrote:\n> On Thu, Apr 11, 2024 at 12:20:55AM +0300, Heikki Linnakangas wrote:\n>> To move this forward, here's a patch to switch to a pairing heap. In my very\n>> quick testing, with the performance test cases posted earlier in this thread\n>> [1] [2], I'm seeing no meaningful performance difference between this and\n>> what's in master currently.\n> \n> Reading through the patch, that's a nice cleanup. It cuts quite some\n> code.\n> \n> +++ b/src/include/replication/reorderbuffer.h\n> @@ -12,6 +12,7 @@\n> #include \"access/htup_details.h\"\n> #include \"lib/binaryheap.h\"\n> #include \"lib/ilist.h\"\n> +#include \"lib/pairingheap.h\"\n> \n> I'm slightly annoyed by the extra amount of information that gets\n> added to reorderbuffer.h for stuff that's only local to\n> reorderbuffer.c, but that's not something new in this area, so..\n\nWe can actually remove the \"lib/binaryheap.h\" in this patch; I missed \nthat. There are no other uses of binaryheap in the file.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 11 Apr 2024 02:03:08 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "Hi,\n\nSorry for the late reply, I took two days off.\n\nOn Thu, Apr 11, 2024 at 6:20 AM Heikki Linnakangas <[email protected]> wrote:\n>\n> On 10/04/2024 08:31, Amit Kapila wrote:\n> > On Wed, Apr 10, 2024 at 11:00 AM Heikki Linnakangas <[email protected]> wrote:\n> >>\n> >> On 10/04/2024 07:45, Michael Paquier wrote:\n> >>> On Tue, Apr 09, 2024 at 09:16:53PM -0700, Jeff Davis wrote:\n> >>>> On Wed, 2024-04-10 at 12:13 +0900, Michael Paquier wrote:\n> >>>>> Wouldn't the best way forward be to revert\n> >>>>> 5bec1d6bc5e3 and revisit the whole in v18?\n> >>>>\n> >>>> Also consider commits b840508644 and bcb14f4abc.\n> >>>\n> >>> Indeed. These are also linked.\n> >>\n> >> I don't feel the urge to revert this:\n> >>\n> >> - It's not broken as such, we're just discussing better ways to\n> >> implement it. We could also do nothing, and revisit this in v18. The\n> >> only must-fix issue is some compiler warnings IIUC.\n> >>\n> >> - It's a pretty localized change in reorderbuffer.c, so it's not in the\n> >> way of other patches or reverts. Nothing else depends on the binaryheap\n> >> changes yet either.\n> >>\n> >> - It seems straightforward to repeat the performance tests with whatever\n> >> alternative implementations we want to consider.\n> >>\n> >> My #1 choice would be to write a patch to switch the pairing heap,\n> >> performance test that, and revert the binary heap changes.\n> >>\n> >\n> > +1.\n>\n> To move this forward, here's a patch to switch to a pairing heap. In my\n> very quick testing, with the performance test cases posted earlier in\n> this thread [1] [2], I'm seeing no meaningful performance difference\n> between this and what's in master currently.\n>\n> Sawada-san, what do you think of this? To be sure, if you could also\n> repeat the performance tests you performed earlier, that'd be great. If\n> you agree with this direction, and you're happy with this patch, feel\n> free take it from here and commit this, and also revert commits\n> b840508644 and bcb14f4abc.\n>\n\nThank you for the patch!\n\nI agree with the direction that we replace binaryheap + index with the\nexisting pairing heap and revert the changes for binaryheap. Regarding\nthe patch, I'm not sure we can remove the MAX_HEAP_TXN_COUNT_THRESHOLD\nlogic because otherwise we need to remove and add the txn node (i.e.\nO(log n)) for every memory update. I'm concerned it could cause some\nperformance degradation in a case where there are not many\ntransactions being decoded.\n\nI'll do performance tests, and share the results soon.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 10:32:52 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "Dear Heikki,\r\n\r\nI also prototyped the idea, which has almost the same shape.\r\nI attached just in case, but we may not have to see.\r\n\r\nFew comments based on the experiment.\r\n\r\n```\r\n+\t/* txn_heap is ordered by transaction size */\r\n+\tbuffer->txn_heap = pairingheap_allocate(ReorderBufferTXNSizeCompare, NULL);\r\n```\r\n\r\nI think the pairing heap should be in the same MemoryContext with the buffer.\r\nCan we add MemoryContextSwithTo()?\r\n\r\n```\r\n+\t\t/* Update the max-heap */\r\n+\t\tif (oldsize != 0)\r\n+\t\t\tpairingheap_remove(rb->txn_heap, &txn->txn_node);\r\n+\t\tpairingheap_add(rb->txn_heap, &txn->txn_node);\r\n...\r\n+\t\t/* Update the max-heap */\r\n+\t\tpairingheap_remove(rb->txn_heap, &txn->txn_node);\r\n+\t\tif (txn->size != 0)\r\n+\t\t\tpairingheap_add(rb->txn_heap, &txn->txn_node);\r\n```\r\n\r\nSince the number of stored transactions does not affect to the insert operation, we may able\r\nto add the node while creating ReorederBufferTXN and remove while cleaning up it. This can\r\nreduce branches in ReorderBufferChangeMemoryUpdate().\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Thu, 11 Apr 2024 01:46:37 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 10:32 AM Masahiko Sawada <[email protected]> wrote:\n>\n> Hi,\n>\n> Sorry for the late reply, I took two days off.\n>\n> On Thu, Apr 11, 2024 at 6:20 AM Heikki Linnakangas <[email protected]> wrote:\n> >\n> > On 10/04/2024 08:31, Amit Kapila wrote:\n> > > On Wed, Apr 10, 2024 at 11:00 AM Heikki Linnakangas <[email protected]> wrote:\n> > >>\n> > >> On 10/04/2024 07:45, Michael Paquier wrote:\n> > >>> On Tue, Apr 09, 2024 at 09:16:53PM -0700, Jeff Davis wrote:\n> > >>>> On Wed, 2024-04-10 at 12:13 +0900, Michael Paquier wrote:\n> > >>>>> Wouldn't the best way forward be to revert\n> > >>>>> 5bec1d6bc5e3 and revisit the whole in v18?\n> > >>>>\n> > >>>> Also consider commits b840508644 and bcb14f4abc.\n> > >>>\n> > >>> Indeed. These are also linked.\n> > >>\n> > >> I don't feel the urge to revert this:\n> > >>\n> > >> - It's not broken as such, we're just discussing better ways to\n> > >> implement it. We could also do nothing, and revisit this in v18. The\n> > >> only must-fix issue is some compiler warnings IIUC.\n> > >>\n> > >> - It's a pretty localized change in reorderbuffer.c, so it's not in the\n> > >> way of other patches or reverts. Nothing else depends on the binaryheap\n> > >> changes yet either.\n> > >>\n> > >> - It seems straightforward to repeat the performance tests with whatever\n> > >> alternative implementations we want to consider.\n> > >>\n> > >> My #1 choice would be to write a patch to switch the pairing heap,\n> > >> performance test that, and revert the binary heap changes.\n> > >>\n> > >\n> > > +1.\n> >\n> > To move this forward, here's a patch to switch to a pairing heap. In my\n> > very quick testing, with the performance test cases posted earlier in\n> > this thread [1] [2], I'm seeing no meaningful performance difference\n> > between this and what's in master currently.\n> >\n> > Sawada-san, what do you think of this? To be sure, if you could also\n> > repeat the performance tests you performed earlier, that'd be great. If\n> > you agree with this direction, and you're happy with this patch, feel\n> > free take it from here and commit this, and also revert commits\n> > b840508644 and bcb14f4abc.\n> >\n>\n> Thank you for the patch!\n>\n> I agree with the direction that we replace binaryheap + index with the\n> existing pairing heap and revert the changes for binaryheap. Regarding\n> the patch, I'm not sure we can remove the MAX_HEAP_TXN_COUNT_THRESHOLD\n> logic because otherwise we need to remove and add the txn node (i.e.\n> O(log n)) for every memory update. 
I'm concerned it could cause some\n> performance degradation in a case where there are not many\n> transactions being decoded.\n>\n> I'll do performance tests, and share the results soon.\n>\n\nHere are some performance test results.\n\n* test case 1 (many subtransactions)\n\ntest script:\n\ncreate or replace function testfn (cnt int) returns void as $$\nbegin\n for i in 1..cnt loop\n begin\n insert into test values (i);\n exception when division_by_zero then\n raise notice 'caught error';\n return;\n end;\n end loop;\nend;\n$$\nlanguage plpgsql;\nselect pg_create_logical_replication_slot('s', 'test_decoding');\nselect testfn(1000000);\nset logical_decoding_work_mem to '4MB';\nselect from pg_logical_slot_peek_changes('s', null, null);\n\nHEAD:\n\n43128.266 ms\n40116.313 ms\n38790.666 ms\n\nPatched:\n\n43626.316 ms\n44456.234 ms\n39899.753 ms\n\n\n* test case 2 (single big insertion)\n\ntest script:\n\ncreate table test (c int);\nselect pg_create_logical_replication_slot('s', 'test_decoding');\ninsert into test select generate_series(1, 10000000);\nset logical_decoding_work_mem to '10GB'; -- avoid data spill\nselect from pg_logical_slot_peek_changes('s', null, null);\n\nHEAD:\n\n7996.476 ms\n8034.022 ms\n8005.583 ms\n\nPatched:\n\n8153.500 ms\n8121.588 ms\n8121.538 ms\n\n\n* test case 3 (many small transactions)\n\ntest script:\n\npgbench -s -i 300\npsql -c \"select pg_create_replication_slot('s', 'test_decoding')\";\npgbench -t 100000 -c 32\npsql -c \"set logical_decoding_work_mem to '10GB'; select count(*) from\npg_logical_slot_peek_changes('s', null, null)\"\n\nHEAD:\n\n22586.343 ms\n22507.905 ms\n22504.133 ms\n\nPatched:\n\n23365.142 ms\n23110.651 ms\n23102.170 ms\n\nWe can see 2% ~ 3% performance regressions compared to the current\nHEAD, but it's much smaller than I expected. Given that we can make\nthe code simple, I think we can go with this direction.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 11:52:27 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 11:52 AM Masahiko Sawada <[email protected]> wrote:\n>\n> We can see 2% ~ 3% performance regressions compared to the current\n> HEAD, but it's much smaller than I expected. Given that we can make\n> the code simple, I think we can go with this direction.\n\nPushed the patch and reverted binaryheap changes.\n\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 17:20:33 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On 11/04/2024 11:20, Masahiko Sawada wrote:\n> On Thu, Apr 11, 2024 at 11:52 AM Masahiko Sawada <[email protected]> wrote:\n>>\n>> We can see 2% ~ 3% performance regressions compared to the current\n>> HEAD, but it's much smaller than I expected. Given that we can make\n>> the code simple, I think we can go with this direction.\n> \n> Pushed the patch and reverted binaryheap changes.\n\nThank you!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 11 Apr 2024 15:55:21 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 10:46 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Heikki,\n>\n> I also prototyped the idea, which has almost the same shape.\n> I attached just in case, but we may not have to see.\n>\n> Few comments based on the experiment.\n\nThank you for reviewing the patch. I didn't include the following\nsuggestions since firstly I wanted to just fix binaryheap part while\nkeeping other parts. If we need these changes, we can do them in\nseparate commits as fixes.\n\n>\n> ```\n> + /* txn_heap is ordered by transaction size */\n> + buffer->txn_heap = pairingheap_allocate(ReorderBufferTXNSizeCompare, NULL);\n> ```\n>\n> I think the pairing heap should be in the same MemoryContext with the buffer.\n> Can we add MemoryContextSwithTo()?\n\nThe pairingheap_allocate() allocates a tiny amount of memory for\npairingheap and its memory usage doesn't grow even when adding more\ndata. And since it's allocated in logical decoding context its\nlifetime is also fine. So I'm not sure it's worth including it in\nrb->context for better memory accountability.\n\n>\n> ```\n> + /* Update the max-heap */\n> + if (oldsize != 0)\n> + pairingheap_remove(rb->txn_heap, &txn->txn_node);\n> + pairingheap_add(rb->txn_heap, &txn->txn_node);\n> ...\n> + /* Update the max-heap */\n> + pairingheap_remove(rb->txn_heap, &txn->txn_node);\n> + if (txn->size != 0)\n> + pairingheap_add(rb->txn_heap, &txn->txn_node);\n> ```\n>\n> Since the number of stored transactions does not affect to the insert operation, we may able\n> to add the node while creating ReorederBufferTXN and remove while cleaning up it. This can\n> reduce branches in ReorderBufferChangeMemoryUpdate().\n\nI think it also means that we need to remove the entry while cleaning\nup even if it doesn't have any changes, which is done in O(log n). I\nfeel the current approach that we don't store transactions with size 0\nin the heap is better and I'm not sure that reducing these branches\nreally contributes to the performance improvements..\n\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 22:36:10 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve eviction algorithm in ReorderBuffer"
}
] |
[
{
"msg_contents": "Hi hackers,\n\n6af1793954 added a new field namely \"isCatalogRel\" in some WAL records\nto help detecting row removal conflict during logical decoding from standby.\n\nPlease find attached a patch to add this field in the related rmgrdesc (i.e\nall the ones that already provide the snapshotConflictHorizon except the one\nrelated to xl_heap_visible: indeed a new bit was added in its flag field in 6af1793954\ninstead of adding the isCatalogRel bool).\n\nI think it's worth it, as this new field could help diagnose conflicts issues (if any).\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 12 Dec 2023 09:23:46 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add isCatalogRel in rmgrdesc"
},
{
"msg_contents": "On Tue, Dec 12, 2023 at 09:23:46AM +0100, Drouvot, Bertrand wrote:\n> Please find attached a patch to add this field in the related rmgrdesc (i.e\n> all the ones that already provide the snapshotConflictHorizon except the one\n> related to xl_heap_visible: indeed a new bit was added in its flag field in 6af1793954\n> instead of adding the isCatalogRel bool).\n> \n> I think it's worth it, as this new field could help diagnose conflicts issues (if any).\n\nAgreed that this is helpful. One would likely guess if you are\ndealing with a catalog relation depending on its relfilenode, but that\ndoes not take into account user_catalog_table that can be set as a\nreloption, impacting the value of isCatalogRel stored in the records.\n--\nMichael",
"msg_date": "Tue, 12 Dec 2023 10:15:01 +0100",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add isCatalogRel in rmgrdesc"
},
{
"msg_contents": "Hi,\n\nOn 12/12/23 10:15 AM, Michael Paquier wrote:\n> On Tue, Dec 12, 2023 at 09:23:46AM +0100, Drouvot, Bertrand wrote:\n>> Please find attached a patch to add this field in the related rmgrdesc (i.e\n>> all the ones that already provide the snapshotConflictHorizon except the one\n>> related to xl_heap_visible: indeed a new bit was added in its flag field in 6af1793954\n>> instead of adding the isCatalogRel bool).\n>>\n>> I think it's worth it, as this new field could help diagnose conflicts issues (if any).\n> \n> Agreed that this is helpful.\n\nThanks for looking at it!\n\n> One would likely guess if you are\n> dealing with a catalog relation depending on its relfilenode, but that\n> does not take into account user_catalog_table that can be set as a\n> reloption, impacting the value of isCatalogRel stored in the records.\n\nExactly and not mentioning the other checks in RelationIsAccessibleInLogicalDecoding()\nlike the wal_level >= logical one.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 12 Dec 2023 10:41:22 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add isCatalogRel in rmgrdesc"
},
{
"msg_contents": "On Tue, Dec 12, 2023 at 6:15 PM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Dec 12, 2023 at 09:23:46AM +0100, Drouvot, Bertrand wrote:\n> > Please find attached a patch to add this field in the related rmgrdesc (i.e\n> > all the ones that already provide the snapshotConflictHorizon except the one\n> > related to xl_heap_visible: indeed a new bit was added in its flag field in 6af1793954\n> > instead of adding the isCatalogRel bool).\n> >\n> > I think it's worth it, as this new field could help diagnose conflicts issues (if any).\n\n+1\n\n- appendStringInfo(buf, \"rel %u/%u/%u; blk %u; snapshotConflictHorizon %u:%u\",\n+ appendStringInfo(buf, \"rel %u/%u/%u; blk %u;\nsnapshotConflictHorizon %u:%u, isCatalogRel %u\",\n xlrec->locator.spcOid, xlrec->locator.dbOid,\n xlrec->locator.relNumber, xlrec->block,\n EpochFromFullTransactionId(xlrec->snapshotConflictHorizon),\n- XidFromFullTransactionId(xlrec->snapshotConflictHorizon));\n+ XidFromFullTransactionId(xlrec->snapshotConflictHorizon),\n+ xlrec->isCatalogRel);\n\nThe patch prints isCatalogRel, a bool field, as %u. But other rmgrdesc\nimplementations seem to use different ways. For instance in spgdesc.c,\nwe print flag name if it's set: (newPage and postfixBlkSame are bool\nfields):\n\n appendStringInfo(buf, \"prefixoff: %u, postfixoff: %u\",\n xlrec->offnumPrefix,\n xlrec->offnumPostfix);\n if (xlrec->newPage)\n appendStringInfoString(buf, \" (newpage)\");\n if (xlrec->postfixBlkSame)\n appendStringInfoString(buf, \" (same)\");\n\nwhereas in hashdesc.c, we print either 'T' of 'F':\n\n appendStringInfo(buf, \"clear_dead_marking %c, is_primary %c\",\n xlrec->clear_dead_marking ? 'T' : 'F',\n xlrec->is_primary_bucket_page ? 'T' : 'F');\n\nIs it probably worth considering such formats? I prefer to always\nprint the field name like the current patch and hashdesc.c since it's\neasier to parse it. But I'm fine with either way to show the field\nvalue.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Dec 2023 17:00:09 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add isCatalogRel in rmgrdesc"
},
{
"msg_contents": "Hi,\n\nOn 12/19/23 9:00 AM, Masahiko Sawada wrote:\n> On Tue, Dec 12, 2023 at 6:15 PM Michael Paquier <[email protected]> wrote:\n>>\n>> On Tue, Dec 12, 2023 at 09:23:46AM +0100, Drouvot, Bertrand wrote:\n>>> Please find attached a patch to add this field in the related rmgrdesc (i.e\n>>> all the ones that already provide the snapshotConflictHorizon except the one\n>>> related to xl_heap_visible: indeed a new bit was added in its flag field in 6af1793954\n>>> instead of adding the isCatalogRel bool).\n>>>\n>>> I think it's worth it, as this new field could help diagnose conflicts issues (if any).\n> \n> +1\n\nThanks for looking at it!\n\n> \n> - appendStringInfo(buf, \"rel %u/%u/%u; blk %u; snapshotConflictHorizon %u:%u\",\n> + appendStringInfo(buf, \"rel %u/%u/%u; blk %u;\n> snapshotConflictHorizon %u:%u, isCatalogRel %u\",\n> xlrec->locator.spcOid, xlrec->locator.dbOid,\n> xlrec->locator.relNumber, xlrec->block,\n> EpochFromFullTransactionId(xlrec->snapshotConflictHorizon),\n> - XidFromFullTransactionId(xlrec->snapshotConflictHorizon));\n> + XidFromFullTransactionId(xlrec->snapshotConflictHorizon),\n> + xlrec->isCatalogRel);\n> \n> The patch prints isCatalogRel, a bool field, as %u. But other rmgrdesc\n> implementations seem to use different ways. For instance in spgdesc.c,\n> we print flag name if it's set: (newPage and postfixBlkSame are bool\n> fields):\n> \n> appendStringInfo(buf, \"prefixoff: %u, postfixoff: %u\",\n> xlrec->offnumPrefix,\n> xlrec->offnumPostfix);\n> if (xlrec->newPage)\n> appendStringInfoString(buf, \" (newpage)\");\n> if (xlrec->postfixBlkSame)\n> appendStringInfoString(buf, \" (same)\");\n> \n> whereas in hashdesc.c, we print either 'T' of 'F':\n> \n> appendStringInfo(buf, \"clear_dead_marking %c, is_primary %c\",\n> xlrec->clear_dead_marking ? 'T' : 'F',\n> xlrec->is_primary_bucket_page ? 'T' : 'F');\n> \n> Is it probably worth considering such formats?\n\nGood point, let's not add another format.\n\n> I prefer to always\n> print the field name like the current patch and hashdesc.c since it's\n> easier to parse it. \n\nI like this format too, so done that way in v2 attached.\n\nBTW, I noticed that sometimes the snapshotConflictHorizon is displayed\nas \"snapshotConflictHorizon:\" and sometimes as \"snapshotConflictHorizon\".\n\nSo v2 is doing the same, means using \"isCatalogRel:\" if \"snapshotConflictHorizon:\"\nis being used or using \"isCatalogRel\" if not.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 19 Dec 2023 10:27:39 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add isCatalogRel in rmgrdesc"
},
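For concreteness, the convention settled on here (always print the field name, and render the bool the way hashdesc.c does) would look roughly like the following in a desc routine, assuming a record struct with a plain TransactionId snapshotConflictHorizon and a bool isCatalogRel:

	appendStringInfo(buf, "snapshotConflictHorizon: %u, isCatalogRel: %c",
					 xlrec->snapshotConflictHorizon,
					 xlrec->isCatalogRel ? 'T' : 'F');

Printing 'T'/'F' keeps the line short while still naming the field, so the output stays easy to grep and parse.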
{
"msg_contents": "On Tue, Dec 19, 2023 at 6:27 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On 12/19/23 9:00 AM, Masahiko Sawada wrote:\n> > On Tue, Dec 12, 2023 at 6:15 PM Michael Paquier <[email protected]> wrote:\n> >>\n> >> On Tue, Dec 12, 2023 at 09:23:46AM +0100, Drouvot, Bertrand wrote:\n> >>> Please find attached a patch to add this field in the related rmgrdesc (i.e\n> >>> all the ones that already provide the snapshotConflictHorizon except the one\n> >>> related to xl_heap_visible: indeed a new bit was added in its flag field in 6af1793954\n> >>> instead of adding the isCatalogRel bool).\n> >>>\n> >>> I think it's worth it, as this new field could help diagnose conflicts issues (if any).\n> >\n> > +1\n>\n> Thanks for looking at it!\n>\n> >\n> > - appendStringInfo(buf, \"rel %u/%u/%u; blk %u; snapshotConflictHorizon %u:%u\",\n> > + appendStringInfo(buf, \"rel %u/%u/%u; blk %u;\n> > snapshotConflictHorizon %u:%u, isCatalogRel %u\",\n> > xlrec->locator.spcOid, xlrec->locator.dbOid,\n> > xlrec->locator.relNumber, xlrec->block,\n> > EpochFromFullTransactionId(xlrec->snapshotConflictHorizon),\n> > - XidFromFullTransactionId(xlrec->snapshotConflictHorizon));\n> > + XidFromFullTransactionId(xlrec->snapshotConflictHorizon),\n> > + xlrec->isCatalogRel);\n> >\n> > The patch prints isCatalogRel, a bool field, as %u. But other rmgrdesc\n> > implementations seem to use different ways. For instance in spgdesc.c,\n> > we print flag name if it's set: (newPage and postfixBlkSame are bool\n> > fields):\n> >\n> > appendStringInfo(buf, \"prefixoff: %u, postfixoff: %u\",\n> > xlrec->offnumPrefix,\n> > xlrec->offnumPostfix);\n> > if (xlrec->newPage)\n> > appendStringInfoString(buf, \" (newpage)\");\n> > if (xlrec->postfixBlkSame)\n> > appendStringInfoString(buf, \" (same)\");\n> >\n> > whereas in hashdesc.c, we print either 'T' of 'F':\n> >\n> > appendStringInfo(buf, \"clear_dead_marking %c, is_primary %c\",\n> > xlrec->clear_dead_marking ? 'T' : 'F',\n> > xlrec->is_primary_bucket_page ? 'T' : 'F');\n> >\n> > Is it probably worth considering such formats?\n>\n> Good point, let's not add another format.\n>\n> > I prefer to always\n> > print the field name like the current patch and hashdesc.c since it's\n> > easier to parse it.\n>\n> I like this format too, so done that way in v2 attached.\n>\n> BTW, I noticed that sometimes the snapshotConflictHorizon is displayed\n> as \"snapshotConflictHorizon:\" and sometimes as \"snapshotConflictHorizon\".\n>\n> So v2 is doing the same, means using \"isCatalogRel:\" if \"snapshotConflictHorizon:\"\n> is being used or using \"isCatalogRel\" if not.\n\nAgreed.\n\nThank you for updating the patch. The v2 patch looks good to me. I'll\npush it, barring any objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Dec 2023 10:43:30 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add isCatalogRel in rmgrdesc"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 10:43:30AM +0900, Masahiko Sawada wrote:\n> Thank you for updating the patch. The v2 patch looks good to me. I'll\n> push it, barring any objections.\n\nThis is capturing the eight records where the flag exists, so it looks\nOK seen from here. \n\nAs you said, there may be a point in reducing the output in the most\ncommon case and not show the flag when !isCatalogRel, but I cannot get\nexcited about that either because that would require one to do more\ncross-checks with the core code when looking at WAL dumps.\n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 09:04:38 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add isCatalogRel in rmgrdesc"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 9:04 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Dec 20, 2023 at 10:43:30AM +0900, Masahiko Sawada wrote:\n> > Thank you for updating the patch. The v2 patch looks good to me. I'll\n> > push it, barring any objections.\n>\n> This is capturing the eight records where the flag exists, so it looks\n> OK seen from here.\n>\n> As you said, there may be a point in reducing the output in the most\n> common case and not show the flag when !isCatalogRel, but I cannot get\n> excited about that either because that would require one to do more\n> cross-checks with the core code when looking at WAL dumps.\n\nThank you for the comments. Agreed.\n\nI've just pushed, bf6260b39.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Dec 2023 10:13:16 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add isCatalogRel in rmgrdesc"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 10:13 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Dec 21, 2023 at 9:04 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Wed, Dec 20, 2023 at 10:43:30AM +0900, Masahiko Sawada wrote:\n> > > Thank you for updating the patch. The v2 patch looks good to me. I'll\n> > > push it, barring any objections.\n> >\n> > This is capturing the eight records where the flag exists, so it looks\n> > OK seen from here.\n> >\n> > As you said, there may be a point in reducing the output in the most\n> > common case and not show the flag when !isCatalogRel, but I cannot get\n> > excited about that either because that would require one to do more\n> > cross-checks with the core code when looking at WAL dumps.\n>\n> Thank you for the comments. Agreed.\n>\n> I've just pushed, bf6260b39.\n>\n\nFYI, in the commitfest app, there seems to be two duplicated entries\nfor this item:\n\nhttps://commitfest.postgresql.org/46/4694/\nhttps://commitfest.postgresql.org/46/4695/\n\nI've marked the latter one as committed, and will remove the former.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Dec 2023 10:18:22 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add isCatalogRel in rmgrdesc"
},
{
"msg_contents": "Hi,\n\nOn Thu, Dec 21, 2023 at 10:13:16AM +0900, Masahiko Sawada wrote:\n> On Thu, Dec 21, 2023 at 9:04 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Wed, Dec 20, 2023 at 10:43:30AM +0900, Masahiko Sawada wrote:\n> > > Thank you for updating the patch. The v2 patch looks good to me. I'll\n> > > push it, barring any objections.\n> >\n> > This is capturing the eight records where the flag exists, so it looks\n> > OK seen from here.\n> >\n> > As you said, there may be a point in reducing the output in the most\n> > common case and not show the flag when !isCatalogRel, but I cannot get\n> > excited about that either because that would require one to do more\n> > cross-checks with the core code when looking at WAL dumps.\n> \n> Thank you for the comments. Agreed.\n> \n> I've just pushed, bf6260b39.\n> \n\nThanks!\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Dec 2023 05:56:12 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add isCatalogRel in rmgrdesc"
}
] |
[
{
"msg_contents": "Dear Hackers,\n\nI've come across a behaviour of the planner I can't explain.\nAfter a migration from 11 to 15 (on RDS) we noticed a degradation in \nresponse time on a query, it went from a few seconds to ten minutes.\nA vacuum(analyze) has been realized to be sure that all is clean.\nThe 'explain analyze' shows us a change of plan. Postgresql 15 chooses \n`incremental sort` with an index corresponding to the ORDER BY clause \n(on the created_at column). The previous v11 plan used a more efficient \nindex.\n\nBy deactivating incremental sort, response times in v15 are equal to v11 \none.\n\nHere is the query\n\n SELECT inputdocum0_.id AS col_0_0_\n FROM document_management_services.input_document inputdocum0_\n WHERE (inputdocum0_.indexation_domain_id in \n('2d29daf6-e151-479a-a52a-78b08bb3009d'))\n AND (inputdocum0_.indexation_subsidiary_id in \n('9f9df402-f70b-40d9-b283-a3c35232469a'))\n AND (inputdocum0_.locked_at IS NULL)\n AND (inputdocum0_.locked_by_app IS NULL)\n AND (inputdocum0_.locked_by_user IS NULL)\n AND (inputdocum0_.lock_time_out IS NULL)\n AND inputdocum0_.archiving_state<> 'DESTROYED'\n AND (inputdocum0_.creation_state in ('READY'))\n AND inputdocum0_.active_content=true\n AND (inputdocum0_.processing_state in ('PENDING_INDEXATION'))\n ORDER BY inputdocum0_.created_at ASC,\n inputdocum0_.reception_id ASC,\n inputdocum0_.reception_order ASC\n LIMIT 50 ;\n\nHere are some details, the table `input_document` is partionned by hash \nwith 20 partitions with a lot of indexes\n\nIndexes:\n \"input_document_pkey\" PRIMARY KEY, btree (id)\n \"input_document_api_version_idx\" btree (api_version) INVALID\n \"input_document_created_at_idx\" btree (created_at)\n \"input_document_created_by_user_profile_idx\" btree \n(created_by_user_profile)\n \"input_document_dashboard_idx\" btree (processing_state, \nindexation_family_id, indexation_group_id, reception_id) INCLUDE \n(active_content, archiving_state, creation_state) WHERE active_content = \ntrue AND archiving_state <> 'DESTROYED'::text AND creation_state <> \n'PENDING'::text\n \"input_document_fts_description_idx\" gin \n(to_tsvector('simple'::regconfig, description))\n \"input_document_fts_insured_firstname_idx\" gin \n(to_tsvector('simple'::regconfig, indexation_insured_firstname))\n \"input_document_fts_insured_lastname_idx\" gin \n(to_tsvector('simple'::regconfig, indexation_insured_lastname))\n \"input_document_indexation_activity_id_idx\" btree \n(indexation_activity_id)\n \"input_document_indexation_agency_id_idx\" btree (indexation_agency_id)\n \"input_document_indexation_distributor_id_idx\" btree \n(indexation_distributor_id)\n \"input_document_indexation_domain_id_idx\" btree (indexation_domain_id)\n \"input_document_indexation_family_id_idx\" btree (indexation_family_id)\n \"input_document_indexation_group_id_idx\" btree (indexation_group_id)\n \"input_document_indexation_insurer_id_idx\" btree \n(indexation_insurer_id)\n \"input_document_indexation_nature_id_idx\" btree (indexation_nature_id)\n \"input_document_indexation_reference_idx\" btree (indexation_reference)\n \"input_document_indexation_subsidiary_id_idx\" btree \n(indexation_subsidiary_id)\n \"input_document_indexation_warranty_id_idx\" btree \n(indexation_warranty_id)\n \"input_document_locked_by_user_idx\" btree (locked_by_user)\n \"input_document_modified_at_idx\" btree (modified_at)\n \"input_document_modified_by_user_profile_idx\" btree \n(modified_by_user_profile)\n \"input_document_processing_state_idx\" btree (processing_state)\n 
\"input_document_stock_idx\" btree (active_content, archiving_state, \ncreation_state, processing_state) WHERE active_content AND \narchiving_state <> 'DESTROYED'::text AND creation_state <> \n'PENDING'::text AND (processing_state = ANY \n('{PENDING_PROCESSING,PENDING_INDEXATION,READY}'::text[]))\n \"input_dom_act_pi_idx\" btree (indexation_activity_id, \nindexation_domain_id) WHERE processing_state = 'PENDING_INDEXATION'::text\n \"input_dom_act_pp_idx\" btree (indexation_activity_id, \nindexation_domain_id) WHERE processing_state = 'PENDING_PROCESSING'::text\n \"input_dom_act_sub_idx\" btree (indexation_activity_id, \nindexation_domain_id, indexation_subsidiary_id)\n \"input_reception_id_created_at_idx\" btree (reception_id, created_at)\n \"input_reception_id_reception_order_idx\" btree (reception_id, \nreception_order)\n \"operational_perimeter_view_idx\" btree (processing_state, \nindexation_distributor_id) WHERE processing_state = \n'PENDING_PROCESSING'::text\n\nPlease find attached the 3 plans\n\nexplain_analyse_incremental_off.txt with enable_incremental_sort to off\nexplain_analyse_incremental_on.txt with enable_incremental_sort to on\nexplain_analyse_incremental_on_limit5000 with enable_incremental_sort to \non but with increase the limit to 5000, in this case plan choose don't \nuse `Incremental Sort`\n\nThe point that I don't understand in the plan (incremental_sort to on) \nis the top level one, the limit cost doesn't seem right.\n\nLimit (cost=324.05..16073.82 rows=50 width=44) (actual \ntime=1663688.290..1663696.151 rows=50 loops=1)\n Buffers: shared hit=114672881 read=5725197 dirtied=38564 written=24394\n I/O Timings: shared/local read=1481378.069 write=313.574\n -> Incremental Sort (cost=324.05..27838050.13 rows=88375 width=44) \n(actual time=1663688.289..1663696.144 rows=50 loops=1)\n\n\nHave you a explaination on the behaviour ?\n\nBest regards\n\n\n-- \nNicolas Lutic",
"msg_date": "Tue, 12 Dec 2023 09:40:14 +0100",
"msg_from": "Nicolas Lutic <[email protected]>",
"msg_from_op": true,
"msg_subject": "planner chooses incremental but not the best one"
},
{
"msg_contents": "On Tue, Dec 12, 2023 at 4:40 PM Nicolas Lutic <[email protected]> wrote:\n\n> I've come across a behaviour of the planner I can't explain.\n> After a migration from 11 to 15 (on RDS) we noticed a degradation in\n> response time on a query, it went from a few seconds to ten minutes.\n> A vacuum(analyze) has been realized to be sure that all is clean.\n> The 'explain analyze' shows us a change of plan. Postgresql 15 chooses\n> `incremental sort` with an index corresponding to the ORDER BY clause\n> (on the created_at column). The previous v11 plan used a more efficient\n> index.\n>\n> By deactivating incremental sort, response times in v15 are equal to v11\n> one.\n\n\nI think this issue is caused by under-estimating the startup cost of\nincremental sort, which in turn is caused by over-estimating the number\nof groups with equal presorted keys by estimate_num_groups().\n\nWe can simulate the same issue with the query below.\n\ncreate table t (a int, b int, c int, d int) partition by range (a);\ncreate table tp1 partition of t for values from (0) to (1000);\ncreate table tp2 partition of t for values from (1000) to (2000);\n\ninsert into t select i%2000, i%1000, i%300, i from\ngenerate_series(1,1000000)i;\n\ncreate index on t(b);\ncreate index on t(c);\n\nanalyze t;\n\n-- by default incremental sort is chosen\nexplain analyze select * from t where b = 3 order by c, d limit 10;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=143.03..570.89 rows=10 width=16) (actual\ntime=375.399..375.402 rows=10 loops=1)\n -> Incremental Sort (cost=143.03..42671.85 rows=994 width=16) (actual\ntime=375.397..375.399 rows=10 loops=1)\n Sort Key: t.c, t.d\n Presorted Key: t.c\n Full-sort Groups: 1 Sort Method: top-N heapsort Average Memory:\n25kB Peak Memory: 25kB\n Pre-sorted Groups: 1 Sort Method: top-N heapsort Average Memory:\n25kB Peak Memory: 25kB\n -> Merge Append (cost=0.85..42644.84 rows=994 width=16) (actual\ntime=11.415..375.289 rows=335 loops=1)\n Sort Key: t.c\n -> Index Scan using tp1_c_idx on tp1 t_1\n (cost=0.42..21318.39 rows=498 width=16) (actual time=6.666..189.356\nrows=168 loops=1)\n Filter: (b = 3)\n Rows Removed by Filter: 171537\n -> Index Scan using tp2_c_idx on tp2 t_2\n (cost=0.42..21316.50 rows=496 width=16) (actual time=4.745..185.870\nrows=168 loops=1)\n Filter: (b = 3)\n Rows Removed by Filter: 171534\n Planning Time: 0.501 ms\n Execution Time: 375.477 ms\n(16 rows)\n\n-- disable incremental sort\nset enable_incremental_sort to off;\nSET\n\nexplain analyze select * from t where b = 3 order by c, d limit 10;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2577.51..2577.53 rows=10 width=16) (actual time=2.928..2.930\nrows=10 loops=1)\n -> Sort (cost=2577.51..2579.99 rows=994 width=16) (actual\ntime=2.925..2.927 rows=10 loops=1)\n Sort Key: t.c, t.d\n Sort Method: top-N heapsort Memory: 25kB\n -> Append (cost=8.28..2556.03 rows=994 width=16) (actual\ntime=0.627..2.670 rows=1000 loops=1)\n -> Bitmap Heap Scan on tp1 t_1 (cost=8.28..1276.62\nrows=498 width=16) (actual time=0.625..1.688 rows=500 loops=1)\n Recheck Cond: (b = 3)\n Heap Blocks: exact=500\n -> Bitmap Index Scan on tp1_b_idx (cost=0.00..8.16\nrows=498 width=0) (actual time=0.330..0.330 rows=500 loops=1)\n Index Cond: (b = 3)\n -> Bitmap Heap Scan on tp2 t_2 
(cost=8.27..1274.43\nrows=496 width=16) (actual time=0.178..0.876 rows=500 loops=1)\n Recheck Cond: (b = 3)\n Heap Blocks: exact=500\n -> Bitmap Index Scan on tp2_b_idx (cost=0.00..8.14\nrows=496 width=0) (actual time=0.093..0.093 rows=500 loops=1)\n Index Cond: (b = 3)\n Planning Time: 0.481 ms\n Execution Time: 3.031 ms\n(17 rows)\n\nAs we can see the execution time is 375.477 ms by default and 3.031 ms\nif we disable incremental sort.\n\nWhen we calculate the cost of incremental sort, we need to estimate the\nnumber of groups into which the relation is divided by the presorted\nkeys, and then calculate the cost of sorting a single group. If we\nover-estimate the number of groups, the startup cost of incremental sort\nwould be under-estimated. In the first plan above, the number of groups\nwith equal 't.c' is estimated to be 300 by estimate_num_groups(), but is\nactually 3 after applying the restriction 'b = 3'. As a result, the\nstartup cost of the incremental sort is estimated to be 143.03, but is\nactually 14222.68. That's why the planner mistakenly thinks the\nincrement sort path is the cheaper one.\n\nIt seems that we need to improve estimate of distinct values in\nestimate_num_groups() when taking the selectivity of restrictions into\naccount.\n\nIn 84f9a35e3 we changed to a new formula to perform such estimation.\nBut that does not apply to the case here, because for an appendrel,\nset_append_rel_size() always sets \"raw tuples\" count equal to \"rows\",\nand that would make estimate_num_groups() skip the adjustment of the\nestimate using the new formula.\n\nAny thoughts on how to improve this?\n\nThanks\nRichard\n\nOn Tue, Dec 12, 2023 at 4:40 PM Nicolas Lutic <[email protected]> wrote:\nI've come across a behaviour of the planner I can't explain.\nAfter a migration from 11 to 15 (on RDS) we noticed a degradation in \nresponse time on a query, it went from a few seconds to ten minutes.\nA vacuum(analyze) has been realized to be sure that all is clean.\nThe 'explain analyze' shows us a change of plan. Postgresql 15 chooses \n`incremental sort` with an index corresponding to the ORDER BY clause \n(on the created_at column). 
The previous v11 plan used a more efficient \nindex.\n\nBy deactivating incremental sort, response times in v15 are equal to v11 \none.I think this issue is caused by under-estimating the startup cost ofincremental sort, which in turn is caused by over-estimating the numberof groups with equal presorted keys by estimate_num_groups().We can simulate the same issue with the query below.create table t (a int, b int, c int, d int) partition by range (a);create table tp1 partition of t for values from (0) to (1000);create table tp2 partition of t for values from (1000) to (2000);insert into t select i%2000, i%1000, i%300, i from generate_series(1,1000000)i;create index on t(b);create index on t(c);analyze t;-- by default incremental sort is chosenexplain analyze select * from t where b = 3 order by c, d limit 10; QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------ Limit (cost=143.03..570.89 rows=10 width=16) (actual time=375.399..375.402 rows=10 loops=1) -> Incremental Sort (cost=143.03..42671.85 rows=994 width=16) (actual time=375.397..375.399 rows=10 loops=1) Sort Key: t.c, t.d Presorted Key: t.c Full-sort Groups: 1 Sort Method: top-N heapsort Average Memory: 25kB Peak Memory: 25kB Pre-sorted Groups: 1 Sort Method: top-N heapsort Average Memory: 25kB Peak Memory: 25kB -> Merge Append (cost=0.85..42644.84 rows=994 width=16) (actual time=11.415..375.289 rows=335 loops=1) Sort Key: t.c -> Index Scan using tp1_c_idx on tp1 t_1 (cost=0.42..21318.39 rows=498 width=16) (actual time=6.666..189.356 rows=168 loops=1) Filter: (b = 3) Rows Removed by Filter: 171537 -> Index Scan using tp2_c_idx on tp2 t_2 (cost=0.42..21316.50 rows=496 width=16) (actual time=4.745..185.870 rows=168 loops=1) Filter: (b = 3) Rows Removed by Filter: 171534 Planning Time: 0.501 ms Execution Time: 375.477 ms(16 rows)-- disable incremental sortset enable_incremental_sort to off;SETexplain analyze select * from t where b = 3 order by c, d limit 10; QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=2577.51..2577.53 rows=10 width=16) (actual time=2.928..2.930 rows=10 loops=1) -> Sort (cost=2577.51..2579.99 rows=994 width=16) (actual time=2.925..2.927 rows=10 loops=1) Sort Key: t.c, t.d Sort Method: top-N heapsort Memory: 25kB -> Append (cost=8.28..2556.03 rows=994 width=16) (actual time=0.627..2.670 rows=1000 loops=1) -> Bitmap Heap Scan on tp1 t_1 (cost=8.28..1276.62 rows=498 width=16) (actual time=0.625..1.688 rows=500 loops=1) Recheck Cond: (b = 3) Heap Blocks: exact=500 -> Bitmap Index Scan on tp1_b_idx (cost=0.00..8.16 rows=498 width=0) (actual time=0.330..0.330 rows=500 loops=1) Index Cond: (b = 3) -> Bitmap Heap Scan on tp2 t_2 (cost=8.27..1274.43 rows=496 width=16) (actual time=0.178..0.876 rows=500 loops=1) Recheck Cond: (b = 3) Heap Blocks: exact=500 -> Bitmap Index Scan on tp2_b_idx (cost=0.00..8.14 rows=496 width=0) (actual time=0.093..0.093 rows=500 loops=1) Index Cond: (b = 3) Planning Time: 0.481 ms Execution Time: 3.031 ms(17 rows)As we can see the execution time is 375.477 ms by default and 3.031 msif we disable incremental sort.When we calculate the cost of incremental sort, we need to estimate thenumber of groups into which the relation is divided by the presortedkeys, and then calculate the cost of sorting a single group. 
If weover-estimate the number of groups, the startup cost of incremental sortwould be under-estimated. In the first plan above, the number of groupswith equal 't.c' is estimated to be 300 by estimate_num_groups(), but isactually 3 after applying the restriction 'b = 3'. As a result, thestartup cost of the incremental sort is estimated to be 143.03, but isactually 14222.68. That's why the planner mistakenly thinks theincrement sort path is the cheaper one.It seems that we need to improve estimate of distinct values inestimate_num_groups() when taking the selectivity of restrictions intoaccount.In 84f9a35e3 we changed to a new formula to perform such estimation.But that does not apply to the case here, because for an appendrel,set_append_rel_size() always sets \"raw tuples\" count equal to \"rows\",and that would make estimate_num_groups() skip the adjustment of theestimate using the new formula.Any thoughts on how to improve this?ThanksRichard",
"msg_date": "Thu, 14 Dec 2023 18:02:08 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
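A quick way to see the two numbers Richard describes above, using the repro table from his message (a sketch for illustration only; it assumes the table has been created and analyzed exactly as shown there):

  -- the per-column estimate the planner works from: about 300 distinct values of c
  SELECT tablename, attname, n_distinct
  FROM pg_stats
  WHERE tablename IN ('tp1', 'tp2') AND attname = 'c';

  -- what actually remains once the restriction is applied: 3 distinct values
  SELECT count(DISTINCT c) FROM t WHERE b = 3;

The gap between those two numbers is what makes the incremental sort's startup cost look far cheaper than it really is.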
{
"msg_contents": "On Thu, Dec 14, 2023 at 6:02 PM Richard Guo <[email protected]> wrote:\n\n> It seems that we need to improve estimate of distinct values in\n> estimate_num_groups() when taking the selectivity of restrictions into\n> account.\n>\n> In 84f9a35e3 we changed to a new formula to perform such estimation.\n> But that does not apply to the case here, because for an appendrel,\n> set_append_rel_size() always sets \"raw tuples\" count equal to \"rows\",\n> and that would make estimate_num_groups() skip the adjustment of the\n> estimate using the new formula.\n>\n\nI'm wondering why we set the appendrel's 'tuples' equal to its 'rows'.\nWhy don't we set it to the accumulated estimate of tuples from each live\nchild, like attached? I believe this aligns more closely with reality.\n\nAnd this would also allow us to adjust the estimate for the number of\ndistinct values in estimate_num_groups() for appendrels using the new\nformula introduced in 84f9a35e3. As I experimented, this can improve\nthe estimate for appendrels. For instance,\n\ncreate table t (a int, b int, c float) partition by range(a);\ncreate table tp1 partition of t for values from (0) to (1000);\ncreate table tp2 partition of t for values from (1000) to (2000);\n\ninsert into t select i%2000, (100000 * random())::int, random() from\ngenerate_series(1,1000000) i;\nanalyze t;\n\nexplain analyze select b from t where c < 0.1 group by b;\n\n-- on master\n HashAggregate (cost=18659.28..19598.74 rows=93946 width=4)\n (actual time=220.760..234.439 rows=63224 loops=1)\n\n-- on patched\n HashAggregate (cost=18659.28..19294.25 rows=63497 width=4)\n (actual time=235.161..250.023 rows=63224 loops=1)\n\nWith the patch the estimate for the number of distinct 'b' values is\nmore accurate.\n\nBTW, this patch does not change any existing regression test results. I\nattempted to devise a regression test that shows how this change can\nimprove query plans, but failed. Should I try harder to find such a\ntest case?\n\nThanks\nRichard",
"msg_date": "Fri, 15 Dec 2023 16:58:22 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
{
"msg_contents": "\n\nOn 12/15/23 09:58, Richard Guo wrote:\n> \n> On Thu, Dec 14, 2023 at 6:02 PM Richard Guo <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> It seems that we need to improve estimate of distinct values in\n> estimate_num_groups() when taking the selectivity of restrictions into\n> account.\n> \n> In 84f9a35e3 we changed to a new formula to perform such estimation.\n> But that does not apply to the case here, because for an appendrel,\n> set_append_rel_size() always sets \"raw tuples\" count equal to \"rows\",\n> and that would make estimate_num_groups() skip the adjustment of the\n> estimate using the new formula.\n> \n> \n> I'm wondering why we set the appendrel's 'tuples' equal to its 'rows'.\n> Why don't we set it to the accumulated estimate of tuples from each live\n> child, like attached? I believe this aligns more closely with reality.\n> \n\nYeah, seems like that's the right thing to do. FWIW I've been often\nconfused by these fields, because we use tuples and rows as synonyms,\nbut in this particular case that's not the same. I wonder if this is\njust a manifestation of this confusion.\n\n> And this would also allow us to adjust the estimate for the number of\n> distinct values in estimate_num_groups() for appendrels using the new\n> formula introduced in 84f9a35e3.\n\nI don't follow. Why wouldn't it be using the new formula even without\nyour patch? (using the \"wrong\" value, ofc, but the same formula).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 17 Dec 2023 23:18:37 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> Yeah, seems like that's the right thing to do. FWIW I've been often\n> confused by these fields, because we use tuples and rows as synonyms,\n> but in this particular case that's not the same. I wonder if this is\n> just a manifestation of this confusion.\n\nFor tables, one is the raw number of rows on-disk and the other is the\nnumber of rows predicted to pass the relation's quals. For something\nlike an appendrel that doesn't enforce any quals (today anyway), they\nshould probably be the same; but you need to be sure you're adding\nup the right numbers from the inputs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 17 Dec 2023 17:33:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
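To make Tom's distinction concrete with the example table from earlier in the thread (purely illustrative, not part of any patch): for a plain table, the raw count lives in pg_class.reltuples, while the post-qual estimate shows up as the plan's row count.

  -- tuples: the raw row count the planner believes is on disk (about 500000 per partition here)
  SELECT relname, reltuples FROM pg_class WHERE relname IN ('tp1', 'tp2');

  -- rows: the roughly 500 rows expected to pass the qual, as seen in the earlier plans
  EXPLAIN SELECT * FROM tp1 WHERE b = 3;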
{
"msg_contents": "\n\nOn 12/14/23 11:02, Richard Guo wrote:\n> \n> On Tue, Dec 12, 2023 at 4:40 PM Nicolas Lutic <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> I've come across a behaviour of the planner I can't explain.\n> After a migration from 11 to 15 (on RDS) we noticed a degradation in\n> response time on a query, it went from a few seconds to ten minutes.\n> A vacuum(analyze) has been realized to be sure that all is clean.\n> The 'explain analyze' shows us a change of plan. Postgresql 15 chooses\n> `incremental sort` with an index corresponding to the ORDER BY clause\n> (on the created_at column). The previous v11 plan used a more efficient\n> index.\n> \n> By deactivating incremental sort, response times in v15 are equal to\n> v11\n> one.\n> \n> \n> I think this issue is caused by under-estimating the startup cost of\n> incremental sort, which in turn is caused by over-estimating the number\n> of groups with equal presorted keys by estimate_num_groups().\n> \n> We can simulate the same issue with the query below.\n> \n> create table t (a int, b int, c int, d int) partition by range (a);\n> create table tp1 partition of t for values from (0) to (1000);\n> create table tp2 partition of t for values from (1000) to (2000);\n> \n> insert into t select i%2000, i%1000, i%300, i from\n> generate_series(1,1000000)i;\n> \n> create index on t(b);\n> create index on t(c);\n> \n> analyze t;\n> \n> -- by default incremental sort is chosen\n> explain analyze select * from t where b = 3 order by c, d limit 10;\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=143.03..570.89 rows=10 width=16) (actual\n> time=375.399..375.402 rows=10 loops=1)\n> -> Incremental Sort (cost=143.03..42671.85 rows=994 width=16)\n> (actual time=375.397..375.399 rows=10 loops=1)\n> Sort Key: t.c, t.d\n> Presorted Key: t.c\n> Full-sort Groups: 1 Sort Method: top-N heapsort Average\n> Memory: 25kB Peak Memory: 25kB\n> Pre-sorted Groups: 1 Sort Method: top-N heapsort Average\n> Memory: 25kB Peak Memory: 25kB\n> -> Merge Append (cost=0.85..42644.84 rows=994 width=16)\n> (actual time=11.415..375.289 rows=335 loops=1)\n> Sort Key: t.c\n> -> Index Scan using tp1_c_idx on tp1 t_1\n> (cost=0.42..21318.39 rows=498 width=16) (actual time=6.666..189.356\n> rows=168 loops=1)\n> Filter: (b = 3)\n> Rows Removed by Filter: 171537\n> -> Index Scan using tp2_c_idx on tp2 t_2\n> (cost=0.42..21316.50 rows=496 width=16) (actual time=4.745..185.870\n> rows=168 loops=1)\n> Filter: (b = 3)\n> Rows Removed by Filter: 171534\n> Planning Time: 0.501 ms\n> Execution Time: 375.477 ms\n> (16 rows)\n> \n> -- disable incremental sort\n> set enable_incremental_sort to off;\n> SET\n> \n> explain analyze select * from t where b = 3 order by c, d limit 10;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=2577.51..2577.53 rows=10 width=16) (actual\n> time=2.928..2.930 rows=10 loops=1)\n> -> Sort (cost=2577.51..2579.99 rows=994 width=16) (actual\n> time=2.925..2.927 rows=10 loops=1)\n> Sort Key: t.c, t.d\n> Sort Method: top-N heapsort Memory: 25kB\n> -> Append (cost=8.28..2556.03 rows=994 width=16) (actual\n> time=0.627..2.670 rows=1000 loops=1)\n> -> Bitmap Heap Scan on tp1 t_1 (cost=8.28..1276.62\n> rows=498 width=16) (actual time=0.625..1.688 rows=500 loops=1)\n> Recheck Cond: (b = 3)\n> Heap 
Blocks: exact=500\n> -> Bitmap Index Scan on tp1_b_idx\n> (cost=0.00..8.16 rows=498 width=0) (actual time=0.330..0.330 rows=500\n> loops=1)\n> Index Cond: (b = 3)\n> -> Bitmap Heap Scan on tp2 t_2 (cost=8.27..1274.43\n> rows=496 width=16) (actual time=0.178..0.876 rows=500 loops=1)\n> Recheck Cond: (b = 3)\n> Heap Blocks: exact=500\n> -> Bitmap Index Scan on tp2_b_idx\n> (cost=0.00..8.14 rows=496 width=0) (actual time=0.093..0.093 rows=500\n> loops=1)\n> Index Cond: (b = 3)\n> Planning Time: 0.481 ms\n> Execution Time: 3.031 ms\n> (17 rows)\n> \n> As we can see the execution time is 375.477 ms by default and 3.031 ms\n> if we disable incremental sort.\n> \n> When we calculate the cost of incremental sort, we need to estimate the\n> number of groups into which the relation is divided by the presorted\n> keys, and then calculate the cost of sorting a single group. If we\n> over-estimate the number of groups, the startup cost of incremental sort\n> would be under-estimated. In the first plan above, the number of groups\n> with equal 't.c' is estimated to be 300 by estimate_num_groups(), but is\n> actually 3 after applying the restriction 'b = 3'. As a result, the\n> startup cost of the incremental sort is estimated to be 143.03, but is\n> actually 14222.68. That's why the planner mistakenly thinks the\n> increment sort path is the cheaper one.\n> \n\nI haven't done any debugging on this, but this seems plausible. In a\nway, it seems like a combination of three issues - assumption that\n\"LIMIT n\" has cost proportional to n/N, group by estimate of correlated\ncolumns, and assumption of independence.\n\n> It seems that we need to improve estimate of distinct values in\n> estimate_num_groups() when taking the selectivity of restrictions into\n> account.\n> \n> In 84f9a35e3 we changed to a new formula to perform such estimation.\n> But that does not apply to the case here, because for an appendrel,\n> set_append_rel_size() always sets \"raw tuples\" count equal to \"rows\",\n> and that would make estimate_num_groups() skip the adjustment of the\n> estimate using the new formula.\n> \n> Any thoughts on how to improve this?\n> \n\nOh! Now I see what you meant by using the new formula in 84f9a35e3\ndepending on how we sum tuples. I agree that seems like the right thing.\n\nI'm not sure it'll actually help with the issue, though - if I apply the\npatch, the plan does not actually change (and the cost changes just a\nlittle bit).\n\nI looked at this only very briefly, but I believe it's due to the\nassumption of independence I mentioned earlier - we end up using the new\nformula introduced in 84f9a35e3, but it assumes it assumes the\nselectivity and number of groups are independent. But that'd not the\ncase here, because the groups are very clearly correlated (with the\ncondition on \"b\").\n\nIf that's the case, I'm not sure how to fix this :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 18 Dec 2023 00:31:26 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 7:31 AM Tomas Vondra <[email protected]>\nwrote:\n\n> Oh! Now I see what you meant by using the new formula in 84f9a35e3\n> depending on how we sum tuples. I agree that seems like the right thing.\n>\n> I'm not sure it'll actually help with the issue, though - if I apply the\n> patch, the plan does not actually change (and the cost changes just a\n> little bit).\n>\n> I looked at this only very briefly, but I believe it's due to the\n> assumption of independence I mentioned earlier - we end up using the new\n> formula introduced in 84f9a35e3, but it assumes it assumes the\n> selectivity and number of groups are independent. But that'd not the\n> case here, because the groups are very clearly correlated (with the\n> condition on \"b\").\n\n\nYou're right. The patch allows us to adjust the estimate of distinct\nvalues for appendrels using the new formula introduced in 84f9a35e3.\nBut if the restrictions are correlated with the grouping expressions,\nthe new formula does not behave well. That's why the patch does not\nhelp in case [1], where 'b' and 'c' are correlated.\n\nOTOH, if the restrictions are not correlated with the grouping\nexpressions, the new formula would perform quite well. And in this case\nthe patch would help a lot, as shown in [2] where estimate_num_groups()\ngives a much more accurate estimate with the help of this patch.\n\nSo this patch could be useful in certain situations. I'm wondering if\nwe should at least have this patch (if it is right).\n\n\n> If that's the case, I'm not sure how to fix this :-(\n\n\nThe commit message of 84f9a35e3 says\n\n This could possibly be improved upon in the future by identifying\n correlated restrictions and using a hybrid of the old and new\n formulae.\n\nMaybe this is something we can consider trying. But anyhow this is not\nan easy task I suppose.\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs4-FtsJ0dQUv6g%3D%3DXR_gsq%3DFj9oiydW6gbqwQ_wrbU0osw%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAMbWs4-ocromEKMtVDH3RBMuAJQaQDK0qi4k6zOuvpOnMWZauw%40mail.gmail.com\n\nThanks\nRichard\n\nOn Mon, Dec 18, 2023 at 7:31 AM Tomas Vondra <[email protected]> wrote:\nOh! Now I see what you meant by using the new formula in 84f9a35e3\ndepending on how we sum tuples. I agree that seems like the right thing.\n\nI'm not sure it'll actually help with the issue, though - if I apply the\npatch, the plan does not actually change (and the cost changes just a\nlittle bit).\n\nI looked at this only very briefly, but I believe it's due to the\nassumption of independence I mentioned earlier - we end up using the new\nformula introduced in 84f9a35e3, but it assumes it assumes the\nselectivity and number of groups are independent. But that'd not the\ncase here, because the groups are very clearly correlated (with the\ncondition on \"b\").You're right. The patch allows us to adjust the estimate of distinctvalues for appendrels using the new formula introduced in 84f9a35e3.But if the restrictions are correlated with the grouping expressions,the new formula does not behave well. That's why the patch does nothelp in case [1], where 'b' and 'c' are correlated.OTOH, if the restrictions are not correlated with the groupingexpressions, the new formula would perform quite well. And in this casethe patch would help a lot, as shown in [2] where estimate_num_groups()gives a much more accurate estimate with the help of this patch.So this patch could be useful in certain situations. 
I'm wondering ifwe should at least have this patch (if it is right). \nIf that's the case, I'm not sure how to fix this :-(The commit message of 84f9a35e3 says This could possibly be improved upon in the future by identifying correlated restrictions and using a hybrid of the old and new formulae.Maybe this is something we can consider trying. But anyhow this is notan easy task I suppose.[1] https://www.postgresql.org/message-id/CAMbWs4-FtsJ0dQUv6g%3D%3DXR_gsq%3DFj9oiydW6gbqwQ_wrbU0osw%40mail.gmail.com[2] https://www.postgresql.org/message-id/CAMbWs4-ocromEKMtVDH3RBMuAJQaQDK0qi4k6zOuvpOnMWZauw%40mail.gmail.comThanksRichard",
"msg_date": "Mon, 18 Dec 2023 18:40:06 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
{
"msg_contents": "\n\nOn 12/18/23 11:40, Richard Guo wrote:\n> \n> On Mon, Dec 18, 2023 at 7:31 AM Tomas Vondra\n> <[email protected] <mailto:[email protected]>>\n> wrote:\n> \n> Oh! Now I see what you meant by using the new formula in 84f9a35e3\n> depending on how we sum tuples. I agree that seems like the right thing.\n> \n> I'm not sure it'll actually help with the issue, though - if I apply the\n> patch, the plan does not actually change (and the cost changes just a\n> little bit).\n> \n> I looked at this only very briefly, but I believe it's due to the\n> assumption of independence I mentioned earlier - we end up using the new\n> formula introduced in 84f9a35e3, but it assumes it assumes the\n> selectivity and number of groups are independent. But that'd not the\n> case here, because the groups are very clearly correlated (with the\n> condition on \"b\").\n> \n> \n> You're right. The patch allows us to adjust the estimate of distinct\n> values for appendrels using the new formula introduced in 84f9a35e3.\n> But if the restrictions are correlated with the grouping expressions,\n> the new formula does not behave well. That's why the patch does not\n> help in case [1], where 'b' and 'c' are correlated.\n> \n> OTOH, if the restrictions are not correlated with the grouping\n> expressions, the new formula would perform quite well. And in this case\n> the patch would help a lot, as shown in [2] where estimate_num_groups()\n> gives a much more accurate estimate with the help of this patch.\n> \n> So this patch could be useful in certain situations. I'm wondering if\n> we should at least have this patch (if it is right).\n>\n\nI do agree the patch seems to do the right thing, and it's worth pushing\non it's own.\n\n> \n> If that's the case, I'm not sure how to fix this :-(\n> \n> \n> The commit message of 84f9a35e3 says\n> \n> This could possibly be improved upon in the future by identifying\n> correlated restrictions and using a hybrid of the old and new\n> formulae.\n> \n> Maybe this is something we can consider trying. But anyhow this is not\n> an easy task I suppose.\n\nYeah, if it was easy, it'd have been done in 84f9a35e3 already ;-)\n\nThe challenge is where to get usable information about correlation\nbetween columns. I only have a couple very rought ideas of what might\ntry. For example, if we have multi-column ndistinct statistics, we might\nlook at ndistinct(b,c) and ndistinct(b,c,d) and deduce something from\n\n ndistinct(b,c,d) / ndistinct(b,c)\n\nIf we know how many distinct values we have for the predicate column, we\ncould then estimate the number of groups. I mean, we know that for the\nrestriction \"WHERE b = 3\" we only have 1 distinct value, so we could\nestimate the number of groups as\n\n 1 * ndistinct(b,c)\n\nI'm well aware this is only a very trivial example, and for more complex\nexamples it's likely way more complicated. But hopefully it illustrates\nthe general idea.\n\nThe other idea would be to maybe look at multi-column MCV, and try using\nit to deduce cross-column correlation too (it could be more accurate for\narbitrary predicates).\n\nAnd finally, we might try being a bit more pessimistic and look at what\nthe \"worst case\" behavior could be. So for example instead of trying to\nestimate the real number of groups, we'd ask \"What's the minimum number\nof groups we're likely to get?\". And use that to cost the incremental\nsort. But I don't think we do this elsewhere, and I'm not sure we want\nto start with this risk-based approach here. 
It might be OK, because we\nusually expect the incremental sort to be much cheaper, ...\n\nIf this something would be interested in exploring? I don't have\ncapacity to work on this myself, but I'd be available for reviews,\nfeedback and so on.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 18 Dec 2023 13:53:48 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
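For anyone wanting to experiment with the multi-column ndistinct idea Tomas sketches above, the raw numbers can already be collected on the repro table's partitions with extended statistics (the statistics names below are made up for illustration; whether estimate_num_groups() could be taught to consult them for the appendrel case is exactly the open question):

  CREATE STATISTICS tp1_b_c_d (ndistinct) ON b, c, d FROM tp1;
  CREATE STATISTICS tp2_b_c_d (ndistinct) ON b, c, d FROM tp2;
  ANALYZE tp1, tp2;

  -- exposes ndistinct(b,c), ndistinct(b,d), ndistinct(c,d) and ndistinct(b,c,d) per partition
  SELECT statistics_name, n_distinct
  FROM pg_stats_ext
  WHERE statistics_name IN ('tp1_b_c_d', 'tp2_b_c_d');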
{
"msg_contents": "The possible solution of one scenario I can come up with so far is the\nquery's predicate columns and group columns belonging to one table .\n\nFor a query that contains where clause, perhaps num_groups could be\nestimated according to the following formula.\n\nnum_groups = ndistinct(pred_col_1, pred_col_2, ... pred_col_n) with where\nclause * ndistinct(pred_col_1, pred_col_2, ... pred_col_n, sort_col_1,\nsort_col_2, ... sort_col_m) / ndistinct(pred_col_1, pred_col_2, ...\npred_col_n).\n\nndistinct(pred_col_1, pred_col_2, ... pred_col_n) with where clause =\nndistinct(pred_var_1, pred_var_2, ... pred_var_n) * selectivity of rel.\n\npred_col_n belong to the columns involved in the where clause and\nsort_col_m belong to the columns involved in the group by clause.\n\nThe reason for multiplying by selectivity of rel directly is that the\nselectivity of rel depends on only pred_col not sort_col. So the above\nformula can be simplified as follows.\n\nnum_groups = ndistinct(pred_col_1, pred_col_2, ... pred_col_n, sort_col_1,\nsort_col_2, ... sort_col_m) * selectivity of rel.\n\nThe correctness of the above formula depends on the following conditions.\n\n -\n\n ndistinct(pred_col_1, pred_col_2, ... pred_col_n)* ndistinct(pred_col_1,\n pred_col_2, ... pred_col_n, sort_col_1, sort_col_2, ... sort_col_m)\n statistics already exist, and need be accurate.\n -\n\n Both pred_col_n and sort_col_m are uniformly distributed, if not,\n statistics such as mcv are needed for correction.\n -\n\n The tuples of rel are the number of total tuples of the table , not the\n number of filtered tuples.\n\nAfter experimentation, in the scenario mentioned in previous thread. The\nestimate num_groups is 3, the accuracy of result strongly relies on the\nuniform distribution of b, which makes ndistinct(pred_col_1, pred_col_2,\n... pred_col_n) with where clause could be able to estimated accurately.\n\nI'd like to hear your opinions.\n\nRegards.\n\nywgrit.\n\nTomas Vondra <[email protected]> 于2023年12月18日周一 20:53写道:\n\n>\n>\n> On 12/18/23 11:40, Richard Guo wrote:\n> >\n> > On Mon, Dec 18, 2023 at 7:31 AM Tomas Vondra\n> > <[email protected] <mailto:[email protected]>>\n> > wrote:\n> >\n> > Oh! Now I see what you meant by using the new formula in 84f9a35e3\n> > depending on how we sum tuples. I agree that seems like the right\n> thing.\n> >\n> > I'm not sure it'll actually help with the issue, though - if I apply\n> the\n> > patch, the plan does not actually change (and the cost changes just a\n> > little bit).\n> >\n> > I looked at this only very briefly, but I believe it's due to the\n> > assumption of independence I mentioned earlier - we end up using the\n> new\n> > formula introduced in 84f9a35e3, but it assumes it assumes the\n> > selectivity and number of groups are independent. But that'd not the\n> > case here, because the groups are very clearly correlated (with the\n> > condition on \"b\").\n> >\n> >\n> > You're right. The patch allows us to adjust the estimate of distinct\n> > values for appendrels using the new formula introduced in 84f9a35e3.\n> > But if the restrictions are correlated with the grouping expressions,\n> > the new formula does not behave well. That's why the patch does not\n> > help in case [1], where 'b' and 'c' are correlated.\n> >\n> > OTOH, if the restrictions are not correlated with the grouping\n> > expressions, the new formula would perform quite well. 
And in this case\n> > the patch would help a lot, as shown in [2] where estimate_num_groups()\n> > gives a much more accurate estimate with the help of this patch.\n> >\n> > So this patch could be useful in certain situations. I'm wondering if\n> > we should at least have this patch (if it is right).\n> >\n>\n> I do agree the patch seems to do the right thing, and it's worth pushing\n> on it's own.\n>\n> >\n> > If that's the case, I'm not sure how to fix this :-(\n> >\n> >\n> > The commit message of 84f9a35e3 says\n> >\n> > This could possibly be improved upon in the future by identifying\n> > correlated restrictions and using a hybrid of the old and new\n> > formulae.\n> >\n> > Maybe this is something we can consider trying. But anyhow this is not\n> > an easy task I suppose.\n>\n> Yeah, if it was easy, it'd have been done in 84f9a35e3 already ;-)\n>\n> The challenge is where to get usable information about correlation\n> between columns. I only have a couple very rought ideas of what might\n> try. For example, if we have multi-column ndistinct statistics, we might\n> look at ndistinct(b,c) and ndistinct(b,c,d) and deduce something from\n>\n> ndistinct(b,c,d) / ndistinct(b,c)\n>\n> If we know how many distinct values we have for the predicate column, we\n> could then estimate the number of groups. I mean, we know that for the\n> restriction \"WHERE b = 3\" we only have 1 distinct value, so we could\n> estimate the number of groups as\n>\n> 1 * ndistinct(b,c)\n>\n> I'm well aware this is only a very trivial example, and for more complex\n> examples it's likely way more complicated. But hopefully it illustrates\n> the general idea.\n>\n> The other idea would be to maybe look at multi-column MCV, and try using\n> it to deduce cross-column correlation too (it could be more accurate for\n> arbitrary predicates).\n>\n> And finally, we might try being a bit more pessimistic and look at what\n> the \"worst case\" behavior could be. So for example instead of trying to\n> estimate the real number of groups, we'd ask \"What's the minimum number\n> of groups we're likely to get?\". And use that to cost the incremental\n> sort. But I don't think we do this elsewhere, and I'm not sure we want\n> to start with this risk-based approach here. It might be OK, because we\n> usually expect the incremental sort to be much cheaper, ...\n>\n> If this something would be interested in exploring? I don't have\n> capacity to work on this myself, but I'd be available for reviews,\n> feedback and so on.\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\nThe possible solution of one scenario I can come up with so far is the query's predicate columns and group columns belonging to one table .For a query that contains where clause, perhaps num_groups could be estimated according to the following formula.num_groups = ndistinct(pred_col_1, pred_col_2, ... pred_col_n) with where clause * ndistinct(pred_col_1, pred_col_2, ... pred_col_n, sort_col_1, sort_col_2, ... sort_col_m) / ndistinct(pred_col_1, pred_col_2, ... pred_col_n). ndistinct(pred_col_1, pred_col_2, ... pred_col_n) with where clause = ndistinct(pred_var_1, pred_var_2, ... pred_var_n) * selectivity of rel.pred_col_n belong to the columns involved in the where clause and sort_col_m belong to the columns involved in the group by clause.The reason for multiplying by selectivity of rel directly is that the selectivity of rel depends on only pred_col not sort_col. 
So the above formula can be simplified as follows.num_groups = ndistinct(pred_col_1, pred_col_2, ... pred_col_n, sort_col_1, sort_col_2, ... sort_col_m) * selectivity of rel.The correctness of the above formula depends on the following conditions.ndistinct(pred_col_1, pred_col_2, ... pred_col_n)* ndistinct(pred_col_1, pred_col_2, ... pred_col_n, sort_col_1, sort_col_2, ... sort_col_m) statistics already exist, and need be accurate.Both pred_col_n and sort_col_m are uniformly distributed, if not, statistics such as mcv are needed for correction.The tuples of rel are the number of total tuples of the table , not the number of filtered tuples.After experimentation, in the scenario mentioned in previous thread. The estimate num_groups is 3, the accuracy of result strongly relies on the uniform distribution of b, which makes ndistinct(pred_col_1, pred_col_2, ... pred_col_n) with where clause could be able to estimated accurately.I'd like to hear your opinions.Regards.ywgrit.Tomas Vondra <[email protected]> 于2023年12月18日周一 20:53写道:\n\nOn 12/18/23 11:40, Richard Guo wrote:\n> \n> On Mon, Dec 18, 2023 at 7:31 AM Tomas Vondra\n> <[email protected] <mailto:[email protected]>>\n> wrote:\n> \n> Oh! Now I see what you meant by using the new formula in 84f9a35e3\n> depending on how we sum tuples. I agree that seems like the right thing.\n> \n> I'm not sure it'll actually help with the issue, though - if I apply the\n> patch, the plan does not actually change (and the cost changes just a\n> little bit).\n> \n> I looked at this only very briefly, but I believe it's due to the\n> assumption of independence I mentioned earlier - we end up using the new\n> formula introduced in 84f9a35e3, but it assumes it assumes the\n> selectivity and number of groups are independent. But that'd not the\n> case here, because the groups are very clearly correlated (with the\n> condition on \"b\").\n> \n> \n> You're right. The patch allows us to adjust the estimate of distinct\n> values for appendrels using the new formula introduced in 84f9a35e3.\n> But if the restrictions are correlated with the grouping expressions,\n> the new formula does not behave well. That's why the patch does not\n> help in case [1], where 'b' and 'c' are correlated.\n> \n> OTOH, if the restrictions are not correlated with the grouping\n> expressions, the new formula would perform quite well. And in this case\n> the patch would help a lot, as shown in [2] where estimate_num_groups()\n> gives a much more accurate estimate with the help of this patch.\n> \n> So this patch could be useful in certain situations. I'm wondering if\n> we should at least have this patch (if it is right).\n>\n\nI do agree the patch seems to do the right thing, and it's worth pushing\non it's own.\n\n> \n> If that's the case, I'm not sure how to fix this :-(\n> \n> \n> The commit message of 84f9a35e3 says\n> \n> This could possibly be improved upon in the future by identifying\n> correlated restrictions and using a hybrid of the old and new\n> formulae.\n> \n> Maybe this is something we can consider trying. But anyhow this is not\n> an easy task I suppose.\n\nYeah, if it was easy, it'd have been done in 84f9a35e3 already ;-)\n\nThe challenge is where to get usable information about correlation\nbetween columns. I only have a couple very rought ideas of what might\ntry. 
For example, if we have multi-column ndistinct statistics, we might\nlook at ndistinct(b,c) and ndistinct(b,c,d) and deduce something from\n\n ndistinct(b,c,d) / ndistinct(b,c)\n\nIf we know how many distinct values we have for the predicate column, we\ncould then estimate the number of groups. I mean, we know that for the\nrestriction \"WHERE b = 3\" we only have 1 distinct value, so we could\nestimate the number of groups as\n\n 1 * ndistinct(b,c)\n\nI'm well aware this is only a very trivial example, and for more complex\nexamples it's likely way more complicated. But hopefully it illustrates\nthe general idea.\n\nThe other idea would be to maybe look at multi-column MCV, and try using\nit to deduce cross-column correlation too (it could be more accurate for\narbitrary predicates).\n\nAnd finally, we might try being a bit more pessimistic and look at what\nthe \"worst case\" behavior could be. So for example instead of trying to\nestimate the real number of groups, we'd ask \"What's the minimum number\nof groups we're likely to get?\". And use that to cost the incremental\nsort. But I don't think we do this elsewhere, and I'm not sure we want\nto start with this risk-based approach here. It might be OK, because we\nusually expect the incremental sort to be much cheaper, ...\n\nIf this something would be interested in exploring? I don't have\ncapacity to work on this myself, but I'd be available for reviews,\nfeedback and so on.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 22 Dec 2023 16:20:43 +0800",
"msg_from": "ywgrit <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
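As a rough sanity check of the proposed formula against the data from Richard's repro earlier in the thread (this only verifies the arithmetic by brute force; it says nothing about how the planner would obtain these inputs):

  -- ndistinct over the predicate column and the presorted column: 3000 with this data set
  SELECT count(DISTINCT (b, c)) FROM t;

  -- selectivity of the predicate b = 3: about 0.001
  SELECT (count(*) FILTER (WHERE b = 3))::float8 / count(*) FROM t;

With those inputs the formula gives num_groups = 3000 * 0.001 = 3, which matches the 3 distinct values of c that actually remain under b = 3.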
{
"msg_contents": "On 15/12/2023 09:58, Richard Guo wrote:\n>\n> On Thu, Dec 14, 2023 at 6:02 PM Richard Guo <[email protected]> \n> wrote:\n>\n> It seems that we need to improve estimate of distinct values in\n> estimate_num_groups() when taking the selectivity of restrictions into\n> account.\n>\n> In 84f9a35e3 we changed to a new formula to perform such estimation.\n> But that does not apply to the case here, because for an appendrel,\n> set_append_rel_size() always sets \"raw tuples\" count equal to \"rows\",\n> and that would make estimate_num_groups() skip the adjustment of the\n> estimate using the new formula.\n>\n>\n> I'm wondering why we set the appendrel's 'tuples' equal to its 'rows'.\n> Why don't we set it to the accumulated estimate of tuples from each live\n> child, like attached? I believe this aligns more closely with reality.\n>\n> And this would also allow us to adjust the estimate for the number of\n> distinct values in estimate_num_groups() for appendrels using the new\n> formula introduced in 84f9a35e3. As I experimented, this can improve\n> the estimate for appendrels. For instance,\n>\n> create table t (a int, b int, c float) partition by range(a);\n> create table tp1 partition of t for values from (0) to (1000);\n> create table tp2 partition of t for values from (1000) to (2000);\n>\n> insert into t select i%2000, (100000 * random())::int, random() from \n> generate_series(1,1000000) i;\n> analyze t;\n>\n> explain analyze select b from t where c < 0.1 group by b;\n>\n> -- on master\n> HashAggregate (cost=18659.28..19598.74 rows=93946 width=4)\n> (actual time=220.760..234.439 rows=63224 loops=1)\n>\n> -- on patched\n> HashAggregate (cost=18659.28..19294.25 rows=63497 width=4)\n> (actual time=235.161..250.023 rows=63224 loops=1)\n>\n> With the patch the estimate for the number of distinct 'b' values is\n> more accurate.\n>\n> BTW, this patch does not change any existing regression test results. I\n> attempted to devise a regression test that shows how this change can\n> improve query plans, but failed. Should I try harder to find such a\n> test case?\n\n\nHi,\n\nthank you for the patch ; I've tried it and it works with the scenario \nyou provide.\n\nAs Nicolas's co-worker, I've been involved in this case, but, \nunfortunately, we're not able to test the patch with the actual data for \nthe moment, but I'll ask a dump to the real owner.\n\nAbout the regression test, I don't know how to implement it either.\n\nbest regards,\n\n-- \nSébastien\n\n\n\n\n\n\nOn 15/12/2023 09:58, Richard Guo wrote:\n\n\n\n\n\n\n\nOn Thu, Dec 14, 2023 at\n 6:02 PM Richard Guo <[email protected]>\n wrote:\n\n\n\nIt seems that we need to improve estimate of\n distinct values in\n estimate_num_groups() when taking the selectivity of\n restrictions into\n account.\n\n In 84f9a35e3 we changed to a new formula to perform\n such estimation.\n But that does not apply to the case here, because for\n an appendrel,\n set_append_rel_size() always sets \"raw tuples\" count\n equal to \"rows\",\n and that would make estimate_num_groups() skip the\n adjustment of the\n estimate using the new formula.\n\n\n\n\n\nI'm wondering why we set the appendrel's 'tuples' equal\n to its 'rows'.\n Why don't we set it to the accumulated estimate of tuples\n from each live\n child, like attached? 
I believe this aligns more closely\n with reality.\n\n And this would also allow us to adjust the estimate for the\n number of\n distinct values in estimate_num_groups() for appendrels\n using the new\n formula introduced in 84f9a35e3. As I experimented, this\n can improve\n the estimate for appendrels. For instance,\n\n create table t (a int, b int, c float) partition by\n range(a);\n create table tp1 partition of t for values from (0) to\n (1000);\n create table tp2 partition of t for values from (1000) to\n (2000);\n\n insert into t select i%2000, (100000 * random())::int,\n random() from generate_series(1,1000000) i;\n analyze t;\n\n explain analyze select b from t where c < 0.1 group by b;\n\n -- on master\n HashAggregate (cost=18659.28..19598.74 rows=93946 width=4)\n (actual time=220.760..234.439 rows=63224\n loops=1)\n\n -- on patched\n HashAggregate (cost=18659.28..19294.25 rows=63497 width=4)\n (actual time=235.161..250.023 rows=63224\n loops=1)\n\n With the patch the estimate for the number of distinct 'b'\n values is\n more accurate.\n\n BTW, this patch does not change any existing regression test\n results. I\n attempted to devise a regression test that shows how this\n change can\n improve query plans, but failed. Should I try harder to\n find such a\n test case?\n\n\n\n\n\nHi, \n\nthank you for the patch ; I've tried it and it works with the\n scenario you provide. \n\nAs Nicolas's co-worker, I've been involved in this case, but,\n unfortunately, we're not able to test the patch with the actual\n data for the moment, but I'll ask a dump to the real owner. \n\n About the regression test, I don't know how to implement it\n either. \n\n\nbest regards, \n\n\n-- \nSébastien",
"msg_date": "Fri, 22 Dec 2023 10:17:45 +0100",
"msg_from": "=?UTF-8?Q?S=C3=A9bastien_Lardi=C3=A8re?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
{
"msg_contents": "Hi,Tomas\n\nRecently, I looked at papers related to estimation of cardinarity with\nselection. I may be biased towards the scheme provided by the paper\n\"Distinct Sampling for Highly-Accurate Answers to Distinct Values Queries\nand Event Reports\". This paper uses distinct sampling as opposed to the\ncurrent uniform sampling, and the main differences between the two are as\nfollows.\n\n1)It not only counts the distincts on each column or multiple columns, but\nalso saves some rows corresponding to each distinct value, i.e., it saves\nsome part of the rows of the original relation as samples. The purpose of\nsaving these rows is to accommodate restrictions on the queries, such as\nwhere clauses.\n\n2)The queries are executed on the samples, and the result of the execution\nis used as the statistical value of cardinarity.\n\nThe advantages of this paper over existing practices are as follows.\n\n1)The samples collected can be applied to arbitrary predicates, e.g.\npredicates that are correlated with the columns of group by clauses.\n\n2)The accuracy is very high, and in some scenarios, the statistical error\ncan be minimized by hundreds of times compared to uniform sampling.\n\nHowever, the scheme provided in this paper also has some defects, as\nmentioned above, the scheme relies on the collected samples, which will\nlead to a significant increase in the storage overhead of statistical\ninformation.\n\nI'd like to hear your opinions.\n\nywgrit.\n\nywgrit <[email protected]> 于2023年12月22日周五 16:20写道:\n\n> The possible solution of one scenario I can come up with so far is the\n> query's predicate columns and group columns belonging to one table .\n>\n> For a query that contains where clause, perhaps num_groups could be\n> estimated according to the following formula.\n>\n> num_groups = ndistinct(pred_col_1, pred_col_2, ... pred_col_n) with where\n> clause * ndistinct(pred_col_1, pred_col_2, ... pred_col_n, sort_col_1,\n> sort_col_2, ... sort_col_m) / ndistinct(pred_col_1, pred_col_2, ...\n> pred_col_n).\n>\n> ndistinct(pred_col_1, pred_col_2, ... pred_col_n) with where clause =\n> ndistinct(pred_var_1, pred_var_2, ... pred_var_n) * selectivity of rel.\n>\n> pred_col_n belong to the columns involved in the where clause and\n> sort_col_m belong to the columns involved in the group by clause.\n>\n> The reason for multiplying by selectivity of rel directly is that the\n> selectivity of rel depends on only pred_col not sort_col. So the above\n> formula can be simplified as follows.\n>\n> num_groups = ndistinct(pred_col_1, pred_col_2, ... pred_col_n, sort_col_1,\n> sort_col_2, ... sort_col_m) * selectivity of rel.\n>\n> The correctness of the above formula depends on the following conditions.\n>\n> -\n>\n> ndistinct(pred_col_1, pred_col_2, ... pred_col_n)*\n> ndistinct(pred_col_1, pred_col_2, ... pred_col_n, sort_col_1, sort_col_2,\n> ... sort_col_m) statistics already exist, and need be accurate.\n> -\n>\n> Both pred_col_n and sort_col_m are uniformly distributed, if not,\n> statistics such as mcv are needed for correction.\n> -\n>\n> The tuples of rel are the number of total tuples of the table , not\n> the number of filtered tuples.\n>\n> After experimentation, in the scenario mentioned in previous thread. The\n> estimate num_groups is 3, the accuracy of result strongly relies on the\n> uniform distribution of b, which makes ndistinct(pred_col_1, pred_col_2,\n> ... 
pred_col_n) with where clause could be able to estimated accurately.\n>\n> I'd like to hear your opinions.\n>\n> Regards.\n>\n> ywgrit.\n>\n> Tomas Vondra <[email protected]> 于2023年12月18日周一 20:53写道:\n>\n>>\n>>\n>> On 12/18/23 11:40, Richard Guo wrote:\n>> >\n>> > On Mon, Dec 18, 2023 at 7:31 AM Tomas Vondra\n>> > <[email protected] <mailto:[email protected]>>\n>> > wrote:\n>> >\n>> > Oh! Now I see what you meant by using the new formula in 84f9a35e3\n>> > depending on how we sum tuples. I agree that seems like the right\n>> thing.\n>> >\n>> > I'm not sure it'll actually help with the issue, though - if I\n>> apply the\n>> > patch, the plan does not actually change (and the cost changes just\n>> a\n>> > little bit).\n>> >\n>> > I looked at this only very briefly, but I believe it's due to the\n>> > assumption of independence I mentioned earlier - we end up using\n>> the new\n>> > formula introduced in 84f9a35e3, but it assumes it assumes the\n>> > selectivity and number of groups are independent. But that'd not the\n>> > case here, because the groups are very clearly correlated (with the\n>> > condition on \"b\").\n>> >\n>> >\n>> > You're right. The patch allows us to adjust the estimate of distinct\n>> > values for appendrels using the new formula introduced in 84f9a35e3.\n>> > But if the restrictions are correlated with the grouping expressions,\n>> > the new formula does not behave well. That's why the patch does not\n>> > help in case [1], where 'b' and 'c' are correlated.\n>> >\n>> > OTOH, if the restrictions are not correlated with the grouping\n>> > expressions, the new formula would perform quite well. And in this case\n>> > the patch would help a lot, as shown in [2] where estimate_num_groups()\n>> > gives a much more accurate estimate with the help of this patch.\n>> >\n>> > So this patch could be useful in certain situations. I'm wondering if\n>> > we should at least have this patch (if it is right).\n>> >\n>>\n>> I do agree the patch seems to do the right thing, and it's worth pushing\n>> on it's own.\n>>\n>> >\n>> > If that's the case, I'm not sure how to fix this :-(\n>> >\n>> >\n>> > The commit message of 84f9a35e3 says\n>> >\n>> > This could possibly be improved upon in the future by identifying\n>> > correlated restrictions and using a hybrid of the old and new\n>> > formulae.\n>> >\n>> > Maybe this is something we can consider trying. But anyhow this is not\n>> > an easy task I suppose.\n>>\n>> Yeah, if it was easy, it'd have been done in 84f9a35e3 already ;-)\n>>\n>> The challenge is where to get usable information about correlation\n>> between columns. I only have a couple very rought ideas of what might\n>> try. For example, if we have multi-column ndistinct statistics, we might\n>> look at ndistinct(b,c) and ndistinct(b,c,d) and deduce something from\n>>\n>> ndistinct(b,c,d) / ndistinct(b,c)\n>>\n>> If we know how many distinct values we have for the predicate column, we\n>> could then estimate the number of groups. I mean, we know that for the\n>> restriction \"WHERE b = 3\" we only have 1 distinct value, so we could\n>> estimate the number of groups as\n>>\n>> 1 * ndistinct(b,c)\n>>\n>> I'm well aware this is only a very trivial example, and for more complex\n>> examples it's likely way more complicated. 
But hopefully it illustrates\n>> the general idea.\n>>\n>> The other idea would be to maybe look at multi-column MCV, and try using\n>> it to deduce cross-column correlation too (it could be more accurate for\n>> arbitrary predicates).\n>>\n>> And finally, we might try being a bit more pessimistic and look at what\n>> the \"worst case\" behavior could be. So for example instead of trying to\n>> estimate the real number of groups, we'd ask \"What's the minimum number\n>> of groups we're likely to get?\". And use that to cost the incremental\n>> sort. But I don't think we do this elsewhere, and I'm not sure we want\n>> to start with this risk-based approach here. It might be OK, because we\n>> usually expect the incremental sort to be much cheaper, ...\n>>\n>> If this something would be interested in exploring? I don't have\n>> capacity to work on this myself, but I'd be available for reviews,\n>> feedback and so on.\n>>\n>> regards\n>>\n>> --\n>> Tomas Vondra\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>>\n>>",
"msg_date": "Tue, 26 Dec 2023 17:25:05 +0800",
"msg_from": "ywgrit <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
{
"msg_contents": "On 18/12/2023 19:53, Tomas Vondra wrote:\n> On 12/18/23 11:40, Richard Guo wrote:\n> The challenge is where to get usable information about correlation\n> between columns. I only have a couple very rought ideas of what might\n> try. For example, if we have multi-column ndistinct statistics, we might\n> look at ndistinct(b,c) and ndistinct(b,c,d) and deduce something from\n> \n> ndistinct(b,c,d) / ndistinct(b,c)\n> \n> If we know how many distinct values we have for the predicate column, we\n> could then estimate the number of groups. I mean, we know that for the\n> restriction \"WHERE b = 3\" we only have 1 distinct value, so we could\n> estimate the number of groups as\n> \n> 1 * ndistinct(b,c)\nDid you mean here ndistinct(c,d) and the formula:\nndistinct(b,c,d) / ndistinct(c,d) ?\n\nDo you implicitly bear in mind here the necessity of tracking clauses \nthat were applied to the data up to the moment of grouping?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Thu, 15 Feb 2024 13:50:22 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
{
"msg_contents": "On 15/12/2023 15:58, Richard Guo wrote:\n> With the patch the estimate for the number of distinct 'b' values is\n> more accurate.\n+1 to commit this patch.\nIt looks good and resolves kind of a bug in the code.\n> \n> BTW, this patch does not change any existing regression test results. I\n> attempted to devise a regression test that shows how this change can\n> improve query plans, but failed. Should I try harder to find such a\n> test case?\nThe test that was changed refers to different features. Its behaviour \ncan be changed in. the future, and mask testing of this code. IMO, you \nshould add a test directly checking appendrel->tuples correction.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Thu, 15 Feb 2024 14:35:57 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
{
"msg_contents": "\n\nOn 2/15/24 07:50, Andrei Lepikhov wrote:\n> On 18/12/2023 19:53, Tomas Vondra wrote:\n>> On 12/18/23 11:40, Richard Guo wrote:\n>> The challenge is where to get usable information about correlation\n>> between columns. I only have a couple very rought ideas of what might\n>> try. For example, if we have multi-column ndistinct statistics, we might\n>> look at ndistinct(b,c) and ndistinct(b,c,d) and deduce something from\n>>\n>> ndistinct(b,c,d) / ndistinct(b,c)\n>>\n>> If we know how many distinct values we have for the predicate column, we\n>> could then estimate the number of groups. I mean, we know that for the\n>> restriction \"WHERE b = 3\" we only have 1 distinct value, so we could\n>> estimate the number of groups as\n>>\n>> 1 * ndistinct(b,c)\n> Did you mean here ndistinct(c,d) and the formula:\n> ndistinct(b,c,d) / ndistinct(c,d) ?\n\nYes, I think that's probably a more correct ... Essentially, the idea is\nto estimate the change in number of distinct groups after adding a\ncolumn (or restricting it in some way).\n\n> \n> Do you implicitly bear in mind here the necessity of tracking clauses\n> that were applied to the data up to the moment of grouping?\n> \n\nI don't recall what exactly I considered two months ago when writing the\nmessage, but I don't see why we would need to track that beyond what we\nalready have. Shouldn't it be enough for the grouping to simply inspect\nthe conditions on the lower levels?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 15 Feb 2024 12:10:29 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
{
"msg_contents": "On 15/2/2024 18:10, Tomas Vondra wrote:\n> \n> \n> On 2/15/24 07:50, Andrei Lepikhov wrote:\n>> On 18/12/2023 19:53, Tomas Vondra wrote:\n>>> On 12/18/23 11:40, Richard Guo wrote:\n>>> The challenge is where to get usable information about correlation\n>>> between columns. I only have a couple very rought ideas of what might\n>>> try. For example, if we have multi-column ndistinct statistics, we might\n>>> look at ndistinct(b,c) and ndistinct(b,c,d) and deduce something from\n>>>\n>>> ndistinct(b,c,d) / ndistinct(b,c)\n>>>\n>>> If we know how many distinct values we have for the predicate column, we\n>>> could then estimate the number of groups. I mean, we know that for the\n>>> restriction \"WHERE b = 3\" we only have 1 distinct value, so we could\n>>> estimate the number of groups as\n>>>\n>>> 1 * ndistinct(b,c)\n>> Did you mean here ndistinct(c,d) and the formula:\n>> ndistinct(b,c,d) / ndistinct(c,d) ?\n> \n> Yes, I think that's probably a more correct ... Essentially, the idea is\n> to estimate the change in number of distinct groups after adding a\n> column (or restricting it in some way).\nThanks, I got it. I just think how to implement such techniques with \nextensions just to test the idea in action. In the case of GROUP-BY we \ncan use path hook, of course. But what if to invent a hook on clauselist \nestimation?\n>> Do you implicitly bear in mind here the necessity of tracking clauses\n>> that were applied to the data up to the moment of grouping?\n>>\n> \n> I don't recall what exactly I considered two months ago when writing the\n> message, but I don't see why we would need to track that beyond what we\n> already have. Shouldn't it be enough for the grouping to simply inspect\n> the conditions on the lower levels?\nYes, exactly. I've thought about looking into baserestrictinfos and, if \ngroup-by references a subquery targetlist, into subqueries too.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Thu, 15 Feb 2024 19:45:36 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
},
{
"msg_contents": "\n\nOn 2/15/24 13:45, Andrei Lepikhov wrote:\n> On 15/2/2024 18:10, Tomas Vondra wrote:\n>>\n>>\n>> On 2/15/24 07:50, Andrei Lepikhov wrote:\n>>> On 18/12/2023 19:53, Tomas Vondra wrote:\n>>>> On 12/18/23 11:40, Richard Guo wrote:\n>>>> The challenge is where to get usable information about correlation\n>>>> between columns. I only have a couple very rought ideas of what might\n>>>> try. For example, if we have multi-column ndistinct statistics, we\n>>>> might\n>>>> look at ndistinct(b,c) and ndistinct(b,c,d) and deduce something from\n>>>>\n>>>> ndistinct(b,c,d) / ndistinct(b,c)\n>>>>\n>>>> If we know how many distinct values we have for the predicate\n>>>> column, we\n>>>> could then estimate the number of groups. I mean, we know that for the\n>>>> restriction \"WHERE b = 3\" we only have 1 distinct value, so we could\n>>>> estimate the number of groups as\n>>>>\n>>>> 1 * ndistinct(b,c)\n>>> Did you mean here ndistinct(c,d) and the formula:\n>>> ndistinct(b,c,d) / ndistinct(c,d) ?\n>>\n>> Yes, I think that's probably a more correct ... Essentially, the idea is\n>> to estimate the change in number of distinct groups after adding a\n>> column (or restricting it in some way).\n> Thanks, I got it. I just think how to implement such techniques with\n> extensions just to test the idea in action. In the case of GROUP-BY we\n> can use path hook, of course. But what if to invent a hook on clauselist\n> estimation?\n\nMaybe.\n\nI have thought about introducing such hook to alter estimation of\nclauses, so I'm not opposed to it. Ofc, it depends on where would the\nhook be, what would it be allowed to do etc. And as it doesn't exist\nyet, it'd be more a \"local\" improvement to separate the changes into an\nextension.\n\n>>> Do you implicitly bear in mind here the necessity of tracking clauses\n>>> that were applied to the data up to the moment of grouping?\n>>>\n>>\n>> I don't recall what exactly I considered two months ago when writing the\n>> message, but I don't see why we would need to track that beyond what we\n>> already have. Shouldn't it be enough for the grouping to simply inspect\n>> the conditions on the lower levels?\n> Yes, exactly. I've thought about looking into baserestrictinfos and, if\n> group-by references a subquery targetlist, into subqueries too.\n> \n\nTrue. Something like that.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 15 Feb 2024 17:56:15 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner chooses incremental but not the best one"
}
] |
[
{
"msg_contents": "build system using configure set VAL_CFLAGS with debug and\noptimization flags, so pg_config will show these infos. Some\nextensions depend on the mechanism.\n\nThis patch exposes these flags with a typo fixed together.\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Tue, 12 Dec 2023 18:40:15 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "[meson] expose buildtype debug/optimization info to pg_config"
},
{
"msg_contents": "On 12.12.23 11:40, Junwang Zhao wrote:\n> build system using configure set VAL_CFLAGS with debug and\n> optimization flags, so pg_config will show these infos. Some\n> extensions depend on the mechanism.\n> \n> This patch exposes these flags with a typo fixed together.\n\nI have committed the typo fix.\n\nBut I would like to learn more about the requirements of extensions in \nthis area. This seems a bit suspicious to me.\n\n\n\n",
"msg_date": "Thu, 14 Dec 2023 09:50:45 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [meson] expose buildtype debug/optimization info to pg_config"
},
{
"msg_contents": "Hi Peter,\n\nThanks for looking into this.\n\nOn Thu, Dec 14, 2023 at 4:50 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 12.12.23 11:40, Junwang Zhao wrote:\n> > build system using configure set VAL_CFLAGS with debug and\n> > optimization flags, so pg_config will show these infos. Some\n> > extensions depend on the mechanism.\n> >\n> > This patch exposes these flags with a typo fixed together.\n>\n> I have committed the typo fix.\n>\n> But I would like to learn more about the requirements of extensions in\n> this area. This seems a bit suspicious to me.\n\nThis is what I found when building citus against an installation\nof meson debug build pg instance, since the CFLAGS doesn't\ncontain -g flag, the binary doesn't include the debug information,\nwhich is different behavior from configure building system.\n\nAnother issue I found is that some C++\nextensions(ajust/parquet_fdw for example) don't build against\nthe meson generated pgxs.mk, since it doesn't set the CXX\ncommand. CXX is only set when llvm option is enabled, which\nis different from old building system.\n\nI don't insist we make Meson the same behaviour with old building\nsystem, I just think the issues I raised might stop developers try\nthe fancy new building system. And the fix I post might not be\nideal, you and Andres might have better solutions.\n\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 14 Dec 2023 17:24:58 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [meson] expose buildtype debug/optimization info to pg_config"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-14 17:24:58 +0800, Junwang Zhao wrote:\n> On Thu, Dec 14, 2023 at 4:50 PM Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 12.12.23 11:40, Junwang Zhao wrote:\n> > > build system using configure set VAL_CFLAGS with debug and\n> > > optimization flags, so pg_config will show these infos. Some\n> > > extensions depend on the mechanism.\n> > >\n> > > This patch exposes these flags with a typo fixed together.\n> >\n> > I have committed the typo fix.\n> >\n> > But I would like to learn more about the requirements of extensions in\n> > this area. This seems a bit suspicious to me.\n> \n> This is what I found when building citus against an installation\n> of meson debug build pg instance, since the CFLAGS doesn't\n> contain -g flag, the binary doesn't include the debug information,\n> which is different behavior from configure building system.\n\nHm. I'm not sure it's the right call to make extensions build the same way as\nthe main postgres install with regard to optimization and debug info. So I\nfeel a bit hesitant around generating -g and particularly -Ox. But it's\nhistorically what we've done...\n\nIf we want to do so, I think this should not check buildtype, but debug.\n\n\n> Another issue I found is that some C++\n> extensions(ajust/parquet_fdw for example) don't build against\n> the meson generated pgxs.mk, since it doesn't set the CXX\n> command. CXX is only set when llvm option is enabled, which\n> is different from old building system.\n\nI wanted to skip the C++ tests when we don't need C++, because it makes\nconfigure take longer. But I could be convinced that we should always at least\ndetermine the C++ compiler for Makefile.global.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Dec 2023 06:20:05 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [meson] expose buildtype debug/optimization info to pg_config"
},
{
"msg_contents": "Hi,\n\nOn Fri, Dec 15, 2023 at 10:20 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-12-14 17:24:58 +0800, Junwang Zhao wrote:\n> > On Thu, Dec 14, 2023 at 4:50 PM Peter Eisentraut <[email protected]> wrote:\n> > >\n> > > On 12.12.23 11:40, Junwang Zhao wrote:\n> > > > build system using configure set VAL_CFLAGS with debug and\n> > > > optimization flags, so pg_config will show these infos. Some\n> > > > extensions depend on the mechanism.\n> > > >\n> > > > This patch exposes these flags with a typo fixed together.\n> > >\n> > > I have committed the typo fix.\n> > >\n> > > But I would like to learn more about the requirements of extensions in\n> > > this area. This seems a bit suspicious to me.\n> >\n> > This is what I found when building citus against an installation\n> > of meson debug build pg instance, since the CFLAGS doesn't\n> > contain -g flag, the binary doesn't include the debug information,\n> > which is different behavior from configure building system.\n>\n> Hm. I'm not sure it's the right call to make extensions build the same way as\n> the main postgres install with regard to optimization and debug info. So I\n> feel a bit hesitant around generating -g and particularly -Ox. But it's\n> historically what we've done...\n>\n> If we want to do so, I think this should not check buildtype, but debug.\n\nI'm confused which *debug* do you mean, can you be more specific?\n>\n>\n> > Another issue I found is that some C++\n> > extensions(ajust/parquet_fdw for example) don't build against\n> > the meson generated pgxs.mk, since it doesn't set the CXX\n> > command. CXX is only set when llvm option is enabled, which\n> > is different from old building system.\n>\n> I wanted to skip the C++ tests when we don't need C++, because it makes\n> configure take longer. But I could be convinced that we should always at least\n> determine the C++ compiler for Makefile.global.\n\nThe first idea that came to my mind is using the *project* command\nto set [`c`, `cpp`], but this might be a little bit confusing for somebody.\n\nThen I tried another way by adding a 'pgxscpp' option to let the user\nchoose whether he will set the C++ compiler for Makefile.global.\nIt works but may not be an ideal way, see the attached.\n\n\n>\n> Greetings,\n>\n> Andres Freund\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Fri, 22 Dec 2023 17:44:20 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [meson] expose buildtype debug/optimization info to pg_config"
},
{
"msg_contents": "On 14.12.23 10:24, Junwang Zhao wrote:\n> On Thu, Dec 14, 2023 at 4:50 PM Peter Eisentraut <[email protected]> wrote:\n>>\n>> On 12.12.23 11:40, Junwang Zhao wrote:\n>>> build system using configure set VAL_CFLAGS with debug and\n>>> optimization flags, so pg_config will show these infos. Some\n>>> extensions depend on the mechanism.\n>>>\n>>> This patch exposes these flags with a typo fixed together.\n>>\n>> I have committed the typo fix.\n>>\n>> But I would like to learn more about the requirements of extensions in\n>> this area. This seems a bit suspicious to me.\n> \n> This is what I found when building citus against an installation\n> of meson debug build pg instance, since the CFLAGS doesn't\n> contain -g flag, the binary doesn't include the debug information,\n> which is different behavior from configure building system.\n\nOk, that makes sense.\n\nI think a better place to add those options would the variable \nvar_cflags, which are the combined C flags that we export to \nMakefile.global and pg_config. The cflags variable that you used is \nmore for internal use, for passing to the actual compilation commands, \nso adding more options there would be duplicative.\n\nAnd then set var_cxxflags as well.\n\nMaybe you should also check whether the compiler takes unix-style \narguments, perhaps using cc.get_argument_syntax().\n\n\n\n",
"msg_date": "Tue, 26 Dec 2023 20:58:55 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [meson] expose buildtype debug/optimization info to pg_config"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nCurrently walrcv->walRcvState is set to WALRCV_STREAMING at the\nbeginning of WalReceiverMain().\n\nBut it seems that after this assignment things could be wrong before the\nwalreicever actually starts streaming (like not being able to connect\nto the primary).\n\nIt looks to me that WALRCV_STREAMING should be set once walrcv_startstreaming()\nreturns true: this is the proposal of this patch.\n\nI don't think the current assignment location is causing any issues, but I\nthink it's more appropriate to move it like in the attached.\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 12 Dec 2023 16:58:58 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Move walreceiver state assignment (to WALRCV_STREAMING) in\n WalReceiverMain()"
},
{
"msg_contents": "On Tue, Dec 12, 2023, at 12:58 PM, Drouvot, Bertrand wrote:\n> Currently walrcv->walRcvState is set to WALRCV_STREAMING at the\n> beginning of WalReceiverMain().\n> \n> But it seems that after this assignment things could be wrong before the\n> walreicever actually starts streaming (like not being able to connect\n> to the primary).\n> \n> It looks to me that WALRCV_STREAMING should be set once walrcv_startstreaming()\n> returns true: this is the proposal of this patch.\n\nPer the state name (streaming), it seems it should be set later as you\nproposed, however, I'm concerned about the previous state (WALRCV_STARTING). If\nI'm reading the code correctly, WALRCV_STARTING is assigned at\nRequestXLogStreaming():\n\n SetInstallXLogFileSegmentActive();\n RequestXLogStreaming(tli, ptr, PrimaryConnInfo,\n PrimarySlotName,\n wal_receiver_create_temp_slot);\n flushedUpto = 0; \n }\n\n /*\n * Check if WAL receiver is active or wait to start up.\n */\n if (!WalRcvStreaming())\n { \n lastSourceFailed = true;\n break;\n }\n\nAfter a few lines the function WalRcvStreaming() has:\n\n SpinLockRelease(&walrcv->mutex);\n\n /* \n * If it has taken too long for walreceiver to start up, give up. Setting\n * the state to STOPPED ensures that if walreceiver later does start up\n * after all, it will see that it's not supposed to be running and die\n * without doing anything.\n */\n if (state == WALRCV_STARTING)\n { \n pg_time_t now = (pg_time_t) time(NULL);\n\n if ((now - startTime) > WALRCV_STARTUP_TIMEOUT)\n { \n bool stopped = false;\n\n SpinLockAcquire(&walrcv->mutex);\n if (walrcv->walRcvState == WALRCV_STARTING)\n { \n state = walrcv->walRcvState = WALRCV_STOPPED;\n stopped = true;\n }\n SpinLockRelease(&walrcv->mutex);\n\n if (stopped)\n ConditionVariableBroadcast(&walrcv->walRcvStoppedCV);\n }\n } \n\nCouldn't it give up before starting if you apply your patch? My main concern is\ndue to a slow system, the walrcv_connect() took to long in WalReceiverMain()\nand the code above kills the walreceiver while in the process to start it.\nSince you cannot control the hardcoded WALRCV_STARTUP_TIMEOUT value, you might\nhave issues during overload periods.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Tue, Dec 12, 2023, at 12:58 PM, Drouvot, Bertrand wrote:Currently walrcv->walRcvState is set to WALRCV_STREAMING at thebeginning of WalReceiverMain().But it seems that after this assignment things could be wrong before thewalreicever actually starts streaming (like not being able to connectto the primary).It looks to me that WALRCV_STREAMING should be set once walrcv_startstreaming()returns true: this is the proposal of this patch.Per the state name (streaming), it seems it should be set later as youproposed, however, I'm concerned about the previous state (WALRCV_STARTING). IfI'm reading the code correctly, WALRCV_STARTING is assigned atRequestXLogStreaming(): SetInstallXLogFileSegmentActive(); RequestXLogStreaming(tli, ptr, PrimaryConnInfo, PrimarySlotName, wal_receiver_create_temp_slot); flushedUpto = 0; } /* * Check if WAL receiver is active or wait to start up. */ if (!WalRcvStreaming()) { lastSourceFailed = true; break; }After a few lines the function WalRcvStreaming() has: SpinLockRelease(&walrcv->mutex); /* * If it has taken too long for walreceiver to start up, give up. Setting * the state to STOPPED ensures that if walreceiver later does start up * after all, it will see that it's not supposed to be running and die * without doing anything. 
*/\n if (state == WALRCV_STARTING)\n { \n pg_time_t now = (pg_time_t) time(NULL);\n\n if ((now - startTime) > WALRCV_STARTUP_TIMEOUT)\n { \n bool stopped = false;\n\n SpinLockAcquire(&walrcv->mutex);\n if (walrcv->walRcvState == WALRCV_STARTING)\n { \n state = walrcv->walRcvState = WALRCV_STOPPED;\n stopped = true;\n }\n SpinLockRelease(&walrcv->mutex);\n\n if (stopped)\n ConditionVariableBroadcast(&walrcv->walRcvStoppedCV);\n }\n } \n\nCouldn't it give up before starting if you apply your patch? My main concern is\ndue to a slow system, the walrcv_connect() took to long in WalReceiverMain()\nand the code above kills the walreceiver while in the process to start it.\nSince you cannot control the hardcoded WALRCV_STARTUP_TIMEOUT value, you might\nhave issues during overload periods.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 12 Dec 2023 16:54:32 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move walreceiver state assignment (to WALRCV_STREAMING) in\n WalReceiverMain()"
},
{
"msg_contents": "On Tue, Dec 12, 2023 at 04:54:32PM -0300, Euler Taveira wrote:\n> Couldn't it give up before starting if you apply your patch? My main concern is\n> due to a slow system, the walrcv_connect() took to long in WalReceiverMain()\n> and the code above kills the walreceiver while in the process to start it.\n> Since you cannot control the hardcoded WALRCV_STARTUP_TIMEOUT value, you might\n> have issues during overload periods.\n\nSounds like a fair point to me, this area is trickier than it looks.\nAnother thing that I'm a bit surprised with is why it would be OK to\nswitch the status to STREAMING only we first_stream is set, discarding\nthe restart case.\n--\nMichael",
"msg_date": "Wed, 13 Dec 2023 15:33:31 +0100",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move walreceiver state assignment (to WALRCV_STREAMING) in\n WalReceiverMain()"
},
{
"msg_contents": "Hi,\n\nOn 12/12/23 8:54 PM, Euler Taveira wrote:\n> On Tue, Dec 12, 2023, at 12:58 PM, Drouvot, Bertrand wrote:\n>> Currently walrcv->walRcvState is set to WALRCV_STREAMING at the\n>> beginning of WalReceiverMain().\n>>\n>> But it seems that after this assignment things could be wrong before the\n>> walreicever actually starts streaming (like not being able to connect\n>> to the primary).\n>>\n>> It looks to me that WALRCV_STREAMING should be set once walrcv_startstreaming()\n>> returns true: this is the proposal of this patch.\n> \n> Per the state name (streaming), it seems it should be set later as you\n> proposed, \n\nThanks for looking at it!\n\n> however, I'm concerned about the previous state (WALRCV_STARTING). If\n> I'm reading the code correctly, WALRCV_STARTING is assigned at\n> RequestXLogStreaming():\n> \n> SetInstallXLogFileSegmentActive();\n> RequestXLogStreaming(tli, ptr, PrimaryConnInfo,\n> PrimarySlotName,\n> wal_receiver_create_temp_slot);\n> flushedUpto = 0;\n> }\n> \n> /*\n> * Check if WAL receiver is active or wait to start up.\n> */\n> if (!WalRcvStreaming())\n> {\n> lastSourceFailed = true;\n> break;\n> }\n> \n> After a few lines the function WalRcvStreaming() has:\n> \n> SpinLockRelease(&walrcv->mutex);\n> \n> /*\n> * If it has taken too long for walreceiver to start up, give up. Setting\n> * the state to STOPPED ensures that if walreceiver later does start up\n> * after all, it will see that it's not supposed to be running and die\n> * without doing anything.\n> */\n> if (state == WALRCV_STARTING)\n> {\n> pg_time_t now = (pg_time_t) time(NULL);\n> \n> if ((now - startTime) > WALRCV_STARTUP_TIMEOUT)\n> {\n> bool stopped = false;\n> \n> SpinLockAcquire(&walrcv->mutex);\n> if (walrcv->walRcvState == WALRCV_STARTING)\n> {\n> state = walrcv->walRcvState = WALRCV_STOPPED;\n> stopped = true;\n> }\n> SpinLockRelease(&walrcv->mutex);\n> \n> if (stopped)\n> ConditionVariableBroadcast(&walrcv->walRcvStoppedCV);\n> }\n> }\n> \n> Couldn't it give up before starting if you apply your patch? My main concern is\n> due to a slow system, the walrcv_connect() took to long in WalReceiverMain()\n> and the code above kills the walreceiver while in the process to start it.\n\nYeah, so it looks to me that the sequence of events is:\n\n1) The startup process sets walrcv->walRcvState = WALRCV_STARTING (in RequestXLogStreaming())\n2) The startup process sets the walrcv->startTime (in RequestXLogStreaming())\n3) The startup process asks then the postmaster to starts the walreceiver\n4) Then The startup process checks if WalRcvStreaming() is true\n\nNote that 3) is not waiting for the walreceiver to actually start: it \"just\" sets\na flag and kill (SIGUSR1) the postmaster (in SendPostmasterSignal()).\n\nIt means that if the time between 1 and 4 is <= WALRCV_STARTUP_TIMEOUT (10 seconds)\nthen WalRcvStreaming() returns true (even if the walreceiver is not streaming yet).\n\nSo it looks to me that even if the walreceiver does take time to start streaming,\nas long as the time between 1 and 4 is <= 10 seconds we are fine.\n\nAnd I think it's fine because WalRcvStreaming() does not actually \"only\" check that the\nwalreceiver is streaming but as its comment states:\n\n\"\n/*\n * Is walreceiver running and streaming (or at least attempting to connect,\n * or starting up)?\n */\n\"\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Dec 2023 11:18:10 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Move walreceiver state assignment (to WALRCV_STREAMING) in\n WalReceiverMain()"
},
{
"msg_contents": "Hi,\n\nOn 12/13/23 3:33 PM, Michael Paquier wrote:\n> On Tue, Dec 12, 2023 at 04:54:32PM -0300, Euler Taveira wrote:\n>> Couldn't it give up before starting if you apply your patch? My main concern is\n>> due to a slow system, the walrcv_connect() took to long in WalReceiverMain()\n>> and the code above kills the walreceiver while in the process to start it.\n>> Since you cannot control the hardcoded WALRCV_STARTUP_TIMEOUT value, you might\n>> have issues during overload periods.\n> \n> Sounds like a fair point to me, \n\nThanks for looking at it! I'm not sure about it, see my comment in [1].\n\n> Another thing that I'm a bit surprised with is why it would be OK to\n> switch the status to STREAMING only we first_stream is set, discarding\n> the restart case.\n\nYeah, that looks like a miss on my side. Thanks for pointing out!\n\nPlease find attached v2 addressing this remark.\n\n[1]: https://www.postgresql.org/message-id/c76c0a65-f754-4614-b616-1d48f9195745%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Dec 2023 11:36:25 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Move walreceiver state assignment (to WALRCV_STREAMING) in\n WalReceiverMain()"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4698/\n[2] https://cirrus-ci.com/task/5367036042280960\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 16:14:46 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move walreceiver state assignment (to WALRCV_STREAMING) in\n WalReceiverMain()"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jan 22, 2024 at 04:14:46PM +1100, Peter Smith wrote:\n> 2024-01 Commitfest.\n> \n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there were CFbot test failures last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n\nThanks for the warning! I don't think the current code is causing any issues so\ngiven the feedback I've had so far I think I'll withdraw the patch.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 22 Jan 2024 13:37:52 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move walreceiver state assignment (to WALRCV_STREAMING) in\n WalReceiverMain()"
}
] |
[
{
"msg_contents": "Prevent tuples to be marked as dead in subtransactions on standbys\n\nDead tuples are ignored and are not marked as dead during recovery, as\nit can lead to MVCC issues on a standby because its xmin may not match\nwith the primary. This information is tracked by a field called\n\"xactStartedInRecovery\" in the transaction state data, switched on when\nstarting a transaction in recovery.\n\nUnfortunately, this information was not correctly tracked when starting\na subtransaction, because the transaction state used for the\nsubtransaction did not update \"xactStartedInRecovery\" based on the state\nof its parent. This would cause index scans done in subtransactions to\nreturn inconsistent data, depending on how the xmin of the primary\nand/or the standby evolved.\n\nThis is broken since the introduction of hot standby in efc16ea52067, so\nbackpatch all the way down.\n\nAuthor: Fei Changhong\nReviewed-by: Kyotaro Horiguchi\nDiscussion: https://postgr.es/m/[email protected]\nBackpatch-through: 12\n\nBranch\n------\nREL_15_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/f5d8f59cae3e0befda51a97cc3715fa6caf7105d\n\nModified Files\n--------------\nsrc/backend/access/transam/xact.c | 1 +\n1 file changed, 1 insertion(+)",
"msg_date": "Tue, 12 Dec 2023 16:06:51 +0000",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: Prevent tuples to be marked as dead in subtransactions on\n standb"
},
{
"msg_contents": "On Tue, Dec 12, 2023 at 11:06 AM Michael Paquier <[email protected]> wrote:\n> Prevent tuples to be marked as dead in subtransactions on standbys\n\nI don't think this is a good commit message. It's totally unclear what\nit means, and when I opened up the diff to try to see what was\nchanged, it looked nothing like what I expected.\n\nI think a better message would have been something like\n\"startedInRecovery flag must be propagated to subtransactions\". And I\nthink there should have been some analysis in the commit message or\nthe comments within the commit itself of whether it was intended that\nthis be propagated to subtransactions or not. It's hard to understand\nwhy the flag would have been placed in the TransactionState if it\napplied globally to the transaction and all subtransactions, but maybe\nthat's what happened.\n\nInstead of discussing that issue, your commit message focuses in the\nuser-visible consequences, but in a sort of baffling way. The\nstatement that \"Dead tuples are ignored and are not marked as dead\nduring recovery,\" for example, is clearly false on its face. If\nrecovery didn't mark dead tuples as dead, it would be completely\nbroken.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Dec 2023 15:15:42 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Prevent tuples to be marked as dead in subtransactions on\n standb"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 03:15:42PM -0500, Robert Haas wrote:\n> I don't think this is a good commit message. It's totally unclear what\n> it means, and when I opened up the diff to try to see what was\n> changed, it looked nothing like what I expected.\n\n(Not my intention to ignore you here, my head has been underwater for\nsome time.)\n\n> I think a better message would have been something like\n> \"startedInRecovery flag must be propagated to subtransactions\".\n\nYou are right, sorry about that. Something like \"Propagate properly\nstartedInRecovery in subtransactions started during recovery\" would\nhave been better than what I used.\n\n> And I\n> think there should have been some analysis in the commit message or\n> the comments within the commit itself of whether it was intended that\n> this be propagated to subtransactions or not. It's hard to understand\n> why the flag would have been placed in the TransactionState if it\n> applied globally to the transaction and all subtransactions, but maybe\n> that's what happened.\n\nAlvaro has mentioned something like that on the original thread where\nwe could use comments when a transaction state is pushed to a\nsubtransaction to track better the fields used and/or not used.\nDocumenting more all that at the top of TransactionStateData is\nsomething we should do.\n\n> Instead of discussing that issue, your commit message focuses in the\n> user-visible consequences, but in a sort of baffling way. The\n> statement that \"Dead tuples are ignored and are not marked as dead\n> during recovery,\" for example, is clearly false on its face. If\n> recovery didn't mark dead tuples as dead, it would be completely\n> broken.\n\nRather than \"dead\" tuples, I implied \"killed\" tuples in this sentence.\nKilled tuples are ignored during recovery.\n--\nMichael",
"msg_date": "Sun, 24 Dec 2023 11:30:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Prevent tuples to be marked as dead in subtransactions on\n standb"
}
] |
[
{
"msg_contents": "The script was using a few deprecated things according to POSIX:\n\n- -o instead of ||\n- egrep\n- `` instead of $()\n\nI removed those for their \"modern\" equivalents. Hopefully no buildfarm \nmember complains. I can remove any of those patches though. I did go \nahead and remove egrep usage from the entire codebase while I was at it. \nThere is still a configure check though. I'm thinking that could also be \nremoved?\n\nI moved system detection to use uname -s. I hope that isn't a big deal. \nNot sure the best way to identify Mac otherwise.\n\nThe big patch here is adding support for Mac. objdump -W doesn't work on \nMac. So, I used dsymutil and dwarfdump to achieve the same result. I am \nnot someone who ever uses awk, so someone should definitely check my \nwork there. I can only confirm this works on the latest version of Mac, \nand have no clue how backward compatible it is. I also wrote this \nwithout having a Mac. I had to ping a coworker with a Mac for help.\n\nMy goal with the Mac support is to enable use of find_typedef for \nextension developers, where using a Mac might be more prominent than \nupstream Postgres development, but that is just a guess.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Tue, 12 Dec 2023 15:16:10 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> The big patch here is adding support for Mac. objdump -W doesn't work on \n> Mac. So, I used dsymutil and dwarfdump to achieve the same result.\n\nWe should probably nuke the current version of src/tools/find_typedef\naltogether in favor of copying the current buildfarm code for that.\nWe know that the buildfarm's version works, whereas I'm not real sure\nthat src/tools/find_typedef is being used in anger by anyone. Also,\nwe're long past the point where developers can avoid having Perl\ninstalled.\n\nIdeally, the buildfarm client would then start to use find_typedef\nfrom the tree rather than have its own copy, both to reduce\nduplication and to ensure that the in-tree copy keeps working.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Dec 2023 18:02:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "On Tue Dec 12, 2023 at 5:02 PM CST, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > The big patch here is adding support for Mac. objdump -W doesn't work on \n> > Mac. So, I used dsymutil and dwarfdump to achieve the same result.\n>\n> We should probably nuke the current version of src/tools/find_typedef\n> altogether in favor of copying the current buildfarm code for that.\n> We know that the buildfarm's version works, whereas I'm not real sure\n> that src/tools/find_typedef is being used in anger by anyone. Also,\n> we're long past the point where developers can avoid having Perl\n> installed.\n>\n> Ideally, the buildfarm client would then start to use find_typedef\n> from the tree rather than have its own copy, both to reduce\n> duplication and to ensure that the in-tree copy keeps working.\n\nThat makes sense to me. Where can I find the buildfarm code to propose \na different patch, at least pulling in the current state of the \nbuildfarm script? Or perhaps Andrew is the best person for this job.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 13 Dec 2023 11:23:50 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> That makes sense to me. Where can I find the buildfarm code to propose \n> a different patch, at least pulling in the current state of the \n> buildfarm script? Or perhaps Andrew is the best person for this job.\n\nI think this is the authoritative repo:\n\nhttps://github.com/PGBuildFarm/client-code.git\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Dec 2023 12:27:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "On Wed Dec 13, 2023 at 11:27 AM CST, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > That makes sense to me. Where can I find the buildfarm code to propose \n> > a different patch, at least pulling in the current state of the \n> > buildfarm script? Or perhaps Andrew is the best person for this job.\n>\n> I think this is the authoritative repo:\n>\n> https://github.com/PGBuildFarm/client-code.git\n\nCool. I'll reach out to Andrew off list to work with him. Maybe I'll \ngain a little bit more knowledge of how the buildfarm works :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 13 Dec 2023 12:02:20 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "\nOn 2023-12-12 Tu 18:02, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n>> The big patch here is adding support for Mac. objdump -W doesn't work on\n>> Mac. So, I used dsymutil and dwarfdump to achieve the same result.\n> We should probably nuke the current version of src/tools/find_typedef\n> altogether in favor of copying the current buildfarm code for that.\n> We know that the buildfarm's version works, whereas I'm not real sure\n> that src/tools/find_typedef is being used in anger by anyone. Also,\n> we're long past the point where developers can avoid having Perl\n> installed.\n>\n> Ideally, the buildfarm client would then start to use find_typedef\n> from the tree rather than have its own copy, both to reduce\n> duplication and to ensure that the in-tree copy keeps working.\n>\n> \t\t\t\n\n\n+1. I'd be more than happy to be rid of maintaining the code. We already \nperformed a rather more complicated operation along these lines with the \nPostgreSQL::Test::AdjustUpgrade stuff.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 13 Dec 2023 15:35:40 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "On Wed Dec 13, 2023 at 2:35 PM CST, Andrew Dunstan wrote:\n>\n> On 2023-12-12 Tu 18:02, Tom Lane wrote:\n> > \"Tristan Partin\" <[email protected]> writes:\n> >> The big patch here is adding support for Mac. objdump -W doesn't work on\n> >> Mac. So, I used dsymutil and dwarfdump to achieve the same result.\n> > We should probably nuke the current version of src/tools/find_typedef\n> > altogether in favor of copying the current buildfarm code for that.\n> > We know that the buildfarm's version works, whereas I'm not real sure\n> > that src/tools/find_typedef is being used in anger by anyone. Also,\n> > we're long past the point where developers can avoid having Perl\n> > installed.\n> >\n> > Ideally, the buildfarm client would then start to use find_typedef\n> > from the tree rather than have its own copy, both to reduce\n> > duplication and to ensure that the in-tree copy keeps working.\n> >\n> > \t\t\t\n>\n>\n> +1. I'd be more than happy to be rid of maintaining the code. We already \n> performed a rather more complicated operation along these lines with the \n> PostgreSQL::Test::AdjustUpgrade stuff.\n\nI'll work with you on GitHub to help make the transition. I've already \nmade a draft PR in the client-code repo, but I am sure I am missing some \nstuff.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 13 Dec 2023 14:59:11 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "\nOn 2023-12-13 We 15:59, Tristan Partin wrote:\n> On Wed Dec 13, 2023 at 2:35 PM CST, Andrew Dunstan wrote:\n>>\n>> On 2023-12-12 Tu 18:02, Tom Lane wrote:\n>> > \"Tristan Partin\" <[email protected]> writes:\n>> >> The big patch here is adding support for Mac. objdump -W doesn't \n>> work on\n>> >> Mac. So, I used dsymutil and dwarfdump to achieve the same result.\n>> > We should probably nuke the current version of src/tools/find_typedef\n>> > altogether in favor of copying the current buildfarm code for that.\n>> > We know that the buildfarm's version works, whereas I'm not real sure\n>> > that src/tools/find_typedef is being used in anger by anyone. Also,\n>> > we're long past the point where developers can avoid having Perl\n>> > installed.\n>> >\n>> > Ideally, the buildfarm client would then start to use find_typedef\n>> > from the tree rather than have its own copy, both to reduce\n>> > duplication and to ensure that the in-tree copy keeps working.\n>> >\n>> >\n>>\n>>\n>> +1. I'd be more than happy to be rid of maintaining the code. We \n>> already performed a rather more complicated operation along these \n>> lines with the PostgreSQL::Test::AdjustUpgrade stuff.\n>\n> I'll work with you on GitHub to help make the transition. I've already \n> made a draft PR in the client-code repo, but I am sure I am missing \n> some stuff.\n\n\n\nI think we're putting the cart before the horse here. I think we need a \nperl module in core that can be loaded by the buildfarm code, and could \nalso be used by a standalone find_typedef (c.f. \nsrc/test/PostgreSQL/Test/AdjustUpgrade.pm). To be useful that would have \nto be backported.\n\nI'll have a go at that.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 14 Dec 2023 10:16:44 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "On Thu Dec 14, 2023 at 9:16 AM CST, Andrew Dunstan wrote:\n>\n> On 2023-12-13 We 15:59, Tristan Partin wrote:\n> > On Wed Dec 13, 2023 at 2:35 PM CST, Andrew Dunstan wrote:\n> >>\n> >> On 2023-12-12 Tu 18:02, Tom Lane wrote:\n> >> > \"Tristan Partin\" <[email protected]> writes:\n> >> >> The big patch here is adding support for Mac. objdump -W doesn't \n> >> work on\n> >> >> Mac. So, I used dsymutil and dwarfdump to achieve the same result.\n> >> > We should probably nuke the current version of src/tools/find_typedef\n> >> > altogether in favor of copying the current buildfarm code for that.\n> >> > We know that the buildfarm's version works, whereas I'm not real sure\n> >> > that src/tools/find_typedef is being used in anger by anyone. Also,\n> >> > we're long past the point where developers can avoid having Perl\n> >> > installed.\n> >> >\n> >> > Ideally, the buildfarm client would then start to use find_typedef\n> >> > from the tree rather than have its own copy, both to reduce\n> >> > duplication and to ensure that the in-tree copy keeps working.\n> >> >\n> >> >\n> >>\n> >>\n> >> +1. I'd be more than happy to be rid of maintaining the code. We \n> >> already performed a rather more complicated operation along these \n> >> lines with the PostgreSQL::Test::AdjustUpgrade stuff.\n> >\n> > I'll work with you on GitHub to help make the transition. I've already \n> > made a draft PR in the client-code repo, but I am sure I am missing \n> > some stuff.\n>\n>\n>\n> I think we're putting the cart before the horse here. I think we need a \n> perl module in core that can be loaded by the buildfarm code, and could \n> also be used by a standalone find_typedef (c.f. \n> src/test/PostgreSQL/Test/AdjustUpgrade.pm). To be useful that would have \n> to be backported.\n>\n> I'll have a go at that.\n\nHere is an adaptation of the find_typedefs from run_build.pl. Maybe it \nwill help you.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Thu, 14 Dec 2023 09:36:59 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "On 2023-12-14 Th 10:36, Tristan Partin wrote:\n> On Thu Dec 14, 2023 at 9:16 AM CST, Andrew Dunstan wrote:\n>>\n>> On 2023-12-13 We 15:59, Tristan Partin wrote:\n>> > On Wed Dec 13, 2023 at 2:35 PM CST, Andrew Dunstan wrote:\n>> >>\n>> >> On 2023-12-12 Tu 18:02, Tom Lane wrote:\n>> >> > \"Tristan Partin\" <[email protected]> writes:\n>> >> >> The big patch here is adding support for Mac. objdump -W \n>> doesn't >> work on\n>> >> >> Mac. So, I used dsymutil and dwarfdump to achieve the same result.\n>> >> > We should probably nuke the current version of \n>> src/tools/find_typedef\n>> >> > altogether in favor of copying the current buildfarm code for that.\n>> >> > We know that the buildfarm's version works, whereas I'm not real \n>> sure\n>> >> > that src/tools/find_typedef is being used in anger by anyone. \n>> Also,\n>> >> > we're long past the point where developers can avoid having Perl\n>> >> > installed.\n>> >> >\n>> >> > Ideally, the buildfarm client would then start to use find_typedef\n>> >> > from the tree rather than have its own copy, both to reduce\n>> >> > duplication and to ensure that the in-tree copy keeps working.\n>> >> >\n>> >> >\n>> >>\n>> >>\n>> >> +1. I'd be more than happy to be rid of maintaining the code. We \n>> >> already performed a rather more complicated operation along these \n>> >> lines with the PostgreSQL::Test::AdjustUpgrade stuff.\n>> >\n>> > I'll work with you on GitHub to help make the transition. I've \n>> already > made a draft PR in the client-code repo, but I am sure I am \n>> missing > some stuff.\n>>\n>>\n>>\n>> I think we're putting the cart before the horse here. I think we need \n>> a perl module in core that can be loaded by the buildfarm code, and \n>> could also be used by a standalone find_typedef (c.f. \n>> src/test/PostgreSQL/Test/AdjustUpgrade.pm). To be useful that would \n>> have to be backported.\n>>\n>> I'll have a go at that.\n>\n> Here is an adaptation of the find_typedefs from run_build.pl. Maybe it \n> will help you.\n\n\n\nHere's more or less what I had in mind.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 15 Dec 2023 08:42:57 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> Here's more or less what I had in mind.\n\nDo we really need the dependency on an install tree?\nCan't we just find the executables (or .o files for Darwin)\nin the build tree? Seems like that would simplify usage,\nand reduce the possibility for version-skew errors.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Dec 2023 11:06:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "> if ($using_osx)\n> {\n> # On OS X, we need to examine the .o files\n> # exclude ecpg/test, which pgindent does too\n> my $obj_wanted = sub {\n> /^.*\\.o\\z/s\n> && !($File::Find::name =~ m!/ecpg/test/!s)\n> && push(@testfiles, $File::Find::name);\n> };\n> \n> File::Find::find($obj_wanted, $bindir);\n> }\n\nI think we should use dsymutil --flat to assemble .dwarf files on Mac \ninstead of inspecting the plain object files. This would allow you to \nuse the script in the same way on presumably any system that Postgres \nsupports, meaning we can drop this distinction:\n\n> # The second argument is a directory. For everything except OSX it should be\n> # a directory where postgres is installed (e.g. $installdir for the buildfarm).\n> # It should have bin and lib subdirectories. On OSX it should instead be the\n> # top of the build tree, as we need to examine the individual object files.\n\n> my @err = `$objdump -W 2>&1`;\n> my @readelferr = `readelf -w 2>&1`;\n> my $using_osx = (`uname` eq \"Darwin\\n\");\n\nIs there any reason we can't use uname -s to detect the system instead \nof relying on error condition heuristics of readelf and objdump?\n\n> # Note that this assumes there is not a platform-specific subdirectory of\n> # lib like meson likes to use. (The buildfarm avoids this by specifying\n> # --libdir=lib to meson setup.)\n\nShould we just default the Meson build to libdir=lib in \nproject(default_options:)? This assumes that you don't think what Tom \nsaid about running it on the build tree is better.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 15 Dec 2023 10:08:58 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "On Fri Dec 15, 2023 at 10:06 AM CST, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n> > Here's more or less what I had in mind.\n>\n> Do we really need the dependency on an install tree?\n> Can't we just find the executables (or .o files for Darwin)\n> in the build tree? Seems like that would simplify usage,\n> and reduce the possibility for version-skew errors.\n\nSeems like you would be forcing an extension author to keep a Postgres \nsource tree around if you went this route. Perhaps supporting either the \nbuild tree or an install tree would get you the best of both worlds.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 15 Dec 2023 10:10:14 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
},
{
"msg_contents": "\nOn 2023-12-15 Fr 11:06, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> Here's more or less what I had in mind.\n> Do we really need the dependency on an install tree?\n> Can't we just find the executables (or .o files for Darwin)\n> in the build tree? Seems like that would simplify usage,\n> and reduce the possibility for version-skew errors.\n>\n> \t\t\t\n\n\nSure, I just picked up the buildfarm code more or less without any \nexcept necessary modifications. I don't remember the history - we've \ndone it that way for a good long while.\n\nWe'll need a way to identify the files to analyze, e.g. different \nlibrary and exe extensions.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 15 Dec 2023 12:11:38 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up find_typedefs and add support for Mac"
}
] |
[
{
"msg_contents": "Can postgres client send subsequent requests without receiving response? If\nso, how does the postgres client correlate between a request and its\nresponse? Where can I get more hints about it?\n\nCan postgres client send subsequent requests without receiving response? If so, how does the postgres client correlate between a request and its response? Where can I get more hints about it?",
"msg_date": "Wed, 13 Dec 2023 12:00:35 +0600",
"msg_from": "Abdul Matin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Subsequent request from pg client"
},
{
"msg_contents": "Abdul Matin <[email protected]> writes:\n> Can postgres client send subsequent requests without receiving response? If\n> so, how does the postgres client correlate between a request and its\n> response? Where can I get more hints about it?\n\nhttps://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-PIPELINING\n\nSee also the pipelining-related functions in recent libpq versions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Dec 2023 09:40:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subsequent request from pg client"
}
] |
[
{
"msg_contents": "Hi, Thomas Munro and Laurenz Albe.\n\nSince I didn't subscribe to the psql-hackers mailing list before this bug\nwas raised, please forgive me for not being able to reply to this email by\nplacing the email message below.\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\nI forbid to create indexes on whole-row expression in the following patch.\nI'd like to hear your opinions.\n\n\n--\nBest Wishes,\nywgrit",
"msg_date": "Wed, 13 Dec 2023 15:44:20 +0800",
"msg_from": "ywgrit <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix bug with indexes on whole-row expressions"
},
{
"msg_contents": "ywgrit <[email protected]> writes:\n> I forbid to create indexes on whole-row expression in the following patch.\n> I'd like to hear your opinions.\n\nAs I said in the previous thread, I don't think this can possibly\nbe acceptable. Surely there are people depending on the capability.\nI'm not worried so much about the exact case of an index column\nbeing a whole-row Var --- I agree that that's pretty useless ---\nbut an index column that is a function on a whole-row Var seems\nquite useful. (Your patch fails to detect that, BTW, which means\nit does not block the case presented in bug #18244.)\n\nI thought about extending the ALTER TABLE logic to disallow changes\nin composite types that appear in index expressions. We already have\nfind_composite_type_dependencies(), and it turns out that this already\nblocks ALTER for the case you want to forbid, but we concluded that we\ndidn't need to prevent it for the bug #18244 case:\n\n * If objsubid identifies a specific column, refer to that in error\n * messages. Otherwise, search to see if there's a user column of the\n * type. (We assume system columns are never of interesting types.)\n * The search is needed because an index containing an expression\n * column of the target type will just be recorded as a whole-relation\n * dependency. If we do not find a column of the type, the dependency\n * must indicate that the type is transiently referenced in an index\n * expression but not stored on disk, which we assume is OK, just as\n * we do for references in views. (It could also be that the target\n * type is embedded in some container type that is stored in an index\n * column, but the previous recursion should catch such cases.)\n\nPerhaps a reasonable answer would be to issue a WARNING (not error)\nin the case where an index has this kind of dependency. The index\nmight need to be reindexed --- but it might not, too, and in any case\nI doubt that flat-out forbidding the ALTER is a helpful idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Dec 2023 10:01:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix bug with indexes on whole-row expressions"
},
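The distinction drawn above can be made concrete with a minimal SQL sketch. The table, function, and index names below are hypothetical (they are not taken from bug #18244); the sketch only illustrates the two shapes being discussed, a bare whole-row Var as an index column versus a function applied to a whole-row Var:

    CREATE TABLE wr_demo (a int, b text);

    -- index column that is a whole-row Var (the case of limited usefulness)
    CREATE INDEX wr_demo_row_idx ON wr_demo ((wr_demo));

    -- index column that is a function applied to a whole-row Var
    -- (the case argued to be useful above)
    CREATE FUNCTION wr_demo_key(wr_demo) RETURNS text
        LANGUAGE sql IMMUTABLE
        AS $$ SELECT (($1).a)::text || ':' || ($1).b $$;

    CREATE INDEX wr_demo_func_idx ON wr_demo ((wr_demo_key(wr_demo)));

A later ALTER TABLE that changes wr_demo's row type is the kind of operation whose effect on such indexes is being debated in this thread.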
{
"msg_contents": "On Wed, Dec 13, 2023 at 7:01 AM Tom Lane <[email protected]> wrote:\n\n> ywgrit <[email protected]> writes:\n> > I forbid to create indexes on whole-row expression in the following\n> patch.\n> > I'd like to hear your opinions.\n>\n> As I said in the previous thread, I don't think this can possibly\n> be acceptable. Surely there are people depending on the capability.\n> I'm not worried so much about the exact case of an index column\n> being a whole-row Var --- I agree that that's pretty useless ---\n> but an index column that is a function on a whole-row Var seems\n> quite useful. (Your patch fails to detect that, BTW, which means\n> it does not block the case presented in bug #18244.)\n>\n> I thought about extending the ALTER TABLE logic to disallow changes\n> in composite types that appear in index expressions. We already have\n> find_composite_type_dependencies(), and it turns out that this already\n> blocks ALTER for the case you want to forbid, but we concluded that we\n> didn't need to prevent it for the bug #18244 case:\n>\n> * If objsubid identifies a specific column, refer to that in error\n> * messages. Otherwise, search to see if there's a user column of\n> the\n> * type. (We assume system columns are never of interesting\n> types.)\n> * The search is needed because an index containing an expression\n> * column of the target type will just be recorded as a\n> whole-relation\n> * dependency. If we do not find a column of the type, the\n> dependency\n> * must indicate that the type is transiently referenced in an\n> index\n> * expression but not stored on disk, which we assume is OK, just\n> as\n> * we do for references in views. (It could also be that the\n> target\n> * type is embedded in some container type that is stored in an\n> index\n> * column, but the previous recursion should catch such cases.)\n>\n> Perhaps a reasonable answer would be to issue a WARNING (not error)\n> in the case where an index has this kind of dependency. The index\n> might need to be reindexed --- but it might not, too, and in any case\n> I doubt that flat-out forbidding the ALTER is a helpful idea.\n>\n> regards, tom lane\n>\n\nWARNING can be easily overlooked. Users of mobile/web apps don't see\nPostgres WARNINGs.\n\nForbidding ALTER sounds more reasonable.\n\nDo you see any good use cases for whole-row indexes?\n\nAnd for such cases, wouldn't it be reasonable for users to specify all\ncolumns explicitly? E.g.:\n\n create index on t using btree(row(c1, c2, c3));\n\nOn Wed, Dec 13, 2023 at 7:01 AM Tom Lane <[email protected]> wrote:ywgrit <[email protected]> writes:\n> I forbid to create indexes on whole-row expression in the following patch.\n> I'd like to hear your opinions.\n\nAs I said in the previous thread, I don't think this can possibly\nbe acceptable. Surely there are people depending on the capability.\nI'm not worried so much about the exact case of an index column\nbeing a whole-row Var --- I agree that that's pretty useless ---\nbut an index column that is a function on a whole-row Var seems\nquite useful. (Your patch fails to detect that, BTW, which means\nit does not block the case presented in bug #18244.)\n\nI thought about extending the ALTER TABLE logic to disallow changes\nin composite types that appear in index expressions. 
We already have\nfind_composite_type_dependencies(), and it turns out that this already\nblocks ALTER for the case you want to forbid, but we concluded that we\ndidn't need to prevent it for the bug #18244 case:\n\n * If objsubid identifies a specific column, refer to that in error\n * messages. Otherwise, search to see if there's a user column of the\n * type. (We assume system columns are never of interesting types.)\n * The search is needed because an index containing an expression\n * column of the target type will just be recorded as a whole-relation\n * dependency. If we do not find a column of the type, the dependency\n * must indicate that the type is transiently referenced in an index\n * expression but not stored on disk, which we assume is OK, just as\n * we do for references in views. (It could also be that the target\n * type is embedded in some container type that is stored in an index\n * column, but the previous recursion should catch such cases.)\n\nPerhaps a reasonable answer would be to issue a WARNING (not error)\nin the case where an index has this kind of dependency. The index\nmight need to be reindexed --- but it might not, too, and in any case\nI doubt that flat-out forbidding the ALTER is a helpful idea.\n\n regards, tom laneWARNING can be easily overlooked. Users of mobile/web apps don't see Postgres WARNINGs.Forbidding ALTER sounds more reasonable.Do you see any good use cases for whole-row indexes?And for such cases, wouldn't it be reasonable for users to specify all columns explicitly? E.g.: create index on t using btree(row(c1, c2, c3));",
"msg_date": "Thu, 14 Dec 2023 22:11:29 -0800",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix bug with indexes on whole-row expressions"
},
{
"msg_contents": "Thanks, tom. Considering the scenario where the indexed column is a\nfunction Var on a whole expression, it's really not a good idea to disable\ncreating index on whole expression. I tried\nfind_composite_type_dependencies, it seems that this function can only\ndetect dependencies created by statements such as 'CREATE INDEX\ntest_tbl1_idx ON test_tbl1((row(x,y)::test_type1))', and cannot detect\ndependencies created by statements such as 'CREATE INDEX test_tbl1_idx ON\ntest_tbl1((test _tbl1))'. After the execution of the former sql statement,\n4 rows are added to the pg_depend table, one of which is the index ->\npg_type dependency. After the latter sql statement is executed, only one\nrow is added to the pg_depend table, and there is no index -> pg_type\ndependency, so I guess this function doesn't detect all cases of index on\nwhole-row expression. And I would suggest to do the detection when the\nindex is created, because then we can get the details of the index and give\na warning in the way you mentioned.\n\nTom Lane <[email protected]> 于2023年12月13日周三 23:01写道:\n\n> ywgrit <[email protected]> writes:\n> > I forbid to create indexes on whole-row expression in the following\n> patch.\n> > I'd like to hear your opinions.\n>\n> As I said in the previous thread, I don't think this can possibly\n> be acceptable. Surely there are people depending on the capability.\n> I'm not worried so much about the exact case of an index column\n> being a whole-row Var --- I agree that that's pretty useless ---\n> but an index column that is a function on a whole-row Var seems\n> quite useful. (Your patch fails to detect that, BTW, which means\n> it does not block the case presented in bug #18244.)\n>\n> I thought about extending the ALTER TABLE logic to disallow changes\n> in composite types that appear in index expressions. We already have\n> find_composite_type_dependencies(), and it turns out that this already\n> blocks ALTER for the case you want to forbid, but we concluded that we\n> didn't need to prevent it for the bug #18244 case:\n>\n> * If objsubid identifies a specific column, refer to that in error\n> * messages. Otherwise, search to see if there's a user column of\n> the\n> * type. (We assume system columns are never of interesting\n> types.)\n> * The search is needed because an index containing an expression\n> * column of the target type will just be recorded as a\n> whole-relation\n> * dependency. If we do not find a column of the type, the\n> dependency\n> * must indicate that the type is transiently referenced in an\n> index\n> * expression but not stored on disk, which we assume is OK, just\n> as\n> * we do for references in views. (It could also be that the\n> target\n> * type is embedded in some container type that is stored in an\n> index\n> * column, but the previous recursion should catch such cases.)\n>\n> Perhaps a reasonable answer would be to issue a WARNING (not error)\n> in the case where an index has this kind of dependency. The index\n> might need to be reindexed --- but it might not, too, and in any case\n> I doubt that flat-out forbidding the ALTER is a helpful idea.\n>\n> regards, tom lane\n>\n\nThanks, tom. Considering the scenario where the indexed column is a function Var on a whole expression, it's really not a good idea to disable creating index on whole expression. 
I tried find_composite_type_dependencies, it seems that this function can only detect dependencies created by statements such as 'CREATE INDEX test_tbl1_idx ON test_tbl1((row(x,y)::test_type1))', and cannot detect dependencies created by statements such as 'CREATE INDEX test_tbl1_idx ON test_tbl1((test _tbl1))'. After the execution of the former sql statement, 4 rows are added to the pg_depend table, one of which is the index -> pg_type dependency. After the latter sql statement is executed, only one row is added to the pg_depend table, and there is no index -> pg_type dependency, so I guess this function doesn't detect all cases of index on whole-row expression. And I would suggest to do the detection when the index is created, because then we can get the details of the index and give a warning in the way you mentioned.Tom Lane <[email protected]> 于2023年12月13日周三 23:01写道:ywgrit <[email protected]> writes:\n> I forbid to create indexes on whole-row expression in the following patch.\n> I'd like to hear your opinions.\n\nAs I said in the previous thread, I don't think this can possibly\nbe acceptable. Surely there are people depending on the capability.\nI'm not worried so much about the exact case of an index column\nbeing a whole-row Var --- I agree that that's pretty useless ---\nbut an index column that is a function on a whole-row Var seems\nquite useful. (Your patch fails to detect that, BTW, which means\nit does not block the case presented in bug #18244.)\n\nI thought about extending the ALTER TABLE logic to disallow changes\nin composite types that appear in index expressions. We already have\nfind_composite_type_dependencies(), and it turns out that this already\nblocks ALTER for the case you want to forbid, but we concluded that we\ndidn't need to prevent it for the bug #18244 case:\n\n * If objsubid identifies a specific column, refer to that in error\n * messages. Otherwise, search to see if there's a user column of the\n * type. (We assume system columns are never of interesting types.)\n * The search is needed because an index containing an expression\n * column of the target type will just be recorded as a whole-relation\n * dependency. If we do not find a column of the type, the dependency\n * must indicate that the type is transiently referenced in an index\n * expression but not stored on disk, which we assume is OK, just as\n * we do for references in views. (It could also be that the target\n * type is embedded in some container type that is stored in an index\n * column, but the previous recursion should catch such cases.)\n\nPerhaps a reasonable answer would be to issue a WARNING (not error)\nin the case where an index has this kind of dependency. The index\nmight need to be reindexed --- but it might not, too, and in any case\nI doubt that flat-out forbidding the ALTER is a helpful idea.\n\n regards, tom lane",
"msg_date": "Mon, 18 Dec 2023 14:55:57 +0800",
"msg_from": "ywgrit <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix bug with indexes on whole-row expressions"
}
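One way to check the observation above is to look directly at what a given index records in pg_depend. This is only a sketch, with the hypothetical index name test_tbl1_idx carried over from the message above; an entry whose refclassid is pg_type would be the index -> type dependency that find_composite_type_dependencies() can follow:

    SELECT refclassid::regclass, refobjid, refobjsubid, deptype
    FROM pg_catalog.pg_depend
    WHERE classid = 'pg_catalog.pg_class'::regclass
      AND objid = 'test_tbl1_idx'::regclass;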
] |
[
{
"msg_contents": "I noticed that Logical Streaming Replication Protocol documentation\n[1] is missing the options added to \"pgoutput\" since version 14. A\npatch is attached to fix it together with another small one to give a\nnice error when \"proto_version\" parameter is not provided.\n\n[1] https://www.postgresql.org/docs/devel/protocol-logical-replication.html",
"msg_date": "Wed, 13 Dec 2023 16:54:33 +0100",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"pgoutput\" options missing on documentation"
},
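As a concrete point of reference for the options being documented in this thread, a START_REPLICATION invocation of pgoutput might look like the sketch below. The slot and publication names are hypothetical, the command has to be issued on a replication connection (for example psql with "replication=database" in the connection string), and proto_version '4' together with streaming 'parallel' assumes a v16-or-later server:

    START_REPLICATION SLOT "mysub_slot" LOGICAL 0/0 (
        proto_version '4',
        publication_names 'mypub',
        binary 'true',
        messages 'true',
        streaming 'parallel',
        origin 'none'
    );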
{
"msg_contents": "On Wed, Dec 13, 2023 at 9:25 PM Emre Hasegeli <[email protected]> wrote:\n>\n> I noticed that Logical Streaming Replication Protocol documentation\n> [1] is missing the options added to \"pgoutput\" since version 14. A\n> patch is attached to fix it together with another small one to give a\n> nice error when \"proto_version\" parameter is not provided.\n>\n\nI agree that we missed updating the parameters of the Logical\nStreaming Replication Protocol documentation. I haven't reviewed all\nthe details yet but one minor thing that caught my attention while\nlooking at your patch is that we can update the exact additional\ninformation we started to send for streaming mode parallel. We should\nupdate the following sentence: \"It accepts an additional value\n\"parallel\" to enable sending extra information with the \"Stream Abort\"\nmessages to be used for parallelisation.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 14 Dec 2023 09:04:09 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "Hi, here are some initial review comments.\n\n//////\n\nPatch v00-0001\n\n1.\n+\n+ /* Check required parameters */\n+ if (!protocol_version_given)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"proto_version parameter missing\")));\n+ if (!publication_names_given)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"publication_names parameter missing\")));\n\nTo reduce translation efforts, perhaps it is better to arrange for\nthese to share a common message.\n\nFor example,\n\nereport(ERROR,\n errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n /* translator: %s is a pgoutput option */\n errmsg(\"missing pgoutput option: %s\", \"proto_version\"));\n\n~\n\nereport(ERROR,\n errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n /* translator: %s is a pgoutput option */\n errmsg(\"missing pgoutput option: %s\", \"publication_names\"));\n\nAlso, I am unsure whether to call these \"parameters\" or \"options\" -- I\nwanted to call them parameters like in the documentation, but every\nother message in this function refers to \"options\", so in my example,\nI mimicked the nearby code YMMV.\n\n//////\n\nPatch v00-0002\n\n2.\n- The logical replication <literal>START_REPLICATION</literal> command\n- accepts following parameters:\n+ The standard logical decoding plugin (<literal>pgoutput</literal>) accepts\n+ following parameters with <literal>START_REPLICATION</literal> command:\n\nSince the previous line already said pgoutput is the standard decoding\nplugin, maybe it's not necessary to repeat that.\n\nSUGGESTION\nUsing the <literal>START_REPLICATION</literal> command,\n<literal>pgoutput</literal>) accepts the following parameters:\n\n~~~\n\n3.\nI noticed in the protocol message formats docs [1] that those messages\nare grouped according to the protocol version that supports them.\nPerhaps this page should be arranged similarly for these parameters?\n\nFor example, document the parameters in the order they were introduced.\n\nSUGGESTION\n\n-proto_version\n ...\n-publication_names\n ...\n-binary\n ...\n-messages\n ...\n\nSince protocol version 2:\n\n-streaming (boolean)\n ...\n\nSince protocol version 3:\n\n-two_phase\n ...\n\nSince protocol version 4:\n\n-streaming (boolean/parallel)\n ...\n-origin\n ...\n\n~~~\n\n4.\n+ Boolean parameter to use the binary transfer mode. This is faster\n+ than the text mode, but slightly less robust\n\nSUGGESTION\nBoolean parameter to use binary transfer mode. Binary mode is faster\nthan the text mode but slightly less robust\n\n\n======\n[1] https://www.postgresql.org/docs/current/protocol-logicalrep-message-formats.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 14 Dec 2023 18:53:56 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "> To reduce translation efforts, perhaps it is better to arrange for\n> these to share a common message.\n\nGood idea. I've done so.\n\n> Also, I am unsure whether to call these \"parameters\" or \"options\" -- I\n> wanted to call them parameters like in the documentation, but every\n> other message in this function refers to \"options\", so in my example,\n> I mimicked the nearby code YMMV.\n\nIt looks like they are called \"options\" in most places. I changed the\ndocumentation to be consistent too.\n\n> Since the previous line already said pgoutput is the standard decoding\n> plugin, maybe it's not necessary to repeat that.\n>\n> SUGGESTION\n> Using the <literal>START_REPLICATION</literal> command,\n> <literal>pgoutput</literal>) accepts the following parameters:\n\nChanged.\n\n> 3.\n> I noticed in the protocol message formats docs [1] that those messages\n> are grouped according to the protocol version that supports them.\n> Perhaps this page should be arranged similarly for these parameters?\n>\n> For example, document the parameters in the order they were introduced.\n>\n> SUGGESTION\n>\n> -proto_version\n> ...\n> -publication_names\n> ...\n> -binary\n> ...\n> -messages\n> ...\n>\n> Since protocol version 2:\n>\n> -streaming (boolean)\n> ...\n>\n> Since protocol version 3:\n>\n> -two_phase\n> ...\n>\n> Since protocol version 4:\n>\n> -streaming (boolean/parallel)\n> ...\n> -origin\n\nThis is not going to be correct because not all options do require a\nprotocol version. \"origin\" is added in version 16, but doesn't check\nfor any \"proto_version\". Perhaps we should fix this too.\n\n> 4.\n> + Boolean parameter to use the binary transfer mode. This is faster\n> + than the text mode, but slightly less robust\n>\n> SUGGESTION\n> Boolean parameter to use binary transfer mode. Binary mode is faster\n> than the text mode but slightly less robust\n\nDone.\n\nThanks for the review.\n\nThe new versions are attached.",
"msg_date": "Thu, 14 Dec 2023 11:14:33 +0100",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "> I agree that we missed updating the parameters of the Logical\n> Streaming Replication Protocol documentation. I haven't reviewed all\n> the details yet but one minor thing that caught my attention while\n> looking at your patch is that we can update the exact additional\n> information we started to send for streaming mode parallel. We should\n> update the following sentence: \"It accepts an additional value\n> \"parallel\" to enable sending extra information with the \"Stream Abort\"\n> messages to be used for parallelisation.\"\n\nI changed this in the new version.\n\nThank you for picking this up.\n\n\n",
"msg_date": "Thu, 14 Dec 2023 11:15:25 +0100",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "Thanks for the update. Here are some more review comments for the v01* patches.\n\n//////\n\nPatch v00-0001\n\nv01 modified the messages more than I was expecting, although what you\ndid looks fine to me.\n\n~~~\n\n1.\n+ /* Check required options */\n+ if (!protocol_version_given)\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ /* translator: %s is a pgoutput option */\n+ errmsg(\"missing pgoutput option: %s\", \"proto_version\"));\n+ if (!publication_names_given)\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ /* translator: %s is a pgoutput option */\n+ errmsg(\"missing pgoutput option: %s\", \"publication_names\"));\n\nI saw that the original \"publication_names\" error was using\nerrcode(ERRCODE_INVALID_PARAMETER_VALUE), but TBH since there is no\noption given at all I felt ERRCODE_SYNTAX_ERROR might be more\nappropriate errcode for those 2 mandatory option errors.\n\n//////\n\nPatch v00-0002\n\n2.\n\nI saw that the chapter \"55.4. Streaming Replication Protocol\" [1]\nintroduces \"START_REPLICATION SLOT slot_name LOGICAL ...\" command and\nit says\n---\noption_name\nThe name of an option passed to the slot's logical decoding plugin.\n---\n\nPerhaps that part should now include a reference to your new information:\n\nSUGGESTION\noption_name\nThe name of an option passed to the slot's logical decoding plugin.\nPlease see <XXX (55.5.1)> for details about options that are accepted\nby the standard (pgoutput) plugin.\n\n~~~\n\n3.\n <para>\n- The logical replication <literal>START_REPLICATION</literal> command\n- accepts following parameters:\n+ Using the <literal>START_REPLICATION</literal> command,\n+ <literal>pgoutput</literal>) accepts the following options:\n\n\nOops, you copied my typo. There is a spurious ')'.\n\n~~~\n\n4.\n+<!-- Backpack through version 16. -->\n+ <varlistentry>\n+ <term>\n+ origin\n+ </term>\n+ <listitem>\n+ <para>\n+ String option to send only changes by an origin. It also gets\n+ the option \"none\" to send the changes that have no origin associated,\n+ and the option \"any\" to send the changes regardless of their origin.\n+ This can be used to avoid loops (infinite replication of the same data)\n+ among replication nodes.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n </variablelist>\n\nAFAIK pgoutput only understands origin values \"any\" and \"none\" and\nnothing else; I felt the \"It also gets...\" part implies it is more\nflexible than it is.\n\nSUGGESTION\nPossible values are \"none\" (to only send the changes that have no\norigin associated), or \"any\" (to send the changes regardless of their\norigin).\n\n~~~\n\n5. Rearranging option details\n\n> > SUGGESTION\n> >\n> > -proto_version\n> > ...\n> > -publication_names\n> > ...\n> > -binary\n> > ...\n> > -messages\n> > ...\n> >\n> > Since protocol version 2:\n> >\n> > -streaming (boolean)\n> > ...\n> >\n> > Since protocol version 3:\n> >\n> > -two_phase\n> > ...\n> >\n> > Since protocol version 4:\n> >\n> > -streaming (boolean/parallel)\n> > ...\n> > -origin\n>\n> This is not going to be correct because not all options do require a\n> protocol version. \"origin\" is added in version 16, but doesn't check\n> for any \"proto_version\". 
Perhaps we should fix this too.\n>\n\nOK, to deal with that can't you just include \"origin\" in the first\ngroup which has no special protocol requirements?\n\nSUGGESTION\n-proto_version\n-publication_names\n-binary\n-messages\n-origin\n\nRequires minimum protocol version 2:\n-streaming (boolean)\n\nRequires minimum protocol version 3:\n-two_phase\n\nRequires minimum protocol version 4:\n-streaming (parallel)\n\n======\n[1] 55.4 https://www.postgresql.org/docs/devel/protocol-replication.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 15 Dec 2023 16:43:00 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "> I saw that the original \"publication_names\" error was using\n> errcode(ERRCODE_INVALID_PARAMETER_VALUE), but TBH since there is no\n> option given at all I felt ERRCODE_SYNTAX_ERROR might be more\n> appropriate errcode for those 2 mandatory option errors.\n\nIt makes sense to me. Changed.\n\n> 2.\n>\n> I saw that the chapter \"55.4. Streaming Replication Protocol\" [1]\n> introduces \"START_REPLICATION SLOT slot_name LOGICAL ...\" command and\n> it says\n> ---\n> option_name\n> The name of an option passed to the slot's logical decoding plugin.\n> ---\n>\n> Perhaps that part should now include a reference to your new information:\n>\n> SUGGESTION\n> option_name\n> The name of an option passed to the slot's logical decoding plugin.\n> Please see <XXX (55.5.1)> for details about options that are accepted\n> by the standard (pgoutput) plugin.\n\nGood idea. Incorporated.\n\n> 3.\n> <para>\n> - The logical replication <literal>START_REPLICATION</literal> command\n> - accepts following parameters:\n> + Using the <literal>START_REPLICATION</literal> command,\n> + <literal>pgoutput</literal>) accepts the following options:\n>\n>\n> Oops, you copied my typo. There is a spurious ')'.\n\nFixed.\n\n> 4.\n> +<!-- Backpack through version 16. -->\n> + <varlistentry>\n> + <term>\n> + origin\n> + </term>\n> + <listitem>\n> + <para>\n> + String option to send only changes by an origin. It also gets\n> + the option \"none\" to send the changes that have no origin associated,\n> + and the option \"any\" to send the changes regardless of their origin.\n> + This can be used to avoid loops (infinite replication of the same data)\n> + among replication nodes.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> </variablelist>\n>\n> AFAIK pgoutput only understands origin values \"any\" and \"none\" and\n> nothing else; I felt the \"It also gets...\" part implies it is more\n> flexible than it is.\n>\n> SUGGESTION\n> Possible values are \"none\" (to only send the changes that have no\n> origin associated), or \"any\" (to send the changes regardless of their\n> origin).\n\nOh, it's not how I understood it. I think you are right. Changed.\n\n> OK, to deal with that can't you just include \"origin\" in the first\n> group which has no special protocol requirements?\n\nI think it'd be confusing because the option is not available before\nversion 16. I think it should really check for the version number and\ncomplain if it's less than 4.\n\n> SUGGESTION\n> -proto_version\n> -publication_names\n> -binary\n> -messages\n> -origin\n>\n> Requires minimum protocol version 2:\n> -streaming (boolean)\n>\n> Requires minimum protocol version 3:\n> -two_phase\n>\n> Requires minimum protocol version 4:\n> -streaming (parallel)\n\nI am still not sure if this is any better. I don't like that\n\"streaming\" appears twice, and I wouldn't know how to format this\nnicely.\n\nThe new versions are attached.\n\nI also added \"Required.\" for \"proto_version\" and \"publication_names\".",
"msg_date": "Fri, 15 Dec 2023 14:35:46 +0100",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "Hi, I had a look at the latest v02 patches.\n\nOn Sat, Dec 16, 2023 at 12:36 AM Emre Hasegeli <[email protected]> wrote:\n>\n> > OK, to deal with that can't you just include \"origin\" in the first\n> > group which has no special protocol requirements?\n>\n> I think it'd be confusing because the option is not available before\n> version 16. I think it should really check for the version number and\n> complain if it's less than 4.\n\nHm. I don't think a proto_version check is required for \"origin\".\n\nIIUC, the protocol version number indicates the message byte format.\nIt's needed so that those messages bytes can be read/written in the\nsame/compatible way. OTOH I thought the \"origin\" option has nothing\nreally to do with actual message formats on the wire; I think it works\njust by filtering up-front to decide either to send the changes or not\nsend the changes. For example, so long as PostgreSQL >= v16, I expect\nyou could probably use \"origin\" with any proto_version you wanted.\n\n>\n> > SUGGESTION\n> > -proto_version\n> > -publication_names\n> > -binary\n> > -messages\n> > -origin\n> >\n> > Requires minimum protocol version 2:\n> > -streaming (boolean)\n> >\n> > Requires minimum protocol version 3:\n> > -two_phase\n> >\n> > Requires minimum protocol version 4:\n> > -streaming (parallel)\n>\n> I am still not sure if this is any better. I don't like that\n> \"streaming\" appears twice, and I wouldn't know how to format this\n> nicely.\n>\n\nI won't keep pushing to rearrange the docs. I think all the content is\nOK anyway, so let's see if other people have any opinions on how the\nnew information is best presented.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 18 Dec 2023 11:35:23 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 11:35 AM Peter Smith <[email protected]> wrote:\n>\n> Hi, I had a look at the latest v02 patches.\n>\n> On Sat, Dec 16, 2023 at 12:36 AM Emre Hasegeli <[email protected]> wrote:\n> >\n> > > OK, to deal with that can't you just include \"origin\" in the first\n> > > group which has no special protocol requirements?\n> >\n> > I think it'd be confusing because the option is not available before\n> > version 16. I think it should really check for the version number and\n> > complain if it's less than 4.\n>\n> Hm. I don't think a proto_version check is required for \"origin\".\n>\n> IIUC, the protocol version number indicates the message byte format.\n> It's needed so that those messages bytes can be read/written in the\n> same/compatible way. OTOH I thought the \"origin\" option has nothing\n> really to do with actual message formats on the wire; I think it works\n> just by filtering up-front to decide either to send the changes or not\n> send the changes. For example, so long as PostgreSQL >= v16, I expect\n> you could probably use \"origin\" with any proto_version you wanted.\n>\n\nBut, I don't know how the user would be able to arrange for such a\nmixture of PG/proto_version versions. because they do seem tightly\ncoupled for pgoutput.\n\ne.g.\nserver_version = walrcv_server_version(LogRepWorkerWalRcvConn);\n options.proto.logical.proto_version =\n server_version >= 160000 ?\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :\n server_version >= 150000 ? LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\n server_version >= 140000 ? LOGICALREP_PROTO_STREAM_VERSION_NUM :\n LOGICALREP_PROTO_VERSION_NUM;\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 18 Dec 2023 12:15:33 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 7:06 PM Emre Hasegeli <[email protected]> wrote:\n>\n> > I saw that the original \"publication_names\" error was using\n> > errcode(ERRCODE_INVALID_PARAMETER_VALUE), but TBH since there is no\n> > option given at all I felt ERRCODE_SYNTAX_ERROR might be more\n> > appropriate errcode for those 2 mandatory option errors.\n>\n> It makes sense to me. Changed.\n>\n\nI found the existing error code appropriate because for syntax\nspecification, either we need to mandate this at the grammar level or\nat the API level. Also, I think we should give a message similar to an\nexisting message: \"publication_names parameter missing\". For example,\nwe can say, \"proto_version parameter missing\". BTW, I also don't like\nthe other changes parse_output_parameters() done in 0001, if we want\nto improve all the similar messages there are other places in the code\nas well, so we can separately make the case for the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Dec 2023 09:00:25 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 7:06 PM Emre Hasegeli <[email protected]> wrote:\n>\n>\n> > SUGGESTION\n> > -proto_version\n> > -publication_names\n> > -binary\n> > -messages\n> > -origin\n> >\n> > Requires minimum protocol version 2:\n> > -streaming (boolean)\n> >\n> > Requires minimum protocol version 3:\n> > -two_phase\n> >\n> > Requires minimum protocol version 4:\n> > -streaming (parallel)\n>\n> I am still not sure if this is any better. I don't like that\n> \"streaming\" appears twice, and I wouldn't know how to format this\n> nicely.\n>\n\nThe currently proposed way seems reasonable to me.\n\n> The new versions are attached.\n>\n> I also added \"Required.\" for \"proto_version\" and \"publication_names\".\n>\n\nComma separated list of publication names for which to subscribe\n- (receive changes). The individual publication names are treated\n- as standard objects names and can be quoted the same as needed.\n+ (receive changes). Required. The individual publication names are\n\nThis change (Required in between two sentences) looks slightly odd to\nme. Can we instead extend the second line to something like: \"This\nparameter is required, and the individual publication names are ...\".\nSimilarly we can adjust the proto_vesion explanation.\n\nOne minor comment:\n====================\n+ <para>\n+ <productname>PostgreSQL</productname> supports extensible logical decoding\n+ plugins. <literal>pgoutput</literal> is the standard one used for\n+ the built-in logical replication.\n+ </para>\n\nThis sounds like we are supporting more than one logical decoding\nplugin. Can we slightly rephrase it to something like:\n\"PostgreSQL</productname> supports extensible logical decoding plugin\n<literal>pgoutput</literal> which is used for the built-in logical\nreplication as well.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Dec 2023 09:57:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "> I found the existing error code appropriate because for syntax\n> specification, either we need to mandate this at the grammar level or\n> at the API level. Also, I think we should give a message similar to an\n> existing message: \"publication_names parameter missing\". For example,\n> we can say, \"proto_version parameter missing\". BTW, I also don't like\n> the other changes parse_output_parameters() done in 0001, if we want\n> to improve all the similar messages there are other places in the code\n> as well, so we can separately make the case for the same.\n\nOkay, I am changing these back. I think we should keep the word\n\"option\". It is used on other error messages.\n\n\n",
"msg_date": "Mon, 18 Dec 2023 10:38:04 +0300",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "> This change (Required in between two sentences) looks slightly odd to\n> me. Can we instead extend the second line to something like: \"This\n> parameter is required, and the individual publication names are ...\".\n> Similarly we can adjust the proto_vesion explanation.\n\nI don't think it's an improvement to join 2 independent sentences with\na comma. I expanded these by mentioning what is required.\n\n> This sounds like we are supporting more than one logical decoding\n> plugin. Can we slightly rephrase it to something like:\n> \"PostgreSQL</productname> supports extensible logical decoding plugin\n> <literal>pgoutput</literal> which is used for the built-in logical\n> replication as well.\"\n\nI understand the confusion. I reworded it and dropped \"extensible\".\n\nThe new versions are attached.",
"msg_date": "Mon, 18 Dec 2023 10:38:16 +0300",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 1:08 PM Emre Hasegeli <[email protected]> wrote:\n>\n> > I found the existing error code appropriate because for syntax\n> > specification, either we need to mandate this at the grammar level or\n> > at the API level. Also, I think we should give a message similar to an\n> > existing message: \"publication_names parameter missing\". For example,\n> > we can say, \"proto_version parameter missing\". BTW, I also don't like\n> > the other changes parse_output_parameters() done in 0001, if we want\n> > to improve all the similar messages there are other places in the code\n> > as well, so we can separately make the case for the same.\n>\n> Okay, I am changing these back. I think we should keep the word\n> \"option\". It is used on other error messages.\n>\n\nFair enough. I think we should push your first patch only in HEAD as\nthis is a minor improvement over the current behaviour. What do you\nthink?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Dec 2023 18:34:19 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "> Fair enough. I think we should push your first patch only in HEAD as\n> this is a minor improvement over the current behaviour. What do you\n> think?\n\nI agree.\n\n\n",
"msg_date": "Mon, 18 Dec 2023 17:34:58 +0300",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 1:35 AM Emre Hasegeli <[email protected]> wrote:\n>\n> > Fair enough. I think we should push your first patch only in HEAD as\n> > this is a minor improvement over the current behaviour. What do you\n> > think?\n>\n> I agree.\n\nPatch 0001\n\nAFAICT parse_output_parameters possible errors are never tested. For\nexample, there is no code coverage [1] touching any of these ereports.\n\nIMO there should be some simple test cases -- I am happy to create\nsome tests if you agree they should exist.\n\n~~~\n\nWhile looking at the function parse_output_parameters() I noticed that\nif an unrecognised option is passed the function emits an elog instead\nof an ereport\n\n------\ntest_pub=# SELECT * FROM pg_logical_slot_get_changes('test_slot_v1',\nNULL, NULL, 'banana', '1');\n2023-12-19 17:08:21.627 AEDT [8921] ERROR: unrecognized pgoutput option: banana\n2023-12-19 17:08:21.627 AEDT [8921] CONTEXT: slot \"test_slot_v1\",\noutput plugin \"pgoutput\", in the startup callback\n2023-12-19 17:08:21.627 AEDT [8921] STATEMENT: SELECT * FROM\npg_logical_slot_get_changes('test_slot_v1', NULL, NULL, 'banana',\n'1');\nERROR: unrecognized pgoutput option: banana\nCONTEXT: slot \"test_slot_v1\", output plugin \"pgoutput\", in the startup callback\n------\n\nBut that doesn't seem right. AFAIK elog messages use errmsg_internal\nso this message would not get translated.\n\nPSA a patch to fix that.\n\n======\n[1] code coverage --\nhttps://coverage.postgresql.org/src/backend/replication/pgoutput/pgoutput.c.gcov.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 19 Dec 2023 17:36:55 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
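For contrast with the misspelled-option error shown above, a well-formed call through the SQL interface has to supply at least the two required options. This is a sketch only: the publication name is hypothetical, wal_level must be set to logical, and the binary variant of the function is used because pgoutput produces binary output:

    CREATE PUBLICATION mypub FOR ALL TABLES;
    SELECT pg_create_logical_replication_slot('test_slot_v1', 'pgoutput');

    SELECT * FROM pg_logical_slot_peek_binary_changes(
        'test_slot_v1', NULL, NULL,
        'proto_version', '1',
        'publication_names', 'mypub');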
{
"msg_contents": "On Tue, Dec 19, 2023 at 12:07 PM Peter Smith <[email protected]> wrote:\n>\n> On Tue, Dec 19, 2023 at 1:35 AM Emre Hasegeli <[email protected]> wrote:\n> >\n> > > Fair enough. I think we should push your first patch only in HEAD as\n> > > this is a minor improvement over the current behaviour. What do you\n> > > think?\n> >\n> > I agree.\n>\n> Patch 0001\n>\n> AFAICT parse_output_parameters possible errors are never tested. For\n> example, there is no code coverage [1] touching any of these ereports.\n>\n> IMO there should be some simple test cases -- I am happy to create\n> some tests if you agree they should exist.\n>\n\nI don't think having tests for all sorts of error checking will add\nmuch value as compared to the overhead they bring.\n\n> ~~~\n>\n> While looking at the function parse_output_parameters() I noticed that\n> if an unrecognised option is passed the function emits an elog instead\n> of an ereport\n>\n\nWe don't expect unrecognized option here and for such a thing, we use\nelog in the code. See the similar usage in\nparseCreateReplSlotOptions().\n\nI think we should move to 0002 patch now. In that, I suggest preparing\nseparate back branch patches.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 19 Dec 2023 12:55:08 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 6:25 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Dec 19, 2023 at 12:07 PM Peter Smith <[email protected]> wrote:\n> >\n> > On Tue, Dec 19, 2023 at 1:35 AM Emre Hasegeli <[email protected]> wrote:\n> > >\n> > > > Fair enough. I think we should push your first patch only in HEAD as\n> > > > this is a minor improvement over the current behaviour. What do you\n> > > > think?\n> > >\n> > > I agree.\n> >\n> > Patch 0001\n> >\n> > AFAICT parse_output_parameters possible errors are never tested. For\n> > example, there is no code coverage [1] touching any of these ereports.\n> >\n> > IMO there should be some simple test cases -- I am happy to create\n> > some tests if you agree they should exist.\n> >\n>\n> I don't think having tests for all sorts of error checking will add\n> much value as compared to the overhead they bring.\n>\n> > ~~~\n> >\n> > While looking at the function parse_output_parameters() I noticed that\n> > if an unrecognised option is passed the function emits an elog instead\n> > of an ereport\n> >\n>\n> We don't expect unrecognized option here and for such a thing, we use\n> elog in the code. See the similar usage in\n> parseCreateReplSlotOptions().\n>\n\nIIUC the untranslated elog should be used for internal/sanity errors,\ndebugging, or stuff that cannot happen under any normal circumstances.\nWhile that may be the case for parseCreateReplSlotOptions() mentioned,\nIMO the scenario in the parse_output_parameters() is very different,\nbecause these options can come directly from user input so any user\ntypo can cause this error. Indeed, this is probably one of the more\nlikely reasons for getting any error in parse_output_parameters()\nfunction. I thought any errors that can be easily caused by some user\nactions ought to be translated.\n\nFor example, the user accidentally misspells 'proto_version':\n\ntest_pub=# SELECT * FROM pg_logical_slot_get_changes('test_slot_v1',\nNULL, NULL, 'protocol_version', '1');\nERROR: unrecognized pgoutput option: protocol_version\nCONTEXT: slot \"test_slot_v1\", output plugin \"pgoutput\", in the startup callback\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 20 Dec 2023 10:24:42 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 12:55 PM Amit Kapila <[email protected]> wrote:\n>\n> I think we should move to 0002 patch now. In that, I suggest preparing\n> separate back branch patches.\n>\n\nEmre, are you planning to share back-branch patches?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Dec 2023 08:02:33 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "> We don't expect unrecognized option here and for such a thing, we use\n> elog in the code. See the similar usage in\n> parseCreateReplSlotOptions().\n\n\"pgoutput\" is useful for a lot of applications other than our logical\nreplication subscriber. I think we should expect anything and handle\nerrors nicely.\n\n> I think we should move to 0002 patch now. In that, I suggest preparing\n> separate back branch patches.\n\nThey are attached.",
"msg_date": "Wed, 20 Dec 2023 18:58:17 +0300",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 2:58 AM Emre Hasegeli <[email protected]> wrote:\n>\n> > We don't expect unrecognized option here and for such a thing, we use\n> > elog in the code. See the similar usage in\n> > parseCreateReplSlotOptions().\n>\n> \"pgoutput\" is useful for a lot of applications other than our logical\n> replication subscriber. I think we should expect anything and handle\n> errors nicely.\n>\n> > I think we should move to 0002 patch now. In that, I suggest preparing\n> > separate back branch patches.\n>\n> They are attached.\n\nHi, I checked (just by visual inspection and diffs) the provided\nbackpatches and I have a couple of questions:\n\n======\n\n1.\n\nThe Chapter \"Streaming Replication Protocol\" START_REPLICATION /\n\"option_name\" part has an xref to the pgoutput options page\n\ne.g. master\n- The name of an option passed to the slot's logical decoding plugin.\n+ The name of an option passed to the slot's logical decoding output\n+ plugin. See <xref linkend=\"protocol-logical-replication\"/> for\n+ options that are accepted by the standard\n(<literal>pgoutput</literal>)\n+ plugin.\n\nBut the xref seems present only in the master/v16/v15 patches, but not\nfor the earlier patches v14/v13/v12. Why not?\n\n~~~\n\n2.\n\nThe proto_version part now says \"A valid version is required.\".\ne.g. master\n- <literal>3</literal>, and <literal>4</literal> are supported.\n+ <literal>3</literal>, and <literal>4</literal> are supported. A valid\n+ version is required.\n\nBut the change was only in the patches v14 onwards. Although the new\nerror message was only added for HEAD, isn't it still correct to say\n\"A valid version is required.\" for all the patches including v12 and\nv13?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 21 Dec 2023 11:33:45 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "> But the xref seems present only in the master/v16/v15 patches, but not\n> for the earlier patches v14/v13/v12. Why not?\n\nI missed it.\n\n> But the change was only in the patches v14 onwards. Although the new\n> error message was only added for HEAD, isn't it still correct to say\n> \"A valid version is required.\" for all the patches including v12 and\n> v13?\n\nYes, it's still correct.\n\nFixed versions are attached.",
"msg_date": "Thu, 21 Dec 2023 16:45:47 +0300",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 7:16 PM Emre Hasegeli <[email protected]> wrote:\n>\n> Fixed versions are attached.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 26 Dec 2023 13:36:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"pgoutput\" options missing on documentation"
}
] |
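The elog-versus-ereport question in this thread boils down to a single call in pgoutput's parse_output_parameters(). As a rough illustration of the translatable form Peter is arguing for (not a patch from the thread; "defel" is assumed to be the DefElem loop variable used in that function):

```c
/*
 * Illustrative sketch only: a user-facing, translatable error for an
 * unrecognized option, in place of the current elog().  The variable
 * name "defel" is an assumption about parse_output_parameters().
 */
ereport(ERROR,
        (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
         errmsg("unrecognized pgoutput option: %s", defel->defname)));
```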
[
{
"msg_contents": "By chance I discovered that checks on large object ownership\nare broken in v16+. For example,\n\nregression=# create user alice;\nCREATE ROLE\nregression=# \\c - alice\nYou are now connected to database \"regression\" as user \"alice\".\nregression=> \\lo_import test\nlo_import 40378\nregression=> comment on large object 40378 is 'test';\nERROR: unrecognized class ID: 2613\n\nThis has been failing since commit afbfc0298, which refactored\nownership checks, replacing pg_largeobject_ownercheck and\nallied routines with object_ownercheck. That function lacks\nthe little dance that's been stuck into assorted crannies:\n\n if (classId == LargeObjectRelationId)\n classId = LargeObjectMetadataRelationId;\n\nwhich translates from the object-address representation with\nclassId LargeObjectRelationId to the catalog we actually need\nto look at.\n\nThe proximate cause of the failure is in get_object_property_data,\nso I first thought of making that function do this transposition.\nThat might be a good thing to do, but it wouldn't be enough to\nfix the problem, because we'd then reach this in object_ownercheck:\n\n\t\trel = table_open(classid, AccessShareLock);\n\nwhich is going to examine the wrong catalog. So AFAICS what\nwe have to do is put this substitution into object_ownercheck,\nadding to the four or five places that know about it already.\n\nThis is an absolutely horrid mess, of course. The big problem\nis that at this point I have exactly zero confidence that there\nare not other places with the same bug; and it's not apparent\nhow to find them.\n\nThere seems little choice but to make the hacky fix in v16,\nbut I wonder whether we shouldn't be more ambitious and try\nto fix this permanently in HEAD, by getting rid of the\ndiscrepancy in which OID to use. ISTM the correct fix\nis to change the ObjectAddress representation of large\nobjects to use classid LargeObjectMetadataRelationId.\nSomebody seems to have felt that that would create more\nproblems than it solves, but I have to disagree. If we\nstick with the current way, we are going to be hitting\nproblems of this ilk forevermore.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Dec 2023 14:53:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "LargeObjectRelationId vs LargeObjectMetadataRelationId, redux"
},
{
"msg_contents": "I wrote:\n> This is an absolutely horrid mess, of course. The big problem\n> is that at this point I have exactly zero confidence that there\n> are not other places with the same bug; and it's not apparent\n> how to find them.\n\nI took a look at every reference to LargeObjectRelationId and\nLargeObjectMetadataRelationId, and indeed found two more bugs\n(one very minor, and one only latent, but bugs nonetheless).\nI'm not entirely convinced that there are no others, but\nthis is the best I can do for now.\n\n> There seems little choice but to make the hacky fix in v16,\n> but I wonder whether we shouldn't be more ambitious and try\n> to fix this permanently in HEAD, by getting rid of the\n> discrepancy in which OID to use. ISTM the correct fix\n> is to change the ObjectAddress representation of large\n> objects to use classid LargeObjectMetadataRelationId.\n> Somebody seems to have felt that that would create more\n> problems than it solves, but I have to disagree. If we\n> stick with the current way, we are going to be hitting\n> problems of this ilk forevermore.\n\nI still kind of feel that way, but I realized that making\nsuch a change would be rather unpleasant for pg_dump:\nit'd have to cope with pg_depend contents that vary across\nversions, and probably do some translation if we'd like\ndump archive files to stay consistent. We'd also break\npost-create and post-alter hooks, and likely some other\nthird-party code. So maybe best to leave it alone.\n\nI've pushed fixes for the bugs I was able to find.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Dec 2023 14:01:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LargeObjectRelationId vs LargeObjectMetadataRelationId, redux"
}
] |
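The fix Tom describes hinges on translating the ObjectAddress class OID before touching any catalog. A sketch of how the two fragments quoted above fit together inside object_ownercheck(), reconstructed from the description rather than copied from the committed fix:

```c
/*
 * Reconstructed sketch: large objects are addressed with classId
 * LargeObjectRelationId, but their ownership rows live in
 * pg_largeobject_metadata, so swap the OID before opening the catalog.
 */
if (classid == LargeObjectRelationId)
    classid = LargeObjectMetadataRelationId;

rel = table_open(classid, AccessShareLock);
```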
[
{
"msg_contents": "Hi,\r\n\r\nA recent case in the field in which a database session_authorization is\r\naltered to a non-superuser, non-owner of tables via alter database .. set session_authorization ..\r\ncaused autovacuum to skip tables.\r\n\r\nThe issue was discovered on 13.10, and the logs show such messages:\r\n\r\nwarning: skipping \"table1\" --- only table or database owner can vacuum it\r\n\r\nIn HEAD, I can repro, but the message is now a bit different due to [1].\r\n\r\nWARNING: permission denied to vacuum \"table1”, skipping it\r\n\r\nIt seems to me we should force an autovacuum worker to set the session userid to\r\na superuser.\r\n\r\nAttached is a repro and a patch which sets the session user to the BOOTSTRAP superuser\r\nat the start of the autovac worker.\r\n\r\nThoughts?\r\n\r\nRegards,\r\n\r\nSami\r\nAmazon Web Services (AWS)\r\n\r\n\r\n[1] https://postgr.es/m/[email protected]",
"msg_date": "Wed, 13 Dec 2023 20:42:45 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[BUG] autovacuum may skip tables when session_authorization/role is\n set on database"
},
{
"msg_contents": "\"Imseih (AWS), Sami\" <[email protected]> writes:\n> A recent case in the field in which a database session_authorization is\n> altered to a non-superuser, non-owner of tables via alter database .. set session_authorization ..\n> caused autovacuum to skip tables.\n\nThat seems like an extremely not-bright idea. What is the actual\nuse case for such a setting? Doesn't it risk security problems?\n\n> Attached is a repro and a patch which sets the session user to the BOOTSTRAP superuser\n> at the start of the autovac worker.\n\nI'm rather unimpressed by this proposal, first because there are\nprobably ten other ways to break autovac with ill-considered settings,\nand second because if we do want to consider this a supported case,\nwhat about other background processes? They'd likely have issues\nas well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Dec 2023 16:18:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] autovacuum may skip tables when session_authorization/role\n is set on database"
},
{
"msg_contents": "> What is the actual\r\n> use case for such a setting? \r\n\r\nI don't have exact details on the use-case, bit this is not a common\r\nuse-case.\r\n\r\n> Doesn't it risk security problems?\r\n\r\nI cannot see how setting it on the database being more problematic than\r\nsetting it on a session level.\r\n\r\n\r\n> I'm rather unimpressed by this proposal, first because there are\r\n> probably ten other ways to break autovac with ill-considered settings,\r\n\r\nThere exists code in autovac that safeguard for such settings. For example,\r\nstatement_timeout, lock_timeout are disabled. There are a dozen or\r\nmore other settings that are overridden for autovac.\r\n\r\nI see this being just another one to ensure that autovacuum always runs\r\nas superuser.\r\n\r\n> and second because if we do want to consider this a supported case,\r\n> what about other background processes? They'd likely have issues\r\n> as well.\r\n\r\nI have not considered other background processes, but autovac is the only\r\none that I can think of which checks for relation permissions.\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Wed, 13 Dec 2023 22:53:47 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] autovacuum may skip tables when session_authorization/role\n is set\n on database"
},
{
"msg_contents": "On Thu, 14 Dec 2023 at 02:13, Imseih (AWS), Sami <[email protected]> wrote:\n>\n> Hi,\n>\n>\n>\n> A recent case in the field in which a database session_authorization is\n>\n> altered to a non-superuser, non-owner of tables via alter database .. set session_authorization ..\n>\n> caused autovacuum to skip tables.\n>\n>\n>\n> The issue was discovered on 13.10, and the logs show such messages:\n>\n>\n>\n> warning: skipping \"table1\" --- only table or database owner can vacuum it\n>\n>\n>\n> In HEAD, I can repro, but the message is now a bit different due to [1].\n>\n>\n>\n> WARNING: permission denied to vacuum \"table1”, skipping it\n>\n>\n>\n> It seems to me we should force an autovacuum worker to set the session userid to\n>\n> a superuser.\n>\n>\n>\n> Attached is a repro and a patch which sets the session user to the BOOTSTRAP superuser\n>\n> at the start of the autovac worker.\n\nSince there is not much interest on this patch, I have changed the\nstatus with \"Returned with Feedback\". Feel free to propose a stronger\nuse case for the patch and add an entry for the same.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 11 Jan 2024 21:02:29 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] autovacuum may skip tables when session_authorization/role\n is set on database"
}
] |
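The proposed change is small; a hedged sketch of the idea, since the patch itself is only attached to the original mail and not quoted in the thread (the exact function used and its placement are assumptions):

```c
/*
 * Sketch only, somewhere in the autovacuum worker after it has connected
 * to its database: pin the session identity to the bootstrap superuser so
 * that per-database session_authorization/role settings cannot demote it.
 * Whether the attached patch really uses SetSessionAuthorization() and
 * BOOTSTRAP_SUPERUSERID like this is an assumption.
 */
SetSessionAuthorization(BOOTSTRAP_SUPERUSERID, true);
```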
[
{
"msg_contents": "Hi,\n\nIn all releases, if bootstrap mode's checkpoint gets an error (ENOSPC,\nEDQUOT, EIO, ...) or a short write in md.c, ERROR is promoted to FATAL\nand the shmem_exit resowner machinery reaches this:\n\nrunning bootstrap script ... 2023-12-14 10:38:02.320 NZDT [1409162]\nFATAL: could not write block 42 in file \"base/1/1255\": wrote only\n4096 of 8192 bytes\n2023-12-14 10:38:02.320 NZDT [1409162] HINT: Check free disk space.\n2023-12-14 10:38:02.320 NZDT [1409162] CONTEXT: writing block 42 of\nrelation base/1/1255\nTRAP: failed Assert(\"!LWLockHeldByMe(BufferDescriptorGetContentLock(buf))\"),\nFile: \"bufmgr.c\", Line: 2409, PID: 1409162\n\nIt's really hard to hit because we'd normally expect smgrextend() to\nget the error first, and when it does it looks something like this:\n\nrunning bootstrap script ... 2023-12-14 10:22:41.940 NZDT [1378512]\nFATAL: could not extend file \"base/1/1255\": wrote only 4096 of 8192\nbytes at block 42\n2023-12-14 10:22:41.940 NZDT [1378512] HINT: Check free disk space.\n2023-12-14 10:22:41.940 NZDT [1378512] PANIC: cannot abort\ntransaction 1, it was already committed\nAborted (core dumped)\n\nA COW system might succeed in smgrextend() and then fail in\nsmgrwrite(), and any system might fail here with other errno.\n\nIt's an extremely well hidden edge case and doesn't matter to users:\ninitdb failed for lack of space or worse, the message is clear and the\nrest is meaningless detail of interest to developers with assertion\nbuilds. I only happened to notice because I've been testing short\nwrite and error scenarios via artificially rigged up means for my\nvectored I/O work. No patch, I just wanted to flag this obscure\npre-existing problem spotted in passing.\n\n\n",
"msg_date": "Thu, 14 Dec 2023 11:22:42 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Obscure lwlock assertion failure if write fails in initdb"
}
] |
[
{
"msg_contents": "The newNode() macro can be turned into a static inline function, which \nmakes it a lot simpler. See attached. This was not possible when the \nmacro was originally written, as we didn't require compiler to have \nstatic inline support, but nowadays we do.\n\nThis was last discussed in 2008, see discussion at \nhttps://www.postgresql.org/message-id/26133.1220037409%40sss.pgh.pa.us. \nIn those tests, Tom observed that gcc refused to inline the static \ninline function. That was weird, the function is very small and doesn't \ndo anything special. Whatever the problem was, I think we can dismiss it \nwith modern compilers. It does get inlined on gcc 12 and clang 14 that I \nhave installed.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 14 Dec 2023 01:48:36 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simplify newNode()"
},
{
"msg_contents": "Hi,\n\nLGTM.\n\n+\tAssert(size >= sizeof(Node));\t/* need the tag, at least */\n+\tresult = (Node *) palloc0fast(size);\n+\tresult->type = tag;\n\n+\treturn result;\n+}\n\nHow about moving the comments /* need the tag, at least */ after result->type = tag; by the way?\n\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\n\n\n\n\n\nHi,\n\nLGTM.\n\n+\tAssert(size >= sizeof(Node));\t/* need the tag, at least */\n+\tresult = (Node *) palloc0fast(size);\n+\tresult->type = tag;\n\n+\treturn result;\n+}\n\nHow about moving the comments /* need the tag, at least */ after result->type = tag; by the way?\n\n\n\n\n\nZhang Mingli\nwww.hashdata.xyz",
"msg_date": "Thu, 14 Dec 2023 09:34:26 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify newNode()"
},
{
"msg_contents": "On Thu, Dec 14, 2023 at 9:34 AM Zhang Mingli <[email protected]> wrote:\n>\n> Hi,\n>\n> LGTM.\n>\n> + Assert(size >= sizeof(Node)); /* need the tag, at least */\n> + result = (Node *) palloc0fast(size);\n> + result->type = tag;\n>\n> + return result;\n> +}\n>\n> How about moving the comments /* need the tag, at least */ after result->type = tag; by the way?\n\nI don't think so, the comment has the meaning of the requested size\nshould at least the size\nof Node, which contains just a NodeTag.\n\ntypedef struct Node\n{\nNodeTag type;\n} Node;\n\n>\n>\n>\n> Zhang Mingli\n> www.hashdata.xyz\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 14 Dec 2023 10:19:13 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify newNode()"
},
{
"msg_contents": "On 14.12.23 01:48, Heikki Linnakangas wrote:\n> The newNode() macro can be turned into a static inline function, which \n> makes it a lot simpler. See attached. This was not possible when the \n> macro was originally written, as we didn't require compiler to have \n> static inline support, but nowadays we do.\n> \n> This was last discussed in 2008, see discussion at \n> https://www.postgresql.org/message-id/26133.1220037409%40sss.pgh.pa.us. \n> In those tests, Tom observed that gcc refused to inline the static \n> inline function. That was weird, the function is very small and doesn't \n> do anything special. Whatever the problem was, I think we can dismiss it \n> with modern compilers. It does get inlined on gcc 12 and clang 14 that I \n> have installed.\n\nI notice that the existing comments point out that the size argument \nshould be a compile-time constant, but that is no longer the case for \nExtensibleNode(). Also, newNode() is the only caller of palloc0fast(), \nwhich also points out that the size argument should be a compile-time \nconstant, and palloc0fast() is the only caller of MemSetTest(). I can \nsee how an older compiler might have gotten too confused by all that. \nBut if we think that compilers are now smart enough, maybe we can unwind \nthis whole stack a bit more? Maybe we don't need MemSetTest() and/or \npalloc0fast() and/or newNode() at all?\n\n\n\n",
"msg_date": "Thu, 14 Dec 2023 09:32:09 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify newNode()"
},
{
"msg_contents": "On 14/12/2023 10:32, Peter Eisentraut wrote:\r\n> I notice that the existing comments point out that the size argument\r\n> should be a compile-time constant, but that is no longer the case for\r\n> ExtensibleNode(). Also, newNode() is the only caller of palloc0fast(),\r\n> which also points out that the size argument should be a compile-time\r\n> constant, and palloc0fast() is the only caller of MemSetTest(). I can\r\n> see how an older compiler might have gotten too confused by all that.\r\n> But if we think that compilers are now smart enough, maybe we can unwind\r\n> this whole stack a bit more? Maybe we don't need MemSetTest() and/or\r\n> palloc0fast() and/or newNode() at all?\r\n\r\nGood point. Looking closer, modern compilers will actually turn the \r\nMemSetLoop() in MemoryContextAllocZeroAligned() into a call to memset() \r\nanyway! Funny. That is true for recent versions of gcc, clang, and MSVC. \r\nHere's a link to a godbolt snippet to play with this: \r\nhttps://godbolt.org/z/9b89P3c8x (full link at [0]).\r\n\r\nYeah, +1 on removing all that, including MemoryContextAllocZeroAligned. \r\nIt's not doing any good as it is, as it gets compiled to be identical to \r\nMemoryContextAllocZero. (There are small differences depending compiler \r\nand version, but e.g. on gcc 13.2, the code generated for \r\nMemoryContextAllocZero() is actually smaller even though both call memset())\r\n\r\nAnother approach would be to add more hints to \r\nMemoryContextAllocZeroAligned to dissuade the compiler from turning the \r\nloop into a memset() call. If you add an \"if (size > 1024) abort\" there, \r\nthen gcc 13 doesn't optimize into a memset() call, but clang still does. \r\nSome micro-benchmarks on that would be nice.\r\n\r\nBut given that the compiler has been doing this optimization for a while \r\nand we haven't noticed, I think memset() should be considered the status \r\nquo, and converting it to a loop again should be considered a new \r\noptimization.\r\n\r\nAlso, replacing MemoryContextAllocZeroAligned(CurrentMemoryContext, \r\nsize) with palloc0(size) has one fewer argument and the assembly code of \r\nthe call has one fewer instruction. 
That's something too.\r\n\r\n[0] \r\nhttps://godbolt.org/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAMzwBtMA7AQwFtMQByARg9KtQYEAysib0QXACx8BBAKoBnTAAUAHpwAMvAFYTStJg1DIApACYAQuYukl9ZATwDKjdAGFUtAK4sGe1wAyeAyYAHI%2BAEaYxBIa0gAOqAqETgwe3r56icmOAkEh4SxRMVxxtpj2uQxCBEzEBOk%2Bflzllak1dQT5YZHRsdIKtfWNmS2Dnd2Fxf0AlLaoXsTI7BzmAMzByN5YANQma26D6BGongB0CPvYJhoAguub25h7B0e0eBEXVzf3ZhsMWy8u32hwI6CwVC%2Ba2udweAKeL1BxGCwChMPudwA9AAqHZUYioFg7ZAXHbYzE/B5UHYAfRpAHFQnI3HSXtc1gARHZrSl/CHBZ7vADWFQAnhBVDMbgBOOkRLx0RwMGmYVTxTAOKCSnZgMD7LkaUg7LhS2F8zA0EI7LwMYViiUzWk0%2BWK4IqtUaghax26/U7Q3%2B02/NYVJS8tb8q122jiyUy706vWcwPhyPPG3R2OOhO%2B5MaIPrVx4KiUs0Ri0CnYAWSYqlutFoqGQQjwAC9MPGIC3246NKo1lRB0OqI6ccadsA8MAmBFRQRngBacfk1MVq31xvNtuYACSCgAamI8OgIMke53u5hs2er4i/TW6w2m5eC3dBsQvA5q5gWCRRR5BFVAgOSYWp9isO4CFFdUIR2d9PwIb9f2If9ZCAkDajJKsfz/AD51UAhwJ%2BKCYItODtxpRDLyIyDoMwWD4K/bDkNQwCCI8G152IBQkNwtD2IWQDuJojFbhI%2BiyIAN1QY8dggbFmL/DoCAUZRkUEAAxG1kGzRSULwoDiX4ggjWk2TsXiJgFEGBAP1IUtpRlJyNEclznNcjydiMhhBmJBA6jJcYVJpd8UXs2E3Mijz3JilyvJOTwdnidSCEo1AQrBaJiClNYINE8SGIID8mJw/TjOwggEAwBRKQAdjymUzPQGUvLJeSxE3XTStY/DELQNiTJ2S9yPPXKHLHUQGx2Sq8B4/FMEwGl4h2YJiSs54FEJTABGedY0zADg5uIBaIEdFcIulJr3Pk%2BbFvibMmrJRJgi4nKGpcq64q87F5OOjqmwemT0CemShKNYabzehzPo837MCUAgupYgyCO83qoYumH42xLB6HnGl%2BvRuS9J6wzCaAjGMWlEmUcQ%2BTgEwVLkAQG0hQJ4zAfM56hMpmVqKi%2BnGYJlmGDZhRLOWTngYs0HXpEmUEtoa7sVmlUWHiKCkb4ga0Yp%2BWPqB5WgoULWyp18mCPCqnYuir6SeU1SUq0gEkpSqhtNMoGnqsmy7Icm33K8mnjI4oSeOxIhaloBQrYDqKWp2RXXZetKMqwYhsvlv5i1gqtsCrAB5AAlABNGk3AL0IABVsAADSr8uAAlsDcABpHdQnpaHDYF7FmY1IVTdJ1GLcRrOQwYfAS1hWquWDgaKqq9AarGiKxLowrisQ%2Bfeowpg6ryrzMUxJLgBpBgMEW0CiogGd3yYBxTQ8sdKuiZ46meC%2BdgvrAeNQakZo8XEmSCka8j6gOlKES%2BVcmDAHcuJGittpRjmPIwRwNB4Y7FVA/RCQpgjA3/rrVG50qYvwQPDZ4BAADuqA8R4AqMvHYH8koGGWMDchx1pq0JYMEPAPD2xMPeMABgbBBA7CoVZWoDMQAgIcordys0i4UMIqvZ%2BuIq4vC5F/cWD935PjYeRAEgpJE7GOgjWRF15FRX%2BlQncaRkTCE9KkcCOwxw2KSv9ZAK0GDEgcXgSacEnECAsVTfmHk2AsBpJ40C9FEEv2IA/IUOwIl/kEZuGJwN%2BDEGmggWaRDEIkJlP1XyO8gKL2qmSNglVqpxNxJJPA9QvBiDxNpKo00Zz0BCTKUpqNLLHUEIgmUY4mQBACCtakX8%2BloLkkQeI9BJIVHyWdUBVMemIRoNxAgzM6DNVUWOchTACHUneKLeiOx3i%2BUIds2g6B%2BldJcms12mBJLXN2RBZBuJkrPKcF4HirydiEIUKwZ4UyxGFIed1Wm38gKvNqdCkeuSbkAupECtgHj%2BkFJWUUgQvlmYBWxMwNggyooTWMt/YFcltC/PWSQHYWB5TAEnEYZZDlil9X8tklWWABmqPcqSnWO4uTFiYQwUUlLqV4lpfSrwjKUQsruOAtZbgOoRESWSMxQtkARBXu8scFzEKEI1QQTEuNGbPEmrQVVyAhThyxbPXiZtd6gX3qvUSq5LTPGYkIRmARTjxFPEMQakkxBGnoAwR0JgACsbgHLoFoVGmNF0TD1T2NG/2LlGxGDJLScY9RNFyUzcAMk15A1vVTYm62GaBBFtxBlVAy0/QQELcWuSEA8WcsdBlToexLByRfAWxgMwy0Jv9mmpNLkqGIueBALteaQQ5tmRGsdlaPLYlnYRSw1h80aBcSOpN9rJ10GnfmUsolHrYh%2BGsjcTYABa0RUD1inCEE8jzR7g23CNK8B9u7HgTuqxm%2BtSGYluNZaIXo1l7kPO8E8o8h362QcB0D9QoEEDsW4Pxk1vUOFSG2jmgGZTCogLqa9W52yQaPCeSGr5pReRnCQL0vMLqjwXFcRRyj81UDEGGV1DljqIT9Mxq4VSl4KBY9CTxuGBpGkhvBwjGY8AihjBAPjmi/QjICHB/d70aNMJOPUU68Hk0clPUMzE%2B5bgBHpEXDuHIaR5yrMoAuBcAg0gswEAubhJO9SNHx6T25GNAa9T6v1ynGZGgDDJ11rVkk/gRqFwaEX/P4ZcnxxYPi%2BPy1nj8DgcxaCcEjbwPwHAtCkFQJwGNm7e2bUWMsHtaweCkAIJoHLcwhQgEjYaPLHBJCFea6VzgvAFAgENE14rOXSBwFgEgNA6sj1kAoLh2b9AYjAC4GsMwfBFTRCGxACIfWIjBDqKKTgDWDvMBQgXCI2hPQnd4DN0RBAC62mO2N0g0rgDKobEN7gvAsAsEMMAcQr38DHWwws77JXVQai8POW75BAJdZK%2B8CICSypYD60VPht25hUAMMAA89CqEF3VEVhr/BBAiDEOwKQMhBCKBUOoV7ugWgGCMCgaw1h9AfCG5AOY9aqjfYXAXMwvBUALIzqgnnp1WhBL8BAVwIxmikECAKKYfQWjZBSAIRXWQkha4YJMXoJQZfYYEB0YYngmh6DsLL83XRVdG%2Bt4GnXYxA2G6KOruY1WljU9y/l3rr2yscB2KoAAHAANgXOHyQE5kBeLW2cMwclcCEFpesE0vBRtaCHaQNrHX9CcB66QIrJWg%2BDeG415rOeusi%2BL31svlexs5/F8kZwkggA%3D%3D\r\n\r\n-- \r\nHeikki Linnakangas\r\nNeon (https://neon.tech)",
"msg_date": "Thu, 14 Dec 2023 14:53:27 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simplify newNode()"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 14/12/2023 10:32, Peter Eisentraut wrote:\n>> But if we think that compilers are now smart enough, maybe we can unwind\n>> this whole stack a bit more? Maybe we don't need MemSetTest() and/or\n>> palloc0fast() and/or newNode() at all?\n\n> Good point. Looking closer, modern compilers will actually turn the \n> MemSetLoop() in MemoryContextAllocZeroAligned() into a call to memset() \n> anyway! Funny. That is true for recent versions of gcc, clang, and MSVC. \n\nI experimented with the same planner-intensive test case that I used\nin the last discussion back in 2008. I got these results:\n\nHEAD:\ntps = 144.974195 (without initial connection time)\n\nv1 patch:\ntps = 146.302004 (without initial connection time)\n\nv2 patch:\ntps = 144.882208 (without initial connection time)\n\nWhile there's not much daylight between these numbers, the times are\nquite reproducible for me. This is with RHEL8's gcc 8.5.0 on x86_64.\nThat's probably a bit trailing-edge in terms of what people might be\nusing with v17, but I don't think it's discountable.\n\nI also looked at the backend's overall code size per size(1):\n\nHEAD:\n text data bss dec hex filename\n8613007 100192 220176 8933375 884fff testversion.stock/bin/postgres\n\nv1 patch:\n text data bss dec hex filename\n8615126 100192 220144 8935462 885826 testversion.v1/bin/postgres\n\nv2 patch:\n text data bss dec hex filename\n8595322 100192 220144 8915658 880aca testversion.v2/bin/postgres\n\nI did check that the v1 patch successfully inlines newNode() and\nreduces it to just a MemoryContextAllocZeroAligned call, so it's\ncorrect that modern compilers do that better than whatever I tested\nin 2008. But I wonder what is happening in v2 to reduce the code\nsize so much. MemoryContextAllocZeroAligned is not 20kB by itself.\n\n> Good point. Looking closer, modern compilers will actually turn the \n> MemSetLoop() in MemoryContextAllocZeroAligned() into a call to memset() \n> anyway! Funny. That is true for recent versions of gcc, clang, and MSVC. \n\nNot here ...\n\n> Yeah, +1 on removing all that, including MemoryContextAllocZeroAligned. \n> It's not doing any good as it is, as it gets compiled to be identical to \n> MemoryContextAllocZero.\n\nAlso not so here. Admittedly, my results don't make much of a case\nfor keeping the two code paths, even on compilers where it matters.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Dec 2023 17:44:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify newNode()"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 11:44 AM Tom Lane <[email protected]> wrote:\n> Heikki Linnakangas <[email protected]> writes:\n> > Yeah, +1 on removing all that, including MemoryContextAllocZeroAligned.\n> > It's not doing any good as it is, as it gets compiled to be identical to\n> > MemoryContextAllocZero.\n>\n> Also not so here. Admittedly, my results don't make much of a case\n> for keeping the two code paths, even on compilers where it matters.\n\nFWIW here is what I figured out once about why it gets compiled the\nsame these days:\n\nhttps://www.postgresql.org/message-id/CA+hUKGLfa6ANa0vs7Lf0op0XBH05HE8SyX8NFhDyT7k2CHYLXw@mail.gmail.com\n\n\n",
"msg_date": "Fri, 15 Dec 2023 11:56:25 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify newNode()"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 5:44 AM Tom Lane <[email protected]> wrote:\n>\n> I did check that the v1 patch successfully inlines newNode() and\n> reduces it to just a MemoryContextAllocZeroAligned call, so it's\n> correct that modern compilers do that better than whatever I tested\n> in 2008. But I wonder what is happening in v2 to reduce the code\n> size so much. MemoryContextAllocZeroAligned is not 20kB by itself.\n\nI poked at this a bit and it seems to come from what Heikki said\nupthread about fewer instructions before the calls: Running objdump on\nv1 and v2 copyfuncs.o and diff'ing shows there are fewer MOV\ninstructions (some extraneous stuff removed):\n\n e9 da 5f 00 00 jmp <_copyReindexStmt>\n- 48 8b 05 00 00 00 00 mov rax,QWORD PTR [rip+0x0]\n- be 18 00 00 00 mov esi,0x18\n- 48 8b 38 mov rdi,QWORD PTR [rax]\n- e8 00 00 00 00 call MemoryContextAllocZeroAligned-0x4\n+ bf 18 00 00 00 mov edi,0x18\n+ e8 00 00 00 00 call palloc0-0x4\n\nThat's 10 bytes savings.\n\n- 48 8b 05 00 00 00 00 mov rax,QWORD PTR [rip+0x0]\n- 48 8b 38 mov rdi,QWORD PTR [rax]\n- e8 00 00 00 00 call MemoryContextAllocZeroAligned-0x4\n+ e8 00 00 00 00 call palloc0-0x4\n\n...another 10 bytes. Over and over again.\n\nBecause of the size differences, the compiler is inlining more: e.g.\nin v1 _copyFieldStore has 4 call sites, but in v2 it got inlined.\n\nAbout the patch, I'm wondering if this whitespace is intentional, but\nit's otherwise straightforward:\n\n--- a/src/include/nodes/nodes.h\n+++ b/src/include/nodes/nodes.h\n@@ -132,6 +132,7 @@ typedef struct Node\n\n #define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)\n\n+\n /*\n\n\n",
"msg_date": "Mon, 18 Dec 2023 16:55:18 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify newNode()"
},
{
"msg_contents": "On 15/12/2023 00:44, Tom Lane wrote:\n>> Good point. Looking closer, modern compilers will actually turn the\n>> MemSetLoop() in MemoryContextAllocZeroAligned() into a call to memset()\n>> anyway! Funny. That is true for recent versions of gcc, clang, and MSVC.\n> Not here ...\n\nHmm, according to godbolt, the change happened in GCC version 10.1. \nStarting with gcc 10.1, it is turned into a memset(). On clang, the same \nchange happened in version 3.4.1.\n\nI think we have consensus on patch v2. It's simpler and not less \nperformant than what we have now, at least on modern compilers. Barring \nobjections, I'll commit that.\n\nI'm not planning to spend more time on this, but there might be some \nroom for further optimization if someone is interested to do the \nmicro-benchmarking. The obvious thing would be to persuade modern \ncompilers to not switch to memset() in MemoryContextAllocZeroAligned \n(*), making the old macro logic work the same way it used to on old \ncompilers.\n\nAlso, instead of palloc0, it might be better for newNode() to call \npalloc followed by memset. That would allow the compiler to partially \noptimize away the memset. Most callers fill at least some of the fields \nafter calling makeNode(), so the compiler could generate code that \nclears only the uninitialized fields and padding bytes.\n\n(*) or rather, a new function like MemoryContextAllocZeroAligned but \nwithout the 'context' argument. We want to keep the savings in the \ncallers from eliminating the extra argument.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 18 Dec 2023 16:28:31 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simplify newNode()"
},
{
"msg_contents": "On 18/12/2023 16:28, Heikki Linnakangas wrote:\n> I think we have consensus on patch v2. It's simpler and not less\n> performant than what we have now, at least on modern compilers. Barring\n> objections, I'll commit that.\n\nCommitted that.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 19 Dec 2023 12:13:12 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simplify newNode()"
}
] |
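Putting the fragments quoted in this thread together, the static-inline form under discussion looks roughly like the following. This is reconstructed from the quoted v1 hunk plus the later v2 discussion (plain palloc0 instead of palloc0fast), so the committed version may differ in details:

```c
/* Reconstructed sketch of the static-inline newNode() discussed above. */
static inline Node *
newNode(size_t size, NodeTag tag)
{
	Node	   *result;

	Assert(size >= sizeof(Node));	/* need the tag, at least */
	result = (Node *) palloc0(size);
	result->type = tag;

	return result;
}

/* callers keep going through the long-standing wrapper macro */
#define makeNode(_type_) ((_type_ *) newNode(sizeof(_type_), T_##_type_))
```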
[
{
"msg_contents": "Hi, all\n\nBy reading the codes, I found that we process limit option as LIMIT_OPTION_WITH_TIES when using WITH TIES\nand all others are LIMIT_OPTION_COUNT by commit 357889eb17bb9c9336c4f324ceb1651da616fe57.\nAnd check actual limit node in limit_needed().\nThere is no need to maintain a useless default limit enum.\nI remove it and have an install check to verify.\n\nAre there any considerations behind this?\nShall we remove it for clear as it’s not actually the default option.\n\n\nZhang Mingli\nwww.hashdata.xyz",
"msg_date": "Thu, 14 Dec 2023 09:17:33 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "useless LIMIT_OPTION_DEFAULT"
},
{
"msg_contents": "Zhang Mingli <[email protected]> writes:\n> By reading the codes, I found that we process limit option as LIMIT_OPTION_WITH_TIES when using WITH TIES\n> and all others are LIMIT_OPTION_COUNT by commit 357889eb17bb9c9336c4f324ceb1651da616fe57.\n> And check actual limit node in limit_needed().\n> There is no need to maintain a useless default limit enum.\n\nI agree, that looks pretty useless. Our normal convention for\nrepresenting not having any LIMIT clause would be to not create\na SelectLimit node at all. I don't see why allowing a second\nrepresentation is a good idea, especially when the code is not\nin fact doing that.\n\ngit blame shows that this came in with 357889eb1. Alvaro,\nSurafel, do you want to argue for keeping things as-is?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Dec 2023 16:47:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: useless LIMIT_OPTION_DEFAULT"
},
{
"msg_contents": "On 2023-Dec-14, Tom Lane wrote:\n\n> Zhang Mingli <[email protected]> writes:\n> > By reading the codes, I found that we process limit option as LIMIT_OPTION_WITH_TIES when using WITH TIES\n> > and all others are LIMIT_OPTION_COUNT by commit 357889eb17bb9c9336c4f324ceb1651da616fe57.\n> > And check actual limit node in limit_needed().\n> > There is no need to maintain a useless default limit enum.\n> \n> I agree, that looks pretty useless. Our normal convention for\n> representing not having any LIMIT clause would be to not create\n> a SelectLimit node at all. I don't see why allowing a second\n> representation is a good idea, especially when the code is not\n> in fact doing that.\n> \n> git blame shows that this came in with 357889eb1. Alvaro,\n> Surafel, do you want to argue for keeping things as-is?\n\nI looked at the history of this. That enum member first appeared as a\nresult of your review at [1], and the accompanying code at that time did\nuse it a few times. The problem is that I later ([2]) proposed to rewrite\nthat code and remove most of the uses of that enum member, but failed to\nremove it completely.\n\nSo, I think we're OK to remove it. I'm going to push Zhang's patch\nsoon unless there are objections.\n\n[1] https://postgr.es/m/[email protected]\n[2] https://postgr.es/m/[email protected]\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Doing what he did amounts to sticking his fingers under the hood of the\nimplementation; if he gets his fingers burnt, it's his problem.\" (Tom Lane)\n\n\n",
"msg_date": "Sat, 16 Dec 2023 17:42:28 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: useless LIMIT_OPTION_DEFAULT"
}
] |
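For context, the enum being trimmed lives in nodes.h. A reconstruction from memory (member order and comments are approximate) of what the patch removes:

```c
/*
 * Reconstructed from memory; treat member order and comments as
 * approximate.  The patch discussed above drops LIMIT_OPTION_DEFAULT,
 * leaving "no SelectLimit node at all" as the only representation of a
 * query without a LIMIT clause.
 */
typedef enum LimitOption
{
	LIMIT_OPTION_COUNT,			/* FETCH FIRST... ONLY */
	LIMIT_OPTION_WITH_TIES,		/* FETCH FIRST... WITH TIES */
	LIMIT_OPTION_DEFAULT,		/* no limit present; removed by the patch */
} LimitOption;
```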
[
{
"msg_contents": "Hi,\n\nOur pg_serial truncation logic is a bit broken, as described by the\ncomments in CheckPointPredicate() (a sort of race between xid cycles\nand checkpointing). We've seen a system with ~30GB of files in there\n(note: full/untruncated be would be 2³² xids × sizeof(uint64_t) =\n32GB). It's not just a gradual disk space leak: according to disk\nspace monitoring, this system suddenly wrote ~half of that data, which\nI think must be the while loop in SerialAdd() zeroing out pages.\nOuch.\n\nI see a few questions:\n1. How should we fix this fundamentally in future releases? One\nanswer is to key SSI's xid lookup with FullTransactionId (conceptually\ncleaner IMHO but I'm not sure how far fxids need to 'spread' through\nthe system to do it right). Another already mentioned in comments is\nto move some logic into vacuum so it can stay in sync with the xid\ncycle (maybe harder to think about and prove correct).\n2. Could there be worse consequences than wasted disk and I/O?\n3. Once a system reaches a bloated state like this, what can an\nadministrator do?\n\nI looked into question 3. I convinced myself that it must be safe to\nunlink all the files under pg_serial while the cluster is down,\nbecause:\n\n * we don't need the data across restarts, it's just for spilling\n * we don't need the 'head' file because slru.c opens with O_CREAT\n * open(O_CREAT) followed by pwrite(..., offset) will create a harmless hole\n * we never read files outside the tailXid/headXid range we've written\n * we zero out pages as we add them in SerialAdd(), without reading\n\nIf I have that right, perhaps we should not merely advise that it is\nsafe to do that manually, but proactively do it in SerialInit(). That\nis where we establish in shared memory that we don't expect there to\nbe any files on disk, so it must be a good spot to make that true if\nit is not:\n\n if (!found)\n {\n /*\n * Set control information to reflect empty SLRU.\n */\n serialControl->headPage = -1;\n serialControl->headXid = InvalidTransactionId;\n serialControl->tailXid = InvalidTransactionId;\n+\n+ /* Also delete any files on disk. */\n+ SlruScanDirectory(SerialSlruCtl, SlruScanDirCbDeleteAll, NULL);\n }\n\nIn common cases that would just readdir() an empty directory.\n\nFor testing, it is quite hard to convince predicate.c to write any\nfiles there: normally you have to overflow its transaction tracking,\nwhich requires more than (max backends + max prepared xacts) × 10\nSERIALIZABLE transactions in just the right sort of overlapping\npattern, so that the committed ones need to be spilled to disk. I\nmight try to write a test for that, but it gets easier if you define\nTEST_SUMMARIZE_SERIAL. Then you don't need many transactions -- but\nyou still need a slightly finicky schedule. Start with a couple of\noverlapping SSI transactions, then commit them, to get a non-empty\nFinishedSerializableTransaction list. Then create some more SSI\ntransactions, which will call SerialAdd() due to the TEST_ macro.\nThen run a checkpoint, and you should see eg \"0000\" being created on\ndemand during SLRU writeback, demonstrating that starting from an\nempty pg_serial directory is always OK. 
I wanted to try that to\nremind myself of how it all works, but I suppose it should be obvious\nthat it's OK: initdb's initial state is an empty directory.\n\nTo create a bunch of junk files that are really just thin links for\nthe above change to unlink, or to test the truncate code when it sees\na 'full' directory, you can do:\n\ncd pg_serial\ndd if=/dev/zero of=0000 bs=256k count=1\nawk 'BEGIN { for (i = 1; i <= 131071; i++) { printf(\"%04X\\n\", i); } }'\n| xargs -r -I {} ln 0000 {}\n\n\n",
"msg_date": "Fri, 15 Dec 2023 09:53:36 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_serial bloat"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 9:53 AM Thomas Munro <[email protected]> wrote:\n> ... We've seen a system with ~30GB of files in there\n> (note: full/untruncated be would be 2³² xids × sizeof(uint64_t) =\n> 32GB). It's not just a gradual disk space leak: according to disk\n> space monitoring, this system suddenly wrote ~half of that data, which\n> I think must be the while loop in SerialAdd() zeroing out pages.\n\nAttempt at an analysis of this rare anti-social I/O pattern:\n\nSerialAdd() writes zero pages in a range from the old headPage up to\nsome target page, but headPage can be any number, arbitrarily far in\nthe past (or apparently, the future). It only keeps up with the\nprogress of the xid clock and spreads that work out if we happen to\ncall SerialAdd() often enough. If we call SerialAdd() only every\ncouple of billion xids (eg very occasionally you leave a transaction\nopen and go out to lunch on a very busy system using SERIALIZABLE\neverywhere), you might find yourself suddenly needing to write out\nmany gigabytes of zeroes there.\n\nOne observation is that headPage gets periodically zapped to -1 by\ncheckpoints, near the comment \"SLRU is no longer needed\", providing a\nperiodic dice-roll that chops the range down. Unfortunately the\nhistorical \"apparent wraparound\" bug prevents that from being reached.\nThat bug was fixed by commit d6b0c2b (master only, no back-patch). On\nthe system where we saw pg_serial going bananas, that message appeared\nregularly.\n\nAttempts to find a solution:\n\nI think it might make sense to clamp firstZeroPage into the page range\nimplied by tailXid, headXid. Those values are eagerly maintained and\ninterlock with snapshots and global xmin (correctly but\nunder-documented-ly, AFAICS so far), and we will never try to look up\nthe CSN for any xid outside that range. I think that should exclude\nthe pathological zero-writing cases. I wouldn't want to do this\nwithout a working reproducer though, which will take some effort.\n\nAnother thought is that in the glorious 64 bit future, we might be\nable to invent a \"sparse\" SLRU, where if the file or page doesn't\nexist, we just return a zero CSN, and when we write a new page we just\nlet the OS provide filesystem holes as required. The reason I\nwouldn't want to invent sparse SLRUs with 32 bit indexing is that we\nhave no confidence in the truncation logic, which might leave stray\nfiles from earlier epochs. So I think we need zero'd pages (or\nperhaps at least to confirm that there is nothing already there, but I\nhave zero desire to make the current wraparound-ridden system more\ncomplex).\n\n\n",
"msg_date": "Fri, 22 Dec 2023 11:05:14 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_serial bloat"
}
] |
[
{
"msg_contents": "Hi Richard Guo I see that the test samples are all (exists)\nsubqueries ,I think semi join should also support ( in) and ( any)\nsubqueries. would you do more test on ( in) and ( any) subqueries?\n\n\nBest whish\n\nHi Richard Guo I see that the test samples are all (exists) subqueries ,I think semi join should also support ( in) and ( any) subqueries. would you do more test on ( in) and ( any) subqueries?Best whish",
"msg_date": "Fri, 15 Dec 2023 14:40:31 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Hi Richard Guo\n I did a simple test ,Subqueries of type (in) can be supported, There\nis a test sql that doesn't support it, and I think that's because it can't\npull up the subqueries.\n```\ntest=# explain (costs off) SELECT t1.* FROM prt1_adv t1 WHERE EXISTS\n(SELECT 1 FROM prt2_adv t2 WHERE t1.a = t2.b) AND t1.b = 0 ORDER BY t1.a;\n QUERY PLAN\n------------------------------------------------------\n Sort\n Sort Key: t1.a\n -> Hash Right Semi Join\n Hash Cond: (t2.b = t1.a)\n -> Append\n -> Seq Scan on prt2_adv_p1 t2_1\n -> Seq Scan on prt2_adv_p2 t2_2\n -> Seq Scan on prt2_adv_p3 t2_3\n -> Hash\n -> Append\n -> Seq Scan on prt1_adv_p1 t1_1\n Filter: (b = 0)\n -> Seq Scan on prt1_adv_p2 t1_2\n Filter: (b = 0)\n -> Seq Scan on prt1_adv_p3 t1_3\n Filter: (b = 0)\n(16 rows)\n\ntest=# explain (costs off) SELECT t1.* FROM prt1_adv t1 WHERE t1.a IN\n(SELECT t2.b FROM prt2_adv t2) AND t1.b = 0 ORDER BY t1.a;\n QUERY PLAN\n------------------------------------------------------\n Sort\n Sort Key: t1.a\n -> Hash Right Semi Join\n Hash Cond: (t2.b = t1.a)\n -> Append\n -> Seq Scan on prt2_adv_p1 t2_1\n -> Seq Scan on prt2_adv_p2 t2_2\n -> Seq Scan on prt2_adv_p3 t2_3\n -> Hash\n -> Append\n -> Seq Scan on prt1_adv_p1 t1_1\n Filter: (b = 0)\n -> Seq Scan on prt1_adv_p2 t1_2\n Filter: (b = 0)\n -> Seq Scan on prt1_adv_p3 t1_3\n Filter: (b = 0)\n(16 rows)\n\ntest=#\n\ntest=# explain (costs off) SELECT t1.* FROM plt1_adv t1 WHERE EXISTS\n(SELECT 1 FROM plt2_adv t2 WHERE t1.a = t2.a AND t1.c = t2.c) AND t1.b < 10\nORDER BY t1.a;\n QUERY PLAN\n------------------------------------------------------\n Sort\n Sort Key: t1.a\n -> Hash Right Semi Join\n Hash Cond: ((t2.a = t1.a) AND (t2.c = t1.c))\n -> Append\n -> Seq Scan on plt2_adv_p1 t2_1\n -> Seq Scan on plt2_adv_p2 t2_2\n -> Seq Scan on plt2_adv_p3 t2_3\n -> Hash\n -> Append\n -> Seq Scan on plt1_adv_p1 t1_1\n Filter: (b < 10)\n -> Seq Scan on plt1_adv_p2 t1_2\n Filter: (b < 10)\n -> Seq Scan on plt1_adv_p3 t1_3\n Filter: (b < 10)\n(16 rows)\n\ntest=#\ntest=# explain (costs off) SELECT t1.* FROM plt1_adv t1 WHERE (t1.a, t1.c)\nIN (SELECT t2.a, t2.c FROM plt2_adv t2) AND t1.b < 10 ORDER BY t1.a;\n QUERY PLAN\n------------------------------------------------------\n Sort\n Sort Key: t1.a\n -> Hash Right Semi Join\n Hash Cond: ((t2.a = t1.a) AND (t2.c = t1.c))\n -> Append\n -> Seq Scan on plt2_adv_p1 t2_1\n -> Seq Scan on plt2_adv_p2 t2_2\n -> Seq Scan on plt2_adv_p3 t2_3\n -> Hash\n -> Append\n -> Seq Scan on plt1_adv_p1 t1_1\n Filter: (b < 10)\n -> Seq Scan on plt1_adv_p2 t1_2\n Filter: (b < 10)\n -> Seq Scan on plt1_adv_p3 t1_3\n Filter: (b < 10)\n(16 rows)\n\n\n```\n\n```\ntest=# explain (costs off) select * from int4_tbl i4, tenk1 a\n where exists(select * from tenk1 b\n where a.twothousand = b.twothousand and a.fivethous <>\nb.fivethous)\n and i4.f1 = a.tenthous;\n QUERY PLAN\n-------------------------------------------------\n Hash Right Semi Join\n Hash Cond: (b.twothousand = a.twothousand)\n Join Filter: (a.fivethous <> b.fivethous)\n -> Seq Scan on tenk1 b\n -> Hash\n -> Hash Join\n Hash Cond: (a.tenthous = i4.f1)\n -> Seq Scan on tenk1 a\n -> Hash\n -> Seq Scan on int4_tbl i4\n(10 rows)\n\ntest=# explain (costs off ) SELECT *\nFROM int4_tbl i4, tenk1 a\nWHERE (a.twothousand, a.fivethous) IN (\n SELECT b.twothousand, b.fivethous\n FROM tenk1 b\n WHERE a.twothousand = b.twothousand and a.fivethous <> b.fivethous\n)\nAND i4.f1 = a.tenthous;\n QUERY 
PLAN\n----------------------------------------------------------------------------------------\n Nested Loop\n Join Filter: (i4.f1 = a.tenthous)\n -> Seq Scan on tenk1 a\n Filter: (SubPlan 1)\n SubPlan 1\n -> Seq Scan on tenk1 b\n Filter: ((a.fivethous <> fivethous) AND (a.twothousand =\ntwothousand))\n -> Materialize\n -> Seq Scan on int4_tbl i4\n(9 rows)\ntest=# set enable_nestloop =off;\nSET\ntest=# explain (costs off ) SELECT *\nFROM int4_tbl i4, tenk1 a\nWHERE (a.twothousand, a.fivethous) IN (\n SELECT b.twothousand, b.fivethous\n FROM tenk1 b\n WHERE a.twothousand = b.twothousand and a.fivethous <> b.fivethous\n)\nAND i4.f1 = a.tenthous;\n QUERY PLAN\n----------------------------------------------------------------------------------------\n Hash Join\n Hash Cond: (a.tenthous = i4.f1)\n -> Seq Scan on tenk1 a\n Filter: (SubPlan 1)\n SubPlan 1\n -> Seq Scan on tenk1 b\n Filter: ((a.fivethous <> fivethous) AND (a.twothousand =\ntwothousand))\n -> Hash\n -> Seq Scan on int4_tbl i4\n(9 rows)\n\n\n```\n\nwenhui qiu <[email protected]> 于2023年12月15日周五 14:40写道:\n\n> Hi Richard Guo I see that the test samples are all (exists)\n> subqueries ,I think semi join should also support ( in) and ( any)\n> subqueries. would you do more test on ( in) and ( any) subqueries?\n>\n>\n> Best whish\n>\n\nHi Richard Guo I did a simple test ,Subqueries of type (in) can be supported, There is a test sql that doesn't support it, and I think that's because it can't pull up the subqueries.```test=# explain (costs off) SELECT t1.* FROM prt1_adv t1 WHERE EXISTS (SELECT 1 FROM prt2_adv t2 WHERE t1.a = t2.b) AND t1.b = 0 ORDER BY t1.a; QUERY PLAN------------------------------------------------------ Sort Sort Key: t1.a -> Hash Right Semi Join Hash Cond: (t2.b = t1.a) -> Append -> Seq Scan on prt2_adv_p1 t2_1 -> Seq Scan on prt2_adv_p2 t2_2 -> Seq Scan on prt2_adv_p3 t2_3 -> Hash -> Append -> Seq Scan on prt1_adv_p1 t1_1 Filter: (b = 0) -> Seq Scan on prt1_adv_p2 t1_2 Filter: (b = 0) -> Seq Scan on prt1_adv_p3 t1_3 Filter: (b = 0)(16 rows)test=# explain (costs off) SELECT t1.* FROM prt1_adv t1 WHERE t1.a IN (SELECT t2.b FROM prt2_adv t2) AND t1.b = 0 ORDER BY t1.a; QUERY PLAN------------------------------------------------------ Sort Sort Key: t1.a -> Hash Right Semi Join Hash Cond: (t2.b = t1.a) -> Append -> Seq Scan on prt2_adv_p1 t2_1 -> Seq Scan on prt2_adv_p2 t2_2 -> Seq Scan on prt2_adv_p3 t2_3 -> Hash -> Append -> Seq Scan on prt1_adv_p1 t1_1 Filter: (b = 0) -> Seq Scan on prt1_adv_p2 t1_2 Filter: (b = 0) -> Seq Scan on prt1_adv_p3 t1_3 Filter: (b = 0)(16 rows)test=#test=# explain (costs off) SELECT t1.* FROM plt1_adv t1 WHERE EXISTS (SELECT 1 FROM plt2_adv t2 WHERE t1.a = t2.a AND t1.c = t2.c) AND t1.b < 10 ORDER BY t1.a; QUERY PLAN------------------------------------------------------ Sort Sort Key: t1.a -> Hash Right Semi Join Hash Cond: ((t2.a = t1.a) AND (t2.c = t1.c)) -> Append -> Seq Scan on plt2_adv_p1 t2_1 -> Seq Scan on plt2_adv_p2 t2_2 -> Seq Scan on plt2_adv_p3 t2_3 -> Hash -> Append -> Seq Scan on plt1_adv_p1 t1_1 Filter: (b < 10) -> Seq Scan on plt1_adv_p2 t1_2 Filter: (b < 10) -> Seq Scan on plt1_adv_p3 t1_3 Filter: (b < 10)(16 rows)test=#test=# explain (costs off) SELECT t1.* FROM plt1_adv t1 WHERE (t1.a, t1.c) IN (SELECT t2.a, t2.c FROM plt2_adv t2) AND t1.b < 10 ORDER BY t1.a; QUERY PLAN------------------------------------------------------ Sort Sort Key: t1.a -> Hash Right Semi Join Hash Cond: ((t2.a = t1.a) AND (t2.c = t1.c)) -> Append -> Seq Scan on plt2_adv_p1 t2_1 -> Seq Scan on plt2_adv_p2 
t2_2 -> Seq Scan on plt2_adv_p3 t2_3 -> Hash -> Append -> Seq Scan on plt1_adv_p1 t1_1 Filter: (b < 10) -> Seq Scan on plt1_adv_p2 t1_2 Filter: (b < 10) -> Seq Scan on plt1_adv_p3 t1_3 Filter: (b < 10)(16 rows)``````test=# explain (costs off) select * from int4_tbl i4, tenk1 a where exists(select * from tenk1 b where a.twothousand = b.twothousand and a.fivethous <> b.fivethous) and i4.f1 = a.tenthous; QUERY PLAN------------------------------------------------- Hash Right Semi Join Hash Cond: (b.twothousand = a.twothousand) Join Filter: (a.fivethous <> b.fivethous) -> Seq Scan on tenk1 b -> Hash -> Hash Join Hash Cond: (a.tenthous = i4.f1) -> Seq Scan on tenk1 a -> Hash -> Seq Scan on int4_tbl i4(10 rows)test=# explain (costs off ) SELECT *FROM int4_tbl i4, tenk1 aWHERE (a.twothousand, a.fivethous) IN ( SELECT b.twothousand, b.fivethous FROM tenk1 b WHERE a.twothousand = b.twothousand and a.fivethous <> b.fivethous)AND i4.f1 = a.tenthous; QUERY PLAN---------------------------------------------------------------------------------------- Nested Loop Join Filter: (i4.f1 = a.tenthous) -> Seq Scan on tenk1 a Filter: (SubPlan 1) SubPlan 1 -> Seq Scan on tenk1 b Filter: ((a.fivethous <> fivethous) AND (a.twothousand = twothousand)) -> Materialize -> Seq Scan on int4_tbl i4(9 rows)test=# set enable_nestloop =off;SETtest=# explain (costs off ) SELECT *FROM int4_tbl i4, tenk1 aWHERE (a.twothousand, a.fivethous) IN ( SELECT b.twothousand, b.fivethous FROM tenk1 b WHERE a.twothousand = b.twothousand and a.fivethous <> b.fivethous)AND i4.f1 = a.tenthous; QUERY PLAN---------------------------------------------------------------------------------------- Hash Join Hash Cond: (a.tenthous = i4.f1) -> Seq Scan on tenk1 a Filter: (SubPlan 1) SubPlan 1 -> Seq Scan on tenk1 b Filter: ((a.fivethous <> fivethous) AND (a.twothousand = twothousand)) -> Hash -> Seq Scan on int4_tbl i4(9 rows)```wenhui qiu <[email protected]> 于2023年12月15日周五 14:40写道:Hi Richard Guo I see that the test samples are all (exists) subqueries ,I think semi join should also support ( in) and ( any) subqueries. would you do more test on ( in) and ( any) subqueries?Best whish",
"msg_date": "Thu, 28 Dec 2023 11:02:45 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen I am working on \"shared detoast value\"[0], where I want to avoid\ndetoast the same datum over and over again, I have to decide which\nmemory context should be used to hold the detoast value. later I \nfound I have to use different MemoryContexts for the OuterTuple and\ninnerTuple since OuterTuple usually have a longer lifespan.\n\nI found the following code in nodeMergeJoin.c which has pretty similar\nsituation, just that it uses ExprContext rather than MemoryContext.\n\nMergeJoinState *\nExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags)\n\n\t/*\n\t * we need two additional econtexts in which we can compute the join\n\t * expressions from the left and right input tuples. The node's regular\n\t * econtext won't do because it gets reset too often.\n\t */\n\tmergestate->mj_OuterEContext = CreateExprContext(estate);\n\tmergestate->mj_InnerEContext = CreateExprContext(estate);\n\nIIUC, we needs a MemoryContext rather than ExprContext in fact. In the\nattachment, I just use two MemoryContext instead of the two ExprContexts\nwhich should be less memory and more precise semantics, and works\nfine. shall we go in this direction? I attached the 2 MemoryContext in\nJoinState rather than MergeJoinState, which is for the \"shared detoast\nvalue\"[0] more or less. \n\n[0] https://www.postgresql.org/message-id/[email protected]\n\n\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 15 Dec 2023 15:35:15 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is a clearer memory lifespan for outerTuple and innerTuple useful?"
},
{
"msg_contents": "\nAndy Fan <[email protected]> writes:\n\n> ..., I attached the 2 MemoryContext in\n> JoinState rather than MergeJoinState, which is for the \"shared detoast\n> value\"[0] more or less. \n>\n\nAfter thinking more, if it is designed for \"shared detoast value\" patch\n(happens on ExecInterpExpr stage), the inner_tuple_memory and\nouter_tuple_memory should be attached to ExprContext rather than\nJoinState since it is more natual to access ExprConext (compared with\nJoinState) in ExecInterpExpr. I didn't attach a new version for this,\nany feedback will be appreciated. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Fri, 15 Dec 2023 15:51:10 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is a clearer memory lifespan for outerTuple and innerTuple\n useful?"
},
{
"msg_contents": "Andy Fan <[email protected]> writes:\n\n> Andy Fan <[email protected]> writes:\n>\n>> ..., I attached the 2 MemoryContext in\n>> JoinState rather than MergeJoinState, which is for the \"shared detoast\n>> value\"[0] more or less. \n>>\n\nIn order to delimit the scope of this discussion, I attached the 2\nMemoryContext to MergeJoinState. Since the code was writen by Tom at\n2005, so add Tom to the cc-list. \n\n\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sun, 17 Dec 2023 21:46:52 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is a clearer memory lifespan for outerTuple and innerTuple\n useful?"
},
{
"msg_contents": "Hi!\n\nMaybe, the alternative way is using a separate kind of context, say name it\n'ToastContext' for all custom data related to Toasted values? What do you\nthink?\n\nOn Sun, Dec 17, 2023 at 4:52 PM Andy Fan <[email protected]> wrote:\n\n>\n> Andy Fan <[email protected]> writes:\n>\n> > Andy Fan <[email protected]> writes:\n> >\n> >> ..., I attached the 2 MemoryContext in\n> >> JoinState rather than MergeJoinState, which is for the \"shared detoast\n> >> value\"[0] more or less.\n> >>\n>\n> In order to delimit the scope of this discussion, I attached the 2\n> MemoryContext to MergeJoinState. Since the code was writen by Tom at\n> 2005, so add Tom to the cc-list.\n>\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!Maybe, the alternative way is using a separate kind of context, say name it'ToastContext' for all custom data related to Toasted values? What do you think?On Sun, Dec 17, 2023 at 4:52 PM Andy Fan <[email protected]> wrote:\nAndy Fan <[email protected]> writes:\n\n> Andy Fan <[email protected]> writes:\n>\n>> ..., I attached the 2 MemoryContext in\n>> JoinState rather than MergeJoinState, which is for the \"shared detoast\n>> value\"[0] more or less. \n>>\n\nIn order to delimit the scope of this discussion, I attached the 2\nMemoryContext to MergeJoinState. Since the code was writen by Tom at\n2005, so add Tom to the cc-list. \n\n\n-- \nBest Regards\nAndy Fan\n-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Mon, 18 Dec 2023 00:20:22 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is a clearer memory lifespan for outerTuple and innerTuple\n useful?"
},
{
"msg_contents": "\nNikita Malakhov <[email protected]> writes:\n\n> Hi!\n>\n> Maybe, the alternative way is using a separate kind of context, say name it\n> 'ToastContext' for all custom data related to Toasted values? What do\n> you think?\n\nThat should be a candidate. The latest research makes me think the\n'detoast_values' should have the same life cycles as tts_values, so the\nmemory should be managed by TupleTuleSlot (rather than ExprContext) and\nbe handled in ExecCopySlot / ExecClearSlot stuff.\n\nIn TupleTableSlot we already have a tts_mctx MemoryContext, reusing it\nneeds using 'pfree' to free the detoast values and but a dedicated\nmemory context pays more costs on the setup, but a more efficient\nMemoryContextReset. \n\n>\n> On Sun, Dec 17, 2023 at 4:52 PM Andy Fan <[email protected]> wrote:\n>\n> Andy Fan <[email protected]> writes:\n>\n> > Andy Fan <[email protected]> writes:\n> >\n> >> ..., I attached the 2 MemoryContext in\n> >> JoinState rather than MergeJoinState, which is for the \"shared detoast\n> >> value\"[0] more or less. \n> >>\n>\n> In order to delimit the scope of this discussion, I attached the 2\n> MemoryContext to MergeJoinState. Since the code was writen by Tom at\n> 2005, so add Tom to the cc-list. \n>\n\nHowever this patch can be discussed seperately. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Mon, 18 Dec 2023 07:22:31 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is a clearer memory lifespan for outerTuple and innerTuple\n useful?"
}
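A rough sketch of the slot-scoped alternative discussed in the last two messages: a dedicated context parented to the slot's tts_mctx, so detoasted copies share the slot's lifetime and can be dropped with one cheap reset instead of per-datum pfree() calls. tts_mctx, AllocSetContextCreate and MemoryContextReset are existing APIs; the SlotDetoastState wrapper is invented here purely for illustration, since core has no such field or struct today.

```c
#include "postgres.h"
#include "executor/tuptable.h"
#include "utils/memutils.h"

/* Illustrative wrapper, not part of core PostgreSQL. */
typedef struct SlotDetoastState
{
	TupleTableSlot *slot;
	MemoryContext	detoast_cxt;	/* child of slot->tts_mctx */
} SlotDetoastState;

static void
slot_detoast_init(SlotDetoastState *state, TupleTableSlot *slot)
{
	state->slot = slot;
	state->detoast_cxt = AllocSetContextCreate(slot->tts_mctx,
											   "slot detoast values",
											   ALLOCSET_SMALL_SIZES);
}

/* Call wherever the slot's contents are invalidated (ExecClearTuple etc.), */
/* so detoasted copies disappear together with the tts_values they shadow.  */
static void
slot_detoast_reset(SlotDetoastState *state)
{
	MemoryContextReset(state->detoast_cxt);
}
```

This is only meant to show the trade-off named above: a dedicated child context costs a little setup, but clearing it is a single reset rather than tracking and freeing each detoasted datum individually.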
] |
[
{
"msg_contents": "Hello. I'm working on the support of autonomous transactions in Postgres.\nCould you please make a preliminary review and give advices (see section \n#TODO)\n\n# Patch\nv0001-Autonomous-transactions.patch\n\n# Introduction\nThis patch implements Autonomous Transactions for PL/pgSQL.\nAutonomous transaction is a transaction that can be succesfully commited \neven if base transaction is rolled back. Common use cases: \nlogging/auditing/tracking progress in tables, so that information about \nthe execution attempt is preserved even when the main transaction is \nrolled back — for example, due to an error.\n\n# Glossary\nSession - entity that groups multiple related SQL commands into a single \ntransaction.\nMain session (backend, foreground session) - session through which the \nuser interacts.\nMain transaction (parent) - transaction that runs in the main session.\nAutonomous session - session that performs an offline transaction. It \nstarts from the main session.\nAutonomous transaction - independent transaction that runs inside an \nautonomous session.\nAutonomous function - function with the pragma AUTONOMOUS_TRANSACTION. \nWhen it is executed, an autonomous session is created in it.\nBackground worker - background process that performs some actions in the \nbackground, without the user's participation.\ndsm - dynamic shared memory.\nshm_mq - shared memory message queue.\n\n# Internals\nThis patch introduces a \"pragma AUTONOMOUS_TRANSACTION\" to functions. \nWhen one such function is executed all (at the current time not all, \nWIP) statements from it are executed in an autonomous session.\n\n* Example *\n*SQL-request:*\n\n```sql\nCREATE TABLE tbl (a int);\nCREATE OR REPLACE FUNCTION func() RETURNS void\nLANGUAGE plpgsql\nAS $$\nDECLARE\n PRAGMA AUTONOMOUS_TRANSACTION;\nBEGIN\n INSERT INTO tbl VALUES (1);\nEND;\n$$;\n\nSTART TRANSACTION;\nSELECT func();\nROLLBACK;\n\nSELECT * FROM tbl;\n\nDROP FUNCTION func;\nDROP TABLE tbl;\n```\n\n*Output:*\n\n```bash\n a\n---\n 1\n(1 row)\n```\n\n\nFor each backend the patch lazily creates a pool of autonomous sessions. \nWhen backend calls autonomous function, backend takes one autonomous \nsession from this pool and sends there function's statements for \nexecution. When execution is finished backend returns session to pool. \nLazily means that pool is created only when first autonomous session is \nneeded.\nBackend and autonomous session communicate with the help of Postgres \nclient-server protocol. Messages are sent through dynamic shared memory. \nExecution of backend and autonomous session is synchronous: autonomous \nsession waits for messages from backendand backend waits for messages \nfrom autonomous session.\nAutonomous session uses Background workers internally. As it's a \nseparate process, it contains caches, etc. In order to prevent infinite \ngrow of resources usage we reset all caches by timeout using restart of \nautonomous sessions. This timeout is set by guc setting \nautonomous_session_lifetime.\nSource code contains more detailed comments.\n\n# Alternatives\nAt the current time for this functionality may be uses extensions: \ndblink and pg_background. But they have shortcomings:\n 1) not in the Postgres core, they are extensions\n 2) lower performance. 
Each call creates new process that is \ndestroyed immediately after transaction is finished.\n\n# TODO\nCould you please give advices how implement public pool shared between \nall backends?\n1) Support execution of remaining statements in autonomous sessions.\n2) Public pool shared between all backends. At the current time for each \nbackend private pool is created.\n\n# Tests\nImplementation contains many regression tests of varying complexity, \nwhich check supported features.\n\n# Platform\nThis patch was checkouted from tag 15.4. This is WIP. I've developed in \nLinux, code doesn't contain platfrom-specific code, only Postgres \ninternal data structures and functions.\n\n# Documentation\nRegression tests contain many examples\n* Describe the effect your patch has on performance, if any.\nIt adds a new feature and increase performance compared to dblink and \npg_background\n\n# History\n## 1st feature requests and discussions in pgsql-hackers (without code)\n1) 2008\nhttps://www.postgresql.org/message-id/flat/1A6E6D554222284AB25ABE3229A9276271549A%40nrtexcus702.int.asurion.com\n2) 2010\nhttps://www.postgresql.org/message-id/flat/AANLkTi%3DuogmYxLKWmUfFSg-Ki2bejsQiO2g5GTMxvdW2%40mail.gmail.com\n3) 2011\nhttps://www.postgresql.org/message-id/flat/1303399444.9126.8.camel%40vanquo.pezone.net\n4) 2011\nhttps://wiki.postgresql.org/wiki/Autonomous_subtransactions\n5) 2011\nhttps://www.postgresql.org/message-id/flat/20111218082812.GA14355%40leggeri.gi.lan\nhttps://wiki.postgresql.org/wiki/Autonomous_subtransactions\n## Implementaion\n1) 2014, Rajeev Rastogi, implementation based on subtransactions\nhttps://www.postgresql.org/message-id/flat/BF2827DCCE55594C8D7A8F7FFD3AB7713DDDEF59%40SZXEML508-MBX.china.huawei.com\n2) 2015, Rajeev Rastogi, new theme, continues discussion about semantics \nand syntax of autonomous transactions\nhttps://www.postgresql.org/message-id/flat/BF2827DCCE55594C8D7A8F7FFD3AB7715990499A%40szxeml521-mbs.china.huawei.com\n3) 2016, Peter Eisentraut, implementation based on background workers\nhttps://www.postgresql.org/message-id/flat/659a2fce-b6ee-06de-05c0-c8ed6a01979e%402ndquadrant.com\n\n# Summary\n* Add pragma AUTONOMOUS_TRANSACTION in the functions. When function \ncontains this pragma, the it's executed autonomously\n* Background workers are used to run autonomous sessions.\n* Synchronous execution between backend and autonomous session\n* Postgres Client-Server Protocol is used to communicate between them\n* Pool of autonomous sessions. Pool is created lazily.\n* Infinite nested calls of autonomous functions are allowed. Limited \nonly by computer resources.\n* If another 2nd autonomous function is called in the 1st autonomous \nfunction, the 2nd is executed at the beginning, and then the 1st \ncontinues execution.\n\n-- \nBest wishes,\nIvan Kush\nTantor Labs LLC",
"msg_date": "Fri, 15 Dec 2023 14:28:12 +0300",
"msg_from": "Ivan Kush <[email protected]>",
"msg_from_op": true,
"msg_subject": "Autonomous transactions 2023, WIP"
},
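For readers not familiar with the building blocks named in the design above (background workers, dsm, shm_mq), the following is a heavily simplified sketch of how a backend might launch one pooled autonomous session and wire a message queue to it. It uses only existing PostgreSQL APIs (RegisterDynamicBackgroundWorker, dsm_create, shm_mq_create and friends), but the entry point name autonomous_worker_main, the queue size and the overall shape are assumptions for illustration; the actual patch is more involved (tables of contents, bidirectional queues, pooling and restart logic).

```c
#include "postgres.h"
#include "miscadmin.h"
#include "postmaster/bgworker.h"
#include "storage/dsm.h"
#include "storage/proc.h"
#include "storage/shm_mq.h"

#define AUTONOMOUS_QUEUE_SIZE 16384		/* illustrative size */

static shm_mq_handle *
launch_autonomous_session(void)
{
	BackgroundWorker worker;
	BackgroundWorkerHandle *handle;
	dsm_segment *seg;
	shm_mq	   *mq;

	/* Shared memory the backend and the worker will talk through. */
	seg = dsm_create(AUTONOMOUS_QUEUE_SIZE, 0);
	mq = shm_mq_create(dsm_segment_address(seg), AUTONOMOUS_QUEUE_SIZE);
	shm_mq_set_sender(mq, MyProc);	/* backend sends protocol messages */

	memset(&worker, 0, sizeof(worker));
	worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
		BGWORKER_BACKEND_DATABASE_CONNECTION;
	worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
	worker.bgw_restart_time = BGW_NEVER_RESTART;
	snprintf(worker.bgw_library_name, BGW_MAXLEN, "postgres");
	snprintf(worker.bgw_function_name, BGW_MAXLEN, "autonomous_worker_main");
	snprintf(worker.bgw_name, BGW_MAXLEN, "autonomous session");
	worker.bgw_main_arg = UInt32GetDatum(dsm_segment_handle(seg));
	worker.bgw_notify_pid = MyProcPid;

	if (!RegisterDynamicBackgroundWorker(&worker, &handle))
		ereport(ERROR, (errmsg("out of background worker slots")));

	/* The worker attaches to the segment, sets itself as receiver, loops. */
	return shm_mq_attach(mq, seg, handle);
}
```

The pooling described in the mail amounts to keeping such workers (and their queues) around between calls instead of tearing them down after each transaction, which is where the performance difference versus pg_background comes from.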
{
"msg_contents": "Hi\n\nalthough I like the idea related to autonomous transactions, I don't think\nso this way is the best\n\n1. The solution based on background workers looks too fragile - it can be\neasy to exhaust all background workers, and because this feature is\nproposed mainly for logging, then it is a little bit dangerous, because it\nmeans loss of possibility of logging.\n\n2. although the Oracle syntax is interesting, and I proposed PRAGMA more\ntimes, it doesn't allow this functionality in other PL\n\nI don't propose exactly firebird syntax\nhttps://firebirdsql.org/refdocs/langrefupd25-psql-autonomous-trans.html,\nbut I think this solution is better than ADA's PRAGMAs. I can imagine some\nspecial flag for function like\n\nCREATE OR REPLACE FUNCTION ...\nAS $$\n$$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n\nas another possibility.\n\n3. Heikki wrote about the possibility to support threads in Postgres. One\nsignificant part of this project is elimination of global variables. It can\nbe common with autonomous transactions.\n\nSurely, the first topic should be the method of implementation. Maybe I\nmissed it, but there is no agreement of background worker based.\n\nRegards\n\nPavel\n\nHialthough I like the idea related to autonomous transactions, I don't think so this way is the best1. The solution based on background workers looks too fragile - it can be easy to exhaust all background workers, and because this feature is proposed mainly for logging, then it is a little bit dangerous, because it means loss of possibility of logging. 2. although the Oracle syntax is interesting, and I proposed PRAGMA more times, it doesn't allow this functionality in other PLI don't propose exactly firebird syntax https://firebirdsql.org/refdocs/langrefupd25-psql-autonomous-trans.html, but I think this solution is better than ADA's PRAGMAs. I can imagine some special flag for function likeCREATE OR REPLACE FUNCTION ...AS $$$$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;as another possibility. 3. Heikki wrote about the possibility to support threads in Postgres. One significant part of this project is elimination of global variables. It can be common with autonomous transactions.Surely, the first topic should be the method of implementation. Maybe I missed it, but there is no agreement of background worker based.RegardsPavel",
"msg_date": "Thu, 21 Dec 2023 10:35:47 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autonomous transactions 2023, WIP"
},
{
"msg_contents": "\n\n> On 15 Dec 2023, at 16:28, Ivan Kush <[email protected]> wrote:\n> \n> \n> \n> Hello. I'm working on the support of autonomous transactions in Postgres.\n> \n\n> # Summary\n> * Add pragma AUTONOMOUS_TRANSACTION in the functions. When function \n> contains this pragma, the it's executed autonomously\n> * Background workers are used to run autonomous sessions.\n> * Synchronous execution between backend and autonomous session\n> * Postgres Client-Server Protocol is used to communicate between them\n> * Pool of autonomous sessions. Pool is created lazily.\n> * Infinite nested calls of autonomous functions are allowed. Limited \n> only by computer resources.\n> * If another 2nd autonomous function is called in the 1st autonomous \n> function, the 2nd is executed at the beginning, and then the 1st \n> continues execution.\n\nCool, looks interesting! As far as I know EnterpriseDB, Postgres Pro and OracleDB have this functionality. So, seems like the stuff is in demand.\nHow does your version compare to this widely used databases? Is anyone else using backgroud connections? Which syntax is used by other DBMS'?\n\nLooking into the code it seems like an easy way for PL\\pgSQL function to have a client connection. I think this might work for other PLs too.\n\nThe patch touches translations ( src/backend/po/). I think we typically do not do this in code patches, because this work is better handled by translators.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 21 Dec 2023 16:26:43 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autonomous transactions 2023, WIP"
},
{
"msg_contents": " > 1. The solution based on background workers looks too fragile - it \ncan be easy to exhaust all background workers, and because this feature \nis proposed mainly for logging, then it is a little bit dangerous, \nbecause it means loss of possibility of logging.\n\n1. We could add types for background workers. For each type add \nguc-settings, like max workers of each type.\nFor examaple, for `common` leave `max_worker_processes` setting for \nbackward compatibility\nenum bgw_type {\n common,\n autonomous,\n etc....\n};\n\n\n > 2. although the Oracle syntax is interesting, and I proposed PRAGMA \nmore times, it doesn't allow this functionality in other PL\n\n2. Add `AUTONOMOUS` to `BEGIN` instead of `PRAGMA` in `DECLARE`? `BEGIN \nAUTONOMOUS`.\nIt shows immediately that we are in autonomous session, no need to \nsearch in subsequent lines for keyword.\n\n```\nCREATE FUNCTION foo() RETURNS void AS $$\nBEGIN AUTONOMOUS\n INSERT INTO tbl VALUES (1);\n BEGIN AUTONOMOUS\n ....\n END;\nEND;\n$$ LANGUAGE plpgsql;\n```\n\n > CREATE OR REPLACE FUNCTION ...\n > AS $$\n > $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n\nThe downside with the keyword in function declaration, that we will not \nbe able to create autonomous subblocks. With `PRAGMA AUTONOMOUS` or \n`BEGIN AUTONOMOUS` it's possible to create them.\n\n```\n-- BEGIN AUTONOMOUS\n\nCREATE FUNCTION foo() RETURNS void AS $$\nBEGIN\n INSERT INTO tbl VALUES (1);\n BEGIN AUTONOMOUS\n INSERT INTO tbl VALUES (2);\n END;\nEND;\n$$ LANGUAGE plpgsql;\n\n\n-- or PRAGMA AUTONOMOUS\n\nCREATE FUNCTION foo() RETURNS void AS $$\nBEGIN\n INSERT INTO tbl VALUES (1);\n BEGIN\n DECLARE AUTONOMOUS_TRANSACTION;\n INSERT INTO tbl VALUES (2);\n END;\nEND;\n$$ LANGUAGE plpgsql;\n\n\nSTART TRANSACTION;\nfoo();\nROLLBACK;\n```\n\n```\nOutput:\n2\n```\n\n > it doesn't allow this functionality in other PL\n\nI didn't work out on other PLs at the current time, but...\n\n## Python\n\nIn plpython we could use context managers, like was proposed in Peter's \npatch. ```\n\nwith plpy.autonomous() as a:\n a.execute(\"INSERT INTO tbl VALUES (1) \");\n\n```\n\n## Perl\n\nI don't programm in Perl. But googling shows Perl supports subroutine \nattributes. Maybe add `autonomous` attribute for autonomous execution?\n\n```\nsub foo :autonomous {\n}\n```\n\nhttps://www.perl.com/article/untangling-subroutine-attributes/\n\n\n > Heikki wrote about the possibility to support threads in Postgres.\n\n3. Do you mean this thread?\nhttps://www.postgresql.org/message-id/flat/31cc6df9-53fe-3cd9-af5b-ac0d801163f4%40iki.fi\nThanks for info. Will watch it. Unfortunately it takes many years to \nimplement threads =(\n\n > Surely, the first topic should be the method of implementation. Maybe \nI missed it, but there is no agreement of background worker based.\nI agree. No consensus at the current time.\nPros of bgworkers are:\n1. this entity is already in Postgres.\n2. possibility of asynchronous execution of autonomous session in the \nfuture. Like in pg_background extension. For asynchronous execution we \nneed a separate process, bgworkers are this separate process.\n\nAlso maybe later create autonomous workers themselves without using \nbgworkers internally: launch of separate process, etc. But I think will \nbe many common code with bgworkers.\n\n\nOn 21.12.2023 12:35, Pavel Stehule wrote:\n> Hi\n>\n> although I like the idea related to autonomous transactions, I don't \n> think so this way is the best\n>\n> 1. 
The solution based on background workers looks too fragile - it can \n> be easy to exhaust all background workers, and because this feature is \n> proposed mainly for logging, then it is a little bit dangerous, \n> because it means loss of possibility of logging.\n>\n> 2. although the Oracle syntax is interesting, and I proposed PRAGMA \n> more times, it doesn't allow this functionality in other PL\n>\n> I don't propose exactly firebird syntax \n> https://firebirdsql.org/refdocs/langrefupd25-psql-autonomous-trans.html, \n> but I think this solution is better than ADA's PRAGMAs. I can imagine \n> some special flag for function like\n>\n> CREATE OR REPLACE FUNCTION ...\n> AS $$\n> $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n>\n> as another possibility.\n>\n> 3. Heikki wrote about the possibility to support threads in Postgres. \n> One significant part of this project is elimination of global \n> variables. It can be common with autonomous transactions.\n>\n> Surely, the first topic should be the method of implementation. Maybe \n> I missed it, but there is no agreement of background worker based.\n>\n> Regards\n>\n> Pavel\n>\n>\n-- \nBest wishes,\nIvan Kush\nTantor Labs LLC\n\n\n\n",
"msg_date": "Sun, 24 Dec 2023 14:27:47 +0300",
"msg_from": "Ivan Kush <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autonomous transactions 2023, WIP"
},
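The "types for background workers" idea in the message above can be pictured with a small sketch. Nothing below exists in core PostgreSQL: the enum, the per-type limits (e.g. a hypothetical max_autonomous_workers GUC alongside max_worker_processes) and the accounting are made up purely to show what a registration-time check might look like.

```c
#include <stdbool.h>

/* Hypothetical per-type background worker accounting (not in core). */
typedef enum BgwPoolType
{
	BGW_POOL_COMMON,		/* today's max_worker_processes budget */
	BGW_POOL_AUTONOMOUS,	/* workers reserved for autonomous sessions */
	BGW_POOL_NTYPES
} BgwPoolType;

/* Would come from GUCs such as max_worker_processes / max_autonomous_workers. */
static int	max_workers_per_type[BGW_POOL_NTYPES] = {8, 4};
static int	workers_in_use[BGW_POOL_NTYPES];

/* Called before launching a worker of the given type. */
static bool
reserve_worker_slot(BgwPoolType type)
{
	if (workers_in_use[type] >= max_workers_per_type[type])
		return false;			/* this pool is exhausted; caller waits or fails */
	workers_in_use[type]++;
	return true;
}
```

The intent of the idea, as stated in the thread, is that exhausting the autonomous pool would then not eat into the ordinary background worker budget (and vice versa).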
{
"msg_contents": " > Is anyone else using backgroud connections?\n\nDon't know at the current time. Maybe EnterpriseDB uses bgworkers as \nPeter Eisentraut works there currently (LinkedIn says =)) And in 2016 \nhe has proposed a patch with autonomous transactions with bgworkers.\nhttps://www.postgresql.org/message-id/flat/659a2fce-b6ee-06de-05c0-c8ed6a01979e%402ndquadrant.com\n\n > Which syntax is used by other DBMS'?\n\nMain databases use:\n1) PRAGMA in block declaration: Oracle, EnterpriseDB, this patch\n2) AUTONOMOUS keyword near BEGIN keyword: PostgresPro, SAP HANA\n3) AUTONOMOUS keyword in function declaration: IBM DB2\n4) сompletely new syntax of autonomous block: Firebird\n1 and 2 cases are the same, autonomicity by sub-blocks. Difference only \nin syntax, added to existing block definition\n3 case autonomicity only by function (as keyword in function declaration)\n4 case should we add completely new block definitions?\n\n# Oracle\n\nUses PRAGMA AUTONOMOUS_TRANSACTION\n\n```\nCREATE FUNCTION foo() RETURNS void AS $$\nPRAGMA AUTONOMOUS_TRANSACTION;\nBEGIN\n INSERT INTO tbl VALUES (1);\nEND;\n$$ LANGUAGE plpgsql;\n```\n\nhttps://docs.oracle.com/cd/B13789_01/appdev.101/b10807/13_elems002.htm\n\n\n# EnterpriseDB\n\nUses PRAGMA AUTONOMOUS_TRANSACTION; as in Oracle\n\n```\nCREATE FUNCTION foo() RETURNS void AS $$\nPRAGMA AUTONOMOUS_TRANSACTION;\nBEGIN\n INSERT INTO tbl VALUES (1);\nEND;\n$$ LANGUAGE plpgsql;\n```\n\nhttps://www.enterprisedb.com/docs/epas/latest/application_programming/epas_compat_spl/06_transaction_control/03_pragma_autonomous_transaction/\n\n\n# PostgresPro\n\n* plpgsql\nBlock construction in PL/pgSQL is extended by the optional autonomous \nkeyword.\n\n```\nCREATE FUNCTION foo() RETURNS void AS $$\nBEGIN AUTONOMOUS\n INSERT INTO tbl VALUES (1);\n BEGIN AUTONOMOUS\n ....\n END;\nEND;\n$$ LANGUAGE plpgsql;\n```\n\nhttps://postgrespro.com/docs/enterprise/15/ch16s04\n\n* plpython\n\nautonomous method that can be used in the WITH clause to start an \nautonomous transaction\n\n```\nwith plpy.autonomous() as a:\n a.execute(\"INSERT INTO tbl VALUES (1);\")\n```\n\nhttps://postgrespro.com/docs/enterprise/15/ch16s05\n\n\n# IBM DB2\n\nAUTONOMOUS keyword in function declaration\n\n```\nCREATE PROCEDURE foo()\nAUTONOMOUS\nLANGUAGE SQL\nBEGIN\n BEGIN AUTONOMOUS TRANSACTION;\n INSERT INTO tbl VALUES (1);\n END:\nEND;\n$$ LANGUAGE plpgsql;\n```\n\nhttps://github.com/IBM/db2-samples/blob/master/admin_scripts/autonomous_transaction.db2\nhttps://subscription.packtpub.com/book/programming/9781849683968/1/ch01lvl1sec09/using-autonomous-transactions\n\n\n# SAP HANA\n\nAlso AUTONOMOUS_TRANSACTION option for blocks\n\n```\nCREATE PROCEDURE foo() LANGUAGE SQLSCRIPT AS\nBEGIN\n BEGIN AUTONOMOUS TRANSACTION\n INSERT INTO tbl VALUES (1);\n END;\nEND;\n```\n\nhttps://help.sap.com/docs/SAP_HANA_PLATFORM/de2486ee947e43e684d39702027f8a94/4ad70daee8b64b90ab162565ed6f73ef.html\n\n# Firebird\n\nCompletely new block definition `IN AUTONOMOUS TRANSACTION DO`\n\n```\n\nCREATE PROCEDURE foo() AS\nBEGIN\n IN AUTONOMOUS TRANSACTION DO\n INSERT INTO tbl VALUES (1);\n END;\nEND;\n\n```\n\nhttps://firebirdsql.org/refdocs/langrefupd25-psql-autonomous-trans.html\n\n\nOn 21.12.2023 14:26, Andrey M. Borodin wrote:\n>\n>> On 15 Dec 2023, at 16:28, Ivan Kush <[email protected]> wrote:\n>>\n>>\n>>\n>> Hello. I'm working on the support of autonomous transactions in Postgres.\n>>\n>> # Summary\n>> * Add pragma AUTONOMOUS_TRANSACTION in the functions. 
When function\n>> contains this pragma, the it's executed autonomously\n>> * Background workers are used to run autonomous sessions.\n>> * Synchronous execution between backend and autonomous session\n>> * Postgres Client-Server Protocol is used to communicate between them\n>> * Pool of autonomous sessions. Pool is created lazily.\n>> * Infinite nested calls of autonomous functions are allowed. Limited\n>> only by computer resources.\n>> * If another 2nd autonomous function is called in the 1st autonomous\n>> function, the 2nd is executed at the beginning, and then the 1st\n>> continues execution.\n> Cool, looks interesting! As far as I know EnterpriseDB, Postgres Pro and OracleDB have this functionality. So, seems like the stuff is in demand.\n> How does your version compare to this widely used databases? Is anyone else using backgroud connections? Which syntax is used by other DBMS'?\n>\n> Looking into the code it seems like an easy way for PL\\pgSQL function to have a client connection. I think this might work for other PLs too.\n>\n> The patch touches translations ( src/backend/po/). I think we typically do not do this in code patches, because this work is better handled by translators.\n>\n>\n> Best regards, Andrey Borodin.\n\n-- \nBest wishes,\nIvan Kush\nTantor Labs LLC\n\n\n\n",
"msg_date": "Sun, 24 Dec 2023 14:32:48 +0300",
"msg_from": "Ivan Kush <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autonomous transactions 2023, WIP"
},
{
"msg_contents": "Hi\n\nne 24. 12. 2023 v 12:27 odesílatel Ivan Kush <[email protected]>\nnapsal:\n\n> > 1. The solution based on background workers looks too fragile - it\n> can be easy to exhaust all background workers, and because this feature\n> is proposed mainly for logging, then it is a little bit dangerous,\n> because it means loss of possibility of logging.\n>\n> 1. We could add types for background workers. For each type add\n> guc-settings, like max workers of each type.\n> For examaple, for `common` leave `max_worker_processes` setting for\n> backward compatibility\n> enum bgw_type {\n> common,\n> autonomous,\n> etc....\n> };\n>\n\nCan you show some benchmarks? I don't like this system too much but maybe\nit can work enough.\n\nStill I am interested in possible use cases. If it should be used only for\nlogging, then we can implement something less generic, but surely with\nbetter performance and stability. Logging to tables is a little bit\noutdated.\n\nRegards\n\nPavel\n\n\n>\n>\n> > 2. although the Oracle syntax is interesting, and I proposed PRAGMA\n> more times, it doesn't allow this functionality in other PL\n>\n> 2. Add `AUTONOMOUS` to `BEGIN` instead of `PRAGMA` in `DECLARE`? `BEGIN\n> AUTONOMOUS`.\n> It shows immediately that we are in autonomous session, no need to\n> search in subsequent lines for keyword.\n>\n> ```\n> CREATE FUNCTION foo() RETURNS void AS $$\n> BEGIN AUTONOMOUS\n> INSERT INTO tbl VALUES (1);\n> BEGIN AUTONOMOUS\n> ....\n> END;\n> END;\n> $$ LANGUAGE plpgsql;\n> ```\n>\n> > CREATE OR REPLACE FUNCTION ...\n> > AS $$\n> > $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n>\n> The downside with the keyword in function declaration, that we will not\n> be able to create autonomous subblocks. With `PRAGMA AUTONOMOUS` or\n> `BEGIN AUTONOMOUS` it's possible to create them.\n>\n> ```\n> -- BEGIN AUTONOMOUS\n>\n> CREATE FUNCTION foo() RETURNS void AS $$\n> BEGIN\n> INSERT INTO tbl VALUES (1);\n> BEGIN AUTONOMOUS\n> INSERT INTO tbl VALUES (2);\n> END;\n> END;\n> $$ LANGUAGE plpgsql;\n>\n>\n> -- or PRAGMA AUTONOMOUS\n>\n> CREATE FUNCTION foo() RETURNS void AS $$\n> BEGIN\n> INSERT INTO tbl VALUES (1);\n> BEGIN\n> DECLARE AUTONOMOUS_TRANSACTION;\n> INSERT INTO tbl VALUES (2);\n> END;\n> END;\n> $$ LANGUAGE plpgsql;\n>\n>\n> START TRANSACTION;\n> foo();\n> ROLLBACK;\n> ```\n>\n> ```\n> Output:\n> 2\n> ```\n>\n> > it doesn't allow this functionality in other PL\n>\n> I didn't work out on other PLs at the current time, but...\n>\n> ## Python\n>\n> In plpython we could use context managers, like was proposed in Peter's\n> patch. ```\n>\n> with plpy.autonomous() as a:\n> a.execute(\"INSERT INTO tbl VALUES (1) \");\n>\n> ```\n>\n> ## Perl\n>\n> I don't programm in Perl. But googling shows Perl supports subroutine\n> attributes. Maybe add `autonomous` attribute for autonomous execution?\n>\n> ```\n> sub foo :autonomous {\n> }\n> ```\n>\n> https://www.perl.com/article/untangling-subroutine-attributes/\n>\n>\n> > Heikki wrote about the possibility to support threads in Postgres.\n>\n> 3. Do you mean this thread?\n>\n> https://www.postgresql.org/message-id/flat/31cc6df9-53fe-3cd9-af5b-ac0d801163f4%40iki.fi\n> Thanks for info. Will watch it. Unfortunately it takes many years to\n> implement threads =(\n>\n> > Surely, the first topic should be the method of implementation. Maybe\n> I missed it, but there is no agreement of background worker based.\n> I agree. No consensus at the current time.\n> Pros of bgworkers are:\n> 1. this entity is already in Postgres.\n> 2. 
possibility of asynchronous execution of autonomous session in the\n> future. Like in pg_background extension. For asynchronous execution we\n> need a separate process, bgworkers are this separate process.\n>\n> Also maybe later create autonomous workers themselves without using\n> bgworkers internally: launch of separate process, etc. But I think will\n> be many common code with bgworkers.\n>\n>\n> On 21.12.2023 12:35, Pavel Stehule wrote:\n> > Hi\n> >\n> > although I like the idea related to autonomous transactions, I don't\n> > think so this way is the best\n> >\n> > 1. The solution based on background workers looks too fragile - it can\n> > be easy to exhaust all background workers, and because this feature is\n> > proposed mainly for logging, then it is a little bit dangerous,\n> > because it means loss of possibility of logging.\n> >\n> > 2. although the Oracle syntax is interesting, and I proposed PRAGMA\n> > more times, it doesn't allow this functionality in other PL\n> >\n> > I don't propose exactly firebird syntax\n> > https://firebirdsql.org/refdocs/langrefupd25-psql-autonomous-trans.html,\n>\n> > but I think this solution is better than ADA's PRAGMAs. I can imagine\n> > some special flag for function like\n> >\n> > CREATE OR REPLACE FUNCTION ...\n> > AS $$\n> > $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n> >\n> > as another possibility.\n> >\n> > 3. Heikki wrote about the possibility to support threads in Postgres.\n> > One significant part of this project is elimination of global\n> > variables. It can be common with autonomous transactions.\n> >\n> > Surely, the first topic should be the method of implementation. Maybe\n> > I missed it, but there is no agreement of background worker based.\n> >\n> > Regards\n> >\n> > Pavel\n> >\n> >\n> --\n> Best wishes,\n> Ivan Kush\n> Tantor Labs LLC\n>\n>\n\nHine 24. 12. 2023 v 12:27 odesílatel Ivan Kush <[email protected]> napsal: > 1. The solution based on background workers looks too fragile - it \ncan be easy to exhaust all background workers, and because this feature \nis proposed mainly for logging, then it is a little bit dangerous, \nbecause it means loss of possibility of logging.\n\n1. We could add types for background workers. For each type add \nguc-settings, like max workers of each type.\nFor examaple, for `common` leave `max_worker_processes` setting for \nbackward compatibility\nenum bgw_type {\n common,\n autonomous,\n etc....\n};Can you show some benchmarks? I don't like this system too much but maybe it can work enough. Still I am interested in possible use cases. If it should be used only for logging, then we can implement something less generic, but surely with better performance and stability. Logging to tables is a little bit outdated.RegardsPavel \n\n\n > 2. although the Oracle syntax is interesting, and I proposed PRAGMA \nmore times, it doesn't allow this functionality in other PL\n\n2. Add `AUTONOMOUS` to `BEGIN` instead of `PRAGMA` in `DECLARE`? `BEGIN \nAUTONOMOUS`.\nIt shows immediately that we are in autonomous session, no need to \nsearch in subsequent lines for keyword.\n\n```\nCREATE FUNCTION foo() RETURNS void AS $$\nBEGIN AUTONOMOUS\n INSERT INTO tbl VALUES (1);\n BEGIN AUTONOMOUS\n ....\n END;\nEND;\n$$ LANGUAGE plpgsql;\n```\n\n > CREATE OR REPLACE FUNCTION ...\n > AS $$\n > $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n\nThe downside with the keyword in function declaration, that we will not \nbe able to create autonomous subblocks. 
With `PRAGMA AUTONOMOUS` or \n`BEGIN AUTONOMOUS` it's possible to create them.\n\n```\n-- BEGIN AUTONOMOUS\n\nCREATE FUNCTION foo() RETURNS void AS $$\nBEGIN\n INSERT INTO tbl VALUES (1);\n BEGIN AUTONOMOUS\n INSERT INTO tbl VALUES (2);\n END;\nEND;\n$$ LANGUAGE plpgsql;\n\n\n-- or PRAGMA AUTONOMOUS\n\nCREATE FUNCTION foo() RETURNS void AS $$\nBEGIN\n INSERT INTO tbl VALUES (1);\n BEGIN\n DECLARE AUTONOMOUS_TRANSACTION;\n INSERT INTO tbl VALUES (2);\n END;\nEND;\n$$ LANGUAGE plpgsql;\n\n\nSTART TRANSACTION;\nfoo();\nROLLBACK;\n```\n\n```\nOutput:\n2\n```\n\n > it doesn't allow this functionality in other PL\n\nI didn't work out on other PLs at the current time, but...\n\n## Python\n\nIn plpython we could use context managers, like was proposed in Peter's \npatch. ```\n\nwith plpy.autonomous() as a:\n a.execute(\"INSERT INTO tbl VALUES (1) \");\n\n```\n\n## Perl\n\nI don't programm in Perl. But googling shows Perl supports subroutine \nattributes. Maybe add `autonomous` attribute for autonomous execution?\n\n```\nsub foo :autonomous {\n}\n```\n\nhttps://www.perl.com/article/untangling-subroutine-attributes/\n\n\n > Heikki wrote about the possibility to support threads in Postgres.\n\n3. Do you mean this thread?\nhttps://www.postgresql.org/message-id/flat/31cc6df9-53fe-3cd9-af5b-ac0d801163f4%40iki.fi\nThanks for info. Will watch it. Unfortunately it takes many years to \nimplement threads =(\n\n > Surely, the first topic should be the method of implementation. Maybe \nI missed it, but there is no agreement of background worker based.\nI agree. No consensus at the current time.\nPros of bgworkers are:\n1. this entity is already in Postgres.\n2. possibility of asynchronous execution of autonomous session in the \nfuture. Like in pg_background extension. For asynchronous execution we \nneed a separate process, bgworkers are this separate process.\n\nAlso maybe later create autonomous workers themselves without using \nbgworkers internally: launch of separate process, etc. But I think will \nbe many common code with bgworkers.\n\n\nOn 21.12.2023 12:35, Pavel Stehule wrote:\n> Hi\n>\n> although I like the idea related to autonomous transactions, I don't \n> think so this way is the best\n>\n> 1. The solution based on background workers looks too fragile - it can \n> be easy to exhaust all background workers, and because this feature is \n> proposed mainly for logging, then it is a little bit dangerous, \n> because it means loss of possibility of logging.\n>\n> 2. although the Oracle syntax is interesting, and I proposed PRAGMA \n> more times, it doesn't allow this functionality in other PL\n>\n> I don't propose exactly firebird syntax \n> https://firebirdsql.org/refdocs/langrefupd25-psql-autonomous-trans.html, \n> but I think this solution is better than ADA's PRAGMAs. I can imagine \n> some special flag for function like\n>\n> CREATE OR REPLACE FUNCTION ...\n> AS $$\n> $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n>\n> as another possibility.\n>\n> 3. Heikki wrote about the possibility to support threads in Postgres. \n> One significant part of this project is elimination of global \n> variables. It can be common with autonomous transactions.\n>\n> Surely, the first topic should be the method of implementation. Maybe \n> I missed it, but there is no agreement of background worker based.\n>\n> Regards\n>\n> Pavel\n>\n>\n-- \nBest wishes,\nIvan Kush\nTantor Labs LLC",
"msg_date": "Sun, 24 Dec 2023 13:38:14 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autonomous transactions 2023, WIP"
},
{
"msg_contents": "\nOn 24.12.2023 15:38, Pavel Stehule wrote:\n> Can you show some benchmarks? I don't like this system too much but \n> maybe it can work enough.\n>\n> Still I am interested in possible use cases. If it should be used only \n> for logging, then we can implement something less generic, but surely \n> with better performance and stability. Logging to tables is a little \n> bit outdated.\n>\n> Regards\n>\n> Pavel\n\nAll use cases of pg_background, except asynchronous execution. If later \nadd asynchronous execution, then all =)\n\nFor example, also:\n\n* conversion from Oracle's `PRAGMA AUTONOMOUS` to Postgres.\n\n* possibility to create functions that calls utility statements, like \nVACUUM, etc.\n\nI don't have good benchmarks now. Some simple, like many INSERTs. Pool \ngives advantage, more tps compared to pg_background with increasing \nnumber of backends.\n\nThe main advantage over pg_background is pool of workers. In this patch \nseparate pool is created for each backend. At the current time I'm \ncoding one shared pool for all backends.\n\n>\n>\n> > 2. although the Oracle syntax is interesting, and I proposed\n> PRAGMA\n> more times, it doesn't allow this functionality in other PL\n>\n> 2. Add `AUTONOMOUS` to `BEGIN` instead of `PRAGMA` in `DECLARE`?\n> `BEGIN\n> AUTONOMOUS`.\n> It shows immediately that we are in autonomous session, no need to\n> search in subsequent lines for keyword.\n>\n> ```\n> CREATE FUNCTION foo() RETURNS void AS $$\n> BEGIN AUTONOMOUS\n> INSERT INTO tbl VALUES (1);\n> BEGIN AUTONOMOUS\n> ....\n> END;\n> END;\n> $$ LANGUAGE plpgsql;\n> ```\n>\n> > CREATE OR REPLACE FUNCTION ...\n> > AS $$\n> > $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n>\n> The downside with the keyword in function declaration, that we\n> will not\n> be able to create autonomous subblocks. With `PRAGMA AUTONOMOUS` or\n> `BEGIN AUTONOMOUS` it's possible to create them.\n>\n> ```\n> -- BEGIN AUTONOMOUS\n>\n> CREATE FUNCTION foo() RETURNS void AS $$\n> BEGIN\n> INSERT INTO tbl VALUES (1);\n> BEGIN AUTONOMOUS\n> INSERT INTO tbl VALUES (2);\n> END;\n> END;\n> $$ LANGUAGE plpgsql;\n>\n>\n> -- or PRAGMA AUTONOMOUS\n>\n> CREATE FUNCTION foo() RETURNS void AS $$\n> BEGIN\n> INSERT INTO tbl VALUES (1);\n> BEGIN\n> DECLARE AUTONOMOUS_TRANSACTION;\n> INSERT INTO tbl VALUES (2);\n> END;\n> END;\n> $$ LANGUAGE plpgsql;\n>\n>\n> START TRANSACTION;\n> foo();\n> ROLLBACK;\n> ```\n>\n> ```\n> Output:\n> 2\n> ```\n>\n> > it doesn't allow this functionality in other PL\n>\n> I didn't work out on other PLs at the current time, but...\n>\n> ## Python\n>\n> In plpython we could use context managers, like was proposed in\n> Peter's\n> patch. ```\n>\n> with plpy.autonomous() as a:\n> a.execute(\"INSERT INTO tbl VALUES (1) \");\n>\n> ```\n>\n> ## Perl\n>\n> I don't programm in Perl. But googling shows Perl supports subroutine\n> attributes. Maybe add `autonomous` attribute for autonomous execution?\n>\n> ```\n> sub foo :autonomous {\n> }\n> ```\n>\n> https://www.perl.com/article/untangling-subroutine-attributes/\n>\n>\n> > Heikki wrote about the possibility to support threads in Postgres.\n>\n> 3. Do you mean this thread?\n> https://www.postgresql.org/message-id/flat/31cc6df9-53fe-3cd9-af5b-ac0d801163f4%40iki.fi\n> Thanks for info. Will watch it. Unfortunately it takes many years to\n> implement threads =(\n>\n> > Surely, the first topic should be the method of implementation.\n> Maybe\n> I missed it, but there is no agreement of background worker based.\n> I agree. 
No consensus at the current time.\n> Pros of bgworkers are:\n> 1. this entity is already in Postgres.\n> 2. possibility of asynchronous execution of autonomous session in the\n> future. Like in pg_background extension. For asynchronous\n> execution we\n> need a separate process, bgworkers are this separate process.\n>\n> Also maybe later create autonomous workers themselves without using\n> bgworkers internally: launch of separate process, etc. But I think\n> will\n> be many common code with bgworkers.\n>\n>\n> On 21.12.2023 12:35, Pavel Stehule wrote:\n> > Hi\n> >\n> > although I like the idea related to autonomous transactions, I\n> don't\n> > think so this way is the best\n> >\n> > 1. The solution based on background workers looks too fragile -\n> it can\n> > be easy to exhaust all background workers, and because this\n> feature is\n> > proposed mainly for logging, then it is a little bit dangerous,\n> > because it means loss of possibility of logging.\n> >\n> > 2. although the Oracle syntax is interesting, and I proposed PRAGMA\n> > more times, it doesn't allow this functionality in other PL\n> >\n> > I don't propose exactly firebird syntax\n> >\n> https://firebirdsql.org/refdocs/langrefupd25-psql-autonomous-trans.html,\n>\n> > but I think this solution is better than ADA's PRAGMAs. I can\n> imagine\n> > some special flag for function like\n> >\n> > CREATE OR REPLACE FUNCTION ...\n> > AS $$\n> > $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n> >\n> > as another possibility.\n> >\n> > 3. Heikki wrote about the possibility to support threads in\n> Postgres.\n> > One significant part of this project is elimination of global\n> > variables. It can be common with autonomous transactions.\n> >\n> > Surely, the first topic should be the method of implementation.\n> Maybe\n> > I missed it, but there is no agreement of background worker based.\n> >\n> > Regards\n> >\n> > Pavel\n> >\n> >\n> -- \n> Best wishes,\n> Ivan Kush\n> Tantor Labs LLC\n>\n-- \nBest wishes,\nIvan Kush\nTantor Labs LLC\n\n\n\n",
"msg_date": "Sun, 31 Dec 2023 17:15:33 +0300",
"msg_from": "Ivan Kush <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autonomous transactions 2023, WIP"
},
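Since benchmarks keep coming up in this exchange, one simple way to compare the pooled approach against pg_background would be a pgbench custom script that does nothing but autonomous logging inside rolled-back transactions. The table, function name, script and client counts below are illustrative only; the PRAGMA syntax is the one proposed by the patch under discussion, so the numbers obviously depend on having it applied.

```sql
-- Illustrative setup: a log table and an autonomous logging function
CREATE TABLE audit_log (ts timestamptz DEFAULT now(), note text);

CREATE OR REPLACE FUNCTION log_note(p_note text) RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO audit_log(note) VALUES (p_note);
END;
$$;
```

```bash
# Custom pgbench script: log from inside a transaction that rolls back
cat > autonomous_log.sql <<'EOF'
BEGIN;
SELECT log_note('attempt');
ROLLBACK;
EOF

# 16 clients for 60 seconds; repeat with a pg_background-based log_note()
# (and with plain log-file logging) to get comparable tps figures.
pgbench -n -c 16 -j 4 -T 60 -f autonomous_log.sql
```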
{
"msg_contents": "Hi\n\nne 31. 12. 2023 v 15:15 odesílatel Ivan Kush <[email protected]>\nnapsal:\n\n>\n> On 24.12.2023 15:38, Pavel Stehule wrote:\n> > Can you show some benchmarks? I don't like this system too much but\n> > maybe it can work enough.\n> >\n> > Still I am interested in possible use cases. If it should be used only\n> > for logging, then we can implement something less generic, but surely\n> > with better performance and stability. Logging to tables is a little\n> > bit outdated.\n> >\n> > Regards\n> >\n> > Pavel\n>\n> All use cases of pg_background, except asynchronous execution. If later\n> add asynchronous execution, then all =)\n>\nFor example, also:\n>\n> * conversion from Oracle's `PRAGMA AUTONOMOUS` to Postgres.\n>\n> * possibility to create functions that calls utility statements, like\n> VACUUM, etc.\n>\n\nalmost all these tasks are more or less dirty. It is a serious question if\nwe want to integrate pg_background to core.\n\nI don't have good benchmarks now. Some simple, like many INSERTs. Pool\n> gives advantage, more tps compared to pg_background with increasing\n> number of backends.\n>\n> The main advantage over pg_background is pool of workers. In this patch\n> separate pool is created for each backend. At the current time I'm\n> coding one shared pool for all backends.\n>\n\nI afraid so this solution can be very significantly slower than logging to\npostgres log or forwarding to syslog\n\n\n>\n> >\n> >\n> > > 2. although the Oracle syntax is interesting, and I proposed\n> > PRAGMA\n> > more times, it doesn't allow this functionality in other PL\n> >\n> > 2. Add `AUTONOMOUS` to `BEGIN` instead of `PRAGMA` in `DECLARE`?\n> > `BEGIN\n> > AUTONOMOUS`.\n> > It shows immediately that we are in autonomous session, no need to\n> > search in subsequent lines for keyword.\n> >\n> > ```\n> > CREATE FUNCTION foo() RETURNS void AS $$\n> > BEGIN AUTONOMOUS\n> > INSERT INTO tbl VALUES (1);\n> > BEGIN AUTONOMOUS\n> > ....\n> > END;\n> > END;\n> > $$ LANGUAGE plpgsql;\n> > ```\n> >\n> > > CREATE OR REPLACE FUNCTION ...\n> > > AS $$\n> > > $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n> >\n> > The downside with the keyword in function declaration, that we\n> > will not\n> > be able to create autonomous subblocks. With `PRAGMA AUTONOMOUS` or\n> > `BEGIN AUTONOMOUS` it's possible to create them.\n> >\n> > ```\n> > -- BEGIN AUTONOMOUS\n> >\n> > CREATE FUNCTION foo() RETURNS void AS $$\n> > BEGIN\n> > INSERT INTO tbl VALUES (1);\n> > BEGIN AUTONOMOUS\n> > INSERT INTO tbl VALUES (2);\n> > END;\n> > END;\n> > $$ LANGUAGE plpgsql;\n> >\n> >\n> > -- or PRAGMA AUTONOMOUS\n> >\n> > CREATE FUNCTION foo() RETURNS void AS $$\n> > BEGIN\n> > INSERT INTO tbl VALUES (1);\n> > BEGIN\n> > DECLARE AUTONOMOUS_TRANSACTION;\n> > INSERT INTO tbl VALUES (2);\n> > END;\n> > END;\n> > $$ LANGUAGE plpgsql;\n> >\n> >\n> > START TRANSACTION;\n> > foo();\n> > ROLLBACK;\n> > ```\n> >\n> > ```\n> > Output:\n> > 2\n> > ```\n> >\n> > > it doesn't allow this functionality in other PL\n> >\n> > I didn't work out on other PLs at the current time, but...\n> >\n> > ## Python\n> >\n> > In plpython we could use context managers, like was proposed in\n> > Peter's\n> > patch. ```\n> >\n> > with plpy.autonomous() as a:\n> > a.execute(\"INSERT INTO tbl VALUES (1) \");\n> >\n> > ```\n> >\n> > ## Perl\n> >\n> > I don't programm in Perl. But googling shows Perl supports subroutine\n> > attributes. 
Maybe add `autonomous` attribute for autonomous\n> execution?\n> >\n> > ```\n> > sub foo :autonomous {\n> > }\n> > ```\n> >\n> > https://www.perl.com/article/untangling-subroutine-attributes/\n> >\n> >\n> > > Heikki wrote about the possibility to support threads in Postgres.\n> >\n> > 3. Do you mean this thread?\n> >\n> https://www.postgresql.org/message-id/flat/31cc6df9-53fe-3cd9-af5b-ac0d801163f4%40iki.fi\n> > Thanks for info. Will watch it. Unfortunately it takes many years to\n> > implement threads =(\n> >\n> > > Surely, the first topic should be the method of implementation.\n> > Maybe\n> > I missed it, but there is no agreement of background worker based.\n> > I agree. No consensus at the current time.\n> > Pros of bgworkers are:\n> > 1. this entity is already in Postgres.\n> > 2. possibility of asynchronous execution of autonomous session in the\n> > future. Like in pg_background extension. For asynchronous\n> > execution we\n> > need a separate process, bgworkers are this separate process.\n> >\n> > Also maybe later create autonomous workers themselves without using\n> > bgworkers internally: launch of separate process, etc. But I think\n> > will\n> > be many common code with bgworkers.\n> >\n> >\n> > On 21.12.2023 12:35, Pavel Stehule wrote:\n> > > Hi\n> > >\n> > > although I like the idea related to autonomous transactions, I\n> > don't\n> > > think so this way is the best\n> > >\n> > > 1. The solution based on background workers looks too fragile -\n> > it can\n> > > be easy to exhaust all background workers, and because this\n> > feature is\n> > > proposed mainly for logging, then it is a little bit dangerous,\n> > > because it means loss of possibility of logging.\n> > >\n> > > 2. although the Oracle syntax is interesting, and I proposed PRAGMA\n> > > more times, it doesn't allow this functionality in other PL\n> > >\n> > > I don't propose exactly firebird syntax\n> > >\n> >\n> https://firebirdsql.org/refdocs/langrefupd25-psql-autonomous-trans.html,\n> >\n> > > but I think this solution is better than ADA's PRAGMAs. I can\n> > imagine\n> > > some special flag for function like\n> > >\n> > > CREATE OR REPLACE FUNCTION ...\n> > > AS $$\n> > > $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n> > >\n> > > as another possibility.\n> > >\n> > > 3. Heikki wrote about the possibility to support threads in\n> > Postgres.\n> > > One significant part of this project is elimination of global\n> > > variables. It can be common with autonomous transactions.\n> > >\n> > > Surely, the first topic should be the method of implementation.\n> > Maybe\n> > > I missed it, but there is no agreement of background worker based.\n> > >\n> > > Regards\n> > >\n> > > Pavel\n> > >\n> > >\n> > --\n> > Best wishes,\n> > Ivan Kush\n> > Tantor Labs LLC\n> >\n> --\n> Best wishes,\n> Ivan Kush\n> Tantor Labs LLC\n>\n>\n\nHine 31. 12. 2023 v 15:15 odesílatel Ivan Kush <[email protected]> napsal:\nOn 24.12.2023 15:38, Pavel Stehule wrote:\n> Can you show some benchmarks? I don't like this system too much but \n> maybe it can work enough.\n>\n> Still I am interested in possible use cases. If it should be used only \n> for logging, then we can implement something less generic, but surely \n> with better performance and stability. Logging to tables is a little \n> bit outdated.\n>\n> Regards\n>\n> Pavel\n\nAll use cases of pg_background, except asynchronous execution. 
If later \nadd asynchronous execution, then all =)\nFor example, also:\n\n* conversion from Oracle's `PRAGMA AUTONOMOUS` to Postgres.\n\n* possibility to create functions that calls utility statements, like \nVACUUM, etc.almost all these tasks are more or less dirty. It is a serious question if we want to integrate pg_background to core.\nI don't have good benchmarks now. Some simple, like many INSERTs. Pool \ngives advantage, more tps compared to pg_background with increasing \nnumber of backends.\n\nThe main advantage over pg_background is pool of workers. In this patch \nseparate pool is created for each backend. At the current time I'm \ncoding one shared pool for all backends.I afraid so this solution can be very significantly slower than logging to postgres log or forwarding to syslog \n\n>\n>\n> > 2. although the Oracle syntax is interesting, and I proposed\n> PRAGMA\n> more times, it doesn't allow this functionality in other PL\n>\n> 2. Add `AUTONOMOUS` to `BEGIN` instead of `PRAGMA` in `DECLARE`?\n> `BEGIN\n> AUTONOMOUS`.\n> It shows immediately that we are in autonomous session, no need to\n> search in subsequent lines for keyword.\n>\n> ```\n> CREATE FUNCTION foo() RETURNS void AS $$\n> BEGIN AUTONOMOUS\n> INSERT INTO tbl VALUES (1);\n> BEGIN AUTONOMOUS\n> ....\n> END;\n> END;\n> $$ LANGUAGE plpgsql;\n> ```\n>\n> > CREATE OR REPLACE FUNCTION ...\n> > AS $$\n> > $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n>\n> The downside with the keyword in function declaration, that we\n> will not\n> be able to create autonomous subblocks. With `PRAGMA AUTONOMOUS` or\n> `BEGIN AUTONOMOUS` it's possible to create them.\n>\n> ```\n> -- BEGIN AUTONOMOUS\n>\n> CREATE FUNCTION foo() RETURNS void AS $$\n> BEGIN\n> INSERT INTO tbl VALUES (1);\n> BEGIN AUTONOMOUS\n> INSERT INTO tbl VALUES (2);\n> END;\n> END;\n> $$ LANGUAGE plpgsql;\n>\n>\n> -- or PRAGMA AUTONOMOUS\n>\n> CREATE FUNCTION foo() RETURNS void AS $$\n> BEGIN\n> INSERT INTO tbl VALUES (1);\n> BEGIN\n> DECLARE AUTONOMOUS_TRANSACTION;\n> INSERT INTO tbl VALUES (2);\n> END;\n> END;\n> $$ LANGUAGE plpgsql;\n>\n>\n> START TRANSACTION;\n> foo();\n> ROLLBACK;\n> ```\n>\n> ```\n> Output:\n> 2\n> ```\n>\n> > it doesn't allow this functionality in other PL\n>\n> I didn't work out on other PLs at the current time, but...\n>\n> ## Python\n>\n> In plpython we could use context managers, like was proposed in\n> Peter's\n> patch. ```\n>\n> with plpy.autonomous() as a:\n> a.execute(\"INSERT INTO tbl VALUES (1) \");\n>\n> ```\n>\n> ## Perl\n>\n> I don't programm in Perl. But googling shows Perl supports subroutine\n> attributes. Maybe add `autonomous` attribute for autonomous execution?\n>\n> ```\n> sub foo :autonomous {\n> }\n> ```\n>\n> https://www.perl.com/article/untangling-subroutine-attributes/\n>\n>\n> > Heikki wrote about the possibility to support threads in Postgres.\n>\n> 3. Do you mean this thread?\n> https://www.postgresql.org/message-id/flat/31cc6df9-53fe-3cd9-af5b-ac0d801163f4%40iki.fi\n> Thanks for info. Will watch it. Unfortunately it takes many years to\n> implement threads =(\n>\n> > Surely, the first topic should be the method of implementation.\n> Maybe\n> I missed it, but there is no agreement of background worker based.\n> I agree. No consensus at the current time.\n> Pros of bgworkers are:\n> 1. this entity is already in Postgres.\n> 2. possibility of asynchronous execution of autonomous session in the\n> future. Like in pg_background extension. 
For asynchronous\n> execution we\n> need a separate process, bgworkers are this separate process.\n>\n> Also maybe later create autonomous workers themselves without using\n> bgworkers internally: launch of separate process, etc. But I think\n> will\n> be many common code with bgworkers.\n>\n>\n> On 21.12.2023 12:35, Pavel Stehule wrote:\n> > Hi\n> >\n> > although I like the idea related to autonomous transactions, I\n> don't\n> > think so this way is the best\n> >\n> > 1. The solution based on background workers looks too fragile -\n> it can\n> > be easy to exhaust all background workers, and because this\n> feature is\n> > proposed mainly for logging, then it is a little bit dangerous,\n> > because it means loss of possibility of logging.\n> >\n> > 2. although the Oracle syntax is interesting, and I proposed PRAGMA\n> > more times, it doesn't allow this functionality in other PL\n> >\n> > I don't propose exactly firebird syntax\n> >\n> https://firebirdsql.org/refdocs/langrefupd25-psql-autonomous-trans.html,\n>\n> > but I think this solution is better than ADA's PRAGMAs. I can\n> imagine\n> > some special flag for function like\n> >\n> > CREATE OR REPLACE FUNCTION ...\n> > AS $$\n> > $$ LANGUAGE plpgsql AUTONOMOUS TRANSACTION;\n> >\n> > as another possibility.\n> >\n> > 3. Heikki wrote about the possibility to support threads in\n> Postgres.\n> > One significant part of this project is elimination of global\n> > variables. It can be common with autonomous transactions.\n> >\n> > Surely, the first topic should be the method of implementation.\n> Maybe\n> > I missed it, but there is no agreement of background worker based.\n> >\n> > Regards\n> >\n> > Pavel\n> >\n> >\n> -- \n> Best wishes,\n> Ivan Kush\n> Tantor Labs LLC\n>\n-- \nBest wishes,\nIvan Kush\nTantor Labs LLC",
"msg_date": "Mon, 1 Jan 2024 07:47:21 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autonomous transactions 2023, WIP"
},
{
"msg_contents": "\nOn 01.01.2024 09:47, Pavel Stehule wrote:\n>\n>\n> All use cases of pg_background, except asynchronous execution. If\n> later\n> add asynchronous execution, then all =)\n>\n> For example, also:\n>\n> * conversion from Oracle's `PRAGMA AUTONOMOUS` to Postgres.\n>\n> * possibility to create functions that calls utility statements, like\n> VACUUM, etc.\n>\n>\n> almost all these tasks are more or less dirty. It is a serious \n> question if we want to integrate pg_background to core.\n\nWhat do you mean by the \"dirty\"?\n\n>\n> I don't have good benchmarks now. Some simple, like many INSERTs.\n> Pool\n> gives advantage, more tps compared to pg_background with increasing\n> number of backends.\n>\n> The main advantage over pg_background is pool of workers. In this\n> patch\n> separate pool is created for each backend. At the current time I'm\n> coding one shared pool for all backends.\n>\n>\n> I afraid so this solution can be very significantly slower than \n> logging to postgres log or forwarding to syslog\n\nMaybe. Need to benchmark. Also OLAP like ClickHouse is better for \nstoring logs.\n\nBut in this case (log file -> database) a company needs to write a \ncustom utility to load log file to the database:\n\n* detect file size,\n\n* load to database\n\n* autorotate file by timeout of filesize\n\n* etc.\n\nSome of our customers use \"Autonomous transactions\" for logging =)\n\n-- \nBest wishes,\nIvan Kush\nTantor Labs LLC\n\n\n\n",
"msg_date": "Mon, 1 Jan 2024 14:15:27 +0300",
"msg_from": "Ivan Kush <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autonomous transactions 2023, WIP"
},
{
"msg_contents": "po 1. 1. 2024 v 12:15 odesílatel Ivan Kush <[email protected]>\nnapsal:\n\n>\n> On 01.01.2024 09:47, Pavel Stehule wrote:\n> >\n> >\n> > All use cases of pg_background, except asynchronous execution. If\n> > later\n> > add asynchronous execution, then all =)\n> >\n> > For example, also:\n> >\n> > * conversion from Oracle's `PRAGMA AUTONOMOUS` to Postgres.\n> >\n> > * possibility to create functions that calls utility statements, like\n> > VACUUM, etc.\n> >\n> >\n> > almost all these tasks are more or less dirty. It is a serious\n> > question if we want to integrate pg_background to core.\n>\n> What do you mean by the \"dirty\"?\n>\n\nI don't think so these task should be implemented in stored procedures\n\n\n>\n> >\n> > I don't have good benchmarks now. Some simple, like many INSERTs.\n> > Pool\n> > gives advantage, more tps compared to pg_background with increasing\n> > number of backends.\n> >\n> > The main advantage over pg_background is pool of workers. In this\n> > patch\n> > separate pool is created for each backend. At the current time I'm\n> > coding one shared pool for all backends.\n> >\n> >\n> > I afraid so this solution can be very significantly slower than\n> > logging to postgres log or forwarding to syslog\n>\n> Maybe. Need to benchmark. Also OLAP like ClickHouse is better for\n> storing logs.\n>\n> But in this case (log file -> database) a company needs to write a\n> custom utility to load log file to the database:\n>\n> * detect file size,\n>\n> * load to database\n>\n> * autorotate file by timeout of filesize\n>\n> * etc.\n>\n> Some of our customers use \"Autonomous transactions\" for logging =)\n>\n\nI understand the motivation. But it was designed 20 years ago, and I don't\nsee a reason why we need to implement the same \"bad\" patterns, mainly when\nthe proposed implementation is not fully robust or can have performance\nissues.\n\n\n\n\n>\n> --\n> Best wishes,\n> Ivan Kush\n> Tantor Labs LLC\n>\n>\n\npo 1. 1. 2024 v 12:15 odesílatel Ivan Kush <[email protected]> napsal:\nOn 01.01.2024 09:47, Pavel Stehule wrote:\n>\n>\n> All use cases of pg_background, except asynchronous execution. If\n> later\n> add asynchronous execution, then all =)\n>\n> For example, also:\n>\n> * conversion from Oracle's `PRAGMA AUTONOMOUS` to Postgres.\n>\n> * possibility to create functions that calls utility statements, like\n> VACUUM, etc.\n>\n>\n> almost all these tasks are more or less dirty. It is a serious \n> question if we want to integrate pg_background to core.\n\nWhat do you mean by the \"dirty\"?I don't think so these task should be implemented in stored procedures \n\n>\n> I don't have good benchmarks now. Some simple, like many INSERTs.\n> Pool\n> gives advantage, more tps compared to pg_background with increasing\n> number of backends.\n>\n> The main advantage over pg_background is pool of workers. In this\n> patch\n> separate pool is created for each backend. At the current time I'm\n> coding one shared pool for all backends.\n>\n>\n> I afraid so this solution can be very significantly slower than \n> logging to postgres log or forwarding to syslog\n\nMaybe. Need to benchmark. Also OLAP like ClickHouse is better for \nstoring logs.\n\nBut in this case (log file -> database) a company needs to write a \ncustom utility to load log file to the database:\n\n* detect file size,\n\n* load to database\n\n* autorotate file by timeout of filesize\n\n* etc.\n\nSome of our customers use \"Autonomous transactions\" for logging =)I understand the motivation. 
But it was designed 20 years ago, and I don't see a reason why we need to implement the same \"bad\" patterns, mainly when the proposed implementation is not fully robust or can have performance issues. \n\n-- \nBest wishes,\nIvan Kush\nTantor Labs LLC",
"msg_date": "Mon, 1 Jan 2024 13:49:43 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autonomous transactions 2023, WIP"
}
] |
[
{
"msg_contents": "Hello!\n\nInspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\nof ideas for PostgreSQL\n(https://riggs.business/blog/f/postgresql-todo-2023), I have crafted a\nquick patch to do SQL syntax validation.\n\nIt is also heavily inspired by the \"ruby -c\" command, useful to check\nsyntax of Ruby programs without executing them.\n\nFor now, to keep it simple and to open discussion, I have added new\n\"--syntax\" option into \"postgres\" command, since it is currently the\nonly one using needed parser dependency (at least per my\nunderstanding). I tried to add this into psql or separate pg_syntax\ncommands, but parser is not exposed in \"postgres_fe.h\" and including\nbackend into those tools would not make most likely sense. Also syntax\ncould vary per backend, it makes sense to keep it in there.\n\nIt expects input on STDIN, prints out error if any and prints out\nsummary message (Valid SQL/Invalid SQL). On valid input it exits with\n0 (success), otherwise it exits with 1 (error).\n\nExample usage:\n\n$ echo \"SELECT 1\" | src/backend/postgres --syntax\nValid SQL\n\n$ echo \"SELECT 1abc\" | src/backend/postgres --syntax\nERROR: trailing junk after numeric literal at or near \"1a\" at character 8\nInvalid SQL\n\n$ cat ../src/test/regress/sql/alter_operator.sql | src/backend/postgres --syntax\nValid SQL\n\n$ cat ../src/test/regress/sql/advisory_lock.sql | src/backend/postgres --syntax\nERROR: syntax error at or near \"\\\" at character 99\nInvalid SQL\n\nThis could be useful for manual script checks, automated script checks\nand code editor integrations.\n\nNotice it just parses the SQL, it doesn't detect any \"runtime\"\nproblems like unexisting table names, etc.\n\nI have various ideas around this (like exposing similar functionality\ndirectly in SQL using custom function like pg_check_syntax), but I\nwould like to get some feedback first.\n\nWhat do you think?\nenhnace\nPS: I wasn't able to find any automated tests for \"postgres\" command\nto enhance with, are there any?\n\nPS2: Patch could be found at https://github.com/simi/postgres/pull/8 as well.",
"msg_date": "Fri, 15 Dec 2023 13:21:55 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add --syntax to postgres for SQL syntax checking"
},
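As an illustration of the grammar-only check proposed above, here is a minimal sketch of the core test. This is not the code from the actual pull request; it assumes a backend environment where memory contexts and error reporting are already initialized, and the helper name query_parses is made up for this sketch.

#include "postgres.h"

#include "parser/parser.h"

/*
 * Return true if the string gets through gram.y (raw_parser), false if a
 * syntax error is raised.  No catalog access or parse analysis happens here.
 */
static bool
query_parses(const char *sql)
{
    bool        ok = true;

    PG_TRY();
    {
        (void) raw_parser(sql, RAW_PARSE_DEFAULT);
    }
    PG_CATCH();
    {
        /* print the error (as in the examples above), then keep going */
        EmitErrorReport();
        FlushErrorState();
        ok = false;
    }
    PG_END_TRY();

    return ok;
}

A --syntax mode would then only need to read STDIN, call something like this, print Valid SQL or Invalid SQL, and set the exit code accordingly.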
{
"msg_contents": "On Fri, 2023-12-15 at 13:21 +0100, Josef Šimánek wrote:\n> Inspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\n> of ideas for PostgreSQL\n> (https://riggs.business/blog/f/postgresql-todo-2023), I have crafted a\n> quick patch to do SQL syntax validation.\n> \n> What do you think?\n\nI like the idea. But what will happen if the SQL statement references\ntables or other objects, since we have no database?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 15 Dec 2023 14:09:10 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "Dne pá 15. 12. 2023 14:09 uživatel Laurenz Albe <[email protected]>\nnapsal:\n\n> On Fri, 2023-12-15 at 13:21 +0100, Josef Šimánek wrote:\n> > Inspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\n> > of ideas for PostgreSQL\n> > (https://riggs.business/blog/f/postgresql-todo-2023), I have crafted a\n> > quick patch to do SQL syntax validation.\n> >\n> > What do you think?\n>\n> I like the idea. But what will happen if the SQL statement references\n> tables or other objects, since we have no database?\n>\n\nIt checks just the identifier is valid from parser perspective, like it is\nvalid table name.\n\n\n> Yours,\n> Laurenz Albe\n>\n\nDne pá 15. 12. 2023 14:09 uživatel Laurenz Albe <[email protected]> napsal:On Fri, 2023-12-15 at 13:21 +0100, Josef Šimánek wrote:\n> Inspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\n> of ideas for PostgreSQL\n> (https://riggs.business/blog/f/postgresql-todo-2023), I have crafted a\n> quick patch to do SQL syntax validation.\n> \n> What do you think?\n\nI like the idea. But what will happen if the SQL statement references\ntables or other objects, since we have no database?It checks just the identifier is valid from parser perspective, like it is valid table name.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 15 Dec 2023 14:14:11 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "Dne pá 15. 12. 2023 14:14 uživatel Josef Šimánek <[email protected]>\nnapsal:\n\n>\n>\n> Dne pá 15. 12. 2023 14:09 uživatel Laurenz Albe <[email protected]>\n> napsal:\n>\n>> On Fri, 2023-12-15 at 13:21 +0100, Josef Šimánek wrote:\n>> > Inspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\n>> > of ideas for PostgreSQL\n>> > (https://riggs.business/blog/f/postgresql-todo-2023), I have crafted a\n>> > quick patch to do SQL syntax validation.\n>> >\n>> > What do you think?\n>>\n>> I like the idea. But what will happen if the SQL statement references\n>> tables or other objects, since we have no database?\n>>\n>\n> It checks just the identifier is valid from parser perspective, like it is\n> valid table name.\n>\n\nThere can by two levels, like plpgsql or like pllgsql_check\n\nRegards\n\nPavel\n\n>\n>\n>> Yours,\n>> Laurenz Albe\n>>\n>\n\nDne pá 15. 12. 2023 14:14 uživatel Josef Šimánek <[email protected]> napsal:Dne pá 15. 12. 2023 14:09 uživatel Laurenz Albe <[email protected]> napsal:On Fri, 2023-12-15 at 13:21 +0100, Josef Šimánek wrote:\n> Inspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\n> of ideas for PostgreSQL\n> (https://riggs.business/blog/f/postgresql-todo-2023), I have crafted a\n> quick patch to do SQL syntax validation.\n> \n> What do you think?\n\nI like the idea. But what will happen if the SQL statement references\ntables or other objects, since we have no database?It checks just the identifier is valid from parser perspective, like it is valid table name.There can by two levels, like plpgsql or like pllgsql_checkRegardsPavel\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 15 Dec 2023 14:19:07 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Fri, 2023-12-15 at 13:21 +0100, Josef Šimánek wrote:\n>> Inspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\n>> of ideas for PostgreSQL\n>> (https://riggs.business/blog/f/postgresql-todo-2023), I have crafted a\n>> quick patch to do SQL syntax validation.\n>> \n>> What do you think?\n\n> I like the idea. But what will happen if the SQL statement references\n> tables or other objects, since we have no database?\n\nThis seems like a fairly useless wart to me. What does it do that\nyou can't do better with existing facilities (psql etc)?\n\nIn the big picture a command line switch in the postgres executable\ndoesn't feel like the right place for this. There's no good reason\nto assume that the server executable will be installed where you want\nthis capability; not to mention the possibility of version skew\nbetween that executable and whatever installation you're actually\nrunning on.\n\nAnother thing I don't like is that this exposes to the user what ought\nto be purely an implementation detail, namely the division of labor\nbetween gram.y (raw_parser()) and the rest of the parser. There are\nchecks that a user would probably see as \"syntax checks\" that don't\nhappen in gram.y, and conversely there are some things we reject there\nthat seem more like semantic than syntax issues.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Dec 2023 09:50:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "pá 15. 12. 2023 v 15:50 odesílatel Tom Lane <[email protected]> napsal:\n>\n> Laurenz Albe <[email protected]> writes:\n> > On Fri, 2023-12-15 at 13:21 +0100, Josef Šimánek wrote:\n> >> Inspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\n> >> of ideas for PostgreSQL\n> >> (https://riggs.business/blog/f/postgresql-todo-2023), I have crafted a\n> >> quick patch to do SQL syntax validation.\n> >>\n> >> What do you think?\n>\n> > I like the idea. But what will happen if the SQL statement references\n> > tables or other objects, since we have no database?\n>\n> This seems like a fairly useless wart to me. What does it do that\n> you can't do better with existing facilities (psql etc)?\n\nPer my experience, this is mostly useful during development to catch\nsyntax errors super early. For SELECT/INSERT/UPDATE/DELETE queries, it\nis usually enough to prepend with EXPLAIN and run. But EXPLAIN doesn't\nsupport all SQL like DDL statements. Let's say I have a long SQL\nscript I'm working on and there is typo in the middle like ALTERR\ninstead of ALTER. Is there any simple way to detect this without\nactually running the statement in psql or other existing facilities?\nThis check could be simply combined with editor capabilities to be run\non each SQL file save to get quick feedback on this kind of mistakes\nfor great developer experience.\n\n> In the big picture a command line switch in the postgres executable\n> doesn't feel like the right place for this. There's no good reason\n> to assume that the server executable will be installed where you want\n> this capability; not to mention the possibility of version skew\n> between that executable and whatever installation you're actually\n> running on.\n\nThis is mostly intended for SQL developers and CI systems where\nPostgreSQL backend is usually installed and in the actual version\nneeded. I agree postgres is not the best place (even it makes\npartially sense to me), but as I mentioned, I wasn't able to craft a\nquick patch with a better place to put this in. What would you\nrecommend? Separate executable like pg_syntaxcheck?\n\n> Another thing I don't like is that this exposes to the user what ought\n> to be purely an implementation detail, namely the division of labor\n> between gram.y (raw_parser()) and the rest of the parser. There are\n> checks that a user would probably see as \"syntax checks\" that don't\n> happen in gram.y, and conversely there are some things we reject there\n> that seem more like semantic than syntax issues.\n\nI have no big insight into SQL parsing. Can you please share examples\nof given concerns? Is there anything better than raw_parser() for this\npurpose?\n\n> regards, tom lane\n\n\n",
"msg_date": "Fri, 15 Dec 2023 16:05:22 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 8:05 AM Josef Šimánek <[email protected]>\nwrote:\n\n> pá 15. 12. 2023 v 15:50 odesílatel Tom Lane <[email protected]> napsal:\n> >\n> > Laurenz Albe <[email protected]> writes:\n> > > On Fri, 2023-12-15 at 13:21 +0100, Josef Šimánek wrote:\n> > >> Inspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\n> > >> of ideas for PostgreSQL\n> > >> (https://riggs.business/blog/f/postgresql-todo-2023), I have crafted\n> a\n> > >> quick patch to do SQL syntax validation.\n> > >>\n> > >> What do you think?\n> >\n> > > I like the idea. But what will happen if the SQL statement references\n> > > tables or other objects, since we have no database?\n> >\n> > This seems like a fairly useless wart to me. What does it do that\n> > you can't do better with existing facilities (psql etc)?\n>\n> Per my experience, this is mostly useful during development to catch\n> syntax errors super early.\n\n\nI'd suggest helping this project get production capable since it already is\ntrying to integrate with the development ecosystem you describe here.\n\nhttps://github.com/supabase/postgres_lsp\n\nI agree that developing this as a new executable for the purpose is needed\nin order to best conform to existing conventions.\n\nDavid J.\n\nOn Fri, Dec 15, 2023 at 8:05 AM Josef Šimánek <[email protected]> wrote:pá 15. 12. 2023 v 15:50 odesílatel Tom Lane <[email protected]> napsal:\n>\n> Laurenz Albe <[email protected]> writes:\n> > On Fri, 2023-12-15 at 13:21 +0100, Josef Šimánek wrote:\n> >> Inspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\n> >> of ideas for PostgreSQL\n> >> (https://riggs.business/blog/f/postgresql-todo-2023), I have crafted a\n> >> quick patch to do SQL syntax validation.\n> >>\n> >> What do you think?\n>\n> > I like the idea. But what will happen if the SQL statement references\n> > tables or other objects, since we have no database?\n>\n> This seems like a fairly useless wart to me. What does it do that\n> you can't do better with existing facilities (psql etc)?\n\nPer my experience, this is mostly useful during development to catch\nsyntax errors super early.I'd suggest helping this project get production capable since it already is trying to integrate with the development ecosystem you describe here.https://github.com/supabase/postgres_lspI agree that developing this as a new executable for the purpose is needed in order to best conform to existing conventions.David J.",
"msg_date": "Fri, 15 Dec 2023 08:16:12 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "pá 15. 12. 2023 v 16:16 odesílatel David G. Johnston\n<[email protected]> napsal:\n>\n> On Fri, Dec 15, 2023 at 8:05 AM Josef Šimánek <[email protected]> wrote:\n>>\n>> pá 15. 12. 2023 v 15:50 odesílatel Tom Lane <[email protected]> napsal:\n>> >\n>> > Laurenz Albe <[email protected]> writes:\n>> > > On Fri, 2023-12-15 at 13:21 +0100, Josef Šimánek wrote:\n>> > >> Inspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\n>> > >> of ideas for PostgreSQL\n>> > >> (https://riggs.business/blog/f/postgresql-todo-2023), I have crafted a\n>> > >> quick patch to do SQL syntax validation.\n>> > >>\n>> > >> What do you think?\n>> >\n>> > > I like the idea. But what will happen if the SQL statement references\n>> > > tables or other objects, since we have no database?\n>> >\n>> > This seems like a fairly useless wart to me. What does it do that\n>> > you can't do better with existing facilities (psql etc)?\n>>\n>> Per my experience, this is mostly useful during development to catch\n>> syntax errors super early.\n>\n>\n> I'd suggest helping this project get production capable since it already is trying to integrate with the development ecosystem you describe here.\n>\n> https://github.com/supabase/postgres_lsp\n\nIndeed LSP is the one of the targets of this feature. Currently it is\nusing https://github.com/pganalyze/libpg_query under the hood probably\nbecause of the same reasons I have described (parser is not available\nin public APIs of postgres_fe.h or libpq). Exposing basic parser\ncapabilities in postgres binary itself and also on SQL level by\npg_check_syntax function can prevent the need of \"hacking\" pg parser\nto be accessible outside of server binary.\n\n> I agree that developing this as a new executable for the purpose is needed in order to best conform to existing conventions.\n>\n> David J.\n>\n\n\n",
"msg_date": "Fri, 15 Dec 2023 16:20:43 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 8:20 AM Josef Šimánek <[email protected]>\nwrote:\n\n> (parser is not available\n> in public APIs of postgres_fe.h or libpq).\n>\n\nWhat about building \"libpg\" that does expose and exports some public APIs\nfor the parser? We can include a reference CLI implementation for basic\nusage of the functionality while leaving the actual language server project\noutside of core.\n\nDavid J.\n\nOn Fri, Dec 15, 2023 at 8:20 AM Josef Šimánek <[email protected]> wrote:(parser is not available\nin public APIs of postgres_fe.h or libpq).What about building \"libpg\" that does expose and exports some public APIs for the parser? We can include a reference CLI implementation for basic usage of the functionality while leaving the actual language server project outside of core.David J.",
"msg_date": "Fri, 15 Dec 2023 08:31:32 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "pá 15. 12. 2023 v 16:32 odesílatel David G. Johnston\n<[email protected]> napsal:\n>\n> On Fri, Dec 15, 2023 at 8:20 AM Josef Šimánek <[email protected]> wrote:\n>>\n>> (parser is not available\n>> in public APIs of postgres_fe.h or libpq).\n>\n>\n> What about building \"libpg\" that does expose and exports some public APIs for the parser? We can include a reference CLI implementation for basic usage of the functionality while leaving the actual language server project outside of core.\n\nLanguage server (LSP) can just benefit from this feature, but it\ndoesn't cover all possibilities since LSP is meant for one purpose ->\nrun in developer's code editor. Actual syntax check is more generic,\nable to cover CI checks and more. I would not couple this feature and\nLSP, LSP can just benefit from it (and it is usually built in a way\nthat uses other tools and packs them into LSP). Exposing this kind of\nSQL check doesn't mean something LSP related being implemented in\ncore. LSP can just benefit from this.\n\nExposing parser to libpg seems good idea, but I'm not sure how simple\nthat could be done since during my journey I have found out there are\na lot of dependencies which are not present in usual frontend code per\nmy understanding like memory contexts and removal of those\n(decoupling) would be huge project IMHO.\n\n> David J.\n\n\n",
"msg_date": "Fri, 15 Dec 2023 16:38:16 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "\n\nOn 12/15/23 16:38, Josef Šimánek wrote:\n> pá 15. 12. 2023 v 16:32 odesílatel David G. Johnston\n> <[email protected]> napsal:\n>>\n>> On Fri, Dec 15, 2023 at 8:20 AM Josef Šimánek <[email protected]> wrote:\n>>>\n>>> (parser is not available\n>>> in public APIs of postgres_fe.h or libpq).\n>>\n>>\n>> What about building \"libpg\" that does expose and exports some public APIs for the parser? We can include a reference CLI implementation for basic usage of the functionality while leaving the actual language server project outside of core.\n> \n> Language server (LSP) can just benefit from this feature, but it\n> doesn't cover all possibilities since LSP is meant for one purpose ->\n> run in developer's code editor. Actual syntax check is more generic,\n> able to cover CI checks and more. I would not couple this feature and\n> LSP, LSP can just benefit from it (and it is usually built in a way\n> that uses other tools and packs them into LSP). Exposing this kind of\n> SQL check doesn't mean something LSP related being implemented in\n> core. LSP can just benefit from this.\n> \n\nI don't know enough about LSP to have a good idea how to implement this\nfor PG, but my assumption would be we'd have some sort of library\nexposing \"parser\" to frontend tools, and also an in-core binary using\nthat library (say, src/bin/pg_syntax_check). And LSP would use that\nparser library too ...\n\nI think there's about 0% chance we'd add this to \"postgres\" binary.\n\n> Exposing parser to libpg seems good idea, but I'm not sure how simple\n> that could be done since during my journey I have found out there are\n> a lot of dependencies which are not present in usual frontend code per\n> my understanding like memory contexts and removal of those\n> (decoupling) would be huge project IMHO.\n> \n\nYou're right the grammar/parser expects a lot of backend infrastructure,\nso making it available to frontend is going to be challenging. But I\ndoubt there's a better way than starting with gram.y and either removing\nor adding the missing pieces (maybe only a mock version of it).\n\nI'm not a bison expert, but considering your goal seems to be a basic\nsyntax check, maybe you could simply rip out most of the bits depending\non backend stuff, or maybe replace them with some trivial no-op code?\n\nBut as Tom mentioned, the question is how far gram.y gets you. There's\nplenty of ereport(ERROR) calls in src/backend/parser/*.c our users might\neasily consider as parse errors ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 25 Feb 2024 23:24:24 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "ne 25. 2. 2024 v 23:24 odesílatel Tomas Vondra\n<[email protected]> napsal:\n>\n>\n>\n> On 12/15/23 16:38, Josef Šimánek wrote:\n> > pá 15. 12. 2023 v 16:32 odesílatel David G. Johnston\n> > <[email protected]> napsal:\n> >>\n> >> On Fri, Dec 15, 2023 at 8:20 AM Josef Šimánek <[email protected]> wrote:\n> >>>\n> >>> (parser is not available\n> >>> in public APIs of postgres_fe.h or libpq).\n> >>\n> >>\n> >> What about building \"libpg\" that does expose and exports some public APIs for the parser? We can include a reference CLI implementation for basic usage of the functionality while leaving the actual language server project outside of core.\n> >\n> > Language server (LSP) can just benefit from this feature, but it\n> > doesn't cover all possibilities since LSP is meant for one purpose ->\n> > run in developer's code editor. Actual syntax check is more generic,\n> > able to cover CI checks and more. I would not couple this feature and\n> > LSP, LSP can just benefit from it (and it is usually built in a way\n> > that uses other tools and packs them into LSP). Exposing this kind of\n> > SQL check doesn't mean something LSP related being implemented in\n> > core. LSP can just benefit from this.\n> >\n>\n> I don't know enough about LSP to have a good idea how to implement this\n> for PG, but my assumption would be we'd have some sort of library\n> exposing \"parser\" to frontend tools, and also an in-core binary using\n> that library (say, src/bin/pg_syntax_check). And LSP would use that\n> parser library too ...\n>\n> I think there's about 0% chance we'd add this to \"postgres\" binary.\n\nExposing parser to frontend tools makes no sense to me and even if it\nwould, it is a huge project not worth to be done just because of\nsyntax check. I can update the patch to prepare a new binary, but\nstill on the backend side. This syntax check should be equivalent to\nrunning a server locally, running a query and caring only about\nparsing part finished successfully. In my thinking, this belongs to\nbackend tools.\n\n> > Exposing parser to libpg seems good idea, but I'm not sure how simple\n> > that could be done since during my journey I have found out there are\n> > a lot of dependencies which are not present in usual frontend code per\n> > my understanding like memory contexts and removal of those\n> > (decoupling) would be huge project IMHO.\n> >\n>\n> You're right the grammar/parser expects a lot of backend infrastructure,\n> so making it available to frontend is going to be challenging. But I\n> doubt there's a better way than starting with gram.y and either removing\n> or adding the missing pieces (maybe only a mock version of it).\n>\n> I'm not a bison expert, but considering your goal seems to be a basic\n> syntax check, maybe you could simply rip out most of the bits depending\n> on backend stuff, or maybe replace them with some trivial no-op code?\n>\n> But as Tom mentioned, the question is how far gram.y gets you. There's\n> plenty of ereport(ERROR) calls in src/backend/parser/*.c our users might\n> easily consider as parse errors ...\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 25 Feb 2024 23:34:10 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On Sun, 25 Feb 2024 at 23:34, Josef Šimánek <[email protected]> wrote:\n> Exposing parser to frontend tools makes no sense to me\n\nNot everyone seems to agree with that, it's actually already done by\nLukas from pganalyze: https://github.com/pganalyze/libpg_query\n\n\n",
"msg_date": "Mon, 26 Feb 2024 08:20:05 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
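For context, a minimal C program against libpg_query's public header might look like the sketch below; the function and struct names are taken from that project's documented API (pg_query.h) and may differ slightly between its versions.

#include <stdio.h>

#include <pg_query.h>

int
main(void)
{
    /* Runs the extracted raw parser only; no running server is needed. */
    PgQueryParseResult result = pg_query_parse("UPDATEE mytab SET mycol = 42");

    if (result.error)
        printf("syntax error: %s (at character %d)\n",
               result.error->message, result.error->cursorpos);
    else
        printf("parse tree: %s\n", result.parse_tree);

    pg_query_free_parse_result(result);
    return 0;
}

Built with something like cc check.c -lpg_query, this rejects UPDATEE at the grammar level while happily accepting an UPDATE of a table that does not exist, which is exactly the raw-parser-only behaviour debated later in this thread.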
{
"msg_contents": "po 26. 2. 2024 v 8:20 odesílatel Jelte Fennema-Nio <[email protected]> napsal:\n>\n> On Sun, 25 Feb 2024 at 23:34, Josef Šimánek <[email protected]> wrote:\n> > Exposing parser to frontend tools makes no sense to me\n>\n> Not everyone seems to agree with that, it's actually already done by\n> Lukas from pganalyze: https://github.com/pganalyze/libpg_query\n\nI did a quick look. That's clearly amazing work, but it is not parser\nbeing exposed to frontend (refactored). It is (per my understanding)\nbackend code isolated to minimum to be able to parse query. It is\nstill bound to individual backend version and to backend source code.\nAnd it is close to my effort (I was about to start with a simpler\nversion not providing tokens, just the result), but instead of copying\nfiles from backend into separate project and shave off to minimum,\nprovide the same tool with backend utils directly.\n\n\n",
"msg_date": "Mon, 26 Feb 2024 10:31:53 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 5:24 PM Tomas Vondra\n<[email protected]> wrote:\n> I think there's about 0% chance we'd add this to \"postgres\" binary.\n\nSeveral people have taken this position, so I'm going to mark\nhttps://commitfest.postgresql.org/48/4704/ as Rejected.\n\nMy own opinion is that syntax checking is a useful thing to expose,\nbut I don't believe that this is a useful way to expose it. I think\nthis comment that Tom made upthread is right on target:\n\n# Another thing I don't like is that this exposes to the user what ought\n# to be purely an implementation detail, namely the division of labor\n# between gram.y (raw_parser()) and the rest of the parser. There are\n# checks that a user would probably see as \"syntax checks\" that don't\n# happen in gram.y, and conversely there are some things we reject there\n# that seem more like semantic than syntax issues.\n\nI think that what that means in practice is that, while this patch may\nseem to give reasonable results in simple tests, as soon as you try to\ndo slightly more complicated things with it, it's going to give weird\nresults, either failing to flag things that you'd expect it to flag,\nor flagging things you'd expect it not to flag. Fixing that would be\neither impossible or a huge amount of work depending on your point of\nview. If you take the point of view that we need to adjust things so\nthat the raw parser reports all the things that ought to be reported\nby a tool like this and none of the things that it shouldn't, then\nit's probably just a huge amount of work. If you take the point of\nview that what goes into the raw parser and what goes into parse\nanalysis ought to be an implementation decision untethered to what a\ntool like this ought to report, then fixing the problems would be\nimpossible, or at least, it would amount to throwing away this patch\nand starting over. I think the latter point of view, which Tom has\nalready taken, would be the more prevalent view among hackers by far,\nbut even if the former view prevailed, who is going to do all that\nwork?\n\nI strongly suspect that doing something useful in this area requires\nabout two orders of magnitude more code than are included in this\npatch, and a completely different design. If it actually worked well\nto do something this simple, somebody probably would have done it\nalready. In fact, there are already tools out there for validation,\nlike https://github.com/okbob/plpgsql_check for example. That tool\ndoesn't do exactly the same thing as this patch is trying to do, but\nit does do other kinds of validation, and it's 19k lines of code, vs.\nthe 45 lines of code in this patch, which I think reinforces the point\nthat you need to do something much more complicated than this to\ncreate real value.\n\nAlso, the patch as proposed, besides being 45 lines, also has zero\nlines of comments, no test cases, no documentation, and doesn't follow\nthe PostgreSQL coding standards. I'm not saying that to be mean, nor\nam I suggesting that Josef should go fix that stuff. It's perfectly\nreasonable to propose a small patch without a lot of research to see\nwhat people think -- but now we have the answer to that question:\npeople think it won't work. So Josef should now decide to either give\nup, or try a new approach, or if he's really sure that all the\nfeedback that has been given so far is completely wrong, he can try to\ndemonstrate that the patch does all kinds of wonderful things with\nvery few disadvantages. 
But I think if he does that last, he's going\nto find that it's not really possible, because I'm pretty sure that\nTom is right.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 13:42:58 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "st 15. 5. 2024 v 19:43 odesílatel Robert Haas <[email protected]> napsal:\n>\n> On Sun, Feb 25, 2024 at 5:24 PM Tomas Vondra\n> <[email protected]> wrote:\n> > I think there's about 0% chance we'd add this to \"postgres\" binary.\n>\n> Several people have taken this position, so I'm going to mark\n> https://commitfest.postgresql.org/48/4704/ as Rejected.\n>\n> My own opinion is that syntax checking is a useful thing to expose,\n> but I don't believe that this is a useful way to expose it. I think\n> this comment that Tom made upthread is right on target:\n>\n> # Another thing I don't like is that this exposes to the user what ought\n> # to be purely an implementation detail, namely the division of labor\n> # between gram.y (raw_parser()) and the rest of the parser. There are\n> # checks that a user would probably see as \"syntax checks\" that don't\n> # happen in gram.y, and conversely there are some things we reject there\n> # that seem more like semantic than syntax issues.\n>\n> I think that what that means in practice is that, while this patch may\n> seem to give reasonable results in simple tests, as soon as you try to\n> do slightly more complicated things with it, it's going to give weird\n> results, either failing to flag things that you'd expect it to flag,\n> or flagging things you'd expect it not to flag. Fixing that would be\n> either impossible or a huge amount of work depending on your point of\n> view. If you take the point of view that we need to adjust things so\n> that the raw parser reports all the things that ought to be reported\n> by a tool like this and none of the things that it shouldn't, then\n> it's probably just a huge amount of work. If you take the point of\n> view that what goes into the raw parser and what goes into parse\n> analysis ought to be an implementation decision untethered to what a\n> tool like this ought to report, then fixing the problems would be\n> impossible, or at least, it would amount to throwing away this patch\n> and starting over. I think the latter point of view, which Tom has\n> already taken, would be the more prevalent view among hackers by far,\n> but even if the former view prevailed, who is going to do all that\n> work?\n>\n> I strongly suspect that doing something useful in this area requires\n> about two orders of magnitude more code than are included in this\n> patch, and a completely different design. If it actually worked well\n> to do something this simple, somebody probably would have done it\n> already. In fact, there are already tools out there for validation,\n> like https://github.com/okbob/plpgsql_check for example. That tool\n> doesn't do exactly the same thing as this patch is trying to do, but\n> it does do other kinds of validation, and it's 19k lines of code, vs.\n> the 45 lines of code in this patch, which I think reinforces the point\n> that you need to do something much more complicated than this to\n> create real value.\n>\n> Also, the patch as proposed, besides being 45 lines, also has zero\n> lines of comments, no test cases, no documentation, and doesn't follow\n> the PostgreSQL coding standards. I'm not saying that to be mean, nor\n> am I suggesting that Josef should go fix that stuff. It's perfectly\n> reasonable to propose a small patch without a lot of research to see\n> what people think -- but now we have the answer to that question:\n> people think it won't work. 
So Josef should now decide to either give\n> up, or try a new approach, or if he's really sure that all the\n> feedback that has been given so far is completely wrong, he can try to\n> demonstrate that the patch does all kinds of wonderful things with\n> very few disadvantages. But I think if he does that last, he's going\n> to find that it's not really possible, because I'm pretty sure that\n> Tom is right.\n\nI'm totally OK to mark this as rejected and indeed 45 lines were just\nminimal patch to create PoC (I have coded this during last PGConf.eu\nlunch break) and mainly to start discussion.\n\nI'm not sure everyone in this thread understands the reason for this\npatch, which is clearly my fault, since I have failed to explain. Main\nidea is to make a tool to validate query can be parsed, that's all.\nSimilar to running \"EXPLAIN query\", but not caring about the result\nand not caring about the DB structure (ignoring missing tables, ...),\njust checking it was successfully executed. This definitely belongs to\nthe server side and not to the client side, it is just a tool to\nvalidate that for this running PostgreSQL backend it is a \"parseable\"\nquery.\n\nI'm not giving up on this, but I hear there are various problems to\nexplore. If I understand it well, just running the parser to query\ndoesn't guarantee the query is valid, since it can fail later for\nvarious reasons (I mean other than missing table, ...). I wasn't aware\nof that. Also exposing this inside postgres binary seems\ncontroversial. I had an idea to expose parser result at SQL level with\na new command (similar to EXPLAIN), but you'll need running PostgreSQL\nbackend to be able to use this capability, which is against one of the\noriginal ideas. On the otherside PostgreSQL exposes a lot of \"meta\"\nfunctionality already and this could be a nice addition.\n\nI'll revisit the discussion again and try to submit another try once\nI'll get more context and experience.\n\nThanks everyone for constructive comments!\n\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 20:12:30 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "pá 15. 12. 2023 v 15:50 odesílatel Tom Lane <[email protected]> napsal:\n>\n> Laurenz Albe <[email protected]> writes:\n> > On Fri, 2023-12-15 at 13:21 +0100, Josef Šimánek wrote:\n> >> Inspired by Simon Riggs' keynote talk at PGCounf.eu 2023 sharing list\n> >> of ideas for PostgreSQL\n> >> (https://riggs.business/blog/f/postgresql-todo-2023), I have crafted a\n> >> quick patch to do SQL syntax validation.\n> >>\n> >> What do you think?\n>\n> > I like the idea. But what will happen if the SQL statement references\n> > tables or other objects, since we have no database?\n>\n> This seems like a fairly useless wart to me.\n\nthis hurts :'(\n\n>\n> In the big picture a command line switch in the postgres executable\n> doesn't feel like the right place for this. There's no good reason\n> to assume that the server executable will be installed where you want\n> this capability; not to mention the possibility of version skew\n> between that executable and whatever installation you're actually\n> running on.\n>\n> Another thing I don't like is that this exposes to the user what ought\n> to be purely an implementation detail, namely the division of labor\n> between gram.y (raw_parser()) and the rest of the parser. There are\n> checks that a user would probably see as \"syntax checks\" that don't\n> happen in gram.y, and conversely there are some things we reject there\n> that seem more like semantic than syntax issues.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2024 20:13:11 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]> writes:\n> I'm not sure everyone in this thread understands the reason for this\n> patch, which is clearly my fault, since I have failed to explain. Main\n> idea is to make a tool to validate query can be parsed, that's all.\n> Similar to running \"EXPLAIN query\", but not caring about the result\n> and not caring about the DB structure (ignoring missing tables, ...),\n> just checking it was successfully executed. This definitely belongs to\n> the server side and not to the client side, it is just a tool to\n> validate that for this running PostgreSQL backend it is a \"parseable\"\n> query.\n\nThe thing that was bothering me most about this is that I don't\nunderstand why that's a useful check. If I meant to type\n\n\tUPDATE mytab SET mycol = 42;\n\nand instead I type\n\n\tUPDATEE mytab SET mycol = 42;\n\nyour proposed feature would catch that; great. But if I type\n\n\tUPDATE mytabb SET mycol = 42;\n\nit won't. How does that make sense? I'm not entirely sure where\nto draw the line about what a \"syntax check\" should catch, but this\nseems a bit south of what I'd want in a syntax-checking editor.\n\nBTW, if you do feel that a pure grammar check is worthwhile, you\nshould look at the ecpg preprocessor, which does more or less that\nwith the SQL portions of its input. ecpg could be a better model\nto follow because it doesn't bring all the dependencies of the server\nand so is much more likely to appear in a client-side installation.\nIt's kind of an ugly, unmaintained mess at the moment, sadly.\n\nThe big knock on doing this client-side is that there might be\nversion skew compared to the server you're using --- but if you\nare not doing anything beyond a grammar-level check then your\nresults are pretty approximate anyway, ISTM. We've not heard\nanything suggesting that version skew is a huge problem for\necpg users.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2024 14:39:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
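For comparison, a client that does have a live server available can already separate the two cases above by preparing the statement and looking at the SQLSTATE: class 42601 is a grammar-level syntax error, while 42P01 (undefined_table) only appears once parse analysis consults the catalogs. A rough libpq sketch, with a placeholder connection string and the table names taken from the example above:

#include <stdio.h>
#include <string.h>

#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");   /* placeholder */
    PGresult   *res;
    const char *sqlstate;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Parse analysis runs here, so a missing table is reported as well. */
    res = PQprepare(conn, "", "UPDATE mytabb SET mycol = 42", 0, NULL);

    if (PQresultStatus(res) == PGRES_COMMAND_OK)
        printf("statement prepared OK\n");
    else
    {
        sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
        if (sqlstate && strncmp(sqlstate, "42601", 5) == 0)
            printf("grammar-level syntax error\n");
        else
            printf("rejected later (SQLSTATE %s): %s",
                   sqlstate ? sqlstate : "?", PQresultErrorMessage(res));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}

With UPDATEE the server reports 42601; with the misspelled table name it reports 42P01, which is exactly the split being described above.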
{
"msg_contents": "Tom Lane:\n> The thing that was bothering me most about this is that I don't\n> understand why that's a useful check. If I meant to type\n> \n> \tUPDATE mytab SET mycol = 42;\n> \n> and instead I type\n> \n> \tUPDATEE mytab SET mycol = 42;\n> \n> your proposed feature would catch that; great. But if I type\n> \n> \tUPDATE mytabb SET mycol = 42;\n> \n> it won't. How does that make sense? I'm not entirely sure where\n> to draw the line about what a \"syntax check\" should catch, but this\n> seems a bit south of what I'd want in a syntax-checking editor.\n> \n> BTW, if you do feel that a pure grammar check is worthwhile, you\n> should look at the ecpg preprocessor, which does more or less that\n> with the SQL portions of its input. ecpg could be a better model\n> to follow because it doesn't bring all the dependencies of the server\n> and so is much more likely to appear in a client-side installation.\n> It's kind of an ugly, unmaintained mess at the moment, sadly.\n\nWould working with ecpg allow to get back a parse tree of the query to \ndo stuff with that?\n\nThis is really what is missing for the ecosystem. A libpqparser for \ntools to use: Formatters, linters, query rewriters, simple syntax \ncheckers... they are all missing access to postgres' own parser.\n\nBest,\n\nWolfgang\n\n\n\n",
"msg_date": "Wed, 15 May 2024 20:49:17 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "st 15. 5. 2024 v 20:39 odesílatel Tom Lane <[email protected]> napsal:\n>\n> =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]> writes:\n> > I'm not sure everyone in this thread understands the reason for this\n> > patch, which is clearly my fault, since I have failed to explain. Main\n> > idea is to make a tool to validate query can be parsed, that's all.\n> > Similar to running \"EXPLAIN query\", but not caring about the result\n> > and not caring about the DB structure (ignoring missing tables, ...),\n> > just checking it was successfully executed. This definitely belongs to\n> > the server side and not to the client side, it is just a tool to\n> > validate that for this running PostgreSQL backend it is a \"parseable\"\n> > query.\n>\n> The thing that was bothering me most about this is that I don't\n> understand why that's a useful check. If I meant to type\n>\n> UPDATE mytab SET mycol = 42;\n>\n> and instead I type\n>\n> UPDATEE mytab SET mycol = 42;\n>\n> your proposed feature would catch that; great. But if I type\n>\n> UPDATE mytabb SET mycol = 42;\n>\n> it won't. How does that make sense? I'm not entirely sure where\n> to draw the line about what a \"syntax check\" should catch, but this\n> seems a bit south of what I'd want in a syntax-checking editor.\n\nThis is exactly where the line is drawn. My motivation is not to use\nthis feature for syntax check in editor (even could be used to find\nthose banalities). I'm looking to improve automation to be able to\ndetect those banalities as early as possible. Let's say there is\ncomplex CI automation configuring and starting PostgreSQL backend,\nloading some data, ... and in the end all this is useless, since there\nis this kind of simple mistake like UPDATEE. I would like to detect\nthis problem as early as possible and stop the complex CI pipeline to\nsave time and also to save resources (= money) by failing super-early\nand reporting back. This kind of mistake could be simply introduced by\nlike wrongly resolved git conflict, human typing error ...\n\nThis kind of mistake (typo, ...) can be usually spotted super early.\nIn compiled languages during compilation, in interpreted languages\n(like Ruby) at program start (since code is parsed as one of the first\nsteps). There is no such early detection possible for PostgreSQL\ncurrently IMHO.\n\n> BTW, if you do feel that a pure grammar check is worthwhile, you\n> should look at the ecpg preprocessor, which does more or less that\n> with the SQL portions of its input. ecpg could be a better model\n> to follow because it doesn't bring all the dependencies of the server\n> and so is much more likely to appear in a client-side installation.\n> It's kind of an ugly, unmaintained mess at the moment, sadly.\n>\n> The big knock on doing this client-side is that there might be\n> version skew compared to the server you're using --- but if you\n> are not doing anything beyond a grammar-level check then your\n> results are pretty approximate anyway, ISTM. We've not heard\n> anything suggesting that version skew is a huge problem for\n> ecpg users.\n\nThanks for the info, I'll check.\n\n> regards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2024 21:00:05 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "[email protected] writes:\n> Tom Lane:\n>> BTW, if you do feel that a pure grammar check is worthwhile, you\n>> should look at the ecpg preprocessor, which does more or less that\n>> with the SQL portions of its input.\n\n> Would working with ecpg allow to get back a parse tree of the query to \n> do stuff with that?\n\nNo, ecpg isn't interested in building a syntax tree.\n\n> This is really what is missing for the ecosystem. A libpqparser for \n> tools to use: Formatters, linters, query rewriters, simple syntax \n> checkers... they are all missing access to postgres' own parser.\n\nTo get to that, you'd need some kind of agreement on what the syntax\ntree is. I doubt our existing implementation would be directly useful\nto very many tools, and even if it is, do they want to track constant\nversion-to-version changes?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2024 15:01:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On Wed, May 15, 2024 at 2:39 PM Tom Lane <[email protected]> wrote:\n> The thing that was bothering me most about this is that I don't\n> understand why that's a useful check. If I meant to type\n>\n> UPDATE mytab SET mycol = 42;\n>\n> and instead I type\n>\n> UPDATEE mytab SET mycol = 42;\n>\n> your proposed feature would catch that; great. But if I type\n>\n> UPDATE mytabb SET mycol = 42;\n>\n> it won't. How does that make sense? I'm not entirely sure where\n> to draw the line about what a \"syntax check\" should catch, but this\n> seems a bit south of what I'd want in a syntax-checking editor.\n\nI don't agree with this, actually. The first wrong example can never\nbe valid, while the second one can be valid given the right table\ndefinitions. That lines up quite nicely with the distinction between\nparsing and parse analysis. There is a problem with actually getting\nall the way there, I'm fairly sure, because we've got thousands of\nlines of gram.y and thousands of lines of parse analysis code that\nweren't all written with the idea of making a crisp distinction here.\nFor example, I'd like both EXPLAIN (WAFFLES) SELECT 1 and EXPLAIN\nWAFFLES SELECT 1 to be flagged as syntactically invalid, and with\nthings as they are that would not happen. Even for plannable\nstatements I would not be at all surprised to hear that there are a\nbunch of corner cases that we'd get wrong.\n\nBut I don't understand the idea that the concept doesn't make sense. I\nthink it is perfectly reasonable to imagine a world in which the\ninitial parsing takes care of reporting everything that can be\ndetermined by static analysis without knowing anything about catalog\ncontents, and parse analysis checks all the things that require\ncatalog access, and you can run the first set of checks and then\ndecide whether to proceed further. I think if I were designing a new\nsystem from scratch, I'd definitely want to set it up that way, and I\nthink moving our existing system in that direction would probably let\nus clean up a variety of warts along the way. Really, the only\nargument I see against it is that getting from where we are to there\nwould be a gigantic amount of work for the value we'd derive.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 15:03:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On Wed, May 15, 2024 at 2:12 PM Josef Šimánek <[email protected]> wrote:\n> I'm totally OK to mark this as rejected and indeed 45 lines were just\n> minimal patch to create PoC (I have coded this during last PGConf.eu\n> lunch break) and mainly to start discussion.\n\nThanks for understanding.\n\n> I'm not sure everyone in this thread understands the reason for this\n> patch, which is clearly my fault, since I have failed to explain. Main\n> idea is to make a tool to validate query can be parsed, that's all.\n> Similar to running \"EXPLAIN query\", but not caring about the result\n> and not caring about the DB structure (ignoring missing tables, ...),\n> just checking it was successfully executed. This definitely belongs to\n> the server side and not to the client side, it is just a tool to\n> validate that for this running PostgreSQL backend it is a \"parseable\"\n> query.\n\nI don't think it's at all obvious that this belongs on the server side\nrather than the client side. I think it could be done in either place,\nor both. I just think we don't have the infrastructure to do it\ncleanly, at present.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 15:05:15 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "Tom Lane:\n>> This is really what is missing for the ecosystem. A libpqparser for\n>> tools to use: Formatters, linters, query rewriters, simple syntax\n>> checkers... they are all missing access to postgres' own parser.\n> \n> To get to that, you'd need some kind of agreement on what the syntax\n> tree is. I doubt our existing implementation would be directly useful\n> to very many tools, and even if it is, do they want to track constant\n> version-to-version changes?\n\nCorrect, on top of what the syntax tree currently has, one would \nprobably need:\n- comments\n- locations (line number / character) for everything, including those of \ncomments\n\nOtherwise it's impossible to print proper SQL again without losing \ninformation.\n\nAnd then on top of that, to be really really useful, you'd need to be \nable to parse partial statements, too, to support all kinds of \"language \nserver\" applications.\n\nTracking version-to-version changes is exactly the reason why it would \nbe good to have that from upstream, imho. New syntax is added in \n(almost?) every release and everyone outside core trying to write their \nown parser and staying up2date with **all** the new syntax.. will \neventually fail.\n\nYes, there could be changes to the produced parse tree as well and you'd \nalso need to adjust, for example, your SQL-printers. But it should be \neasier to stay up2date than right now.\n\nBest,\n\nWolfgang\n\n\n",
"msg_date": "Wed, 15 May 2024 21:18:41 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, May 15, 2024 at 2:39 PM Tom Lane <[email protected]> wrote:\n>> The thing that was bothering me most about this is that I don't\n>> understand why that's a useful check. ...\n\n> But I don't understand the idea that the concept doesn't make sense.\n\nSorry: \"make sense\" was a poorly chosen phrase there. What I was\ndoubting, and continue to doubt, is that 100% checking of what\nyou can check without catalog access and 0% checking of the rest\nis a behavior that end users will find especially useful.\n\n> I think it is perfectly reasonable to imagine a world in which the\n> initial parsing takes care of reporting everything that can be\n> determined by static analysis without knowing anything about catalog\n> contents, and parse analysis checks all the things that require\n> catalog access, and you can run the first set of checks and then\n> decide whether to proceed further. I think if I were designing a new\n> system from scratch, I'd definitely want to set it up that way, and I\n> think moving our existing system in that direction would probably let\n> us clean up a variety of warts along the way. Really, the only\n> argument I see against it is that getting from where we are to there\n> would be a gigantic amount of work for the value we'd derive.\n\nI'm less enthusiatic about this than you are. I think it would likely\nproduce a slower and less maintainable system. Slower because we'd\nneed more passes over the query: what parse analysis does today would\nhave to be done in at least two separate steps. Less maintainable\nbecause knowledge about certain things would end up being more spread\naround the system. Taking your example of getting syntax checking to\nrecognize invalid EXPLAIN keywords: right now there's just one piece\nof code that knows what those options are, but this'd require two.\nAlso, \"run the first set of checks and then decide whether to proceed\nfurther\" seems like optimizing the system for broken queries over\nvalid ones, which I don't think is an appropriate goal.\n\nNow, I don't say that there'd be *no* benefit to reorganizing the\nsystem that way. But it wouldn't be an unalloyed win, and so I\nshare your bottom line that the costs would be out of proportion\nto the benefits.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2024 15:27:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On Wed, May 15, 2024 at 12:18 PM <[email protected]> wrote:\n\n> Tom Lane:\n> >> This is really what is missing for the ecosystem. A libpqparser for\n> >> tools to use: Formatters, linters, query rewriters, simple syntax\n> >> checkers... they are all missing access to postgres' own parser.\n> >\n> > To get to that, you'd need some kind of agreement on what the syntax\n> > tree is. I doubt our existing implementation would be directly useful\n> > to very many tools, and even if it is, do they want to track constant\n> > version-to-version changes?\n>\n> Correct, on top of what the syntax tree currently has, one would\n> probably need:\n> - comments\n> - locations (line number / character) for everything, including those of\n> comments\n>\n>\nI'm with the original patch idea at this point. A utility that simply runs\ntext through the parser, not parse analysis, and answers the question:\n\"Were you able to parse this?\" has both value and seems like something that\ncan be patched into core in a couple of hundred lines, not thousands, as\nhas already been demonstrated.\n\nSure, other questions are valid and other goals exist in the ecosystem, but\nthat doesn't diminish this one which is sufficiently justified for my +1 on\nthe idea.\n\nNow, in my ideal world something like this could be made as an extension so\nthat it can work on older versions and not have to be maintained by core.\nAnd likely grow more features over time. Is there anything fundamental\nabout this that prevents it being implemented in an extension and, if so,\nwhat can we add to core in v18 to alleviate that limitation?\n\nDavid J.\n\nOn Wed, May 15, 2024 at 12:18 PM <[email protected]> wrote:Tom Lane:\n>> This is really what is missing for the ecosystem. A libpqparser for\n>> tools to use: Formatters, linters, query rewriters, simple syntax\n>> checkers... they are all missing access to postgres' own parser.\n> \n> To get to that, you'd need some kind of agreement on what the syntax\n> tree is. I doubt our existing implementation would be directly useful\n> to very many tools, and even if it is, do they want to track constant\n> version-to-version changes?\n\nCorrect, on top of what the syntax tree currently has, one would \nprobably need:\n- comments\n- locations (line number / character) for everything, including those of \ncommentsI'm with the original patch idea at this point. A utility that simply runs text through the parser, not parse analysis, and answers the question: \"Were you able to parse this?\" has both value and seems like something that can be patched into core in a couple of hundred lines, not thousands, as has already been demonstrated.Sure, other questions are valid and other goals exist in the ecosystem, but that doesn't diminish this one which is sufficiently justified for my +1 on the idea.Now, in my ideal world something like this could be made as an extension so that it can work on older versions and not have to be maintained by core. And likely grow more features over time. Is there anything fundamental about this that prevents it being implemented in an extension and, if so, what can we add to core in v18 to alleviate that limitation?David J.",
"msg_date": "Wed, 15 May 2024 12:32:37 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
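One way the extension route mentioned above could look: a small C function that wraps raw_parser() and is exposed at the SQL level, so an existing server acts as the checking backend. This is a hedged sketch only; the function name pg_check_syntax is borrowed from the idea upthread and none of this is an existing extension.

#include "postgres.h"

#include "fmgr.h"
#include "parser/parser.h"
#include "utils/builtins.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(pg_check_syntax);

/* pg_check_syntax(text) returns boolean: does the string get through gram.y? */
Datum
pg_check_syntax(PG_FUNCTION_ARGS)
{
    char           *sql = text_to_cstring(PG_GETARG_TEXT_PP(0));
    MemoryContext   oldcontext = CurrentMemoryContext;
    bool            ok = true;

    PG_TRY();
    {
        (void) raw_parser(sql, RAW_PARSE_DEFAULT);
    }
    PG_CATCH();
    {
        /* swallow the syntax error and report it as a plain false */
        MemoryContextSwitchTo(oldcontext);
        FlushErrorState();
        ok = false;
    }
    PG_END_TRY();

    PG_RETURN_BOOL(ok);
}

The matching SQL script would declare CREATE FUNCTION pg_check_syntax(text) RETURNS boolean AS 'MODULE_PATHNAME' LANGUAGE C STRICT; then SELECT pg_check_syntax('UPDATEE mytab SET mycol = 42') returns false while the misspelled-table variant returns true. The version-skew concern stays, of course: the answer is only valid for the server the function is installed on.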
{
"msg_contents": "st 15. 5. 2024 v 21:28 odesílatel Tom Lane <[email protected]> napsal:\n>\n> Robert Haas <[email protected]> writes:\n> > On Wed, May 15, 2024 at 2:39 PM Tom Lane <[email protected]> wrote:\n> >> The thing that was bothering me most about this is that I don't\n> >> understand why that's a useful check. ...\n>\n> > But I don't understand the idea that the concept doesn't make sense.\n>\n> Sorry: \"make sense\" was a poorly chosen phrase there. What I was\n> doubting, and continue to doubt, is that 100% checking of what\n> you can check without catalog access and 0% checking of the rest\n> is a behavior that end users will find especially useful.\n\nBut that's completely different feature which is not exclusive and\nshouldn't block this other feature to do only exactly as specified.\n\n> > I think it is perfectly reasonable to imagine a world in which the\n> > initial parsing takes care of reporting everything that can be\n> > determined by static analysis without knowing anything about catalog\n> > contents, and parse analysis checks all the things that require\n> > catalog access, and you can run the first set of checks and then\n> > decide whether to proceed further. I think if I were designing a new\n> > system from scratch, I'd definitely want to set it up that way, and I\n> > think moving our existing system in that direction would probably let\n> > us clean up a variety of warts along the way. Really, the only\n> > argument I see against it is that getting from where we are to there\n> > would be a gigantic amount of work for the value we'd derive.\n>\n> I'm less enthusiatic about this than you are. I think it would likely\n> produce a slower and less maintainable system. Slower because we'd\n> need more passes over the query: what parse analysis does today would\n> have to be done in at least two separate steps. Less maintainable\n> because knowledge about certain things would end up being more spread\n> around the system. Taking your example of getting syntax checking to\n> recognize invalid EXPLAIN keywords: right now there's just one piece\n> of code that knows what those options are, but this'd require two.\n> Also, \"run the first set of checks and then decide whether to proceed\n> further\" seems like optimizing the system for broken queries over\n> valid ones, which I don't think is an appropriate goal.\n>\n> Now, I don't say that there'd be *no* benefit to reorganizing the\n> system that way. But it wouldn't be an unalloyed win, and so I\n> share your bottom line that the costs would be out of proportion\n> to the benefits.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2024 21:33:31 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "st 15. 5. 2024 v 21:33 odesílatel David G. Johnston\n<[email protected]> napsal:\n>\n> On Wed, May 15, 2024 at 12:18 PM <[email protected]> wrote:\n>>\n>> Tom Lane:\n>> >> This is really what is missing for the ecosystem. A libpqparser for\n>> >> tools to use: Formatters, linters, query rewriters, simple syntax\n>> >> checkers... they are all missing access to postgres' own parser.\n>> >\n>> > To get to that, you'd need some kind of agreement on what the syntax\n>> > tree is. I doubt our existing implementation would be directly useful\n>> > to very many tools, and even if it is, do they want to track constant\n>> > version-to-version changes?\n>>\n>> Correct, on top of what the syntax tree currently has, one would\n>> probably need:\n>> - comments\n>> - locations (line number / character) for everything, including those of\n>> comments\n>>\n>\n> I'm with the original patch idea at this point. A utility that simply runs text through the parser, not parse analysis, and answers the question: \"Were you able to parse this?\" has both value and seems like something that can be patched into core in a couple of hundred lines, not thousands, as has already been demonstrated.\n>\n> Sure, other questions are valid and other goals exist in the ecosystem, but that doesn't diminish this one which is sufficiently justified for my +1 on the idea.\n>\n> Now, in my ideal world something like this could be made as an extension so that it can work on older versions and not have to be maintained by core. And likely grow more features over time. Is there anything fundamental about this that prevents it being implemented in an extension and, if so, what can we add to core in v18 to alleviate that limitation?\n\nLike extension providing additional binary? Or what kind of extension\ndo you mean? One of the original ideas was to be able to do so (parse\nquery) without running postgres itself. Could extension be useful\nwithout running postgres backend?\n\n> David J.\n>\n\n\n",
"msg_date": "Wed, 15 May 2024 21:35:20 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On Wed, May 15, 2024 at 12:35 PM Josef Šimánek <[email protected]>\nwrote:\n\n> st 15. 5. 2024 v 21:33 odesílatel David G. Johnston\n> <[email protected]> napsal:\n>\n> > Now, in my ideal world something like this could be made as an extension\n> so that it can work on older versions and not have to be maintained by\n> core. And likely grow more features over time. Is there anything\n> fundamental about this that prevents it being implemented in an extension\n> and, if so, what can we add to core in v18 to alleviate that limitation?\n>\n> Like extension providing additional binary? Or what kind of extension\n> do you mean? One of the original ideas was to be able to do so (parse\n> query) without running postgres itself. Could extension be useful\n> without running postgres backend?\n>\n>\nPushing beyond my experience level here...but yes a separately installed\nbinary (extension is being used conceptually here, this doesn't involve\n\"create extension\") that can inspect pg_config to find out where\nbackend/postmaster library objects are installed and link to them.\n\nDavid J.\n\nOn Wed, May 15, 2024 at 12:35 PM Josef Šimánek <[email protected]> wrote:st 15. 5. 2024 v 21:33 odesílatel David G. Johnston\n<[email protected]> napsal:\n> Now, in my ideal world something like this could be made as an extension so that it can work on older versions and not have to be maintained by core. And likely grow more features over time. Is there anything fundamental about this that prevents it being implemented in an extension and, if so, what can we add to core in v18 to alleviate that limitation?\n\nLike extension providing additional binary? Or what kind of extension\ndo you mean? One of the original ideas was to be able to do so (parse\nquery) without running postgres itself. Could extension be useful\nwithout running postgres backend?Pushing beyond my experience level here...but yes a separately installed binary (extension is being used conceptually here, this doesn't involve \"create extension\") that can inspect pg_config to find out where backend/postmaster library objects are installed and link to them.David J.",
"msg_date": "Wed, 15 May 2024 12:43:06 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> Now, in my ideal world something like this could be made as an extension so\n> that it can work on older versions and not have to be maintained by core.\n> And likely grow more features over time. Is there anything fundamental\n> about this that prevents it being implemented in an extension and, if so,\n> what can we add to core in v18 to alleviate that limitation?\n\nIt'd be pretty trivial to create a function that takes a string\nand runs it through raw_parser --- I've got such things laying\nabout for microbenchmarking purposes, in fact. But the API that'd\npresent for tools such as editors is enormously different from\nthe proposed patch: there would need to be a running server and\nthey'd need to be able to log into it, plus there are more minor\nconcerns like having to wrap the string in valid quoting.\n\nNow on the plus side, once you'd bought into that environment,\nit'd be equally trivial to offer alternatives like \"run raw\nparsing and parse analysis, but don't run the query\". I continue\nto maintain that that's the set of checks you'd really want in a\nlot of use-cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2024 15:45:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
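A minimal sketch of the kind of server-side helper Tom describes above: a SQL-callable C function that feeds a string to the raw grammar only (no parse analysis, no catalog access). The function name is invented here, and the two-argument raw_parser() signature assumes PostgreSQL 14 or later; this is an illustration, not code from any posted patch.

#include "postgres.h"

#include "fmgr.h"
#include "parser/parser.h"
#include "utils/builtins.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(pg_syntax_check);

/*
 * pg_syntax_check(text) returns boolean
 *
 * Runs the input through the raw grammar only.  A syntax error is
 * reported through the normal ERROR machinery; on success return true.
 */
Datum
pg_syntax_check(PG_FUNCTION_ARGS)
{
	char	   *query = text_to_cstring(PG_GETARG_TEXT_PP(0));

	(void) raw_parser(query, RAW_PARSE_DEFAULT);

	PG_RETURN_BOOL(true);
}

As Tom notes, exposing the check this way requires a running server to log into and the statement has to be passed as a quoted string; the point of the sketch is only to show how little code sits between "a string" and "did the grammar accept it".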
{
"msg_contents": "On Wed, May 15, 2024 at 3:28 PM Tom Lane <[email protected]> wrote:\n> Sorry: \"make sense\" was a poorly chosen phrase there. What I was\n> doubting, and continue to doubt, is that 100% checking of what\n> you can check without catalog access and 0% checking of the rest\n> is a behavior that end users will find especially useful.\n\nYou might be right, but my guess is that you're wrong. Syntax\nhighlighting is very popular, and seems like a more sophisticated\nversion of that same concept. I don't personally like it or use it\nmyself, but I bet I'm hugely in the minority these days. And EDB\ncertainly gets customer requests for syntax checking of various kinds;\nwhether this particular kind would get more or less traction than\nother things is somewhat moot in view of the low likelihood of it\nactually happening.\n\n> I'm less enthusiatic about this than you are. I think it would likely\n> produce a slower and less maintainable system. Slower because we'd\n> need more passes over the query: what parse analysis does today would\n> have to be done in at least two separate steps. Less maintainable\n> because knowledge about certain things would end up being more spread\n> around the system. Taking your example of getting syntax checking to\n> recognize invalid EXPLAIN keywords: right now there's just one piece\n> of code that knows what those options are, but this'd require two.\n> Also, \"run the first set of checks and then decide whether to proceed\n> further\" seems like optimizing the system for broken queries over\n> valid ones, which I don't think is an appropriate goal.\n\nWell, we've talked before about other problems that stem from the fact\nthat DDL doesn't have a clear separation between parse analysis and\nexecution. I vaguely imagine that it would be valuable to clean that\nup for various reasons. But I haven't really thought it through, so\nI'm prepared to concede that it might have various downsides that\naren't presently obvious to me.\n\n> Now, I don't say that there'd be *no* benefit to reorganizing the\n> system that way. But it wouldn't be an unalloyed win, and so I\n> share your bottom line that the costs would be out of proportion\n> to the benefits.\n\nI'm glad we agree on that much, and don't feel a compelling need to\nlitigate the remaining differences between our positions, unless you\nreally want to. I'm just telling you what I think, and I'm pleased\nthat we think as similarly as we do, despite remaining differences.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 16:00:41 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On Wed, May 15, 2024 at 1:00 PM Robert Haas <[email protected]> wrote:\n\n> On Wed, May 15, 2024 at 3:28 PM Tom Lane <[email protected]> wrote:\n> > Sorry: \"make sense\" was a poorly chosen phrase there. What I was\n> > doubting, and continue to doubt, is that 100% checking of what\n> > you can check without catalog access and 0% checking of the rest\n> > is a behavior that end users will find especially useful.\n>\n> You might be right, but my guess is that you're wrong. Syntax\n> highlighting is very popular, and seems like a more sophisticated\n> version of that same concept.\n\n\nThe proposed seems distinctly less sophisticated though would be a starting\npoint.\n\nI think the documentation for --syntax check would read something like this:\n\npostgres --syntax-check={filename | -}\n\nPerforms a pass over the lexical structure of the script supplied in\nfilename or, if - is specified, standard input, then exits. The exit code\nis zero if no errors were found, otherwise it is 1, and the errors, at full\nverbosity, are printed to standard error. This does not involve reading\nthe configuration file and, by extension, will not detect errors that\nrequire knowledge of a database schema, including the system catalogs, to\nmanifest. There will be no false positives, but due to the prior rule,\nfalse negatives must be factored into its usage. Thus this option is most\nuseful as an initial triage point, quickly rejecting SQL code without\nrequiring a running PostgreSQL service.\n\nNote: This is exposed as a convenient way to get access to the outcome of\nperforming a raw parse within the specific version of the postgres binary\nbeing executed. The specific implementation of that parse is still\nnon-public. Likewise, PostgreSQL doesn't itself have a use for a raw parse\noutput beyond sending it to post-parse analysis. All of the catalog\nrequired checks, and potentially some that don't obviously need the\ncatalogs, happen in this post-parse step; which the syntax checking API\ndoes not expose. In short, the API here doesn't include any guarantees\nregarding the specific errors one should expect to see output, only the no\nfalse positive test result of performing the first stage raw parse.\n\nDavid J.\n\nIf in core I would still want to expose this as say a contrib module binary\ninstead of hacking it into postgres. It would be our first server program\nentry there.\n\nOn Wed, May 15, 2024 at 1:00 PM Robert Haas <[email protected]> wrote:On Wed, May 15, 2024 at 3:28 PM Tom Lane <[email protected]> wrote:\n> Sorry: \"make sense\" was a poorly chosen phrase there. What I was\n> doubting, and continue to doubt, is that 100% checking of what\n> you can check without catalog access and 0% checking of the rest\n> is a behavior that end users will find especially useful.\n\nYou might be right, but my guess is that you're wrong. Syntax\nhighlighting is very popular, and seems like a more sophisticated\nversion of that same concept.The proposed seems distinctly less sophisticated though would be a starting point.I think the documentation for --syntax check would read something like this:postgres --syntax-check={filename | -}Performs a pass over the lexical structure of the script supplied in filename or, if - is specified, standard input, then exits. The exit code is zero if no errors were found, otherwise it is 1, and the errors, at full verbosity, are printed to standard error. 
This does not involve reading the configuration file and, by extension, will not detect errors that require knowledge of a database schema, including the system catalogs, to manifest. There will be no false positives, but due to the prior rule, false negatives must be factored into its usage. Thus this option is most useful as an initial triage point, quickly rejecting SQL code without requiring a running PostgreSQL service.Note: This is exposed as a convenient way to get access to the outcome of performing a raw parse within the specific version of the postgres binary being executed. The specific implementation of that parse is still non-public. Likewise, PostgreSQL doesn't itself have a use for a raw parse output beyond sending it to post-parse analysis. All of the catalog required checks, and potentially some that don't obviously need the catalogs, happen in this post-parse step; which the syntax checking API does not expose. In short, the API here doesn't include any guarantees regarding the specific errors one should expect to see output, only the no false positive test result of performing the first stage raw parse.David J.If in core I would still want to expose this as say a contrib module binary instead of hacking it into postgres. It would be our first server program entry there.",
"msg_date": "Wed, 15 May 2024 18:35:12 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On Wed, May 15, 2024 at 6:35 PM David G. Johnston <\[email protected]> wrote:\n\n>\n> If in core I would still want to expose this as say a contrib module\n> binary instead of hacking it into postgres. It would be our first server\n> program entry there.\n>\n>\nSorry for self-reply but:\n\nMaybe name it \"pg_script_check\" with a, for now mandatory, \"--syntax-only\"\noption that enables this raw parse mode. Leaving room for executing this\nin an environment where there is, or it can launch, a running instance that\nthen performs post-parse analysis as well.\n\nDavid J.\n\nOn Wed, May 15, 2024 at 6:35 PM David G. Johnston <[email protected]> wrote:If in core I would still want to expose this as say a contrib module binary instead of hacking it into postgres. It would be our first server program entry there.Sorry for self-reply but:Maybe name it \"pg_script_check\" with a, for now mandatory, \"--syntax-only\" option that enables this raw parse mode. Leaving room for executing this in an environment where there is, or it can launch, a running instance that then performs post-parse analysis as well.David J.",
"msg_date": "Wed, 15 May 2024 18:39:14 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On Wed, 2024-05-15 at 14:39 -0400, Tom Lane wrote:\n> The thing that was bothering me most about this is that I don't\n> understand why that's a useful check. If I meant to type\n> \n> \tUPDATE mytab SET mycol = 42;\n> \n> and instead I type\n> \n> \tUPDATEE mytab SET mycol = 42;\n> \n> your proposed feature would catch that; great. But if I type\n> \n> \tUPDATE mytabb SET mycol = 42;\n> \n> it won't. How does that make sense?\n\nIt makes sense to me. I see a clear distinction between \"this is a\nvalid SQL statement\" and \"this is an SQL statement that will run on\na specific database with certain objects in it\".\n\nTo me, \"correct syntax\" is the former.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 16 May 2024 11:03:57 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
},
{
"msg_contents": "On 15.05.24 21:05, Robert Haas wrote:\n> I don't think it's at all obvious that this belongs on the server side\n> rather than the client side. I think it could be done in either place,\n> or both. I just think we don't have the infrastructure to do it\n> cleanly, at present.\n\nI think if you're going to do a syntax-check-with-catalog-lookup mode, \nyou need authentication and access control. The mode without catalog \nlookup doesn't need that. But it might be better to offer both modes \nthrough a similar interface (or at least plan ahead for that).\n\n\n\n",
"msg_date": "Thu, 16 May 2024 12:29:08 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --syntax to postgres for SQL syntax checking"
}
] |
[
{
"msg_contents": "Hello, hackers!\n\nI think about revisiting (1) ({CREATE INDEX, REINDEX} CONCURRENTLY\nimprovements) in some lighter way.\n\nYes, a serious bug was (2) caused by this optimization and now it reverted.\n\nBut what about a more safe idea in that direction:\n1) add new horizon which ignores PROC_IN_SAFE_IC backends and standbys queries\n2) use this horizon for settings LP_DEAD bit in indexes (excluding\nindexes being built of course)\n\nIndex LP_DEAD hints are not used by standby in any way (they are just\nignored), also heap scan done by index building does not use them as\nwell.\n\nBut, at the same time:\n1) index scans will be much faster during index creation or standby\nreporting queries\n2) indexes can keep them fit using different optimizations\n3) less WAL due to a huge amount of full pages writes (which caused by\ntons of LP_DEAD in indexes)\n\nThe patch seems more-less easy to implement.\nDoes it worth being implemented? Or to scary?\n\n[1]: https://postgr.es/m/[email protected]\n[2]: https://postgr.es/m/17485-396609c6925b982d%40postgresql.org\n\n\n",
"msg_date": "Fri, 15 Dec 2023 20:07:29 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Fri, 15 Dec 2023, 20:07 Michail Nikolaev, <[email protected]>\nwrote:\n\n> Hello, hackers!\n>\n> I think about revisiting (1) ({CREATE INDEX, REINDEX} CONCURRENTLY\n> improvements) in some lighter way.\n>\n> Yes, a serious bug was (2) caused by this optimization and now it reverted.\n>\n> But what about a more safe idea in that direction:\n> 1) add new horizon which ignores PROC_IN_SAFE_IC backends and standbys\n> queries\n> 2) use this horizon for settings LP_DEAD bit in indexes (excluding\n> indexes being built of course)\n>\n> Index LP_DEAD hints are not used by standby in any way (they are just\n> ignored), also heap scan done by index building does not use them as\n> well.\n>\n> But, at the same time:\n> 1) index scans will be much faster during index creation or standby\n> reporting queries\n> 2) indexes can keep them fit using different optimizations\n> 3) less WAL due to a huge amount of full pages writes (which caused by\n> tons of LP_DEAD in indexes)\n>\n> The patch seems more-less easy to implement.\n> Does it worth being implemented? Or to scary?\n>\n\nI hihgly doubt this is worth the additional cognitive overhead of another\nliveness state, and I think there might be other issues with marking index\ntuples dead in indexes before the table tuple is dead that I can't think of\nright now.\n\nI've thought about alternative solutions, too: how about getting a new\nsnapshot every so often?\nWe don't really care about the liveness of the already-scanned data; the\nsnapshots used for RIC are used only during the scan. C/RIC's relation's\nlock level means vacuum can't run to clean up dead line items, so as long\nas we only swap the backend's reported snapshot (thus xmin) while the scan\nis between pages we should be able to reduce the time C/RIC is the one\nbackend holding back cleanup of old tuples.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n> [1]: https://postgr.es/m/[email protected]\n> [2]: https://postgr.es/m/17485-396609c6925b982d%40postgresql.org\n>\n>\n>\n\nOn Fri, 15 Dec 2023, 20:07 Michail Nikolaev, <[email protected]> wrote:Hello, hackers!\n\nI think about revisiting (1) ({CREATE INDEX, REINDEX} CONCURRENTLY\nimprovements) in some lighter way.\n\nYes, a serious bug was (2) caused by this optimization and now it reverted.\n\nBut what about a more safe idea in that direction:\n1) add new horizon which ignores PROC_IN_SAFE_IC backends and standbys queries\n2) use this horizon for settings LP_DEAD bit in indexes (excluding\nindexes being built of course)\n\nIndex LP_DEAD hints are not used by standby in any way (they are just\nignored), also heap scan done by index building does not use them as\nwell.\n\nBut, at the same time:\n1) index scans will be much faster during index creation or standby\nreporting queries\n2) indexes can keep them fit using different optimizations\n3) less WAL due to a huge amount of full pages writes (which caused by\ntons of LP_DEAD in indexes)\n\nThe patch seems more-less easy to implement.\nDoes it worth being implemented? Or to scary?I hihgly doubt this is worth the additional cognitive overhead of another liveness state, and I think there might be other issues with marking index tuples dead in indexes before the table tuple is dead that I can't think of right now.I've thought about alternative solutions, too: how about getting a new snapshot every so often? We don't really care about the liveness of the already-scanned data; the snapshots used for RIC are used only during the scan. 
C/RIC's relation's lock level means vacuum can't run to clean up dead line items, so as long as we only swap the backend's reported snapshot (thus xmin) while the scan is between pages we should be able to reduce the time C/RIC is the one backend holding back cleanup of old tuples.Kind regards,Matthias van de MeentNeon (https://neon.tech)\n\n[1]: https://postgr.es/m/[email protected]\n[2]: https://postgr.es/m/17485-396609c6925b982d%40postgresql.org",
"msg_date": "Fri, 15 Dec 2023 22:11:59 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
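To make the snapshot-swapping idea above concrete, the calls involved would look roughly like the hypothetical helper below. This is not patch code; the caller would still have to point the ongoing heap scan at the returned snapshot, and where exactly in the build loop this is safe is the open question of the thread.

#include "postgres.h"

#include "utils/snapmgr.h"

/*
 * Hypothetical: called between heap pages by the concurrent index
 * build.  Drops the currently registered snapshot and registers a
 * fresh one, so the backend's advertised xmin no longer holds back
 * cleanup of tuples that have already been scanned.
 */
static Snapshot
refresh_build_snapshot(Snapshot snapshot)
{
	PopActiveSnapshot();
	UnregisterSnapshot(snapshot);

	snapshot = RegisterSnapshot(GetLatestSnapshot());
	PushActiveSnapshot(snapshot);

	return snapshot;
}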
{
"msg_contents": "Hello!\n\n> I've thought about alternative solutions, too: how about getting a new snapshot every so often?\n> We don't really care about the liveness of the already-scanned data; the snapshots used for RIC\n> are used only during the scan. C/RIC's relation's lock level means vacuum can't run to clean up\n> dead line items, so as long as we only swap the backend's reported snapshot (thus xmin) while\n> the scan is between pages we should be able to reduce the time C/RIC is the one backend\n> holding back cleanup of old tuples.\n\nHm, it looks like an interesting idea! It may be more dangerous, but\nat least it feels much more elegant than an LP_DEAD-related way.\nAlso, feels like we may apply this to both phases (first and the second scans).\nThe original patch (1) was helping only to the second one (after call\nto set_indexsafe_procflags).\n\nBut for the first scan we allowed to do so only for non-unique indexes\nbecause of:\n\n> * The reason for doing that is to avoid\n> * bogus unique-index failures due to concurrent UPDATEs (we might see\n> * different versions of the same row as being valid when we pass over them,\n> * if we used HeapTupleSatisfiesVacuum). This leaves us with an index that\n> * does not contain any tuples added to the table while we built the index.\n\nAlso, (1) was limited to indexes without expressions and predicates\n(2) because such may execute queries to other tables (sic!).\nOne possible solution is to add some checks to make sure no\nuser-defined functions are used.\nBut as far as I understand, it affects only CIC for now and does not\naffect the ability to use the proposed technique (updating snapshot\ntime to time).\n\nHowever, I think we need some more-less formal proof it is safe - it\nis really challenging to keep all the possible cases in the head. I’ll\ntry to do something here.\nAnother possible issue may be caused by the new locking pattern - we\nwill be required to wait for all transaction started before the ending\nof the phase to exit.\n\n[1]: https://postgr.es/m/[email protected]\n[2]: https://www.postgresql.org/message-id/flat/CAAaqYe_tq_Mtd9tdeGDsgQh%2BwMvouithAmcOXvCbLaH2PPGHvA%40mail.gmail.com#cbe3997b75c189c3713f243e25121c20\n\n\n",
"msg_date": "Sun, 17 Dec 2023 21:14:27 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Sun, 17 Dec 2023, 21:14 Michail Nikolaev, <[email protected]> wrote:\n>\n> Hello!\n>\n> > I've thought about alternative solutions, too: how about getting a new snapshot every so often?\n> > We don't really care about the liveness of the already-scanned data; the snapshots used for RIC\n> > are used only during the scan. C/RIC's relation's lock level means vacuum can't run to clean up\n> > dead line items, so as long as we only swap the backend's reported snapshot (thus xmin) while\n> > the scan is between pages we should be able to reduce the time C/RIC is the one backend\n> > holding back cleanup of old tuples.\n>\n> Hm, it looks like an interesting idea! It may be more dangerous, but\n> at least it feels much more elegant than an LP_DEAD-related way.\n> Also, feels like we may apply this to both phases (first and the second scans).\n> The original patch (1) was helping only to the second one (after call\n> to set_indexsafe_procflags).\n>\n> But for the first scan we allowed to do so only for non-unique indexes\n> because of:\n>\n> > * The reason for doing that is to avoid\n> > * bogus unique-index failures due to concurrent UPDATEs (we might see\n> > * different versions of the same row as being valid when we pass over them,\n> > * if we used HeapTupleSatisfiesVacuum). This leaves us with an index that\n> > * does not contain any tuples added to the table while we built the index.\n\nYes, for that we'd need an extra scan of the index that validates\nuniqueness. I think there was a proposal (though it may only have been\nan idea) some time ago, about turning existing non-unique indexes into\nunique ones by validating the data. Such a system would likely be very\nuseful to enable this optimization.\n\n> Also, (1) was limited to indexes without expressions and predicates\n> (2) because such may execute queries to other tables (sic!).\n\nNote that the use of such expressions would be a violation of the\nfunction's definition; it would depend on data from other tables which\nmakes the function behave like a STABLE function, as opposed to the\nIMMUTABLE that is required for index expressions. So, I don't think we\nshould specially care about being correct for incorrectly marked\nfunction definitions.\n\n> One possible solution is to add some checks to make sure no\n> user-defined functions are used.\n> But as far as I understand, it affects only CIC for now and does not\n> affect the ability to use the proposed technique (updating snapshot\n> time to time).\n>\n> However, I think we need some more-less formal proof it is safe - it\n> is really challenging to keep all the possible cases in the head. I’ll\n> try to do something here.\n\nI just realised there is one issue with this design: We can't cheaply\nreset the snapshot during the second table scan:\nIt is critically important that the second scan of R/CIC uses an index\ncontents summary (made with index_bulk_delete) that was created while\nthe current snapshot was already registered. If we didn't do that, the\nfollowing would occur:\n\n1. The index is marked ready for inserts from concurrent backends, but\nnot yet ready for queries.\n2. We get the bulkdelete data\n3. A concurrent backend inserts a new tuple T on heap page P, inserts\nit into the index, and commits. This tuple is not in the summary, but\nhas been inserted into the index.\n4. R/CIC resets the snapshot, making T visible.\n5. 
R/CIC scans page P, finds that tuple T has to be indexed but is not\npresent in the summary, thus inserts that tuple into the index (which\nalready had it inserted at 3)\n\nThis thus would be a logic bug, as indexes assume at-most-once\nsemantics for index tuple insertion; duplicate insertion are an error.\n\nSo, the \"reset the snapshot every so often\" trick cannot be applied in\nphase 3 (the rescan), or we'd have to do an index_bulk_delete call\nevery time we reset the snapshot. Rescanning might be worth the cost\n(e.g. when using BRIN), but that is very unlikely.\n\nAlternatively, we'd need to find another way to prevent us from\ninserting these duplicate entries - maybe by storing the scan's data\nin a buffer to later load into the index after another\nindex_bulk_delete()? Counterpoint: for BRIN indexes that'd likely\nrequire a buffer much larger than the result index would be.\n\nEither way, for the first scan (i.e. phase 2 \"build new indexes\") this\nis not an issue: we don't care about what transaction adds/deletes\ntuples at that point.\nFor all we know, all tuples of the table may be deleted concurrently\nbefore we even allow concurrent backends to start inserting tuples,\nand the algorithm would still work as it does right now.\n\n> Another possible issue may be caused by the new locking pattern - we\n> will be required to wait for all transaction started before the ending\n> of the phase to exit.\n\nWhat do you mean by \"new locking pattern\"? We already keep an\nShareUpdateExclusiveLock on every heap table we're accessing during\nR/CIC, and that should already prevent any concurrent VACUUM\noperations, right?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 18 Dec 2023 00:53:34 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hello!\n\n> Also, feels like we may apply this to both phases (first and the second scans).\n> The original patch (1) was helping only to the second one (after call\n> to set_indexsafe_procflags).\n\nOops, I was wrong here. The original version of the patch was also applied to\nboth phases.\n\n> Note that the use of such expressions would be a violation of the\n> function's definition; it would depend on data from other tables which\n> makes the function behave like a STABLE function, as opposed to the\n> IMMUTABLE that is required for index expressions. So, I don't think we\n> should specially care about being correct for incorrectly marked\n> function definitions.\n\nYes, but such cases could probably cause crashes also...\nSo, I think it is better to check them for custom functions. But I\nstill not sure -\nif such limitations still required for proposed optimization or not.\n\n> I just realised there is one issue with this design: We can't cheaply\n> reset the snapshot during the second table scan:\n> It is critically important that the second scan of R/CIC uses an index\n> contents summary (made with index_bulk_delete) that was created while\n> the current snapshot was already registered.\n\n> So, the \"reset the snapshot every so often\" trick cannot be applied in\n> phase 3 (the rescan), or we'd have to do an index_bulk_delete call\n> every time we reset the snapshot. Rescanning might be worth the cost\n> (e.g. when using BRIN), but that is very unlikely.\n\nHm, I think it is still possible. We could just manually recheck the\ntuples we see\nto the snapshot currently used for the scan. If an \"old\" snapshot can see\nthe tuple also (HeapTupleSatisfiesHistoricMVCC) then search for it in the\nindex summary.\n\n> What do you mean by \"new locking pattern\"? We already keep an\n> ShareUpdateExclusiveLock on every heap table we're accessing during\n> R/CIC, and that should already prevent any concurrent VACUUM\n> operations, right?\n\nI was thinking not about \"classical\" locking, but about waiting for\nother backends\nby WaitForLockers(heaplocktag, ShareLock, true). But I think\neverything should be\nfine.\n\nBest regards,\nMichail.\n\n\n",
"msg_date": "Wed, 20 Dec 2023 10:56:33 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Wed, 20 Dec 2023 at 10:56, Michail Nikolaev\n<[email protected]> wrote:\n> > Note that the use of such expressions would be a violation of the\n> > function's definition; it would depend on data from other tables which\n> > makes the function behave like a STABLE function, as opposed to the\n> > IMMUTABLE that is required for index expressions. So, I don't think we\n> > should specially care about being correct for incorrectly marked\n> > function definitions.\n>\n> Yes, but such cases could probably cause crashes also...\n> So, I think it is better to check them for custom functions. But I\n> still not sure -\n> if such limitations still required for proposed optimization or not.\n\nI think contents could be inconsistent, but not more inconsistent than\nif the index was filled across multiple transactions using inserts.\nEither way I don't see it breaking more things that are not already\nbroken in that way in other places - at most it will introduce another\npath that exposes the broken state caused by mislabeled functions.\n\n> > I just realised there is one issue with this design: We can't cheaply\n> > reset the snapshot during the second table scan:\n> > It is critically important that the second scan of R/CIC uses an index\n> > contents summary (made with index_bulk_delete) that was created while\n> > the current snapshot was already registered.\n>\n> > So, the \"reset the snapshot every so often\" trick cannot be applied in\n> > phase 3 (the rescan), or we'd have to do an index_bulk_delete call\n> > every time we reset the snapshot. Rescanning might be worth the cost\n> > (e.g. when using BRIN), but that is very unlikely.\n>\n> Hm, I think it is still possible. We could just manually recheck the\n> tuples we see\n> to the snapshot currently used for the scan. If an \"old\" snapshot can see\n> the tuple also (HeapTupleSatisfiesHistoricMVCC) then search for it in the\n> index summary.\nThat's an interesting method.\n\nHow would this deal with tuples not visible to the old snapshot?\nPresumably we can assume they're newer than that snapshot (the old\nsnapshot didn't have it, but the new one does, so it's committed after\nthe old snapshot, making them newer), so that backend must have\ninserted it into the index already, right?\n\n> HeapTupleSatisfiesHistoricMVCC\n\nThat function has this comment marker:\n \"Only usable on tuples from catalog tables!\"\nIs that correct even for this?\n\nShould this deal with any potential XID wraparound, too?\nHow does this behave when the newly inserted tuple's xmin gets frozen?\nThis would be allowed to happen during heap page pruning, afaik - no\nrules that I know of which are against that - but it would create\nissues where normal snapshot visibility rules would indicate it\nvisible to both snapshots regardless of whether it actually was\nvisible to the older snapshot when that snapshot was created...\n\nEither way, \"Historic snapshot\" isn't something I've worked with\nbefore, so that goes onto my \"figure out how it works\" pile.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 20 Dec 2023 12:14:32 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hello!\n\n> How would this deal with tuples not visible to the old snapshot?\n> Presumably we can assume they're newer than that snapshot (the old\n> snapshot didn't have it, but the new one does, so it's committed after\n> the old snapshot, making them newer), so that backend must have\n> inserted it into the index already, right?\n\nYes, exactly.\n\n>> HeapTupleSatisfiesHistoricMVCC\n> That function has this comment marker:\n> \"Only usable on tuples from catalog tables!\"\n> Is that correct even for this?\n\nYeah, we just need HeapTupleSatisfiesVisibility (which calls\nHeapTupleSatisfiesMVCC) instead.\n\n> Should this deal with any potential XID wraparound, too?\n\nYeah, looks like we should care about such case somehow.\n\nPossible options here:\n\n1) Skip vac_truncate_clog while CIC is running. In fact, I think it's\nnot that much worse than the current state - datfrozenxid is still\nupdated in the catalog and will be considered the next time\nvac_update_datfrozenxid is called (the next VACCUM on any table).\n\n2) Delay vac_truncate_clog while CIC is running.\nIn such a case, if it was skipped, we will need to re-run it using the\nindex builds backend later.\n\n3) Wait for 64-bit xids :)\n\n4) Any ideas?\n\nIn addition, for the first and second options, we need logic to cancel\nthe second phase in the case of ForceTransactionIdLimitUpdate.\nBut maybe I'm missing something and the tuples may be frozen, ignoring\nthe set datfrozenxid values (over some horizon calculated at runtime\nbased on the xmin backends).\n\n> How does this behave when the newly inserted tuple's xmin gets frozen?\n> This would be allowed to happen during heap page pruning, afaik - no\n> rules that I know of which are against that - but it would create\n> issues where normal snapshot visibility rules would indicate it\n> visible to both snapshots regardless of whether it actually was\n> visible to the older snapshot when that snapshot was created...\n\nYes, good catch.\nAssuming we have somehow prevented vac_truncate_clog from occurring\nduring CIC, we can leave frozen and potentially frozen\n(xmin<frozenXID) for the second phase.\n\nSo, first phase processing items:\n* not frozen\n* xmin>frozenXID (may not be frozen)\n* visible by snapshot\n\nsecond phase:\n* frozen\n* xmin>frozenXID (may be frozen)\n* not in the index summary\n* visible by \"old\" snapshot\n\nYou might also think – why is the first stage needed at all? Just use\nbatch processing during initial index building?\n\nBest regards,\nMikhail.\n\n\n",
"msg_date": "Wed, 20 Dec 2023 17:18:18 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
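A sketch of the recheck being discussed, with invented names for the parts that do not exist (IndexSummary, tid_in_summary); note that a later message in this thread explains why HOT pruning of the intermediate tuple versions ultimately defeats this particular recheck.

#include "postgres.h"

#include "access/heapam.h"
#include "utils/snapshot.h"

/*
 * The tuple is visible to the scan's current (fresh) snapshot.  Decide
 * whether this backend must insert it into the new index itself.
 */
static bool
scan_must_insert(HeapTuple heapTuple, Buffer buffer,
				 Snapshot reference_snapshot, IndexSummary *summary)
{
	/* Already accounted for when the index summary was taken. */
	if (tid_in_summary(&heapTuple->t_self, summary))
		return false;

	/*
	 * Not in the summary.  If the reference snapshot (under which the
	 * summary was built) could see it, the tuple was genuinely missed
	 * and must be inserted here.  If it is newer than the reference
	 * snapshot, the writing backend already inserted it into the
	 * indisready index, so inserting it again would be a duplicate.
	 */
	return HeapTupleSatisfiesVisibility(heapTuple, reference_snapshot, buffer);
}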
{
"msg_contents": "> Yes, good catch.\n> Assuming we have somehow prevented vac_truncate_clog from occurring\n> during CIC, we can leave frozen and potentially frozen\n> (xmin<frozenXID) for the second phase.\n\nJust realized that we can leave this for the first stage to improve efficiency.\nSince the ID is locked, anything that can be frozen will be visible in\nthe first stage.\n\n\n",
"msg_date": "Wed, 20 Dec 2023 17:53:27 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hello.\n\nRealized my last idea is invalid (because tuples are frozen by using\ndynamically calculated horizon) - so, don't waste your time on it :)\n\nNeed to think a little bit more here.\n\nThanks,\nMikhail.\n\n\n",
"msg_date": "Thu, 21 Dec 2023 14:14:19 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hello!\n\nIt seems like the idea of \"old\" snapshot is still a valid one.\n\n> Should this deal with any potential XID wraparound, too?\n\nAs far as I understand in our case, we are not affected by this in any way.\nVacuum in our table is not possible because of locking, so, nothing\nmay be frozen (see below).\nIn the case of super long index building, transactional limits will\nstop new connections using current\nregular infrastructure because it is based on relation data (but not\nactual xmin of backends).\n\n> How does this behave when the newly inserted tuple's xmin gets frozen?\n> This would be allowed to happen during heap page pruning, afaik - no\n> rules that I know of which are against that - but it would create\n> issues where normal snapshot visibility rules would indicate it\n> visible to both snapshots regardless of whether it actually was\n> visible to the older snapshot when that snapshot was created...\n\nAs I can see, heap_page_prune never freezes any tuples.\nIn the case of regular vacuum, it used this way: call heap_page_prune\nand then call heap_prepare_freeze_tuple and then\nheap_freeze_execute_prepared.\n\nMerry Christmas,\nMikhail.\n\n\n",
"msg_date": "Mon, 25 Dec 2023 15:12:41 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Mon, 25 Dec 2023 at 15:12, Michail Nikolaev\n<[email protected]> wrote:\n>\n> Hello!\n>\n> It seems like the idea of \"old\" snapshot is still a valid one.\n>\n> > Should this deal with any potential XID wraparound, too?\n>\n> As far as I understand in our case, we are not affected by this in any way.\n> Vacuum in our table is not possible because of locking, so, nothing\n> may be frozen (see below).\n> In the case of super long index building, transactional limits will\n> stop new connections using current\n> regular infrastructure because it is based on relation data (but not\n> actual xmin of backends).\n>\n> > How does this behave when the newly inserted tuple's xmin gets frozen?\n> > This would be allowed to happen during heap page pruning, afaik - no\n> > rules that I know of which are against that - but it would create\n> > issues where normal snapshot visibility rules would indicate it\n> > visible to both snapshots regardless of whether it actually was\n> > visible to the older snapshot when that snapshot was created...\n>\n> As I can see, heap_page_prune never freezes any tuples.\n> In the case of regular vacuum, it used this way: call heap_page_prune\n> and then call heap_prepare_freeze_tuple and then\n> heap_freeze_execute_prepared.\n\nCorrect, but there are changes being discussed where we would freeze\ntuples during pruning as well [0], which would invalidate that\nimplementation detail. And, if I had to choose between improved\nopportunistic freezing and improved R/CIC, I'd probably choose\nimproved freezing over R/CIC.\n\nAs an alternative, we _could_ keep track of concurrent index inserts\nusing a dummy index (with the same predicate) which only holds the\nTIDs of the inserted tuples. We'd keep it as an empty index in phase\n1, and every time we reset the visibility snapshot we now only need to\nscan that index to know what tuples were concurrently inserted. This\nshould have a significantly lower IO overhead than repeated full index\nbulkdelete scans for the new index in the second table scan phase of\nR/CIC. However, in a worst case it could still require another\nO(tablesize) of storage.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://www.postgresql.org/message-id/CAAKRu_a+g2oe6aHJCbibFtNFiy2aib4E31X9QYJ_qKjxZmZQEg@mail.gmail.com\n\n\n",
"msg_date": "Thu, 4 Jan 2024 12:24:04 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hello!\n\n> Correct, but there are changes being discussed where we would freeze\n> tuples during pruning as well [0], which would invalidate that\n> implementation detail. And, if I had to choose between improved\n> opportunistic freezing and improved R/CIC, I'd probably choose\n> improved freezing over R/CIC.\n\nAs another option, we could extract a dedicated horizon value for an\nopportunistic freezing.\nAnd use some flags in R/CIC backend to keep it at the required value.\n\nBest regards,\nMichail.\n\n\n",
"msg_date": "Thu, 4 Jan 2024 13:45:06 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hello, Melanie!\n\nSorry to interrupt you, just a quick question.\n\n> Correct, but there are changes being discussed where we would freeze\n> tuples during pruning as well [0], which would invalidate that\n> implementation detail. And, if I had to choose between improved\n> opportunistic freezing and improved R/CIC, I'd probably choose\n> improved freezing over R/CIC.\n\nDo you have any patches\\threads related to that refactoring\n(opportunistic freezing of tuples during pruning) [0]?\nThis may affect the idea of the current thread (latest version of it\nmostly in [1]) - it may be required to disable such a feature for\nparticular relation temporary or affect horizon used for pruning\n(without holding xmin).\n\nJust no sure - is it reasonable to start coding right now, or wait for\nsome prune-freeze-related patch first?\n\n[0] https://www.postgresql.org/message-id/CAAKRu_a+g2oe6aHJCbibFtNFiy2aib4E31X9QYJ_qKjxZmZQEg@mail.gmail.com\n[1] https://www.postgresql.org/message-id/flat/CANtu0ojRX%3DosoiXL9JJG6g6qOowXVbVYX%2BmDsN%2B2jmFVe%3DeG7w%40mail.gmail.com#a8ff53f23d0fc7edabd446b4d634e7b5\n\n\n",
"msg_date": "Tue, 9 Jan 2024 18:00:28 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "> > > I just realised there is one issue with this design: We can't cheaply\n> > > reset the snapshot during the second table scan:\n> > > It is critically important that the second scan of R/CIC uses an index\n> > > contents summary (made with index_bulk_delete) that was created while\n> > > the current snapshot was already registered.\n> >\n> > > So, the \"reset the snapshot every so often\" trick cannot be applied in\n> > > phase 3 (the rescan), or we'd have to do an index_bulk_delete call\n> > > every time we reset the snapshot. Rescanning might be worth the cost\n> > > (e.g. when using BRIN), but that is very unlikely.\n> >\n> > Hm, I think it is still possible. We could just manually recheck the\n> > tuples we see\n> > to the snapshot currently used for the scan. If an \"old\" snapshot can see\n> > the tuple also (HeapTupleSatisfiesHistoricMVCC) then search for it in the\n> > index summary.\n> That's an interesting method.\n>\n> How would this deal with tuples not visible to the old snapshot?\n> Presumably we can assume they're newer than that snapshot (the old\n> snapshot didn't have it, but the new one does, so it's committed after\n> the old snapshot, making them newer), so that backend must have\n> inserted it into the index already, right?\n\nI made a draft of the patch and this idea is not working.\n\nThe problem is generally the same:\n\n* reference snapshot sees tuple X\n* reference snapshot is used to create index summary (but there is no\ntuple X in the index summary)\n* tuple X is updated to Y creating a HOT-chain\n* we started scan with new temporary snapshot (it sees Y, X is too old for it)\n* tuple X is pruned from HOT-chain because it is not protected by any snapshot\n* we see tuple Y in the scan with temporary snapshot\n * it is not in the index summary - so, we need to check if\nreference snapshot can see it\n * there is no way to understand if the reference snapshot was able\nto see tuple X - because we need the full HOT chain (with X tuple) for\nthat\n\nBest regards,\nMichail.\n\n\n",
"msg_date": "Thu, 1 Feb 2024 17:06:28 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Thu, 1 Feb 2024, 17:06 Michail Nikolaev, <[email protected]> wrote:\n>\n> > > > I just realised there is one issue with this design: We can't cheaply\n> > > > reset the snapshot during the second table scan:\n> > > > It is critically important that the second scan of R/CIC uses an index\n> > > > contents summary (made with index_bulk_delete) that was created while\n> > > > the current snapshot was already registered.\n\nI think the best way for this to work would be an index method that\nexclusively stores TIDs, and of which we can quickly determine new\ntuples, too. I was thinking about something like GIN's format, but\nusing (generation number, tid) instead of ([colno, colvalue], tid) as\nkey data for the internal trees, and would be unlogged (because the\ndata wouldn't have to survive a crash). Then we could do something\nlike this for the second table scan phase:\n\n0. index->indisready is set\n[...]\n1. Empty the \"changelog index\", resetting storage and the generation number.\n2. Take index contents snapshot of new index, store this.\n3. Loop until completion:\n4a. Take visibility snapshot\n4b. Update generation number of the changelog index, store this.\n4c. Take index snapshot of \"changelog index\" for data up to the\ncurrent stored generation number. Not including, because we only need\nto scan that part of the index that were added before we created our\nvisibility snapshot, i.e. TIDs labeled with generation numbers between\nthe previous iteration's generation number (incl.) and this\niteration's generation (excl.).\n4d. Combine the current index snapshot with that of the \"changelog\"\nindex, and save this.\n Note that this needs to take care to remove duplicates.\n4e. Scan segment of table (using the combined index snapshot) until we\nneed to update our visibility snapshot or have scanned the whole\ntable.\n\nThis should give similar, if not the same, behavour as that which we\nhave when we RIC a table with several small indexes, without requiring\nus to scan a full index of data several times.\n\nAttemp on proving this approach's correctness:\nIn phase 3, after each step 4b:\nAll matching tuples of the table that are in the visibility snapshot:\n* Were created before scan 1's snapshot, thus in the new index's snapshot, or\n* Were created after scan 1's snapshot but before index->indisready,\nthus not in the new index's snapshot, nor in the changelog index, or\n* Were created after the index was set as indisready, and committed\nbefore the previous iteration's visibility snapshot, thus in the\ncombined index snapshot, or\n* Were created after the index was set as indisready, after the\nprevious visibility snapshot was taken, but before the current\nvisibility snapshot was taken, and thus definitely included in the\nchangelog index.\n\nBecause we hold a snapshot, no data in the table that we should see is\nremoved, so we don't have a chance of broken HOT chains.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Sat, 17 Feb 2024 22:48:44 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
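To make the generation-number idea above concrete, the per-entry key of such a "changelog index" could look roughly like this. It is a purely hypothetical layout; no such index AM exists today.

#include "postgres.h"

#include "storage/itemptr.h"

/*
 * Hypothetical entry key for the unlogged TID-only "changelog index"
 * sketched above.  Ordering by generation number first lets step 4c
 * scan exactly the TIDs inserted between two visibility snapshots.
 */
typedef struct ChangelogKey
{
	uint32			generation;	/* bumped at every visibility-snapshot refresh */
	ItemPointerData	tid;		/* heap TID inserted by a concurrent backend */
} ChangelogKey;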
{
"msg_contents": "Hello!\n\n> I think the best way for this to work would be an index method that\n> exclusively stores TIDs, and of which we can quickly determine new\n> tuples, too. I was thinking about something like GIN's format, but\n> using (generation number, tid) instead of ([colno, colvalue], tid) as\n> key data for the internal trees, and would be unlogged (because the\n> data wouldn't have to survive a crash)\n\nYeah, this seems to be a reasonable approach, but there are some\ndoubts related to it - it needs new index type as well as unlogged\nindexes to be introduced - this may make the patch too invasive to be\nmerged. Also, some way to remove the index from the catalog in case of\na crash may be required.\n\nA few more thoughts:\n* it is possible to go without generation number - we may provide a\nway to do some kind of fast index lookup (by TID) directly during the\nsecond table scan phase.\n* one more option is to maintain a Tuplesorts (instead of an index)\nwith TIDs as changelog and merge with index snapshot after taking a\nnew visibility snapshot. But it is not clear how to share the same\nTuplesort with multiple inserting backends.\n* crazy idea - what is about to do the scan in the index we are\nbuilding? We have tuple, so, we have all the data indexed in the\nindex. We may try to do an index scan using that data to get all\ntuples and find the one with our TID :) Yes, in some cases it may be\ntoo bad because of the huge amount of TIDs we need to scan + also\nbtree copies whole page despite we need single item. But some\nadditional index method may help - feels like something related to\nuniqueness (but it is only in btree anyway).\n\nThanks,\nMikhail.\n\n\n",
"msg_date": "Wed, 21 Feb 2024 00:33:26 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "One more idea - is just forbid HOT prune while the second phase is\nrunning. It is not possible anyway currently because of snapshot held.\n\nPossible enhancements:\n* we may apply restriction only to particular tables\n* we may apply restrictions only to part of the tables (not yet\nscanned by R/CICs).\n\nYes, it is not an elegant solution, limited, not reliable in terms of\narchitecture, but a simple one.\n\n\n",
"msg_date": "Wed, 21 Feb 2024 09:35:40 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
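Mechanically, "forbid HOT pruning" for a relation could be as small as an early exit in the opportunistic path, as in the sketch below. rd_nohotprune is an invented field (RelationData has no such flag), and how it would be set, shared across backends and reset is exactly what the following messages question.

void
heap_page_prune_opt(Relation relation, Buffer buffer)
{
	/*
	 * Hypothetical guard: a concurrent (re)index build has asked for
	 * opportunistic pruning to be paused on this relation, so do
	 * nothing here.
	 */
	if (relation->rd_nohotprune)	/* invented field, not in core */
		return;

	/* ... existing pruning logic would continue unchanged ... */
}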
{
"msg_contents": "On Wed, 21 Feb 2024 at 00:33, Michail Nikolaev\n<[email protected]> wrote:\n>\n> Hello!\n> > I think the best way for this to work would be an index method that\n> > exclusively stores TIDs, and of which we can quickly determine new\n> > tuples, too. I was thinking about something like GIN's format, but\n> > using (generation number, tid) instead of ([colno, colvalue], tid) as\n> > key data for the internal trees, and would be unlogged (because the\n> > data wouldn't have to survive a crash)\n>\n> Yeah, this seems to be a reasonable approach, but there are some\n> doubts related to it - it needs new index type as well as unlogged\n> indexes to be introduced - this may make the patch too invasive to be\n> merged.\n\nI suppose so, though persistence is usually just to keep things\ncorrect in case of crashes, and this \"index\" is only there to support\nprocesses that don't expect to survive crashes.\n\n> Also, some way to remove the index from the catalog in case of\n> a crash may be required.\n\nThat's less of an issue though, we already accept that a crash during\nCIC/RIC leaves unusable indexes around, so \"needs more cleanup\" is not\nexactly a blocker.\n\n> A few more thoughts:\n> * it is possible to go without generation number - we may provide a\n> way to do some kind of fast index lookup (by TID) directly during the\n> second table scan phase.\n\nWhile possible, I don't think this would be more performant than the\ncombination approach, at the cost of potentially much more random IO\nwhen the table is aggressively being updated.\n\n> * one more option is to maintain a Tuplesorts (instead of an index)\n> with TIDs as changelog and merge with index snapshot after taking a\n> new visibility snapshot. But it is not clear how to share the same\n> Tuplesort with multiple inserting backends.\n\nTuplesort requires the leader process to wait for concurrent backends\nto finish their sort before it can start consuming their runs. This\nwould make it a very bad alternative to the \"changelog index\" as the\nCIC process would require on-demand actions from concurrent backends\n(flush of sort state). I'm not convinced that's somehow easier.\n\n> * crazy idea - what is about to do the scan in the index we are\n> building? We have tuple, so, we have all the data indexed in the\n> index. We may try to do an index scan using that data to get all\n> tuples and find the one with our TID :)\n\nWe can't rely on that, because we have no guarantee we can find the\ntuple quickly enough. Equality-based indexing is very much optional,\nand so are TID-based checks (outside the current vacuum-related APIs),\nso finding one TID can (and probably will) take O(indexsize) when the\ntuple is not in the index, which is one reason for ambulkdelete() to\nexist.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 21 Feb 2024 12:01:45 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Wed, 21 Feb 2024 at 09:35, Michail Nikolaev\n<[email protected]> wrote:\n>\n> One more idea - is just forbid HOT prune while the second phase is\n> running. It is not possible anyway currently because of snapshot held.\n>\n> Possible enhancements:\n> * we may apply restriction only to particular tables\n> * we may apply restrictions only to part of the tables (not yet\n> scanned by R/CICs).\n>\n> Yes, it is not an elegant solution, limited, not reliable in terms of\n> architecture, but a simple one.\n\nHow do you suppose this would work differently from a long-lived\nnormal snapshot, which is how it works right now?\nWould it be exclusively for that relation? How would this be\nintegrated with e.g. heap_page_prune_opt?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 21 Feb 2024 12:19:29 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hi!\n\n> How do you suppose this would work differently from a long-lived\n> normal snapshot, which is how it works right now?\n\nDifference in the ability to take new visibility snapshot periodically\nduring the second phase with rechecking visibility of tuple according\nto the \"reference\" snapshot (which is taken only once like now).\nIt is the approach from (1) but with a workaround for the issues\ncaused by heap_page_prune_opt.\n\n> Would it be exclusively for that relation?\nYes, only for that affected relation. Other relations are unaffected.\n\n> How would this be integrated with e.g. heap_page_prune_opt?\nProbably by some flag in RelationData, but not sure here yet.\n\nIf the idea looks sane, I could try to extend my POC - it should be\nnot too hard, likely (I already have tests to make sure it is\ncorrect).\n\n(1): https://www.postgresql.org/message-id/flat/CANtu0oijWPRGRpaRR_OvT2R5YALzscvcOTFh-%3DuZKUpNJmuZtw%40mail.gmail.com#8141eb2ea177ff560ee713b3f20de404\n\n\n",
"msg_date": "Wed, 21 Feb 2024 12:36:48 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Wed, 21 Feb 2024 at 12:37, Michail Nikolaev\n<[email protected]> wrote:\n>\n> Hi!\n>\n> > How do you suppose this would work differently from a long-lived\n> > normal snapshot, which is how it works right now?\n>\n> Difference in the ability to take new visibility snapshot periodically\n> during the second phase with rechecking visibility of tuple according\n> to the \"reference\" snapshot (which is taken only once like now).\n> It is the approach from (1) but with a workaround for the issues\n> caused by heap_page_prune_opt.\n>\n> > Would it be exclusively for that relation?\n> Yes, only for that affected relation. Other relations are unaffected.\n\nI suppose this could work. We'd also need to be very sure that the\ntoast relation isn't cleaned up either: Even though that's currently\nDELETE+INSERT only and can't apply HOT, it would be an issue if we\ncouldn't find the TOAST data of a deleted for everyone (but visible to\nus) tuple.\n\nNote that disabling cleanup for a relation will also disable cleanup\nof tuple versions in that table that are not used for the R/CIC\nsnapshots, and that'd be an issue, too.\n\n> > How would this be integrated with e.g. heap_page_prune_opt?\n> Probably by some flag in RelationData, but not sure here yet.\n>\n> If the idea looks sane, I could try to extend my POC - it should be\n> not too hard, likely (I already have tests to make sure it is\n> correct).\n\nI'm not a fan of this approach. Changing visibility and cleanup\nsemantics to only benefit R/CIC sounds like a pain to work with in\nessentially all visibility-related code. I'd much rather have to deal\nwith another index AM, even if it takes more time: the changes in\nsemantics will be limited to a new plug in the index AM system and a\nbehaviour change in R/CIC, rather than behaviour that changes in all\nvisibility-checking code.\n\nBut regardless of second scan snapshots, I think we can worry about\nthat part at a later moment: The first scan phase is usually the most\nexpensive and takes the most time of all phases that hold snapshots,\nand in the above discussion we agreed that we can already reduce the\ntime that a snapshot is held during that phase significantly. Sure, it\nisn't great that we have to scan the table again with only a single\nsnapshot, but generally phase 2 doesn't have that much to do (except\nwhen BRIN indexes are involved) so this is likely less of an issue.\nAnd even if it is, we would still have reduced the number of\nlong-lived snapshots by half.\n\n-Matthias\n\n\n",
"msg_date": "Tue, 5 Mar 2024 21:08:08 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hello!\n\n> I'm not a fan of this approach. Changing visibility and cleanup\n> semantics to only benefit R/CIC sounds like a pain to work with in\n> essentially all visibility-related code. I'd much rather have to deal\n> with another index AM, even if it takes more time: the changes in\n> semantics will be limited to a new plug in the index AM system and a\n> behaviour change in R/CIC, rather than behaviour that changes in all\n> visibility-checking code.\n\nTechnically, this does not affect the visibility logic, only the\nclearing semantics.\nAll visibility related code remains untouched.\nBut yes, still an inelegant and a little strange-looking option.\n\nAt the same time, perhaps it can be dressed in luxury\nsomehow - for example, add as a first class citizen in ComputeXidHorizonsResult\na list of blocks to clear some relations.\n\n> But regardless of second scan snapshots, I think we can worry about\n> that part at a later moment: The first scan phase is usually the most\n> expensive and takes the most time of all phases that hold snapshots,\n> and in the above discussion we agreed that we can already reduce the\n> time that a snapshot is held during that phase significantly. Sure, it\n> isn't great that we have to scan the table again with only a single\n> snapshot, but generally phase 2 doesn't have that much to do (except\n> when BRIN indexes are involved) so this is likely less of an issue.\n> And even if it is, we would still have reduced the number of\n> long-lived snapshots by half.\n\nHmm, but it looks like we don't have the infrastructure to \"update\" xmin\npropagating to the horizon after the first snapshot in a transaction is taken.\n\nOne option I know of is to reuse the\nd9d076222f5b94a85e0e318339cfc44b8f26022d (1) approach.\nBut if this is the case, then there is no point in re-taking the\nsnapshot again during the first\nphase - just apply this \"if\" only for the first phase - and you're done.\n\nDo you know any less-hacky way? Or is it a nice way to go?\n\n[1]: https://github.com/postgres/postgres/commit/d9d076222f5b94a85e0e318339cfc44b8f26022d#diff-8879f0173be303070ab7931db7c757c96796d84402640b9e386a4150ed97b179R1779-R1793\n\n\n",
"msg_date": "Thu, 7 Mar 2024 19:36:53 +0100",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Thu, 7 Mar 2024 at 19:37, Michail Nikolaev\n<[email protected]> wrote:\n>\n> Hello!\n>\n> > I'm not a fan of this approach. Changing visibility and cleanup\n> > semantics to only benefit R/CIC sounds like a pain to work with in\n> > essentially all visibility-related code. I'd much rather have to deal\n> > with another index AM, even if it takes more time: the changes in\n> > semantics will be limited to a new plug in the index AM system and a\n> > behaviour change in R/CIC, rather than behaviour that changes in all\n> > visibility-checking code.\n>\n> Technically, this does not affect the visibility logic, only the\n> clearing semantics.\n> All visibility related code remains untouched.\n\nYeah, correct. But it still needs to update the table relations'\ninformation after finishing creating the indexes, which I'd rather not\nhave to do.\n\n> But yes, still an inelegant and a little strange-looking option.\n>\n> At the same time, perhaps it can be dressed in luxury\n> somehow - for example, add as a first class citizen in ComputeXidHorizonsResult\n> a list of blocks to clear some relations.\n\nNot sure what you mean here, but I don't think\nComputeXidHorizonsResult should have anything to do with actual\nrelations.\n\n> > But regardless of second scan snapshots, I think we can worry about\n> > that part at a later moment: The first scan phase is usually the most\n> > expensive and takes the most time of all phases that hold snapshots,\n> > and in the above discussion we agreed that we can already reduce the\n> > time that a snapshot is held during that phase significantly. Sure, it\n> > isn't great that we have to scan the table again with only a single\n> > snapshot, but generally phase 2 doesn't have that much to do (except\n> > when BRIN indexes are involved) so this is likely less of an issue.\n> > And even if it is, we would still have reduced the number of\n> > long-lived snapshots by half.\n>\n> Hmm, but it looks like we don't have the infrastructure to \"update\" xmin\n> propagating to the horizon after the first snapshot in a transaction is taken.\n\nWe can just release the current snapshot, and get a new one, right? I\nmean, we don't actually use the transaction for much else than\nvisibility during the first scan, and I don't think there is a need\nfor an actual transaction ID until we're ready to mark the index entry\nwith indisready.\n\n> One option I know of is to reuse the\n> d9d076222f5b94a85e0e318339cfc44b8f26022d (1) approach.\n> But if this is the case, then there is no point in re-taking the\n> snapshot again during the first\n> phase - just apply this \"if\" only for the first phase - and you're done.\n\nNot a fan of that, as it is too sensitive to abuse. Note that\nextensions will also have access to these tools, and I think we should\nbuild a system here that's not easy to break, rather than one that is.\n\n> Do you know any less-hacky way? Or is it a nice way to go?\n\nI suppose we could be resetting the snapshot every so often? Or use\nmultiple successive TID range scans with a new snapshot each?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 12 Mar 2024 12:50:24 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hello, Matthias!\n\n> We can just release the current snapshot, and get a new one, right? I\n> mean, we don't actually use the transaction for much else than\n> visibility during the first scan, and I don't think there is a need\n> for an actual transaction ID until we're ready to mark the index entry\n> with indisready.\n\n> I suppose we could be resetting the snapshot every so often? Or use\n> multiple successive TID range scans with a new snapshot each?\n\nIt seems like it is not so easy in that case. Because we still need to hold\ncatalog snapshot xmin, releasing the snapshot which used for the scan does\nnot affect xmin propagated to the horizon.\nThat's why d9d076222f5b94a85e0e318339cfc44b8f26022d(1) affects only the\ndata horizon, but not the catalog's one.\n\nSo, in such a situation, we may:\n\n1) starts scan from scratch with some TID range multiple times. But such an\napproach feels too complex and error-prone for me.\n\n2) split horizons propagated by `MyProc` to data-related xmin and\ncatalog-related xmin. Like `xmin` and `catalogXmin`. We may just mark\nsnapshots as affecting some of the horizons, or both. Such a change feels\neasy to be done but touches pretty core logic, so we need someone's\napproval for such a proposal, probably.\n\n3) provide some less invasive (but less non-kludge) way: add some kind of\nprocess flag like `PROC_IN_SAFE_IC_XMIN` and function like\n`AdvanceIndexSafeXmin` which changes the way backend affect horizon\ncalculation. In the case of `PROC_IN_SAFE_IC_XMIN` `ComputeXidHorizons`\nuses value from `proc->safeIcXmin` which is updated by\n`AdvanceIndexSafeXmin` while switching scan snapshots.\n\nSo, with option 2 or 3, we may avoid holding data horizon during the first\nphase scan by resetting the scan snapshot every so often (and, optionally,\nusing `AdvanceIndexSafeXmin` in case of 3rd approach).\n\n\nThe same will be possible for the second phase (validate).\n\nWe may do the same \"resetting the snapshot every so often\" technique, but\nthere is still the issue with the way we distinguish tuples which were\nmissed by the first phase scan or were inserted into the index after the\nvisibility snapshot was taken.\n\nSo, I see two options here:\n\n1) approach with additional index with some custom AM proposed by you.\n\n It looks correct and reliable but feels complex to implement and\nmaintain. Also, it negatively affects performance of table access (because\nof an additional index) and validation scan (because we need to merge\nadditional index content with visibility snapshot).\n\n2) one more tricky approach.\n\nWe may add some boolean flag to `Relation` about information of index\nbuilding in progress (`indexisbuilding`).\n\nIt may be easily calculated using `(index->indisready &&\n!index->indisvalid)`. For a more reliable solution, we also need to somehow\ncheck if backend/transaction building the index still in progress. 
Also, it\nis better to check if index is building concurrently using the \"safe_index\"\nway.\n\nI think there is a non too complex and expensive way to do so, probably by\naddition of some flag to index catalog record.\n\nOnce we have such a flag, we may \"legally\" prohibit `heap_page_prune_opt`\naffecting the relation updating `GlobalVisHorizonKindForRel` like this:\n\n if (rel != NULL && rel->rd_indexvalid && rel->rd_indexisbuilding)\n return VISHORIZON_CATALOG;\n\nSo, in common it works this way:\n\n* backend building the index affects catalog horizon as usual, but data\nhorizon is regularly propagated forward during the scan. So, other\nrelations are processed by vacuum and `heap_page_prune_opt` without any\nrestrictions\n\n* but our relation (with CIC in progress) accessed by `heap_page_prune_opt`\n(or any other vacuum-like mechanics) with catalog horizon to honor CIC\nwork. Therefore, validating scan may be sure what none of the HOT-chain\nwill be truncated. Even regular vacuum can't affect it (but yes, it can't\nbe anyway because of relation locking).\n\nAs a result, we may easily distinguish tuples missed by first phase scan,\njust by testing them against reference snapshot (which used to take\nvisibility snapshot).\n\nSo, for me, this approach feels non-kludge enough, safe and effective and\nthe same time.\n\nI have a prototype of this approach and looks like it works (I have a good\ntest catching issues with index content for CIC).\n\nWhat do you think about all this?\n\n[1]:\nhttps://github.com/postgres/postgres/commit/d9d076222f5b94a85e0e318339cfc44b8f26022d#diff-8879f0173be303070ab7931db7c757c96796d84402640b9e386a4150ed97b179R1779-R1793",
"msg_date": "Sat, 4 May 2024 17:51:20 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
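The message above reduces the per-relation pruning decision to a single flag check. The following standalone C sketch models that rule for illustration only; the type names, the `indisready`/`indisvalid` combination and the horizon enum are simplified stand-ins for the real catalog structures, not the actual patch:

    /* Toy model of per-relation horizon selection -- not PostgreSQL source. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum HorizonKind
    {
        HORIZON_DATA,    /* ordinary user data, most aggressive pruning */
        HORIZON_CATALOG  /* stricter horizon, also honored while CIC runs */
    } HorizonKind;

    typedef struct IndexModel
    {
        bool indisready;  /* index already receives inserts */
        bool indisvalid;  /* index is usable for queries */
    } IndexModel;

    typedef struct RelationModel
    {
        bool       is_catalog;
        int        nindexes;
        IndexModel indexes[4];
    } RelationModel;

    /* an index that is ready but not yet valid is still being built concurrently */
    static bool
    index_build_in_progress(const RelationModel *rel)
    {
        for (int i = 0; i < rel->nindexes; i++)
        {
            if (rel->indexes[i].indisready && !rel->indexes[i].indisvalid)
                return true;
        }
        return false;
    }

    static HorizonKind
    horizon_kind_for_rel(const RelationModel *rel)
    {
        if (rel->is_catalog || index_build_in_progress(rel))
            return HORIZON_CATALOG;
        return HORIZON_DATA;
    }

    int
    main(void)
    {
        RelationModel plain = {.is_catalog = false, .nindexes = 1,
                               .indexes = {{.indisready = true, .indisvalid = true}}};
        RelationModel with_cic = {.is_catalog = false, .nindexes = 2,
                                  .indexes = {{true, true}, {true, false}}};

        printf("plain table     -> %s\n",
               horizon_kind_for_rel(&plain) == HORIZON_DATA ? "data horizon" : "catalog horizon");
        printf("table under CIC -> %s\n",
               horizon_kind_for_rel(&with_cic) == HORIZON_DATA ? "data horizon" : "catalog horizon");
        return 0;
    }

The point of the model is only the decision rule: a relation that has a ready-but-not-yet-valid index keeps its HOT chains prunable only up to the stricter horizon, which is what lets the validating scan trust the reference snapshot.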
{
"msg_contents": "Hello, Matthias!\n\nI just realized there is a much simpler and safe way to deal with the\nproblem.\n\nSo, d9d076222f5b94a85e0e318339cfc44b8f26022d(1) had a bug because the scan\nwas not protected by a snapshot. At the same time, we want this snapshot to\naffect not all the relations, but only a subset of them. And there is\nalready a proper way to achieve that - different types of visibility\nhorizons!\n\nSo, to resolve the issue, we just need to create a separated horizon value\nfor such situation as building an index concurrently.\n\nFor now, let's name it `VISHORIZON_BUILD_INDEX_CONCURRENTLY` for example.\nBy default, its value is equal to `VISHORIZON_DATA`. But in some cases it\n\"stops\" propagating forward while concurrent index is building, like this:\n\n h->create_index_concurrently_oldest_nonremovable\n=TransactionIdOlder(h->create_index_concurrently_oldest_nonremovable, xmin);\n if (!(statusFlags & PROC_IN_SAFE_IC))\n h->data_oldest_nonremovable =\nTransactionIdOlder(h->data_oldest_nonremovable, xmin);\n\nThe `PROC_IN_SAFE_IC` marks backend xmin as ignored by `VISHORIZON_DATA`\nbut not by `VISHORIZON_BUILD_INDEX_CONCURRENTLY`.\n\nAfter, we need to use appropriate horizon for relations which are processed\nby `PROC_IN_SAFE_IC` backends. There are a few ways to do it, we may start\nprototyping with `rd_indexisbuilding` from previous message:\n\n static inline GlobalVisHorizonKind\n GlobalVisHorizonKindForRel(Relation rel)\n ........\n if (rel != NULL && rel->rd_indexvalid &&\nrel->rd_indexisbuilding)\n return VISHORIZON_BUILD_INDEX_CONCURRENTLY;\n\n\nThere are few more moments need to be considered:\n\n* Does it move the horizon backwards?\n\nIt is allowed for the horizon to move backwards (like said in\n`ComputeXidHorizons`) but anyway - in that case the horizon for particular\nrelations just starts to lag behind the horizon for other relations.\nInvariant is like that: `VISHORIZON_BUILD_INDEX_CONCURRENTLY` <=\n`VISHORIZON_DATA` <= `VISHORIZON_CATALOG` <= `VISHORIZON_SHARED`.\n\n* What is about old cached versions of `Relation` objects without\n`rd_indexisbuilding` yet set?\n\nThis is not a problem because once the backend registers a new index, it\nwaits for all transactions without that knowledge to end\n(`WaitForLockers`). So, new ones will also get information about new\nhorizon for that particular relation.\n\n* What is about TOAST?\nTo keep TOAST horizon aligned with relation building the index, we may do\nthe next thing (as first implementation iteration):\n\n else if (rel != NULL && ((rel->rd_indexvalid &&\nrel->rd_indexisbuilding) || IsToastRelation(rel)))\n return VISHORIZON_BUILD_INDEX_CONCURRENTLY;\n\nFor the normal case, `VISHORIZON_BUILD_INDEX_CONCURRENTLY` is equal to\n`VISHORIZON_DATA` - nothing is changed at all. But while the concurrent\nindex is building, the TOAST horizon is guaranteed to be aligned with its\nparent relation. And yes, it is better to find an easy way to affect only\nTOAST relations related to the relation with index building in progress.\n\nNew horizon adds some complexity, but not too much, in my opinion. 
I am\npretty sure it is worth being done because the ability to rebuild indexes\nwithout performance degradation is an extremely useful feature.\nThings to be improved:\n* better way to track relations with concurrent indexes being built (with\nmechanics to understood what index build was failed)\n* better way to affect TOAST tables only related to concurrent index build\n* better naming\n\nPatch prototype in attachment.\nAlso, maybe it is worth committing test separately - it was based on Andrey\nBorodin work (2). The test fails well in the case of incorrect\nimplementation.\n\n[1]:\nhttps://github.com/postgres/postgres/commit/d9d076222f5b94a85e0e318339cfc44b8f26022d#diff-8879f0173be303070ab7931db7c757c96796d84402640b9e386a4150ed97b179R1779-R1793\n[2]: https://github.com/x4m/postgres_g/commit/d0651e7d0d14862d5a4dac076355",
"msg_date": "Mon, 6 May 2024 01:37:20 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
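As a companion to the dedicated horizon described above, here is a small self-contained C model of the invariant it relies on. It is not the patch itself; the struct and field names (`in_safe_ic`, the two horizon variables) are invented for the sketch, which only mimics how a PROC_IN_SAFE_IC backend is skipped when folding the data horizon but still counted for the horizon applied to relations with a concurrent index build:

    /*
     * Illustrative model only -- NOT PostgreSQL source.  Backend, xmin and
     * horizon names are simplified assumptions; xid wraparound is ignored.
     */
    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t TransactionId;
    #define InvalidTransactionId 0

    typedef struct Backend
    {
        TransactionId xmin;        /* oldest xid this backend may still need */
        bool          in_safe_ic;  /* performing a "safe" concurrent index build */
    } Backend;

    /* keep the older (smaller, ignoring wraparound) of two xids */
    static TransactionId
    OlderXid(TransactionId a, TransactionId b)
    {
        if (a == InvalidTransactionId)
            return b;
        if (b == InvalidTransactionId)
            return a;
        return (a < b) ? a : b;
    }

    int
    main(void)
    {
        /* backend 0 is a concurrent index build holding an old snapshot */
        Backend procs[] = {
            {.xmin = 100, .in_safe_ic = true},
            {.xmin = 500, .in_safe_ic = false},
            {.xmin = 620, .in_safe_ic = false},
        };

        TransactionId data_horizon = InvalidTransactionId;
        TransactionId build_ic_horizon = InvalidTransactionId;

        for (size_t i = 0; i < sizeof(procs) / sizeof(procs[0]); i++)
        {
            /* every xmin limits the horizon used for tables with CIC running */
            build_ic_horizon = OlderXid(build_ic_horizon, procs[i].xmin);

            /* the plain data horizon ignores safe-CIC backends */
            if (!procs[i].in_safe_ic)
                data_horizon = OlderXid(data_horizon, procs[i].xmin);
        }

        /* invariant from the message: the CIC horizon never passes the data horizon */
        assert(build_ic_horizon <= data_horizon);

        printf("data horizon: %u, CIC horizon: %u\n",
               (unsigned) data_horizon, (unsigned) build_ic_horizon);
        return 0;
    }

Running the model shows the CIC horizon pinned at the index builder's xmin (100) while the data horizon advances to 500, matching the stated invariant that the new horizon may lag behind, but never pass, the data horizon.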
{
"msg_contents": "Hello, Matthias and others!\n\nUpdated WIP in attach.\n\nChanges are:\n* Renaming, now it feels better for me\n* More reliable approach in `GlobalVisHorizonKindForRel` to make sure we\nhave not missed `rd_safeindexconcurrentlybuilding` by calling\n`RelationGetIndexList` if required\n* Optimization to avoid any additional `RelationGetIndexList` if zero of\nconcurrently indexes are being built\n* TOAST moved to TODO, since looks like it is out of scope - but not sure\nyet, need to dive dipper\n\nTODO:\n* TOAST\n* docs and comments\n* make sure non-data tables are not affected\n* Per-database scope of optimization\n* Handle index building errors correctly in optimization code\n* More tests: create index, multiple re-indexes, multiple tables\n\nThanks,\nMichail.",
"msg_date": "Tue, 7 May 2024 14:35:13 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hi again!\n\nMade an error in `GlobalVisHorizonKindForRel` logic, and it was caught by a\nnew test.\n\nFixed version in attach.\n\n>",
"msg_date": "Tue, 7 May 2024 22:23:23 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hello, Matthias and others!\n\nRealized new horizon was applied only during validation phase (once index\nis marked as ready).\nNow it applied if index is not marked as valid yet.\n\nUpdated version in attach.\n\n--------------------------------------------------\n\n> I think the best way for this to work would be an index method that\n> exclusively stores TIDs, and of which we can quickly determine new\n> tuples, too. I was thinking about something like GIN's format, but\n> using (generation number, tid) instead of ([colno, colvalue], tid) as\n> key data for the internal trees, and would be unlogged (because the\n> data wouldn't have to survive a crash). Then we could do something\n> like this for the second table scan phase:\n\nRegarding that approach to dealing with validation phase and resetting of\nsnapshot:\n\nI was thinking about it and realized: once we go for an additional index -\nwe don't need the second heap scan at all!\n\nWe may do it this way:\n\n* create target index, not marked as indisready yet\n* create a temporary unlogged index with the same parameters to store tids\n(optionally with the indexes columns data, see below), marked as indisready\n(but not indisvalid)\n* commit them both in a single transaction\n* wait for other transaction to know about them and honor in HOT\nconstraints and new inserts (for temporary index)\n* now our temporary index is filled by the tuples inserted to the table\n* start building out target index, resetting snapshot every so often (if it\nis \"safe\" index)\n* finish target index building phase\n* mark target index as indisready\n* now, start validation of the index:\n * take the reference snapshot\n * take a visibility snapshot of the target index, sort it (as it done\ncurrently)\n * take a visibility snapshot of our temporary index, sort it\n * start merging loop using two synchronized cursors over both\nvisibility snapshots\n * if we encountered tid which is not present in target visibility\nsnapshot\n * insert it to target index\n * if a temporary index contains the column's data - we may\neven avoid the tuple fetch\n * if temporary index is tid-only - we fetch tuple from the\nheap, but as plus we are also skipping dead tuples from insertion to the\nnew index (I think it is better option)\n * commit everything, release reference snapshot\n* wait for transactions older than reference snapshot (as it done currently)\n* mark target index as indisvalid, drop temporary index\n* done\n\n\nSo, pros:\n* just a single heap scan\n* snapshot is reset periodically\n\nCons:\n* we need to maintain the additional index during the main building phase\n* one more tuplesort\n\nIf the temporary index is unlogged, cheap to maintain (just append-only\nmechanics) this feels like a perfect tradeoff for me.\n\nThis approach will work perfectly with low amount of tuple inserts during\nthe building phase. And looks like even in the worst case it still better\nthan the current approach.\n\nWhat do you think? Have I missed something?\n\nThanks,\nMichail.",
"msg_date": "Thu, 9 May 2024 15:00:00 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
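The validation step described above boils down to a merge of two sorted TID streams. The sketch below is a standalone illustration of that merge loop only, with plain arrays standing in for the sorted visibility snapshots and a small struct standing in for ItemPointer-style TIDs; none of it is the actual implementation:

    /* Illustrative merge of two sorted TID lists -- not PostgreSQL code. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct Tid
    {
        uint32_t block;
        uint16_t offset;
    } Tid;

    static int
    tid_cmp(Tid a, Tid b)
    {
        if (a.block != b.block)
            return (a.block < b.block) ? -1 : 1;
        if (a.offset != b.offset)
            return (a.offset < b.offset) ? -1 : 1;
        return 0;
    }

    int
    main(void)
    {
        /* sorted TIDs currently present in the target index */
        Tid target[] = {{1, 1}, {1, 3}, {2, 7}};
        /* sorted TIDs captured by the auxiliary (tid-only) index */
        Tid aux[] = {{1, 1}, {1, 3}, {1, 9}, {2, 7}, {3, 2}};

        size_t nt = sizeof(target) / sizeof(target[0]);
        size_t na = sizeof(aux) / sizeof(aux[0]);
        size_t it = 0;

        for (size_t ia = 0; ia < na; ia++)
        {
            /* advance the target cursor past anything older than aux[ia] */
            while (it < nt && tid_cmp(target[it], aux[ia]) < 0)
                it++;

            if (it < nt && tid_cmp(target[it], aux[ia]) == 0)
                it++;            /* already indexed, nothing to do */
            else
                printf("missing from target index: (%u,%u) -> fetch heap tuple and insert\n",
                       (unsigned) aux[ia].block, (unsigned) aux[ia].offset);
        }
        return 0;
    }

Every TID reported as missing corresponds to a tuple that was inserted after the target index's snapshot was taken (or skipped during the build), which is exactly the set the proposal wants to fetch from the heap and insert during validation.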
{
"msg_contents": "Hello.\n\nI did the POC (1) of the method described in the previous email, and it\nlooks promising.\n\nIt doesn't block the VACUUM, indexes are built about 30% faster (22 mins vs\n15 mins). Additional index is lightweight and does not produce any WAL.\n\nI'll continue the more stress testing for a while. Also, I need to\nrestructure the commits (my path was no direct) into some meaningful and\nreviewable patches.\n\n[1]\nhttps://github.com/postgres/postgres/compare/master...michail-nikolaev:postgres:new_index_concurrently_approach\n\nHello.I did the POC (1) of the method described in the previous email, and it looks promising.It doesn't block the VACUUM, indexes are built about 30% faster (22 mins vs 15 mins). Additional index is lightweight and does not produce any WAL.I'll continue the more stress testing for a while. Also, I need to restructure the commits (my path was no direct) into some meaningful and reviewable patches.[1] https://github.com/postgres/postgres/compare/master...michail-nikolaev:postgres:new_index_concurrently_approach",
"msg_date": "Tue, 11 Jun 2024 10:58:05 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "On Tue, 11 Jun 2024 at 10:58, Michail Nikolaev\n<[email protected]> wrote:\n>\n> Hello.\n>\n> I did the POC (1) of the method described in the previous email, and it looks promising.\n>\n> It doesn't block the VACUUM, indexes are built about 30% faster (22 mins vs 15 mins).\n\nThat's a nice improvement.\n\n> Additional index is lightweight and does not produce any WAL.\n\nThat doesn't seem to be what I see in the current patchset:\nhttps://github.com/postgres/postgres/compare/master...michail-nikolaev:postgres:new_index_concurrently_approach#diff-cc3cb8968cf833c4b8498ad2c561c786099c910515c4bf397ba853ae60aa2bf7R311\n\n> I'll continue the more stress testing for a while. Also, I need to restructure the commits (my path was no direct) into some meaningful and reviewable patches.\n\nWhile waiting for this, here are some initial comments on the github diffs:\n\n- I notice you've added a new argument to\nheapam_index_build_range_scan. I think this could just as well be\nimplemented by reading the indexInfo->ii_Concurrent field, as the\nvalues should be equivalent, right?\n\n- In heapam_index_build_range_scan, it seems like you're popping the\nsnapshot and registering a new one while holding a tuple from\nheap_getnext(), thus while holding a page lock. I'm not so sure that's\nOK, expecially when catalogs are also involved (specifically for\nexpression indexes, where functions could potentially be updated or\ndropped if we re-create the visibility snapshot)\n\n- In heapam_index_build_range_scan, you pop the snapshot before the\nreturned heaptuple is processed and passed to the index-provided\ncallback. I think that's incorrect, as it'll change the visibility of\nthe returned tuple before it's passed to the index's callback. I think\nthe snapshot manipulation is best added at the end of the loop, if we\nadd it at all in that function.\n\n- The snapshot reset interval is quite high, at 500ms. Why did you\nconfigure it that low, and didn't you make this configurable?\n\n- You seem to be using WAL in the STIR index, while it doesn't seem\nthat relevant for the use case of auxiliary indexes that won't return\nany data and are only used on the primary. It would imply that the\ndata is being sent to replicas and more data being written than\nstrictly necessary, which to me seems wasteful.\n\n- The locking in stirinsert can probably be improved significantly if\nwe use things like atomic operations on STIR pages. We'd need an\nexclusive lock only for page initialization, while share locks are\nenough if the page's data is modified without WAL. That should improve\nconcurrent insert performance significantly, as it would further\nreduce the length of the exclusively locked hot path.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 7 Aug 2024 01:40:51 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
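Regarding the last review point, a possible shape for the lighter locking is sketched below in plain C11, purely as an illustration of the idea: writers holding only a shared page lock reserve distinct slots with an atomic fetch-add and then fill in their own slot, so the exclusive lock would only be needed when a fresh page has to be initialized. The page layout and names here are assumptions, not the STIR code:

    /* Illustrative slot reservation with atomics -- not the real STIR code. */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SLOTS_PER_PAGE 4

    typedef struct StirPageModel
    {
        atomic_uint nused;                  /* number of reserved TID slots */
        uint64_t    tids[SLOTS_PER_PAGE];   /* packed TIDs, written after reserving */
    } StirPageModel;

    /*
     * Reserve one slot.  With this scheme a backend only needs a shared lock
     * on the page: the atomic fetch-add hands out distinct offsets, and each
     * backend writes solely into the slot it reserved.  Returns -1 when the
     * page is full and the caller must move to a new page (whose
     * initialization would take the exclusive lock).
     */
    static int
    stir_reserve_slot(StirPageModel *page)
    {
        unsigned int slot = atomic_fetch_add(&page->nused, 1);

        if (slot >= SLOTS_PER_PAGE)
            return -1;
        return (int) slot;
    }

    int
    main(void)
    {
        StirPageModel page = {.nused = 0};

        for (uint64_t tid = 1; tid <= 6; tid++)
        {
            int slot = stir_reserve_slot(&page);

            if (slot < 0)
            {
                printf("page full, would switch to a new page for tid %llu\n",
                       (unsigned long long) tid);
                continue;
            }
            page.tids[slot] = tid;
            printf("tid %llu stored in slot %d\n", (unsigned long long) tid, slot);
        }
        return 0;
    }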
{
"msg_contents": "Hello, Matthias!\n\n> While waiting for this, here are some initial comments on the github\ndiffs:\n\nThanks for your review!\nWhile stress testing the POC, I found some issues unrelated to the patch\nthat need to be fixed first.\nThis is [1] and [2].\n\n>> Additional index is lightweight and does not produce any WAL.\n> That doesn't seem to be what I see in the current patchset:\n\nPersistence is passed as parameter [3] and set to RELPERSISTENCE_UNLOGGED\nfor auxiliary indexes [4].\n\n> - I notice you've added a new argument to\n> heapam_index_build_range_scan. I think this could just as well be\n> implemented by reading the indexInfo->ii_Concurrent field, as the\n> values should be equivalent, right?\n\nNot always; currently, it is set by ResetSnapshotsAllowed[5].\nWe fall back to regular index build if there is a predicate or expression\nin the index (which should be considered \"safe\" according to [6]).\nHowever, we may remove this check later.\nAdditionally, there is no sense in resetting the snapshot if we already\nhave an xmin assigned to the backend for some reason.\n\n> In heapam_index_build_range_scan, it seems like you're popping the\n> snapshot and registering a new one while holding a tuple from\n> heap_getnext(), thus while holding a page lock. I'm not so sure that's\n> OK, expecially when catalogs are also involved (specifically for\n> expression indexes, where functions could potentially be updated or\n> dropped if we re-create the visibility snapshot)\n\nYeah, good catch.\nInitially, I implemented a different approach by extracting the catalog\nxmin to a separate horizon [7]. It might be better to return to this option.\n\n> In heapam_index_build_range_scan, you pop the snapshot before the\n> returned heaptuple is processed and passed to the index-provided\n> callback. I think that's incorrect, as it'll change the visibility of\n> the returned tuple before it's passed to the index's callback. I think\n> the snapshot manipulation is best added at the end of the loop, if we\n> add it at all in that function.\n\nYes, this needs to be fixed as well.\n\n> The snapshot reset interval is quite high, at 500ms. Why did you\n> configure it that low, and didn't you make this configurable?\n\nIt is just a random value for testing purposes.\nI don't think there is a need to make it configurable.\nGetting a new snapshot is a cheap operation now, so we can do it more often\nif required.\nInternally, I was testing it with a 0ms interval.\n\n> You seem to be using WAL in the STIR index, while it doesn't seem\n> that relevant for the use case of auxiliary indexes that won't return\n> any data and are only used on the primary. It would imply that the\n> data is being sent to replicas and more data being written than\n> strictly necessary, which to me seems wasteful.\n\nIt just looks like an index with WAL, but as mentioned above, it is\nunlogged in actual usage.\n\n> The locking in stirinsert can probably be improved significantly if\n> we use things like atomic operations on STIR pages. We'd need an\n> exclusive lock only for page initialization, while share locks are\n> enough if the page's data is modified without WAL. That should improve\n> concurrent insert performance significantly, as it would further\n> reduce the length of the exclusively locked hot path.\n\nHm, good idea. 
I'll check it later.\n\nBest regards & thanks again,\nMikhail\n\n[1]:\nhttps://www.postgresql.org/message-id/CANtu0ohHmYXsK5bxU9Thcq1FbELLAk0S2Zap0r8AnU3OTmcCOA%40mail.gmail.com\n[2]:\nhttps://www.postgresql.org/message-id/CANtu0ojga8s9%2BJ89cAgLzn2e-bQgy3L0iQCKaCnTL%3Dppot%3Dqhw%40mail.gmail.com\n[3]:\nhttps://github.com/postgres/postgres/compare/master...michail-nikolaev:postgres:new_index_concurrently_approach#diff-50abc48efcc362f0d3194aceba6969429f46fa1f07a119e555255545e6655933R93\n[4]:\nhttps://github.com/michail-nikolaev/postgres/blob/e2698ca7c814a5fa5d4de8a170b7cae83034cade/src/backend/catalog/index.c#L1600\n[5]:\nhttps://github.com/michail-nikolaev/postgres/blob/e2698ca7c814a5fa5d4de8a170b7cae83034cade/src/backend/catalog/index.c#L2657\n[6]:\nhttps://github.com/michail-nikolaev/postgres/blob/e2698ca7c814a5fa5d4de8a170b7cae83034cade/src/backend/commands/indexcmds.c#L1129\n[7]:\nhttps://github.com/postgres/postgres/commit/38b243d6cc7358a44cb1a865b919bf9633825b0c",
"msg_date": "Thu, 8 Aug 2024 15:53:00 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hello, Matthias!\n\nJust wanted to update you with some information about the next steps in\nwork.\n\n> In heapam_index_build_range_scan, it seems like you're popping the\n> snapshot and registering a new one while holding a tuple from\n> heap_getnext(), thus while holding a page lock. I'm not so sure that's\n> OK, expecially when catalogs are also involved (specifically for\n> expression indexes, where functions could potentially be updated or\n> dropped if we re-create the visibility snapshot)\n\nI have returned to the solution with a dedicated catalog_xmin for backends\n[1].\nAdditionally, I have added catalog_xmin to pg_stat_activity [2].\n\n> In heapam_index_build_range_scan, you pop the snapshot before the\n> returned heaptuple is processed and passed to the index-provided\n> callback. I think that's incorrect, as it'll change the visibility of\n> the returned tuple before it's passed to the index's callback. I think\n> the snapshot manipulation is best added at the end of the loop, if we\n> add it at all in that function.\n\nNow it's fixed, and the snapshot is reset between pages [3].\n\nAdditionally, I resolved the issue with potential duplicates in unique\nindexes. It looks a bit clunky, but it works for now [4].\n\nSingle commit from [5] also included, just for stable stress testing.\n\nFull diff is available at [6].\n\nBest regards,\nMikhail.\n\n[1]:\nhttps://github.com/michail-nikolaev/postgres/commit/01a47623571592c52c7a367f85b1cff9d8b593c0\n[2]:\nhttps://github.com/michail-nikolaev/postgres/commit/d3345d60bd51fe2e0e4a73806774b828f34ba7b6\n[3]:\nhttps://github.com/michail-nikolaev/postgres/commit/7d1dd4f971e8d03f38de95f82b730635ffe09aaf\n[4]:\nhttps://github.com/michail-nikolaev/postgres/commit/4ad56e14dd504d5530657069068c2bdf172e482d\n[5]: https://commitfest.postgresql.org/49/5160/\n[6]:\nhttps://github.com/postgres/postgres/compare/master...michail-nikolaev:postgres:new_index_concurrently_approach?diff=split&w=\n\nHello, Matthias!Just wanted to update you with some information about the next steps in work.> In heapam_index_build_range_scan, it seems like you're popping the> snapshot and registering a new one while holding a tuple from> heap_getnext(), thus while holding a page lock. I'm not so sure that's> OK, expecially when catalogs are also involved (specifically for> expression indexes, where functions could potentially be updated or> dropped if we re-create the visibility snapshot)I have returned to the solution with a dedicated catalog_xmin for backends [1].Additionally, I have added catalog_xmin to pg_stat_activity [2].> In heapam_index_build_range_scan, you pop the snapshot before the> returned heaptuple is processed and passed to the index-provided> callback. I think that's incorrect, as it'll change the visibility of> the returned tuple before it's passed to the index's callback. I think> the snapshot manipulation is best added at the end of the loop, if we> add it at all in that function.Now it's fixed, and the snapshot is reset between pages [3].Additionally, I resolved the issue with potential duplicates in unique indexes. 
It looks a bit clunky, but it works for now [4].Single commit from [5] also included, just for stable stress testing.Full diff is available at [6].Best regards,Mikhail.[1]: https://github.com/michail-nikolaev/postgres/commit/01a47623571592c52c7a367f85b1cff9d8b593c0[2]: https://github.com/michail-nikolaev/postgres/commit/d3345d60bd51fe2e0e4a73806774b828f34ba7b6[3]: https://github.com/michail-nikolaev/postgres/commit/7d1dd4f971e8d03f38de95f82b730635ffe09aaf[4]: https://github.com/michail-nikolaev/postgres/commit/4ad56e14dd504d5530657069068c2bdf172e482d[5]: https://commitfest.postgresql.org/49/5160/[6]: https://github.com/postgres/postgres/compare/master...michail-nikolaev:postgres:new_index_concurrently_approach?diff=split&w=",
"msg_date": "Sun, 1 Sep 2024 23:19:00 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
},
{
"msg_contents": "Hello, Matthias!\n\n>> - I notice you've added a new argument to\n>> heapam_index_build_range_scan. I think this could just as well be\n>> implemented by reading the indexInfo->ii_Concurrent field, as the\n>> values should be equivalent, right?\n\n> Not always; currently, it is set by ResetSnapshotsAllowed[5].\n> We fall back to regular index build if there is a predicate or expression\nin the index (which should be considered \"safe\" according to [6]).\n> However, we may remove this check later.\n> Additionally, there is no sense in resetting the snapshot if we already\nhave an xmin assigned to the backend for some reason.\n\nI realized you were right. It's always possible to reset snapshots for\nconcurrent index building without any limitations related to predicates or\nexpressions.\nAdditionally, the PROC_IN_SAFE_IC flag is no longer necessary since\nsnapshots are rotating quickly, and it's possible to wait for them without\nrequiring any special exceptions for CREATE/REINDEX INDEX CONCURRENTLY.\n\nCurrently, it looks like this [1]. I've also attached a single large patch\njust for the case.\n\nI plan to restructure the patch into the following set:\n\n* Introduce catalogXmin as a separate value to calculate the horizon for\nthe catalog.\n* Add the STIR access method.\n* Modify concurrent build/reindex to use an aux-index approach without\nsnapshot rotation.\n* Add support for snapshot rotation for non-parallel and non-unique cases.\n* Extend support for snapshot rotation in parallel index builds.\n* Implement snapshot rotation support for unique indexes.\n\nBest regards,\nMikhail\n\n[1]:\nhttps://github.com/postgres/postgres/compare/master...michail-nikolaev:postgres:new_index_concurrently_approach_rebased?expand=1\n\n>",
"msg_date": "Sun, 8 Sep 2024 17:18:00 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting {CREATE INDEX, REINDEX} CONCURRENTLY improvements"
}
] |
[
{
"msg_contents": "Hi,\n\nFYI, it looks like there is a big jump in CPU time to compile preproc.c at -O2:\n\nclang15: ~16s\nclang16: ~211s\nclang17: ~233s\n\nFirst noticed on FreeBSD (where the system cc is now clang16), but\nreproduced also on Debian (via apt.llvm.org packages).\n\n\n",
"msg_date": "Sat, 16 Dec 2023 15:25:58 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Clang optimiser vs preproc.c"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> FYI, it looks like there is a big jump in CPU time to compile preproc.c at -O2:\n\n> clang15: ~16s\n> clang16: ~211s\n> clang17: ~233s\n\nWhat are the numbers for gram.c?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Dec 2023 21:44:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clang optimiser vs preproc.c"
},
{
"msg_contents": "On Sat, Dec 16, 2023 at 3:44 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > FYI, it looks like there is a big jump in CPU time to compile preproc.c at -O2:\n>\n> > clang15: ~16s\n> > clang16: ~211s\n> > clang17: ~233s\n>\n> What are the numbers for gram.c?\n\nclang15: ~3.8s\nclang16: ~3.2s\nclang17: ~2.9s\n\n\n",
"msg_date": "Sat, 16 Dec 2023 16:04:28 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clang optimiser vs preproc.c"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> On Sat, Dec 16, 2023 at 3:44 PM Tom Lane <[email protected]> wrote:\n>> Thomas Munro <[email protected]> writes:\n>>> FYI, it looks like there is a big jump in CPU time to compile preproc.c at -O2:\n>>> clang15: ~16s\n>>> clang16: ~211s\n>>> clang17: ~233s\n\n>> What are the numbers for gram.c?\n\n> clang15: ~3.8s\n> clang16: ~3.2s\n> clang17: ~2.9s\n\nHuh. There's not that many more productions in the ecpg grammar\nthan the core, so it doesn't seem likely that this is purely a\nsize-of-file issue. I'd bet on there being something that clang\ndoesn't do well about the (very stylized) C code being generated\nwithin the grammar productions.\n\nWe actually noticed this or a closely-related problem before [1]\nand briefly discussed the possibility of rearranging the generated\ncode to make it less indigestible to clang. But there was no concrete\nidea about what to do specifically, and the thread slid off the radar\nscreen.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n\n",
"msg_date": "Fri, 15 Dec 2023 22:19:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clang optimiser vs preproc.c"
},
{
"msg_contents": "On Sat, Dec 16, 2023 at 4:19 PM Tom Lane <[email protected]> wrote:\n> We actually noticed this or a closely-related problem before [1]\n> and briefly discussed the possibility of rearranging the generated\n> code to make it less indigestible to clang. But there was no concrete\n> idea about what to do specifically, and the thread slid off the radar\n> screen.\n\nI've never paid attention to the output of -ftime-report before but\nthis difference stands out pretty clearly with clang16:\n\n ---User Time--- --System Time-- --User+System-- ---Wall\nTime--- --- Name ---\n 201.2266 ( 99.6%) 0.0074 ( 99.3%) 201.2341 ( 99.6%) 207.1308 (\n99.6%) SLPVectorizerPass\n\nThe equivalent line for clang15 is:\n 3.0979 ( 64.8%) 0.0000 ( 0.0%) 3.0979 ( 64.8%) 3.0996 (\n64.8%) SLPVectorizerPass\n\nThe thing Andres showed in that other thread was like this (though in\nmy output it's grown \"#2\") which is much of the time in 15, but \"only\"\ngoes up by a couple of seconds in 16, so it's not our primary problem:\n\n 9.1890 ( 73.1%) 0.0396 ( 23.9%) 9.2286 ( 72.4%) 9.6586 (\n72.9%) Greedy Register Allocator #2\n\n\n",
"msg_date": "Sat, 16 Dec 2023 18:32:07 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clang optimiser vs preproc.c"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-15 22:19:56 -0500, Tom Lane wrote:\n> Thomas Munro <[email protected]> writes:\n> > On Sat, Dec 16, 2023 at 3:44 PM Tom Lane <[email protected]> wrote:\n> >> Thomas Munro <[email protected]> writes:\n> >>> FYI, it looks like there is a big jump in CPU time to compile preproc.c at -O2:\n> >>> clang15: ~16s\n> >>> clang16: ~211s\n> >>> clang17: ~233s\n\nIs this with non-assert clangs? Because I see times that seem smaller by more\nthan what can be explained by hardware differences:\n\npreproc.c:\n17 10.270s\n16 9.685s\n15 8.300s\n\ngram.c:\n17 1.936s\n16 2.131s\n15 2.161s\n\nThat's still bad, but a far cry away from 233s.\n\n\n> Huh. There's not that many more productions in the ecpg grammar\n> than the core, so it doesn't seem likely that this is purely a\n> size-of-file issue. I'd bet on there being something that clang\n> doesn't do well about the (very stylized) C code being generated\n> within the grammar productions.\n> \n> We actually noticed this or a closely-related problem before [1]\n> and briefly discussed the possibility of rearranging the generated\n> code to make it less indigestible to clang. But there was no concrete\n> idea about what to do specifically, and the thread slid off the radar\n> screen.\n\nOne interest aspect might be that preproc.c ends up with ~33% more states than\ngram.c\n\ngram.c:\n#define YYLAST 116587\n\npreproc.c:\n#define YYLAST 155495\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 16 Dec 2023 04:29:00 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clang optimiser vs preproc.c"
},
{
"msg_contents": "On Sun, Dec 17, 2023 at 1:29 AM Andres Freund <[email protected]> wrote:\n> On 2023-12-15 22:19:56 -0500, Tom Lane wrote:\n> > Thomas Munro <[email protected]> writes:\n> > > On Sat, Dec 16, 2023 at 3:44 PM Tom Lane <[email protected]> wrote:\n> > >> Thomas Munro <[email protected]> writes:\n> > >>> FYI, it looks like there is a big jump in CPU time to compile preproc.c at -O2:\n> > >>> clang15: ~16s\n> > >>> clang16: ~211s\n> > >>> clang17: ~233s\n>\n> Is this with non-assert clangs? Because I see times that seem smaller by more\n> than what can be explained by hardware differences:\n>\n> preproc.c:\n> 17 10.270s\n> 16 9.685s\n> 15 8.300s\n>\n> gram.c:\n> 17 1.936s\n> 16 2.131s\n> 15 2.161s\n>\n> That's still bad, but a far cry away from 233s.\n\nHrmph. Well something weird is going on, but it might indeed involve\nme being confused about debug options of the compiler itself. How can\none find out which build options were used for clang/llvm compiler +\nlibraries? My earlier reports were from a little machine at home, so\nlet's try again on an i9-9900 CPU @ 3.10GHz (a bit snappier) running\nDebian 12, again using packages from apt.llvm.org:\n\n17 ~198s\n16 ~14s\n15 ~11s\n\nOK so even if we ignore the wild outlier it is getting significantly\nslower. But... huh, there goes the big jump, but at a different\nversion than I saw with FBSD's packages. Here's what perf says it's\ndoing:\n\n+ 99.42% 20.12% clang-17 libLLVM-17.so.1 [.]\nllvm::slpvectorizer::BoUpSLP::getTreeCost\n ◆\n+ 96.91% 0.00% clang-17 libLLVM-17.so.1 [.]\nllvm::SLPVectorizerPass::runImpl\n ▒\n+ 96.91% 0.00% clang-17 libLLVM-17.so.1 [.]\nllvm::SLPVectorizerPass::vectorizeChainsInBlock\n ▒\n+ 96.91% 0.00% clang-17 libLLVM-17.so.1 [.]\nllvm::SLPVectorizerPass::vectorizeSimpleInstructions\n ▒\n+ 96.91% 0.00% clang-17 libLLVM-17.so.1 [.]\nllvm::SLPVectorizerPass::vectorizeInsertElementInst\n ▒\n+ 96.91% 0.00% clang-17 libLLVM-17.so.1 [.]\nllvm::SLPVectorizerPass::tryToVectorizeList\n ▒\n+ 73.79% 0.00% clang-17 libLLVM-17.so.1 [.]\n0x00007fbead445cb0\n ▒\n+ 36.88% 36.88% clang-17 libLLVM-17.so.1 [.]\n0x0000000001e45cda\n ▒\n+ 3.95% 3.95% clang-17 libLLVM-17.so.1 [.] 0x0000000001e45d11\n\n\n",
"msg_date": "Tue, 19 Dec 2023 11:42:20 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clang optimiser vs preproc.c"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 11:42 AM Thomas Munro <[email protected]> wrote:\n> Hrmph. Well something weird is going on, but it might indeed involve\n> me being confused about debug options of the compiler itself. How can\n> one find out which build options were used for clang/llvm compiler +\n> libraries? My earlier reports were from a little machine at home, so\n> let's try again on an i9-9900 CPU @ 3.10GHz (a bit snappier) running\n> Debian 12, again using packages from apt.llvm.org:\n>\n> 17 ~198s\n> 16 ~14s\n> 15 ~11s\n\nAnd on another Debian machine (this time a VM) also using apt.llvm.org\npackages, the huge ~3 minute time occurs with clang-16... hrrrnnnff...\nseems like there must be some other variable here that I haven't\nspotted yet...\n\n\n",
"msg_date": "Tue, 19 Dec 2023 17:20:39 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clang optimiser vs preproc.c"
},
{
"msg_contents": "Hello Thomas,\n\n19.12.2023 07:20, Thomas Munro wrote:\n> On Tue, Dec 19, 2023 at 11:42 AM Thomas Munro <[email protected]> wrote:\n>> Hrmph. Well something weird is going on, but it might indeed involve\n>> me being confused about debug options of the compiler itself. How can\n>> one find out which build options were used for clang/llvm compiler +\n>> libraries? My earlier reports were from a little machine at home, so\n>> let's try again on an i9-9900 CPU @ 3.10GHz (a bit snappier) running\n>> Debian 12, again using packages from apt.llvm.org:\n>>\n>> 17 ~198s\n>> 16 ~14s\n>> 15 ~11s\n> And on another Debian machine (this time a VM) also using apt.llvm.org\n> packages, the huge ~3 minute time occurs with clang-16... hrrrnnnff...\n> seems like there must be some other variable here that I haven't\n> spotted yet...\n\nReproduced here, with clang-16 and clang-17 (on Ubuntu 22.04, on Fedora 39).\nNamely, I tried on Ubuntu\nclang+llvm-16.0.0-x86_64-linux-gnu-ubuntu-18.04 and\nclang+llvm-17.0.6-x86_64-linux-gnu-ubuntu-22.04\nfrom https://github.com/llvm/llvm-project/releases, as follows:\nPATH=\".../clang+llvm-17.0.6-x86_64-linux-gnu-ubuntu-22.04/bin:$PATH\" CC=clang-17 CPPFLAGS=\"-O2\" sh -c \"./configure -q \n--enable-debug --enable-cassert && time make >make.log\"\nand then\nPATH=\".../clang+llvm-17.0.6-x86_64-linux-gnu-ubuntu-22.04/bin:$PATH\" sh -c \"cd src/interfaces/ecpg/preproc; time \nclang-17 -v -g -I../include -I../../../../src/interfaces/ecpg/include -I. -I../../../../src/interfaces/ecpg/ecpglib \n-I../../../../src/interfaces/libpq -I../../../../src/include -O2 -D_GNU_SOURCE -c -o preproc.o preproc.c\"\n144.59user 0.25system 2:24.94elapsed 99%CPU (0avgtext+0avgdata 217320maxresident)k\n\nThe same is observed with clang-16 (16.0.6 20231112100510+7cbf1a259152...)\ninstalled from http://apt.llvm.org/jammy/.\n(Adding parameters -fno-slp-vectorize or -mllvm -slp-threshold=100000 or\n-mllvm -slp-max-vf=3 decreases/normalizes compilation time.)\n\nOn a fresh vagrant image \"fedora/39-cloud-base\", I tried versions\n16.0.0~rc4, 17.0.0~rc3, 17.0.6 downloaded from\nhttps://koji.fedoraproject.org/koji/packageinfo?packageID=21848\nhttps://koji.fedoraproject.org/koji/packageinfo?packageID=5646\nAll of them also give 2+ minutes for me.\nBut I see no slowdown with version 15.0.7 on the same VM.\n\nAlso, I see no issue with clang-18, installed on Ubuntu from apt.llvm.org.\nSo, as far as I can see, this anomaly started from clang-16, and ended with\nclang-18.\nComparing histories of SLPVectorizer.cpp:\nhttps://github.com/llvm/llvm-project/commits/main/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp\nhttps://github.com/llvm/llvm-project/commits/release/17.x/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp\nI see a commit that probably could fix the issue in the master branch (for\nclang-18):\n[SLP][NFC]Improve compile time by storing all nodes for the given ...\n\nThough I still can't explain how you get ~14s with clang-16. Could you show\nthe exact sequence of commands you use to measure the duration?\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 20 Dec 2023 15:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clang optimiser vs preproc.c"
}
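A small timing harness for the mitigations mentioned above (-fno-slp-vectorize, -mllvm -slp-threshold=...). This is a sketch only: the include flags are abbreviated and need to be adjusted to match the compile command quoted in this message, and clang-17 from apt.llvm.org is assumed to be on PATH:

import subprocess, time

# Abbreviated placeholder for the full compile command quoted above.
base = ["clang-17", "-O2", "-c", "-o", "preproc.o", "preproc.c",
        "-I../include", "-I../../../../src/include"]
variants = {
    "default": [],
    "-fno-slp-vectorize": ["-fno-slp-vectorize"],
    "-mllvm -slp-threshold=100000": ["-mllvm", "-slp-threshold=100000"],
}
for name, extra in variants.items():
    start = time.monotonic()
    subprocess.run(base + extra, check=True)
    print(f"{name}: {time.monotonic() - start:.1f}s")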
] |
[
{
"msg_contents": "Hi\n\nI have made some documentation enhancements for PL/pgSQL and PL/Python\nsections. The changes include the addition of a Quick Start Guide to\nfacilitate a smoother onboarding experience for users.\n\nPatch File Name:\n0001-plpyhton-plpgsql-docu-changes.patch\n\nPatch Description:\nThis patch introduces a Quick Start Guide to the documentation for PL/pgSQL\nand PL/Python. The Quick Start Guide provides users with a step-by-step\nwalkthrough of setting up and using these procedural languages. The goal is\nto improve user accessibility and streamline the learning process.\n\nChanges Made:\n1. Added a new section titled \"Quick Start Guide\" to both PL/pgSQL and\nPL/Python documentation.\n2. Included step-by-step instructions for users to get started with these\nprocedural languages.\n3. Provided explanations, code snippets, and examples to illustrate key\nconcepts.\n\nDiscussion Points:\nI am seeking your feedback on the following aspects:\n- Clarity and completeness of the Quick Start Guide\n- Any additional information or examples that could enhance the guide\n- Suggestions for improving the overall documentation structure\n\nYour insights and suggestions are highly valuable, and I appreciate your\ntime in reviewing this documentation enhancement.\n\n--\nBest regards,\nIshaan Adarsh",
"msg_date": "Sat, 16 Dec 2023 16:19:06 +0530",
"msg_from": "Ishaan Adarsh <[email protected]>",
"msg_from_op": true,
"msg_subject": "[DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
{
"msg_contents": "On Sat, Dec 16, 2023, at 7:49 AM, Ishaan Adarsh wrote:\n> I have made some documentation enhancements for PL/pgSQL and PL/Python sections. The changes include the addition of a Quick Start Guide to facilitate a smoother onboarding experience for users.\n\nGreat! Add your patch to the next CF [1] so we don't miss it.\n\n[1] https://commitfest.postgresql.org/46/\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Sat, Dec 16, 2023, at 7:49 AM, Ishaan Adarsh wrote:I have made some documentation enhancements for PL/pgSQL and PL/Python sections. The changes include the addition of a Quick Start Guide to facilitate a smoother onboarding experience for users.Great! Add your patch to the next CF [1] so we don't miss it.[1] https://commitfest.postgresql.org/46/--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 18 Dec 2023 08:58:14 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
{
"msg_contents": "\nOn Sat, 16 Dec 2023 at 18:49, Ishaan Adarsh <[email protected]> wrote:\n> Hi\n>\n> I have made some documentation enhancements for PL/pgSQL and PL/Python\n> sections. The changes include the addition of a Quick Start Guide to\n> facilitate a smoother onboarding experience for users.\n>\n> Patch File Name:\n> 0001-plpyhton-plpgsql-docu-changes.patch\n>\n> Patch Description:\n> This patch introduces a Quick Start Guide to the documentation for PL/pgSQL\n> and PL/Python. The Quick Start Guide provides users with a step-by-step\n> walkthrough of setting up and using these procedural languages. The goal is\n> to improve user accessibility and streamline the learning process.\n>\n> Changes Made:\n> 1. Added a new section titled \"Quick Start Guide\" to both PL/pgSQL and\n> PL/Python documentation.\n> 2. Included step-by-step instructions for users to get started with these\n> procedural languages.\n> 3. Provided explanations, code snippets, and examples to illustrate key\n> concepts.\n>\n> Discussion Points:\n> I am seeking your feedback on the following aspects:\n> - Clarity and completeness of the Quick Start Guide\n> - Any additional information or examples that could enhance the guide\n> - Suggestions for improving the overall documentation structure\n>\n> Your insights and suggestions are highly valuable, and I appreciate your\n> time in reviewing this documentation enhancement.\n\n1.\nIt seems you miss <filename> tag in plpgsql \"Create the Makefile\":\n\n+ <sect2 id=\"plpgsql-step4\">\n+ <title>Create the Makefile</title>\n+\n+ <para>\n+ Create a Makefile in the <filename>pg_plpgsql_ext</filename> directory with the following content:\n+ </para>\n\n2.\nWe expected use CREATE EXTENSION to load the extension, should we add the\nfollowing in extension--version.sql?\n\n-- complain if script is sourced in psql, rather than via CREATE EXTENSION\n\\echo Use \"CREATE EXTENSION pair\" to load this file. \\quit\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Tue, 19 Dec 2023 14:58:30 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
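For readers following the review above, a sketch of the minimal file layout it refers to: a control file, a script file that starts with the quoted CREATE EXTENSION guard line, and a PGXS Makefile. The pg_plpgsql_ext name comes from the review; the version, control-file fields and function body are placeholders:

from pathlib import Path

# Generate a minimal extension skeleton.  The directory name is taken from the
# review above; everything else here is an illustrative placeholder.
name, version = "pg_plpgsql_ext", "1.0"
d = Path(name)
d.mkdir(exist_ok=True)

(d / f"{name}.control").write_text(
    "comment = 'example extension'\n"
    f"default_version = '{version}'\n"
    "relocatable = true\n")

(d / f"{name}--{version}.sql").write_text(
    "-- complain if script is sourced in psql, rather than via CREATE EXTENSION\n"
    f'\\echo Use "CREATE EXTENSION {name}" to load this file. \\quit\n\n'
    "CREATE FUNCTION hello() RETURNS text\n"
    "    LANGUAGE plpgsql AS $$ BEGIN RETURN 'hello'; END; $$;\n")

# The usual PGXS boilerplate, so "make install" builds against pg_config.
(d / "Makefile").write_text(
    f"EXTENSION = {name}\n"
    f"DATA = {name}--{version}.sql\n"
    "PG_CONFIG = pg_config\n"
    "PGXS := $(shell $(PG_CONFIG) --pgxs)\n"
    "include $(PGXS)\n")

After make install in that directory, CREATE EXTENSION pg_plpgsql_ext loads the script.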
{
"msg_contents": "I've addressed the points you raised:\n\n1. *Missing `<filename>` Tag:*\nI reviewed the \"Create the Makefile\" section, and it seems that <filename>\ntags are appropriately used for filenames. If there's a specific instance\nwhere you observed a missing tag, please provide more details, and I'll\nensure it's addressed.\n\n2. *Use `CREATE EXTENSION` in \"extension--version.sql\":*\n Considering that there's already a CREATE EXTENSION step in the quick\nstart guide, I can include a note in the general documentation to explain\nthe rationale without repeating it in the individual script. What do you\nthink?\n\n--\nBest regards,\nIshaan Adarsh\n\nOn Tue, Dec 19, 2023 at 12:28 PM Japin Li <[email protected]> wrote:\n\n>\n> 1.\n> It seems you miss <filename> tag in plpgsql \"Create the Makefile\":\n>\n> + <sect2 id=\"plpgsql-step4\">\n> + <title>Create the Makefile</title>\n> +\n> + <para>\n> + Create a Makefile in the <filename>pg_plpgsql_ext</filename>\n> directory with the following content:\n> + </para>\n>\n> 2.\n> We expected use CREATE EXTENSION to load the extension, should we add the\n> following in extension--version.sql?\n>\n> -- complain if script is sourced in psql, rather than via CREATE EXTENSION\n> \\echo Use \"CREATE EXTENSION pair\" to load this file. \\quit\n>\n> --\n> Regrads,\n> Japin Li\n> ChengDu WenWu Information Technology Co., Ltd.\n>\n\nI've addressed the points you raised:1. Missing `<filename>` Tag:I reviewed the \"Create the Makefile\" section, and it seems that <filename> tags are appropriately used for filenames. If there's a specific instance where you observed a missing tag, please provide more details, and I'll ensure it's addressed.2. Use `CREATE EXTENSION` in \"extension--version.sql\": Considering that there's already a CREATE EXTENSION step in the quick start guide, I can include a note in the general documentation to explain the rationale without repeating it in the individual script. What do you think?--Best regards,Ishaan AdarshOn Tue, Dec 19, 2023 at 12:28 PM Japin Li <[email protected]> wrote:\n1.\nIt seems you miss <filename> tag in plpgsql \"Create the Makefile\":\n\n+ <sect2 id=\"plpgsql-step4\">\n+ <title>Create the Makefile</title>\n+\n+ <para>\n+ Create a Makefile in the <filename>pg_plpgsql_ext</filename> directory with the following content:\n+ </para>\n\n2.\nWe expected use CREATE EXTENSION to load the extension, should we add the\nfollowing in extension--version.sql?\n\n-- complain if script is sourced in psql, rather than via CREATE EXTENSION\n\\echo Use \"CREATE EXTENSION pair\" to load this file. \\quit\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.",
"msg_date": "Tue, 19 Dec 2023 15:27:24 +0530",
"msg_from": "Ishaan Adarsh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
{
"msg_contents": "\nOn Tue, 19 Dec 2023 at 17:57, Ishaan Adarsh <[email protected]> wrote:\n> I've addressed the points you raised:\n>\n> 1. *Missing `<filename>` Tag:*\n> I reviewed the \"Create the Makefile\" section, and it seems that <filename>\n> tags are appropriately used for filenames. If there's a specific instance\n> where you observed a missing tag, please provide more details, and I'll\n> ensure it's addressed.\n>\n\nThanks.\n\n> 2. *Use `CREATE EXTENSION` in \"extension--version.sql\":*\n> Considering that there's already a CREATE EXTENSION step in the quick\n> start guide, I can include a note in the general documentation to explain\n> the rationale without repeating it in the individual script. What do you\n> think?\n\nAgreed.\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Tue, 19 Dec 2023 22:10:53 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
{
"msg_contents": "On 16.12.23 11:49, Ishaan Adarsh wrote:\n> 1. Added a new section titled \"Quick Start Guide\" to both PL/pgSQL and \n> PL/Python documentation.\n> 2. Included step-by-step instructions for users to get started with \n> these procedural languages.\n> 3. Provided explanations, code snippets, and examples to illustrate key \n> concepts.\n\nThe way I read it, that's not really what your patch does. Your patch \nexplains how to write an extension in PL/pgSQL and PL/Python, \nrespectively. Which is okay, I guess, but a bit unusual. But I \nwouldn't call that an unqualified \"quick start\" in the respective \nlanguages. Also, it seems to repeat the very basics of setting up an \nextension file structure etc. repeatedly in each chapter.\n\nThe existing documentation has \"explanations, code snippets, and \nexamples\". Are they not good? Do you have more? Better ones? Why are \nyours separate from the existing ones?\n\nI think it would be useful to take a step back here and define the \npurpose of a bit clearer.\n\n\n",
"msg_date": "Tue, 19 Dec 2023 16:48:14 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
{
"msg_contents": "Subject: Clarification on the Purpose of the Patch\n\nHi Peter,\n\nThe intention was to address the challenge faced by newcomers in\nunderstanding how to write an extension for PostgreSQL. The existing\ndocumentation, while comprehensive, lacks a consolidated and easy-to-follow\ntutorial that serves as a quick start guide. The goal was to create a\nbeginner-friendly resource that assumes only knowledge of Postgres and the\ntarget language, making it accessible for new contributors because the\nbarrier for entry is prohibitive for new contributors. There are various\nthird-party blog posts focusing on different areas, and sometimes\ncontradictory.\n\nSpecifically:\n1. The new section titled \"Quick Start Guide\" aims to provide step-by-step\ninstructions for users to get started with writing extensions in PL/pgSQL\nand PL/Python.\n2. Explanations, code snippets, and examples were included to illustrate\nkey concepts, making it easier for users to follow along.\n\nI understand your point about the basics of setting up an extension file\nstructure being repeated. The intention was to provide a self-contained\nguide within each language's documentation, ensuring that users who\ndirectly access a specific language's documentation get a complete guide\nwithout having to navigate between sections.\n\nIf there are areas where the existing documentation is already sufficient\nor if there are ways to improve the approach, I am open to suggestions. The\nprimary aim is to enhance the accessibility of extension development\ninformation for newcomers.\n\n--\nBest regards,\nIshaan Adarsh\n\n\nOn Tue, Dec 19, 2023 at 9:18 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 16.12.23 11:49, Ishaan Adarsh wrote:\n> > 1. Added a new section titled \"Quick Start Guide\" to both PL/pgSQL and\n> > PL/Python documentation.\n> > 2. Included step-by-step instructions for users to get started with\n> > these procedural languages.\n> > 3. Provided explanations, code snippets, and examples to illustrate key\n> > concepts.\n>\n> The way I read it, that's not really what your patch does. Your patch\n> explains how to write an extension in PL/pgSQL and PL/Python,\n> respectively. Which is okay, I guess, but a bit unusual. But I\n> wouldn't call that an unqualified \"quick start\" in the respective\n> languages. Also, it seems to repeat the very basics of setting up an\n> extension file structure etc. repeatedly in each chapter.\n>\n> The existing documentation has \"explanations, code snippets, and\n> examples\". Are they not good? Do you have more? Better ones? Why are\n> yours separate from the existing ones?\n>\n> I think it would be useful to take a step back here and define the\n> purpose of a bit clearer.\n>\n\nSubject: Clarification on the Purpose of the PatchHi Peter,The intention was to address the challenge faced by newcomers in understanding how to write an extension for PostgreSQL. The existing documentation, while comprehensive, lacks a consolidated and easy-to-follow tutorial that serves as a quick start guide. The goal was to create a beginner-friendly resource that assumes only knowledge of Postgres and the target language, making it accessible for new contributors because the barrier for entry is prohibitive for new contributors. There are various third-party blog posts focusing on different areas, and sometimes contradictory.Specifically:1. 
The new section titled \"Quick Start Guide\" aims to provide step-by-step instructions for users to get started with writing extensions in PL/pgSQL and PL/Python.2. Explanations, code snippets, and examples were included to illustrate key concepts, making it easier for users to follow along.I understand your point about the basics of setting up an extension file structure being repeated. The intention was to provide a self-contained guide within each language's documentation, ensuring that users who directly access a specific language's documentation get a complete guide without having to navigate between sections.If there are areas where the existing documentation is already sufficient or if there are ways to improve the approach, I am open to suggestions. The primary aim is to enhance the accessibility of extension development information for newcomers.--Best regards,Ishaan AdarshOn Tue, Dec 19, 2023 at 9:18 PM Peter Eisentraut <[email protected]> wrote:On 16.12.23 11:49, Ishaan Adarsh wrote:\n> 1. Added a new section titled \"Quick Start Guide\" to both PL/pgSQL and \n> PL/Python documentation.\n> 2. Included step-by-step instructions for users to get started with \n> these procedural languages.\n> 3. Provided explanations, code snippets, and examples to illustrate key \n> concepts.\n\nThe way I read it, that's not really what your patch does. Your patch \nexplains how to write an extension in PL/pgSQL and PL/Python, \nrespectively. Which is okay, I guess, but a bit unusual. But I \nwouldn't call that an unqualified \"quick start\" in the respective \nlanguages. Also, it seems to repeat the very basics of setting up an \nextension file structure etc. repeatedly in each chapter.\n\nThe existing documentation has \"explanations, code snippets, and \nexamples\". Are they not good? Do you have more? Better ones? Why are \nyours separate from the existing ones?\n\nI think it would be useful to take a step back here and define the \npurpose of a bit clearer.",
"msg_date": "Tue, 19 Dec 2023 21:56:01 +0530",
"msg_from": "Ishaan Adarsh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
{
"msg_contents": "On 19.12.23 17:26, Ishaan Adarsh wrote:\n> Subject: Clarification on the Purpose of the Patch\n> \n> Hi Peter,\n> \n> The intention was to address the challenge faced by newcomers in \n> understanding how to write an extension for PostgreSQL. The existing \n> documentation, while comprehensive, lacks a consolidated and \n> easy-to-follow tutorial that serves as a quick start guide. The goal was \n> to create a beginner-friendly resource that assumes only knowledge of \n> Postgres and the target language, making it accessible for new \n> contributors because the barrier for entry is prohibitive for new \n> contributors. There are various third-party blog posts focusing on \n> different areas, and sometimes contradictory.\n\nHave you seen this: \nhttps://www.postgresql.org/docs/devel/extend-extensions.html#EXTEND-EXTENSIONS-EXAMPLE\n\nMaybe that could be extended/modified/simplified?\n\n> Specifically:\n> 1. The new section titled \"Quick Start Guide\" aims to provide \n> step-by-step instructions for users to get started with writing \n> extensions in PL/pgSQL and PL/Python.\n\nWhat's confusing here is writing an extension in a PL language is not a \nnormal use case I'd say. The normal use case involves some C code.\n\n\n\n",
"msg_date": "Thu, 21 Dec 2023 11:18:02 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
{
"msg_contents": "Hi\n\nčt 21. 12. 2023 v 11:18 odesílatel Peter Eisentraut <[email protected]>\nnapsal:\n\n> On 19.12.23 17:26, Ishaan Adarsh wrote:\n> > Subject: Clarification on the Purpose of the Patch\n> >\n> > Hi Peter,\n> >\n> > The intention was to address the challenge faced by newcomers in\n> > understanding how to write an extension for PostgreSQL. The existing\n> > documentation, while comprehensive, lacks a consolidated and\n> > easy-to-follow tutorial that serves as a quick start guide. The goal was\n> > to create a beginner-friendly resource that assumes only knowledge of\n> > Postgres and the target language, making it accessible for new\n> > contributors because the barrier for entry is prohibitive for new\n> > contributors. There are various third-party blog posts focusing on\n> > different areas, and sometimes contradictory.\n>\n> Have you seen this:\n>\n> https://www.postgresql.org/docs/devel/extend-extensions.html#EXTEND-EXTENSIONS-EXAMPLE\n>\n> Maybe that could be extended/modified/simplified?\n>\n> > Specifically:\n> > 1. The new section titled \"Quick Start Guide\" aims to provide\n> > step-by-step instructions for users to get started with writing\n> > extensions in PL/pgSQL and PL/Python.\n>\n> What's confusing here is writing an extension in a PL language is not a\n> normal use case I'd say. The normal use case involves some C code.\n>\n\n Extensions were designed for C, but they are working with PL well too.\nSome of my customers use extensions for PLpgSQL and they are almost happy.\n1) there is nothing else, 2) it is really works\n\nI agree with Peter - this topic is not what I imagine under \"Quick start\nguide\"\n\nRegards\n\nPavel\n\nHičt 21. 12. 2023 v 11:18 odesílatel Peter Eisentraut <[email protected]> napsal:On 19.12.23 17:26, Ishaan Adarsh wrote:\n> Subject: Clarification on the Purpose of the Patch\n> \n> Hi Peter,\n> \n> The intention was to address the challenge faced by newcomers in \n> understanding how to write an extension for PostgreSQL. The existing \n> documentation, while comprehensive, lacks a consolidated and \n> easy-to-follow tutorial that serves as a quick start guide. The goal was \n> to create a beginner-friendly resource that assumes only knowledge of \n> Postgres and the target language, making it accessible for new \n> contributors because the barrier for entry is prohibitive for new \n> contributors. There are various third-party blog posts focusing on \n> different areas, and sometimes contradictory.\n\nHave you seen this: \nhttps://www.postgresql.org/docs/devel/extend-extensions.html#EXTEND-EXTENSIONS-EXAMPLE\n\nMaybe that could be extended/modified/simplified?\n\n> Specifically:\n> 1. The new section titled \"Quick Start Guide\" aims to provide \n> step-by-step instructions for users to get started with writing \n> extensions in PL/pgSQL and PL/Python.\n\nWhat's confusing here is writing an extension in a PL language is not a \nnormal use case I'd say. The normal use case involves some C code. Extensions were designed for C, but they are working with PL well too. Some of my customers use extensions for PLpgSQL and they are almost happy. 1) there is nothing else, 2) it is really worksI agree with Peter - this topic is not what I imagine under \"Quick start guide\"RegardsPavel",
"msg_date": "Thu, 21 Dec 2023 11:46:29 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
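For context on what an extension "in PL" means here: a PL/Python function body is ordinary Python (PL/Python wraps the body in a function for you), so packaging one in an extension is just a matter of putting the CREATE FUNCTION statement into the extension's script file. A sketch with illustrative names:

# A PL/Python function body is plain Python; only the body (not the def line)
# goes between the dollar quotes of the CREATE FUNCTION statement, e.g.
#
#   CREATE FUNCTION pymax(a integer, b integer) RETURNS integer
#       LANGUAGE plpython3u AS $$
#   if a is None or b is None:
#       return None
#   return a if a > b else b
#   $$;
#
# The same logic as a standalone, testable Python function:
def pymax(a, b):
    if a is None or b is None:     # SQL NULL arrives as Python None
        return None
    return a if a > b else b

print(pymax(2, 5))   # 5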
{
"msg_contents": "The recent documentation patches are part of my GSoC 2023 project\n<https://wiki.postgresql.org/wiki/GSoC_2023#Postgres_extension_tutorial_.2F_quick_start>\nto develop a comprehensive PostgreSQL extension development tutorial, it\nassumes only a basic knowledge of Postgres and the target programming\nlanguage.\n\nThe entire project is available on GitHub: Postgres-extension-tutorial\n<https://github.com/IshaanAdarsh/Postgres-extension-tutorial/blob/main/SGML/intro_and_toc.md>.\nIt covers many topics, including prerequisites, writing extensions,\ncreating Makefiles, using procedural languages, incorporating external\nlanguages, writing regression tests, and managing extension releases.\n*The patch submitted\nfor procedural languages, specifically PL/pgSQL and PL/Python, is part of\nthe procedural language section within the broader tutorial. *\n\nBased on the feedback I think there is a real need\n<https://twitter.com/jer_s/status/1699071450915938609> for this as this is\na very important and growing part of the Postgres ecosystem. Currently, all\nthe extension material is scattered and very limited. There are various\nthird-party blog posts focusing on different areas, and sometimes\ncontradictory. The main motivation behind making this is to make the barrier\nfor entry less prohibitive for new contributors.\n\nI would greatly appreciate your input on how to add it to the existing\ndocumentation (this is where I have major doubts) and any suggestions on\nhow to proceed. If there are areas where the existing documentation is\nalready sufficient or if there are ways to improve the overall structure, I\nam open to making adjustments.\n\nBest,\nIshaan Adarsh\n\n\nOn Thu, Dec 21, 2023 at 4:17 PM Pavel Stehule <[email protected]>\nwrote:\n\n> Hi\n>\n> čt 21. 12. 2023 v 11:18 odesílatel Peter Eisentraut <[email protected]>\n> napsal:\n>\n>> On 19.12.23 17:26, Ishaan Adarsh wrote:\n>> > Subject: Clarification on the Purpose of the Patch\n>> >\n>> > Hi Peter,\n>> >\n>> > The intention was to address the challenge faced by newcomers in\n>> > understanding how to write an extension for PostgreSQL. The existing\n>> > documentation, while comprehensive, lacks a consolidated and\n>> > easy-to-follow tutorial that serves as a quick start guide. The goal\n>> was\n>> > to create a beginner-friendly resource that assumes only knowledge of\n>> > Postgres and the target language, making it accessible for new\n>> > contributors because the barrier for entry is prohibitive for new\n>> > contributors. There are various third-party blog posts focusing on\n>> > different areas, and sometimes contradictory.\n>>\n>> Have you seen this:\n>>\n>> https://www.postgresql.org/docs/devel/extend-extensions.html#EXTEND-EXTENSIONS-EXAMPLE\n>>\n>> Maybe that could be extended/modified/simplified?\n>>\n>> > Specifically:\n>> > 1. The new section titled \"Quick Start Guide\" aims to provide\n>> > step-by-step instructions for users to get started with writing\n>> > extensions in PL/pgSQL and PL/Python.\n>>\n>> What's confusing here is writing an extension in a PL language is not a\n>> normal use case I'd say. 
The normal use case involves some C code.\n>>\n>\n> Extensions were designed for C, but they are working with PL well too.\n> Some of my customers use extensions for PLpgSQL and they are almost happy.\n> 1) there is nothing else, 2) it is really works\n>\n> I agree with Peter - this topic is not what I imagine under \"Quick start\n> guide\"\n>\n> Regards\n>\n> Pavel\n>\n\nThe recent documentation patches are part of my GSoC 2023 project to develop a comprehensive PostgreSQL extension development tutorial, it assumes only a basic knowledge of Postgres and the target programming language. The entire project is available on GitHub: Postgres-extension-tutorial. It covers many topics, including prerequisites, writing extensions, creating Makefiles, using procedural languages, incorporating external languages, writing regression tests, and managing extension releases. The patch submitted for procedural languages, specifically PL/pgSQL and PL/Python, is part of the procedural language section within the broader tutorial. Based on the feedback I think there is a real need for this as this is a very important and growing part of the Postgres ecosystem. Currently, all the extension material is scattered and very limited. There are various third-party blog posts focusing on different areas, and sometimes contradictory. The main motivation behind making this is to make the barrier for entry less prohibitive for new contributors.I would greatly appreciate your input on how to add it to the existing documentation (this is where I have major doubts) and any suggestions on how to proceed. If there are areas where the existing documentation is already sufficient or if there are ways to improve the overall structure, I am open to making adjustments. Best,Ishaan AdarshOn Thu, Dec 21, 2023 at 4:17 PM Pavel Stehule <[email protected]> wrote:Hičt 21. 12. 2023 v 11:18 odesílatel Peter Eisentraut <[email protected]> napsal:On 19.12.23 17:26, Ishaan Adarsh wrote:\n> Subject: Clarification on the Purpose of the Patch\n> \n> Hi Peter,\n> \n> The intention was to address the challenge faced by newcomers in \n> understanding how to write an extension for PostgreSQL. The existing \n> documentation, while comprehensive, lacks a consolidated and \n> easy-to-follow tutorial that serves as a quick start guide. The goal was \n> to create a beginner-friendly resource that assumes only knowledge of \n> Postgres and the target language, making it accessible for new \n> contributors because the barrier for entry is prohibitive for new \n> contributors. There are various third-party blog posts focusing on \n> different areas, and sometimes contradictory.\n\nHave you seen this: \nhttps://www.postgresql.org/docs/devel/extend-extensions.html#EXTEND-EXTENSIONS-EXAMPLE\n\nMaybe that could be extended/modified/simplified?\n\n> Specifically:\n> 1. The new section titled \"Quick Start Guide\" aims to provide \n> step-by-step instructions for users to get started with writing \n> extensions in PL/pgSQL and PL/Python.\n\nWhat's confusing here is writing an extension in a PL language is not a \nnormal use case I'd say. The normal use case involves some C code. Extensions were designed for C, but they are working with PL well too. Some of my customers use extensions for PLpgSQL and they are almost happy. 1) there is nothing else, 2) it is really worksI agree with Peter - this topic is not what I imagine under \"Quick start guide\"RegardsPavel",
"msg_date": "Thu, 21 Dec 2023 18:07:39 +0530",
"msg_from": "Ishaan Adarsh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
{
"msg_contents": "Hi\n\nčt 21. 12. 2023 v 13:37 odesílatel Ishaan Adarsh <[email protected]>\nnapsal:\n\n> The recent documentation patches are part of my GSoC 2023 project\n> <https://wiki.postgresql.org/wiki/GSoC_2023#Postgres_extension_tutorial_.2F_quick_start>\n> to develop a comprehensive PostgreSQL extension development tutorial, it\n> assumes only a basic knowledge of Postgres and the target programming\n> language.\n>\n> The entire project is available on GitHub: Postgres-extension-tutorial\n> <https://github.com/IshaanAdarsh/Postgres-extension-tutorial/blob/main/SGML/intro_and_toc.md>.\n> It covers many topics, including prerequisites, writing extensions,\n> creating Makefiles, using procedural languages, incorporating external\n> languages, writing regression tests, and managing extension releases. *The patch submitted\n> for procedural languages, specifically PL/pgSQL and PL/Python, is part of\n> the procedural language section within the broader tutorial. *\n>\n> Based on the feedback I think there is a real need\n> <https://twitter.com/jer_s/status/1699071450915938609> for this as this\n> is a very important and growing part of the Postgres ecosystem. Currently,\n> all the extension material is scattered and very limited. There are\n> various third-party blog posts focusing on different areas, and sometimes\n> contradictory. The main motivation behind making this is to make the barrier\n> for entry less prohibitive for new contributors.\n>\n> I would greatly appreciate your input on how to add it to the existing\n> documentation (this is where I have major doubts) and any suggestions on\n> how to proceed. If there are areas where the existing documentation is\n> already sufficient or if there are ways to improve the overall structure, I\n> am open to making adjustments.\n>\n\nhttps://www.postgresql.org/docs/current/plpgsql-development-tips.html and\nnew section - deployment or packaging to extensions\n\nI agree so https://www.postgresql.org/docs/current/plpgsql-overview.html is\nunder dimensioned, but packaging should not be there\n\nRegards\n\nPavel\n\n\n>\n> Best,\n> Ishaan Adarsh\n>\n>\n> On Thu, Dec 21, 2023 at 4:17 PM Pavel Stehule <[email protected]>\n> wrote:\n>\n>> Hi\n>>\n>> čt 21. 12. 2023 v 11:18 odesílatel Peter Eisentraut <[email protected]>\n>> napsal:\n>>\n>>> On 19.12.23 17:26, Ishaan Adarsh wrote:\n>>> > Subject: Clarification on the Purpose of the Patch\n>>> >\n>>> > Hi Peter,\n>>> >\n>>> > The intention was to address the challenge faced by newcomers in\n>>> > understanding how to write an extension for PostgreSQL. The existing\n>>> > documentation, while comprehensive, lacks a consolidated and\n>>> > easy-to-follow tutorial that serves as a quick start guide. The goal\n>>> was\n>>> > to create a beginner-friendly resource that assumes only knowledge of\n>>> > Postgres and the target language, making it accessible for new\n>>> > contributors because the barrier for entry is prohibitive for new\n>>> > contributors. There are various third-party blog posts focusing on\n>>> > different areas, and sometimes contradictory.\n>>>\n>>> Have you seen this:\n>>>\n>>> https://www.postgresql.org/docs/devel/extend-extensions.html#EXTEND-EXTENSIONS-EXAMPLE\n>>>\n>>> Maybe that could be extended/modified/simplified?\n>>>\n>>> > Specifically:\n>>> > 1. 
The new section titled \"Quick Start Guide\" aims to provide\n>>> > step-by-step instructions for users to get started with writing\n>>> > extensions in PL/pgSQL and PL/Python.\n>>>\n>>> What's confusing here is writing an extension in a PL language is not a\n>>> normal use case I'd say. The normal use case involves some C code.\n>>>\n>>\n>> Extensions were designed for C, but they are working with PL well too.\n>> Some of my customers use extensions for PLpgSQL and they are almost happy.\n>> 1) there is nothing else, 2) it is really works\n>>\n>> I agree with Peter - this topic is not what I imagine under \"Quick start\n>> guide\"\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\n\nHičt 21. 12. 2023 v 13:37 odesílatel Ishaan Adarsh <[email protected]> napsal:The recent documentation patches are part of my GSoC 2023 project to develop a comprehensive PostgreSQL extension development tutorial, it assumes only a basic knowledge of Postgres and the target programming language. The entire project is available on GitHub: Postgres-extension-tutorial. It covers many topics, including prerequisites, writing extensions, creating Makefiles, using procedural languages, incorporating external languages, writing regression tests, and managing extension releases. The patch submitted for procedural languages, specifically PL/pgSQL and PL/Python, is part of the procedural language section within the broader tutorial. Based on the feedback I think there is a real need for this as this is a very important and growing part of the Postgres ecosystem. Currently, all the extension material is scattered and very limited. There are various third-party blog posts focusing on different areas, and sometimes contradictory. The main motivation behind making this is to make the barrier for entry less prohibitive for new contributors.I would greatly appreciate your input on how to add it to the existing documentation (this is where I have major doubts) and any suggestions on how to proceed. If there are areas where the existing documentation is already sufficient or if there are ways to improve the overall structure, I am open to making adjustments. https://www.postgresql.org/docs/current/plpgsql-development-tips.html and new section - deployment or packaging to extensionsI agree so https://www.postgresql.org/docs/current/plpgsql-overview.html is under dimensioned, but packaging should not be thereRegardsPavel Best,Ishaan AdarshOn Thu, Dec 21, 2023 at 4:17 PM Pavel Stehule <[email protected]> wrote:Hičt 21. 12. 2023 v 11:18 odesílatel Peter Eisentraut <[email protected]> napsal:On 19.12.23 17:26, Ishaan Adarsh wrote:\n> Subject: Clarification on the Purpose of the Patch\n> \n> Hi Peter,\n> \n> The intention was to address the challenge faced by newcomers in \n> understanding how to write an extension for PostgreSQL. The existing \n> documentation, while comprehensive, lacks a consolidated and \n> easy-to-follow tutorial that serves as a quick start guide. The goal was \n> to create a beginner-friendly resource that assumes only knowledge of \n> Postgres and the target language, making it accessible for new \n> contributors because the barrier for entry is prohibitive for new \n> contributors. There are various third-party blog posts focusing on \n> different areas, and sometimes contradictory.\n\nHave you seen this: \nhttps://www.postgresql.org/docs/devel/extend-extensions.html#EXTEND-EXTENSIONS-EXAMPLE\n\nMaybe that could be extended/modified/simplified?\n\n> Specifically:\n> 1. 
The new section titled \"Quick Start Guide\" aims to provide \n> step-by-step instructions for users to get started with writing \n> extensions in PL/pgSQL and PL/Python.\n\nWhat's confusing here is writing an extension in a PL language is not a \nnormal use case I'd say. The normal use case involves some C code. Extensions were designed for C, but they are working with PL well too. Some of my customers use extensions for PLpgSQL and they are almost happy. 1) there is nothing else, 2) it is really worksI agree with Peter - this topic is not what I imagine under \"Quick start guide\"RegardsPavel",
"msg_date": "Thu, 21 Dec 2023 14:03:38 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
{
"msg_contents": "\nOn Thu, 21 Dec 2023 at 21:03, Pavel Stehule <[email protected]> wrote:\n> Hi\n>\n> čt 21. 12. 2023 v 13:37 odesílatel Ishaan Adarsh <[email protected]>\n> napsal:\n>\n>> The recent documentation patches are part of my GSoC 2023 project\n>> <https://wiki.postgresql.org/wiki/GSoC_2023#Postgres_extension_tutorial_.2F_quick_start>\n>> to develop a comprehensive PostgreSQL extension development tutorial, it\n>> assumes only a basic knowledge of Postgres and the target programming\n>> language.\n>>\n>> The entire project is available on GitHub: Postgres-extension-tutorial\n>> <https://github.com/IshaanAdarsh/Postgres-extension-tutorial/blob/main/SGML/intro_and_toc.md>.\n>> It covers many topics, including prerequisites, writing extensions,\n>> creating Makefiles, using procedural languages, incorporating external\n>> languages, writing regression tests, and managing extension releases. *The patch submitted\n>> for procedural languages, specifically PL/pgSQL and PL/Python, is part of\n>> the procedural language section within the broader tutorial. *\n>>\n>> Based on the feedback I think there is a real need\n>> <https://twitter.com/jer_s/status/1699071450915938609> for this as this\n>> is a very important and growing part of the Postgres ecosystem. Currently,\n>> all the extension material is scattered and very limited. There are\n>> various third-party blog posts focusing on different areas, and sometimes\n>> contradictory. The main motivation behind making this is to make the barrier\n>> for entry less prohibitive for new contributors.\n>>\n>> I would greatly appreciate your input on how to add it to the existing\n>> documentation (this is where I have major doubts) and any suggestions on\n>> how to proceed. If there are areas where the existing documentation is\n>> already sufficient or if there are ways to improve the overall structure, I\n>> am open to making adjustments.\n>>\n>\n> https://www.postgresql.org/docs/current/plpgsql-development-tips.html and\n> new section - deployment or packaging to extensions\n>\n> I agree so https://www.postgresql.org/docs/current/plpgsql-overview.html is\n> under dimensioned, but packaging should not be there\n>\n\nIt seems redundant if we add this for each PL, maybe a separate section to\ndescribe how to package PL into extensions is better.\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Fri, 22 Dec 2023 22:50:08 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
{
"msg_contents": "pá 22. 12. 2023 v 15:50 odesílatel Japin Li <[email protected]> napsal:\n\n>\n> On Thu, 21 Dec 2023 at 21:03, Pavel Stehule <[email protected]>\n> wrote:\n> > Hi\n> >\n> > čt 21. 12. 2023 v 13:37 odesílatel Ishaan Adarsh <[email protected]>\n> > napsal:\n> >\n> >> The recent documentation patches are part of my GSoC 2023 project\n> >> <\n> https://wiki.postgresql.org/wiki/GSoC_2023#Postgres_extension_tutorial_.2F_quick_start\n> >\n> >> to develop a comprehensive PostgreSQL extension development tutorial, it\n> >> assumes only a basic knowledge of Postgres and the target programming\n> >> language.\n> >>\n> >> The entire project is available on GitHub: Postgres-extension-tutorial\n> >> <\n> https://github.com/IshaanAdarsh/Postgres-extension-tutorial/blob/main/SGML/intro_and_toc.md\n> >.\n> >> It covers many topics, including prerequisites, writing extensions,\n> >> creating Makefiles, using procedural languages, incorporating external\n> >> languages, writing regression tests, and managing extension releases.\n> *The patch submitted\n> >> for procedural languages, specifically PL/pgSQL and PL/Python, is part\n> of\n> >> the procedural language section within the broader tutorial. *\n> >>\n> >> Based on the feedback I think there is a real need\n> >> <https://twitter.com/jer_s/status/1699071450915938609> for this as this\n> >> is a very important and growing part of the Postgres ecosystem.\n> Currently,\n> >> all the extension material is scattered and very limited. There are\n> >> various third-party blog posts focusing on different areas, and\n> sometimes\n> >> contradictory. The main motivation behind making this is to make the\n> barrier\n> >> for entry less prohibitive for new contributors.\n> >>\n> >> I would greatly appreciate your input on how to add it to the existing\n> >> documentation (this is where I have major doubts) and any suggestions on\n> >> how to proceed. If there are areas where the existing documentation is\n> >> already sufficient or if there are ways to improve the overall\n> structure, I\n> >> am open to making adjustments.\n> >>\n> >\n> > https://www.postgresql.org/docs/current/plpgsql-development-tips.html\n> and\n> > new section - deployment or packaging to extensions\n> >\n> > I agree so https://www.postgresql.org/docs/current/plpgsql-overview.html\n> is\n> > under dimensioned, but packaging should not be there\n> >\n>\n> It seems redundant if we add this for each PL, maybe a separate section to\n> describe how to package PL into extensions is better.\n>\n\nI have not a strong opinion about it. My personal experience is so 99% PL\ncode is PLpgSQL, so it can be there, and other PL can be referenced there.\nI am not sure if there is some common part for all PL.\n\nRegards\n\nPavel\n\n>\n> --\n> Regrads,\n> Japin Li\n> ChengDu WenWu Information Technology Co., Ltd.\n>\n\npá 22. 12. 2023 v 15:50 odesílatel Japin Li <[email protected]> napsal:\nOn Thu, 21 Dec 2023 at 21:03, Pavel Stehule <[email protected]> wrote:\n> Hi\n>\n> čt 21. 12. 
2023 v 13:37 odesílatel Ishaan Adarsh <[email protected]>\n> napsal:\n>\n>> The recent documentation patches are part of my GSoC 2023 project\n>> <https://wiki.postgresql.org/wiki/GSoC_2023#Postgres_extension_tutorial_.2F_quick_start>\n>> to develop a comprehensive PostgreSQL extension development tutorial, it\n>> assumes only a basic knowledge of Postgres and the target programming\n>> language.\n>>\n>> The entire project is available on GitHub: Postgres-extension-tutorial\n>> <https://github.com/IshaanAdarsh/Postgres-extension-tutorial/blob/main/SGML/intro_and_toc.md>.\n>> It covers many topics, including prerequisites, writing extensions,\n>> creating Makefiles, using procedural languages, incorporating external\n>> languages, writing regression tests, and managing extension releases. *The patch submitted\n>> for procedural languages, specifically PL/pgSQL and PL/Python, is part of\n>> the procedural language section within the broader tutorial. *\n>>\n>> Based on the feedback I think there is a real need\n>> <https://twitter.com/jer_s/status/1699071450915938609> for this as this\n>> is a very important and growing part of the Postgres ecosystem. Currently,\n>> all the extension material is scattered and very limited. There are\n>> various third-party blog posts focusing on different areas, and sometimes\n>> contradictory. The main motivation behind making this is to make the barrier\n>> for entry less prohibitive for new contributors.\n>>\n>> I would greatly appreciate your input on how to add it to the existing\n>> documentation (this is where I have major doubts) and any suggestions on\n>> how to proceed. If there are areas where the existing documentation is\n>> already sufficient or if there are ways to improve the overall structure, I\n>> am open to making adjustments.\n>>\n>\n> https://www.postgresql.org/docs/current/plpgsql-development-tips.html and\n> new section - deployment or packaging to extensions\n>\n> I agree so https://www.postgresql.org/docs/current/plpgsql-overview.html is\n> under dimensioned, but packaging should not be there\n>\n\nIt seems redundant if we add this for each PL, maybe a separate section to\ndescribe how to package PL into extensions is better.I have not a strong opinion about it. My personal experience is so 99% PL code is PLpgSQL, so it can be there, and other PL can be referenced there. I am not sure if there is some common part for all PL. RegardsPavel\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.",
"msg_date": "Fri, 22 Dec 2023 18:03:15 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 12:04 PM Pavel Stehule <[email protected]> wrote:\n> I have not a strong opinion about it. My personal experience is so 99% PL code is PLpgSQL, so it can be there, and other PL can be referenced there. I am not sure if there is some common part for all PL.\n\nAfter reading over this thread, it seems clear to me that there is no\nconsensus to proceed with this patch in its current form, and the\ndiscussion seems to have stalled. Accordingly, I've marked this\n\"Returned with Feedback\" in the CommitFest.\n\nIshaan, if you plan to rework this into a form which might be\nacceptable given the review comments made up until now, please feel\nfree to change this back to \"Waiting on Author\", and/or move it to a\nfuture CommitFest.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 11:15:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Introducing Quick Start Guide to PL/pgSQL and PL/Python\n Documentation"
}
] |
[
{
"msg_contents": "Respected sir/madam,\r\nI am Sukhbir Singh, a B.Tech undergraduate, and I am in my pre-final year at Punjab Engineering College (PEC). I am new to open source but know Python, PostgreSQL, C, Javascript and node.js. I would love to contribute to your organization, but could you please tell me how to get started?\r\nHoping to hear from you soon.\r\nRegards\r\nSukhbir\r\n\n\n\n\n\n\n\n\r\nRespected sir/madam,\n\r\nI am Sukhbir Singh, a B.Tech undergraduate, and I am in my pre-final year at Punjab Engineering College (PEC). I am new to open source but know Python, PostgreSQL, C, Javascript and node.js. I would love to contribute to your organization, but could you please\r\n tell me how to get started?\n\r\nHoping to hear from you soon.\n\nRegards\n\nSukhbir",
"msg_date": "Sun, 17 Dec 2023 15:09:10 +0000",
"msg_from": "Sukhbir Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to get started with contribution"
},
{
"msg_contents": "Hi,\n\nOn Sun, Dec 17, 2023 at 03:09:10PM +0000, Sukhbir Singh wrote:\n>\n> I am Sukhbir Singh, a B.Tech undergraduate, and I am in my pre-final year at\n> Punjab Engineering College (PEC). I am new to open source but know Python,\n> PostgreSQL, C, Javascript and node.js. I would love to contribute to your\n> organization, but could you please tell me how to get started?\n\nYou can look at\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F, it's a good\nstarting point with many links to additional resources.\n\n\n",
"msg_date": "Sun, 17 Dec 2023 17:34:10 +0100",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get started with contribution"
},
{
"msg_contents": "Thanks a lot for sharing. I'm excited about the opportunity to contribute\nto your organization and apply my skills to your project, making a valuable\nimpact.\n\nOn Sun, 17 Dec, 2023, 10:04 pm Julien Rouhaud, <[email protected]> wrote:\n\n> Hi,\n>\n> On Sun, Dec 17, 2023 at 03:09:10PM +0000, Sukhbir Singh wrote:\n> >\n> > I am Sukhbir Singh, a B.Tech undergraduate, and I am in my pre-final\n> year at\n> > Punjab Engineering College (PEC). I am new to open source but know\n> Python,\n> > PostgreSQL, C, Javascript and node.js. I would love to contribute to your\n> > organization, but could you please tell me how to get started?\n>\n> You can look at\n> https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F, it's\n> a good\n> starting point with many links to additional resources.\n>\n\nThanks a lot for sharing. I'm excited about the opportunity to contribute to your organization and apply my skills to your project, making a valuable impact.On Sun, 17 Dec, 2023, 10:04 pm Julien Rouhaud, <[email protected]> wrote:Hi,\n\nOn Sun, Dec 17, 2023 at 03:09:10PM +0000, Sukhbir Singh wrote:\n>\n> I am Sukhbir Singh, a B.Tech undergraduate, and I am in my pre-final year at\n> Punjab Engineering College (PEC). I am new to open source but know Python,\n> PostgreSQL, C, Javascript and node.js. I would love to contribute to your\n> organization, but could you please tell me how to get started?\n\nYou can look at\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F, it's a good\nstarting point with many links to additional resources.",
"msg_date": "Sun, 17 Dec 2023 22:12:52 +0530",
"msg_from": "Sukhbir Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get started with contribution"
},
{
"msg_contents": "On 12/17/23 17:34, Julien Rouhaud wrote:\n> Hi,\n> \n> On Sun, Dec 17, 2023 at 03:09:10PM +0000, Sukhbir Singh wrote:\n>>\n>> I am Sukhbir Singh, a B.Tech undergraduate, and I am in my pre-final year at\n>> Punjab Engineering College (PEC). I am new to open source but know Python,\n>> PostgreSQL, C, Javascript and node.js. I would love to contribute to your\n>> organization, but could you please tell me how to get started?\n> \n> You can look at\n> https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F, it's a good\n> starting point with many links to additional resources.\n> \n\nYeah, there's also a Developer FAQ (linked from that wiki page) [1],\ndiscussing various steps in the development process in more detail.\n\nBut I think all of this ignores a step that's fairly hard for new\ncontributors - finding what to work on. Without that, knowing the\ntechnical stuff is a bit pointless :-(\n\nSukhbir, do you have any idea what you'd like to work on? General area\nof interest, tool you'd be interested in, etc.? Maybe take a look at\nwhat other people work on in the commitfest app [2], and review a couple\npatches that you find interesting.\n\nThat'll allow you to setup the environment, get familiar with the core\nand various development tasks (running tests, ..).\n\nIf you get stuck. ask in this thread and we'll try to help you.\n\n\n\n[1] https://wiki.postgresql.org/wiki/Developer_FAQ\n\n[2] https://commitfest.postgresql.org/46/\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 17 Dec 2023 18:16:16 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get started with contribution"
}
] |
[
{
"msg_contents": "Hey Hackers,\n\nQuick follow-up to my slew of questions back in [September][1]. I wanted to update [my patch][2] to note that only JSON Path equality operators are supported by indexes, as [previously discussed][3]. I thought perhaps adding a note to this bit of the docs would be useful:\n\n> For these operators, a GIN index extracts clauses of the form accessors_chain = constant out of the jsonpath pattern, and does the index search based on the keys and values mentioned in these clauses. The accessors chain may include .key, [*], and [index] accessors. The jsonb_ops operator class also supports .* and .** accessors, but the jsonb_path_ops operator class does not.\n\nBut perhaps that’s what `accessors_chain = constant` is supposed to mean? I’m not super clear on it, though, since the operator is `==` and not `=` (and I would presume that `!=` would use the index, as well. Is that correct?\n\nIf so, how would you feel about something like this?\n\n--- a/doc/src/sgml/json.sgml\n+++ b/doc/src/sgml/json.sgml\n@@ -513,7 +513,7 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == \"qui\"';\n </programlisting>\n For these operators, a GIN index extracts clauses of the form\n <literal><replaceable>accessors_chain</replaceable>\n- = <replaceable>constant</replaceable></literal> out of\n+ == <replaceable>constant</replaceable></literal> out of\n the <type>jsonpath</type> pattern, and does the index search based on\n the keys and values mentioned in these clauses. The accessors chain\n may include <literal>.<replaceable>key</replaceable></literal>,\n@@ -522,6 +522,9 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == \"qui\"';\n The <literal>jsonb_ops</literal> operator class also\n supports <literal>.*</literal> and <literal>.**</literal> accessors,\n but the <literal>jsonb_path_ops</literal> operator class does not.\n+ Only the <literal>==</literal> and <literal>!=</literal> <link\n+ linkend=\"functions-sqljson-path-operators\">SQL/JSON Path Operators</link>\n+ can use the index.\n </para>\n \n <para>\n\nBest,\n\nDavid\n\n [1]: https://www.postgresql.org/message-id/[email protected]\n [2]: https://commitfest.postgresql.org/45/4624/\n [3]: https://www.postgresql.org/message-id/[email protected]\n [4]: https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING\n\n\n\n",
"msg_date": "Sun, 17 Dec 2023 13:10:03 -0500",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "GIN-Indexable JSON Patterns"
},
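One way to double-check which jsonpath operators actually use the index is to look at EXPLAIN output. A sketch using psycopg2 against the api/jdoc example quoted above; it assumes a jsonb_path_ops GIN index on jdoc, a table large enough that the planner would consider the index, and placeholder connection settings:

import psycopg2

conn = psycopg2.connect("dbname=postgres")   # placeholder connection settings
queries = {
    "==": """EXPLAIN SELECT jdoc FROM api WHERE jdoc @@ '$.tags[*] == "qui"'""",
    "!=": """EXPLAIN SELECT jdoc FROM api WHERE jdoc @@ '$.tags[*] != "qui"'""",
    ">=": """EXPLAIN SELECT jdoc FROM api WHERE jdoc @@ '$.tags[*] >= "qui"'""",
}
with conn, conn.cursor() as cur:
    for op, sql in queries.items():
        cur.execute(sql)
        plan = "\n".join(row[0] for row in cur.fetchall())
        # On a tiny table the planner may pick a seq scan regardless, so this
        # is only indicative.
        print(op, "index" if "Index Scan" in plan else "seq scan")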
{
"msg_contents": "On Dec 17, 2023, at 13:10, David E. Wheeler <[email protected]> wrote:\n\n> Quick follow-up to my slew of questions back in [September][1]. I wanted to update [my patch][2] to note that only JSON Path equality operators are supported by indexes, as [previously discussed][3].\n\nShould I just add it to the patch and let the reviews fall where they may? :-)\n\nBest,\n\nDavid\n\n [1]: https://www.postgresql.org/message-id/[email protected]\n [2]: https://commitfest.postgresql.org/45/4624/\n [3]: https://www.postgresql.org/message-id/[email protected]\n\n\n\n",
"msg_date": "Thu, 21 Dec 2023 09:45:10 -0500",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GIN-Indexable JSON Patterns"
}
] |
[
{
"msg_contents": "I noticed psql was lacking JSON formatting of query results which I\nneed for a follow-up patch. It also seems useful generally, so here's\na patch:\n\npostgres=# \\pset format json\nOutput format is json.\npostgres=# select * from (values ('one', 2, 'three'), ('four', 5, 'six')) as sub(a, b, c);\n[\n{ \"a\": \"one\", \"b\": \"2\", \"c\": \"three\" },\n{ \"a\": \"four\", \"b\": \"5\", \"c\": \"six\" }\n]\npostgres=# \\x\nExpanded display is on.\npostgres=# select * from (values ('one', 2, 'three'), ('four', 5, 'six')) as sub(a, b, c);\n[{\n \"a\": \"one\",\n \"b\": \"2\",\n \"c\": \"three\"\n},{\n \"a\": \"four\",\n \"b\": \"5\",\n \"c\": \"six\"\n}]\npostgres=#\n\nBoth normal and expanded output format are optimized for readability\nwhile still saving screen space.\n\nBoth formats output the same JSON structure, an array of objects.\nOther variants like array-of-arrays or line-separated objects\n(\"jsonline\") might be possible, but I didn't want to overengineer it.\n\nOn the command line, the format is selected by `psql --json` and `psql -J`.\n(I'm not attached to the short option, but -J was free and it's in\nline with `psql -H` to select HTML.)\n\nChristoph",
"msg_date": "Mon, 18 Dec 2023 15:56:28 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql JSON output format"
},
{
"msg_contents": "On Mon, 18 Dec 2023 at 15:56, Christoph Berg <[email protected]> wrote:\n> I noticed psql was lacking JSON formatting of query results which I\n> need for a follow-up patch.\n\nThis seems useful to me too, but my usecases would also be solved (and\npossibly better solved) by adding JSON support to COPY as proposed\nhere: https://www.postgresql.org/message-id/flat/CALvfUkBxTYy5uWPFVwpk_7ii2zgT07t3d-yR_cy4sfrrLU%3Dkcg%40mail.gmail.com\n\nI'm wondering if your follow-up patch would be better served by that too or not.\n\n\n",
"msg_date": "Mon, 18 Dec 2023 16:06:21 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "Re: To PostgreSQL Hackers\n> On the command line, the format is selected by `psql --json` and `psql -J`.\n\nAmong other uses, it enables easy post-processing of psql output using `jq`:\n\n$ psql -lJ | jq\n[\n {\n \"Name\": \"myon\",\n \"Owner\": \"myon\",\n \"Encoding\": \"UTF8\",\n \"Locale Provider\": \"libc\",\n \"Collate\": \"de_DE.utf8\",\n \"Ctype\": \"de_DE.utf8\",\n \"ICU Locale\": null,\n \"ICU Rules\": null,\n \"Access privileges\": null\n },\n...\n]\n\nChristoph\n\n\n",
"msg_date": "Mon, 18 Dec 2023 16:06:21 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "Re: Jelte Fennema-Nio\n> This seems useful to me too, but my usecases would also be solved (and\n> possibly better solved) by adding JSON support to COPY as proposed\n> here: https://www.postgresql.org/message-id/flat/CALvfUkBxTYy5uWPFVwpk_7ii2zgT07t3d-yR_cy4sfrrLU%3Dkcg%40mail.gmail.com\n\nThanks for the pointer, I had not scrolled back enough to see that\nthread.\n\nI'm happy to see that this patch is also settling on \"array of\nobjects\".\n\n> I'm wondering if your follow-up patch would be better served by that too or not.\n\nI'd need it to work on query results. Which could of course be wrapped\ninto \"copy (select whatever) to stdout (format json)\", but doing it in\npsql without mangling the query is cleaner. And (see the other mail),\nthe psql format selection works nicely with existing queries like\n`psql -l`.\n\nAnd \"copy format json\" wouldn't support \\x expanded mode.\n\nWe'd want both patches even if they do the same thing on two different\nlevels, I'd say.\n\nChristoph\n\n\n",
"msg_date": "Mon, 18 Dec 2023 16:38:05 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Mon, 18 Dec 2023 at 16:38, Christoph Berg <[email protected]> wrote:\n> We'd want both patches even if they do the same thing on two different\n> levels, I'd say.\n\nMakes sense. One thing I was still wondering is if it wouldn't be\neasier to wrap all queries in \"copy (select whatever) to stdout\n(format json)\" automatically when the -J flag is passed psql. Because\nit would be nice not to have to implement this very similar logic in\ntwo places.\n\nBut I guess that approach does not work for commands that don't work\ninside COPY, i.e. DML and DDL. I'm assuming your current patch works\nfine with DML/DDL. If that's indeed the case then I agree it makes\nsense to have this patch. And another big benefit is that it wouldn't\nrequire a new Postgres server function for the json functionality of\npsql.\n\n\n",
"msg_date": "Mon, 18 Dec 2023 17:33:53 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Mon, 18 Dec 2023 at 16:34, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Mon, 18 Dec 2023 at 16:38, Christoph Berg <[email protected]> wrote:\n> > We'd want both patches even if they do the same thing on two different\n> > levels, I'd say.\n>\n> Makes sense.\n>\n\nI can see the appeal in this feature. However, as it stands, this\nisn't compatible with copy format json, and I think it would need to\nduplicate quite a lot of the JSON output code in client-side code to\nmake it compatible.\n\nConsider, for example:\n\nCREATE TABLE foo(col json);\nINSERT INTO foo VALUES ('\"str_value\"');\n\ncopy foo to stdout with (format json) produces this:\n\n{\"col\":\"str_value\"}\n\nwhich is as expected. However, psql -Jc \"select * from foo\" produces\n\n[\n{ \"col\": \"\\\"str_value\\\"\" }\n]\n\nThe problem is, various datatypes such as boolean, number types, json,\nand jsonb must not be quoted and escaped, since that would change them\nto strings or double-encode them in the result. And then there are\ndomain types built on top of those types, and arrays, etc. See, for\nexample, the logic in json_categorize_type(). I think that trying to\nduplicate that client-side is doomed to failure.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 8 Jan 2024 18:43:14 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
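One way to see the double-encoding hazard Dean describes, using the server-side to_json() function (which goes through the json_categorize_type() logic he mentions); the expected results are shown as comments, not copied from this thread:

    -- a json value is emitted verbatim:
    SELECT to_json('"str_value"'::json);   -- "str_value"

    -- a text value is quoted and escaped; quoting and escaping foo.col on the
    -- client side is what produces the "\"str_value\"" output shown above:
    SELECT to_json('"str_value"'::text);   -- "\"str_value\""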
{
"msg_contents": "On Mon, 2024-01-08 at 18:43 +0000, Dean Rasheed wrote:\n> I can see the appeal in this feature. However, as it stands, this\n> isn't compatible with copy format json, and I think it would need to\n> duplicate quite a lot of the JSON output code in client-side code to\n> make it compatible.\n> \n> Consider, for example:\n> \n> CREATE TABLE foo(col json);\n> INSERT INTO foo VALUES ('\"str_value\"');\n> \n> copy foo to stdout with (format json) produces this:\n> \n> {\"col\":\"str_value\"}\n> \n> which is as expected. However, psql -Jc \"select * from foo\" produces\n> \n> [\n> { \"col\": \"\\\"str_value\\\"\" }\n> ]\n> \n> The problem is, various datatypes such as boolean, number types, json,\n> and jsonb must not be quoted and escaped, since that would change them\n> to strings or double-encode them in the result.\n\nI agree that such data types should not be double quoted.\n\n> And then there are\n> domain types built on top of those types, and arrays, etc. See, for\n> example, the logic in json_categorize_type(). I think that trying to\n> duplicate that client-side is doomed to failure.\n\nPerhaps. But maybe \"printTableContent\" could be extended to contain\na boolean array \"quote_for_json\" that is set in \"printTableAddHeader\"\nbased on the underlying data type, similar to how \"aligns\" is set now.\nDetecting array types might be a challenge.\n\nDomains might not be a problem, since \"PQftype()\" seems to return the\nbase data type for domain values.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 09 Jan 2024 09:22:02 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "Re: Dean Rasheed\n> I can see the appeal in this feature. However, as it stands, this\n> isn't compatible with copy format json, and I think it would need to\n> duplicate quite a lot of the JSON output code in client-side code to\n> make it compatible.\n\nI can see we probably wouldn't want two different output formats named\njson, but the general idea of \"allow psql to format results as json of\nstrings\" makes a lot of sense, so we should try to make it work. Does\nit even have to be compatible?\n\nIf the code required is the same, it could be moved to libpgcommon.\n\n> The problem is, various datatypes such as boolean, number types, json,\n> and jsonb must not be quoted and escaped, since that would change them\n> to strings or double-encode them in the result. And then there are\n> domain types built on top of those types, and arrays, etc. See, for\n> example, the logic in json_categorize_type(). I think that trying to\n> duplicate that client-side is doomed to failure.\n\nCan we try to make it work first, before we declare the perfect the\nenemy of the good?\n\nI'll note that the current code uses PG's string representation of\nstrings which is meant to be round-trip safe when fed back into the\nserver. So quoted numeric values aren't a problem at all. (And that\npart is fixable.)\n\nThe real problem here is that COPY json violates that by pulling json\nvalues up one syntax level. \"Normal\" cases will be fixable by just\nlooking for json(b) and printing that unquoted. And composite types\nwith jsonb members... are these really only half-quoted?!\n\n\nRe: Laurenz Albe\n> > The problem is, various datatypes such as boolean, number types, json,\n> > and jsonb must not be quoted and escaped, since that would change them\n> > to strings or double-encode them in the result.\n> \n> I agree that such data types should not be double quoted.\n\nI left that out so far because it didn't make a practical difference,\nbut that's fixable.\n\n> > And then there are\n> > domain types built on top of those types, and arrays, etc. See, for\n> > example, the logic in json_categorize_type(). I think that trying to\n> > duplicate that client-side is doomed to failure.\n> \n> Perhaps. But maybe \"printTableContent\" could be extended to contain\n> a boolean array \"quote_for_json\" that is set in \"printTableAddHeader\"\n> based on the underlying data type, similar to how \"aligns\" is set now.\n> Detecting array types might be a challenge.\n> \n> Domains might not be a problem, since \"PQftype()\" seems to return the\n> base data type for domain values.\n\nThanks, I'll give that a try.\n\nChristoph\n\n\n",
"msg_date": "Tue, 9 Jan 2024 10:43:53 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Tue, 9 Jan 2024 at 09:43, Christoph Berg <[email protected]> wrote:\n>\n> I can see we probably wouldn't want two different output formats named\n> json, but the general idea of \"allow psql to format results as json of\n> strings\" makes a lot of sense, so we should try to make it work. Does\n> it even have to be compatible?\n>\n\nI would say that they should be compatible, in the sense that an\nexternal tool parsing the outputs should regard them as equal, but\nmaybe there are different expectations for the two features.\n\n> I'll note that the current code uses PG's string representation of\n> strings which is meant to be round-trip safe when fed back into the\n> server. So quoted numeric values aren't a problem at all. (And that\n> part is fixable.)\n>\n\nI'm not sure that being round-trip safe is a necessary goal here, but\nagain, it's about the expectations for the feature. I was imagining\nthat the goal was to produce something that an external tool would\nparse, rather than something Postgres would read back in. So not\nquoting numeric values seems desirable to produce output that better\nreflects the semantic content of the data (though it doesn't affect it\nbeing round-trip safe).\n\n> The real problem here is that COPY json violates that by pulling json\n> values up one syntax level. \"Normal\" cases will be fixable by just\n> looking for json(b) and printing that unquoted. And composite types\n> with jsonb members... are these really only half-quoted?!\n>\n\nWhat to do with composites is an interesting point in question. COPY\nformat json will turn a composite into a JSON object whose keys are\nthe field names. That's useful if you want to use an external tool to\nparse the result and get at the individual fields, but it's not\nround-trip safe. OTOH, this patch outputs the Postgres string\nrepresentation of the object, which might be round-trip safe, but is\nnot very convenient for any other tool to read.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 9 Jan 2024 11:57:14 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "Re: Dean Rasheed\n> > I'll note that the current code uses PG's string representation of\n> > strings which is meant to be round-trip safe when fed back into the\n> > server. So quoted numeric values aren't a problem at all. (And that\n> > part is fixable.)\n> \n> I'm not sure that being round-trip safe is a necessary goal here, but\n> again, it's about the expectations for the feature. I was imagining\n> that the goal was to produce something that an external tool would\n> parse, rather than something Postgres would read back in. So not\n> quoting numeric values seems desirable to produce output that better\n> reflects the semantic content of the data (though it doesn't affect it\n> being round-trip safe).\n\nGetting it print numeric/boolean without quotes was actually easy, as\nwell as json(b). Implemented as the attached v2 patch.\n\nBut: not quoting json means that NULL and 'null'::json will both be\nrendered as 'null'. That strikes me as a pretty undesirable conflict.\nDoes the COPY patch also do that?\n\n> OTOH, this patch outputs the Postgres string representation of the\n> object, which might be round-trip safe, but is not very convenient\n> for any other tool to read.\n\nFor my use case, I need something that can be fed back into PG.\nReassembling all the json parts back into proper values would be a\npretty hard problem.\n\nPerhaps there should be two output formats, one that's roundtrip-safe,\nand one that represents json structures and composite values nicely.\nAdding format-specific options could also be used to switch the output\nbetween \"array of json objects\" and \"one json object per line\".\n\nChristoph",
"msg_date": "Tue, 9 Jan 2024 15:35:56 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "[cc'ing Joe]\n\nOn Tue, 9 Jan 2024 at 14:35, Christoph Berg <[email protected]> wrote:\n>\n> Getting it print numeric/boolean without quotes was actually easy, as\n> well as json(b). Implemented as the attached v2 patch.\n>\n> But: not quoting json means that NULL and 'null'::json will both be\n> rendered as 'null'. That strikes me as a pretty undesirable conflict.\n> Does the COPY patch also do that?\n>\n\nYes. Perhaps what needs to happen is for a NULL column to be omitted\nentirely from the output. I think the COPY TO json patch would have to\ndo that if COPY FROM json were to be added later, to make it\nround-trip safe.\n\n> > OTOH, this patch outputs the Postgres string representation of the\n> > object, which might be round-trip safe, but is not very convenient\n> > for any other tool to read.\n>\n> For my use case, I need something that can be fed back into PG.\n> Reassembling all the json parts back into proper values would be a\n> pretty hard problem.\n>\n\nWhat is your use case? It seems like what you want is quite different\nfrom the COPY patch.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 9 Jan 2024 16:51:31 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Tue, 2024-01-09 at 16:51 +0000, Dean Rasheed wrote:\n> On Tue, 9 Jan 2024 at 14:35, Christoph Berg <[email protected]> wrote:\n> > \n> > Getting it print numeric/boolean without quotes was actually easy, as\n> > well as json(b). Implemented as the attached v2 patch.\n> > \n> > But: not quoting json means that NULL and 'null'::json will both be\n> > rendered as 'null'. That strikes me as a pretty undesirable conflict.\n> > Does the COPY patch also do that?\n> \n> Yes. Perhaps what needs to happen is for a NULL column to be omitted\n> entirely from the output. I think the COPY TO json patch would have to\n> do that if COPY FROM json were to be added later, to make it\n> round-trip safe.\n\nI think the behavior is fine as it is. I'd expect both NULL and JSON \"null\"\nto be rendered as \"null\". I think the main use case for a feature like this\nis people who need the result in JSON for further processing somewhere else.\n\n\"Round-trip safety\" is not so important. If you want to move data from\nPostgreSQL to PostgreSQL, you use the plain or the binary format.\nThe CSV format by default renders NULL and empty strings identical, and\nI don't think anybody objects to that.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 16 Jan 2024 17:07:36 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On 2024-01-16 Tu 11:07, Laurenz Albe wrote:\n> On Tue, 2024-01-09 at 16:51 +0000, Dean Rasheed wrote:\n>> On Tue, 9 Jan 2024 at 14:35, Christoph Berg<[email protected]> wrote:\n>>> Getting it print numeric/boolean without quotes was actually easy, as\n>>> well as json(b). Implemented as the attached v2 patch.\n>>>\n>>> But: not quoting json means that NULL and 'null'::json will both be\n>>> rendered as 'null'. That strikes me as a pretty undesirable conflict.\n>>> Does the COPY patch also do that?\n>> Yes. Perhaps what needs to happen is for a NULL column to be omitted\n>> entirely from the output. I think the COPY TO json patch would have to\n>> do that if COPY FROM json were to be added later, to make it\n>> round-trip safe.\n> I think the behavior is fine as it is. I'd expect both NULL and JSON \"null\"\n> to be rendered as \"null\". I think the main use case for a feature like this\n> is people who need the result in JSON for further processing somewhere else.\n>\n> \"Round-trip safety\" is not so important. If you want to move data from\n> PostgreSQL to PostgreSQL, you use the plain or the binary format.\n> The CSV format by default renders NULL and empty strings identical, and\n> I don't think anybody objects to that.\n\n\nThis is absolutely not true. The docs say about CSV format:\n\n A NULL is output as the NULL parameter string and is not quoted,\n while a non-NULL value matching the NULL parameter string is quoted.\n For example, with the default settings, a NULL is written as an\n unquoted empty string, while an empty string data value is written\n with double quotes (\"\").\n\nCSV format with default settings is and has been from the beginning \ndesigned to be round trippable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-01-16 Tu 11:07, Laurenz Albe\n wrote:\n\n\nOn Tue, 2024-01-09 at 16:51 +0000, Dean Rasheed wrote:\n\n\nOn Tue, 9 Jan 2024 at 14:35, Christoph Berg <[email protected]> wrote:\n\n\n\nGetting it print numeric/boolean without quotes was actually easy, as\nwell as json(b). Implemented as the attached v2 patch.\n\nBut: not quoting json means that NULL and 'null'::json will both be\nrendered as 'null'. That strikes me as a pretty undesirable conflict.\nDoes the COPY patch also do that?\n\n\n\nYes. Perhaps what needs to happen is for a NULL column to be omitted\nentirely from the output. I think the COPY TO json patch would have to\ndo that if COPY FROM json were to be added later, to make it\nround-trip safe.\n\n\n\nI think the behavior is fine as it is. I'd expect both NULL and JSON \"null\"\nto be rendered as \"null\". I think the main use case for a feature like this\nis people who need the result in JSON for further processing somewhere else.\n\n\"Round-trip safety\" is not so important. If you want to move data from\nPostgreSQL to PostgreSQL, you use the plain or the binary format.\nThe CSV format by default renders NULL and empty strings identical, and\nI don't think anybody objects to that.\n\n\n\nThis is absolutely not true. The docs say about CSV format:\n\nA NULL is output as the NULL parameter string and is not\n quoted, while a non-NULL value matching the NULL parameter\n string is quoted. 
For example, with the default settings, a NULL\n is written as an unquoted empty string, while an empty string\n data value is written with double quotes (\"\").\n\n\nCSV format with default settings is and has been from the\n beginning designed to be round trippable.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 16 Jan 2024 11:49:52 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 11:07 AM Laurenz Albe <[email protected]> wrote:\n> \"Round-trip safety\" is not so important. If you want to move data from\n> PostgreSQL to PostgreSQL, you use the plain or the binary format.\n> The CSV format by default renders NULL and empty strings identical, and\n> I don't think anybody objects to that.\n\nAs Andrew says, the part about the CSV format is not correct, but I\nalso don't think I agree with the larger point, either. I believe that\nround-trip safety is a really desirable property. Is it absolutely\nnecessary in every case? Maybe not. But, it shouldn't be lacking\nwithout a good reason, either, at least IMHO. If you postulate that\npeople are moving data from A to B, it is reasonable to think that\neventually someone is going to want to move some data from B back to\nA. If that turns out to be hard, they'll be sad. We shouldn't make\npeople sad without a good reason.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jan 2024 14:12:38 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Tue, 2024-01-16 at 11:49 -0500, Andrew Dunstan wrote:\n> On 2024-01-16 Tu 11:07, Laurenz Albe wrote:\n> > On Tue, 2024-01-09 at 16:51 +0000, Dean Rasheed wrote:\n> > > On Tue, 9 Jan 2024 at 14:35, Christoph Berg <[email protected]> wrote:\n> > > > Getting it print numeric/boolean without quotes was actually easy, as\n> > > > well as json(b). Implemented as the attached v2 patch.\n> > > > \n> > > > But: not quoting json means that NULL and 'null'::json will both be\n> > > > rendered as 'null'. That strikes me as a pretty undesirable conflict.\n> > > > Does the COPY patch also do that?\n> > > \n> > > Yes. Perhaps what needs to happen is for a NULL column to be omitted\n> > > entirely from the output. I think the COPY TO json patch would have to\n> > > do that if COPY FROM json were to be added later, to make it\n> > > round-trip safe.\n> > \n> > I think the behavior is fine as it is. I'd expect both NULL and JSON \"null\"\n> > to be rendered as \"null\". I think the main use case for a feature like this\n> > is people who need the result in JSON for further processing somewhere else.\n> > \n> > \"Round-trip safety\" is not so important. If you want to move data from\n> > PostgreSQL to PostgreSQL, you use the plain or the binary format.\n> > The CSV format by default renders NULL and empty strings identical, and\n> > I don't think anybody objects to that.\n> \n> This is absolutely not true.\n> \n> CSV format with default settings is and has been from the beginning designed\n> to be round trippable.\n\nSorry for being unclear. I wasn't talking about COPY, but about the psql\noutput format:\n\nCREATE TABLE xy (a integer, b text);\n\nINSERT INTO xy VALUES (1, 'one'), (2, NULL), (3, '');\n\n\\pset format csv\nOutput format is csv.\n\nTABLE xy;\na,b\n1,one\n2,\n3,\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 17 Jan 2024 09:52:27 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Tue, 2024-01-16 at 14:12 -0500, Robert Haas wrote:\n> On Tue, Jan 16, 2024 at 11:07 AM Laurenz Albe <[email protected]> wrote:\n> > \"Round-trip safety\" is not so important. If you want to move data from\n> > PostgreSQL to PostgreSQL, you use the plain or the binary format.\n> > The CSV format by default renders NULL and empty strings identical, and\n> > I don't think anybody objects to that.\n> \n> As Andrew says, the part about the CSV format is not correct, but I\n> also don't think I agree with the larger point, either. I believe that\n> round-trip safety is a really desirable property. Is it absolutely\n> necessary in every case? Maybe not. But, it shouldn't be lacking\n> without a good reason, either, at least IMHO. If you postulate that\n> people are moving data from A to B, it is reasonable to think that\n> eventually someone is going to want to move some data from B back to\n> A. If that turns out to be hard, they'll be sad. We shouldn't make\n> people sad without a good reason.\n\nAs mentioned in my other mail, I was talking about the psql output\nformat \"csv\" rather than about COPY.\n\nI agree that it is desirable to lose as little information as possible.\nBut if we want to format query output as JSON, we have a couple of\nrequirements that cannot all be satisfied:\n\n1. lose no information (\"round-trip safe\")\n\n2. don't double quote numbers, booleans and other JSON values\n\n3. don't skip any table column in the output\n\nChristoph's original patch didn't satisfy #2, and his current version\ndoesn't satisfy #1. Do you think that skipping NULL columns would be\nthe best solution? We don't do that in the to_json() function, which\nalso renders SQL NULL as JSON null.\n\nI think the argument for round-trip safety of psql output is tenuous.\nThere is no way for psql to ingest JSON as input format, and the patch\nto add JSON as COPY format only supports COPY TO. And unless you can\nname the exact way that the data written by psql will be loaded into\nPostgreSQL again, all that remains is an (understandable) unease about\nlosing the distiction between SQL NULL and JSON null.\n\nWe have jsonb_populate_record() to convert JSON back to a table row,\nbut that function will convert both missing columns and a JSON null\nto SQL NULL:\n\nCREATE TABLE xy (id integer, j jsonb);\n\n\\pset null '∅'\n\nSELECT * FROM jsonb_populate_record(NULL::xy, '{\"id\":1,\"j\":null}');\n\n id │ j \n════╪═══\n 1 │ ∅\n(1 row)\n\nSELECT * FROM jsonb_populate_record(NULL::xy, '{\"id\":1}');\n\n id │ j \n════╪═══\n 1 │ ∅\n(1 row)\n\nIndeed, there doesn't seem to be a way to generate JSON null with that\nfunction.\n\nSo I wouldn't worry about round-trip safety too much, and my preference\nis how the current patch does it. I am not dead set against a solution\nthat omits NULL columns in the output, though.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 17 Jan 2024 10:30:43 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "\nOn 2024-01-17 We 03:52, Laurenz Albe wrote:\n> On Tue, 2024-01-16 at 11:49 -0500, Andrew Dunstan wrote:\n>> On 2024-01-16 Tu 11:07, Laurenz Albe wrote:\n>>> On Tue, 2024-01-09 at 16:51 +0000, Dean Rasheed wrote:\n>>>> On Tue, 9 Jan 2024 at 14:35, Christoph Berg <[email protected]> wrote:\n>>>>> Getting it print numeric/boolean without quotes was actually easy, as\n>>>>> well as json(b). Implemented as the attached v2 patch.\n>>>>>\n>>>>> But: not quoting json means that NULL and 'null'::json will both be\n>>>>> rendered as 'null'. That strikes me as a pretty undesirable conflict.\n>>>>> Does the COPY patch also do that?\n>>>> Yes. Perhaps what needs to happen is for a NULL column to be omitted\n>>>> entirely from the output. I think the COPY TO json patch would have to\n>>>> do that if COPY FROM json were to be added later, to make it\n>>>> round-trip safe.\n>>> I think the behavior is fine as it is. I'd expect both NULL and JSON \"null\"\n>>> to be rendered as \"null\". I think the main use case for a feature like this\n>>> is people who need the result in JSON for further processing somewhere else.\n>>>\n>>> \"Round-trip safety\" is not so important. If you want to move data from\n>>> PostgreSQL to PostgreSQL, you use the plain or the binary format.\n>>> The CSV format by default renders NULL and empty strings identical, and\n>>> I don't think anybody objects to that.\n>> This is absolutely not true.\n>>\n>> CSV format with default settings is and has been from the beginning designed\n>> to be round trippable.\n> Sorry for being unclear. I wasn't talking about COPY, but about the psql\n> output format:\n>\n> CREATE TABLE xy (a integer, b text);\n>\n> INSERT INTO xy VALUES (1, 'one'), (2, NULL), (3, '');\n>\n> \\pset format csv\n> Output format is csv.\n>\n> TABLE xy;\n> a,b\n> 1,one\n> 2,\n> 3,\n>\n\nI think the reason nobody's complained about it is quite possibly that \nvery few people have used it. That's certainly the case with me - if I'd \nnoticed it I would have complained.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 17 Jan 2024 11:31:06 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 4:30 AM Laurenz Albe <[email protected]> wrote:\n> As mentioned in my other mail, I was talking about the psql output\n> format \"csv\" rather than about COPY.\n\nOh. Well, I think it's sad that the psql format csv has that property.\nWhy doesn't it adopt COPY's handling?\n\n> I agree that it is desirable to lose as little information as possible.\n> But if we want to format query output as JSON, we have a couple of\n> requirements that cannot all be satisfied:\n>\n> 1. lose no information (\"round-trip safe\")\n>\n> 2. don't double quote numbers, booleans and other JSON values\n>\n> 3. don't skip any table column in the output\n>\n> Christoph's original patch didn't satisfy #2, and his current version\n> doesn't satisfy #1. Do you think that skipping NULL columns would be\n> the best solution? We don't do that in the to_json() function, which\n> also renders SQL NULL as JSON null.\n\nLet me start by clarifying that I'm OK with sacrificing\nround-trippability here as long as we do it thoughtfully.\n\"Round-trippability is important but X is more important and we cannot\nhave both for Y reasons\" seems like a potentially fine argument to me;\nI'm only objecting to an argument of the form \"round-trippability\ndoesn't even matter.\" My previous comment was a bit of a drive-by\nremark on that specifically rather than a strong opinion about what\nexactly we ought to do here.\n\nI guess the specifically issue here is around a json(b) column that is\nnull at the SQL level vs one that contains a JSON null. How do we\ndistinguish those cases? I think entirely omitting null columns could\nbe a way forward, but I don't know if that would cause other problems\nfor users.\n\nI'm not quite sure that addresses all the issues, though. For\ninstance, consider that 1.00::numeric and 1.0::numeric are equal but\ndistinguishable. If those get rendered into the JSON unquoted as 1.00\nand 1.0, respectively, is that going to round-trip properly? What\nabout float8 values where extra_float_digits=3 is needed to properly\nround trip? If we take PostgreSQL's array data types and turn them\ninto JSON arrays, what happens with non-default bounds? I know how\nwe're going to turn '{1,2}'::int[] into a JSON array, or at least I\nassume I do, but what in the world are we going to do about\n'[-3:-2]={1,2}'?\n\nAs much as I think round-trippability is good, getting it to 100% here\nis probably a good bit of work. And maybe that work isn't worth doing\nor involves too much collateral damage. But I do think it has positive\nvalue. If we produce output that could be ingested back into PG later\nwith the right tool, that leaves the door open for someone to build\nthe tool later even if we don't have it today. If we produce output\nthat loses information, no tool built later can make up for the loss.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jan 2024 14:52:55 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
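The corner cases Robert lists can be tried directly in SQL; a small sketch, with the results one would expect shown as comments (array_lower() is used here just to make the non-default bound visible):

    -- equal but distinguishable numerics: equality holds, the text forms differ
    SELECT 1.00::numeric = 1.0::numeric AS equal,   -- t
           1.00::numeric::text AS a,                -- 1.00
           1.0::numeric::text  AS b;                -- 1.0

    -- an int[] with non-default bounds: the bounds exist only in the text form,
    -- so a plain JSON array [1,2] would lose them
    SELECT '[-3:-2]={1,2}'::int[] AS arr,                  -- [-3:-2]={1,2}
           array_lower('[-3:-2]={1,2}'::int[], 1) AS lb;   -- -3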
{
"msg_contents": "On Wed, 2024-01-17 at 14:52 -0500, Robert Haas wrote:\n> Let me start by clarifying that I'm OK with sacrificing\n> round-trippability here as long as we do it thoughtfully.\n\nGot you.\n\n\n> I'm not quite sure that addresses all the issues, though. For\n> instance, consider that 1.00::numeric and 1.0::numeric are equal but\n> distinguishable. If those get rendered into the JSON unquoted as 1.00\n> and 1.0, respectively, is that going to round-trip properly? What\n> about float8 values where extra_float_digits=3 is needed to properly\n> round trip? If we take PostgreSQL's array data types and turn them\n> into JSON arrays, what happens with non-default bounds? I know how\n> we're going to turn '{1,2}'::int[] into a JSON array, or at least I\n> assume I do, but what in the world are we going to do about\n> '[-3:-2]={1,2}'?\n> \n> As much as I think round-trippability is good, getting it to 100% here\n> is probably a good bit of work.\n\nI would go as far as saying that the attempt to preserve all that is\nfutile, if you are bound to JSON as format.\n\n> But I do think it has positive\n> value. If we produce output that could be ingested back into PG later\n> with the right tool, that leaves the door open for someone to build\n> the tool later even if we don't have it today. If we produce output\n> that loses information, no tool built later can make up for the loss.\n\nI am all for losing as little information as possible, but I think\nthat this discussion is going off on a tangent. After all, we are not\ntalking about a data export tool here, we are talking about psql.\nI don't see anybody complain that float8 values lose precision in\nthe default aligned format, or that empty strings and NULL values\nlook the same in aligned format. Why do the goalposts move for the\nJSON output format? I don't think psql output should be considered\na form of backup.\n\nI'd say that we should strive to preserve whatever information we\neasily can, and we shouldn't worry about the rest.\n\nCan we get consensus that SQL NULL columns should be omitted from the\noutput, and the rest left as it currently is?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 19 Jan 2024 13:22:37 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "Re: Laurenz Albe\n> > But I do think it has positive\n> > value. If we produce output that could be ingested back into PG later\n> > with the right tool, that leaves the door open for someone to build\n> > the tool later even if we don't have it today. If we produce output\n> > that loses information, no tool built later can make up for the loss.\n\n> I am all for losing as little information as possible, but I think\n> that this discussion is going off on a tangent. After all, we are not\n> talking about a data export tool here, we are talking about psql.\n\nI've just posted the other patch where I need the JSON format:\nhttps://www.postgresql.org/message-id/flat/Za6EfXeewwLRS_fs%40msg.df7cb.de\n\nThere, I need to be able to read back the query output into psql,\nwhile at the same time being human-readable so the user can sanely\nedit the data in an editor. The default \"aligned\" format is only\nhuman-readable, while CSV is mostly only machine-readable. JSON is the\nbest option between the two, I think.\n\nWhat I did now in v3 of this patch is to print boolean and numeric\nvalues (ints, floats, numeric) without quotes, while adding the quotes\nback to json. This solves the NULL vs 'null'::json problem.\n\n> I don't see anybody complain that float8 values lose precision in\n> the default aligned format, or that empty strings and NULL values\n> look the same in aligned format. Why do the goalposts move for the\n> JSON output format? I don't think psql output should be considered\n> a form of backup.\n\nFwiw, not quoting numbers in JSON won't have any of these problems if\nthe JSON reader just passes the strings read through. (Which PG's JSON\nparser does.)\n\n> Can we get consensus that SQL NULL columns should be omitted from the\n> output, and the rest left as it currently is?\n\nI think that would be an interesting option for a JSON export format.\nThe psql JSON format is more for human inspection, where omitting the\ncolumns might create confusion. (We could make it a pset parameter of\nthe format, but I think the default should be to show NULL columns.)\n\nChristoph",
"msg_date": "Mon, 22 Jan 2024 16:19:27 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql JSON output format"
},
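As I read the v3 description above, a mixed row would now come out roughly like this (a hand-written sketch of the intended behavior with made-up column names, not output produced by the patch):

    select true as b, 1.5 as n, '{"x":1}'::json as j;

    [
    { "b": true, "n": 1.5, "j": "{\"x\":1}" }
    ]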
{
"msg_contents": "On Mon, 2024-01-22 at 16:19 +0100, Christoph Berg wrote:\n> What I did now in v3 of this patch is to print boolean and numeric\n> values (ints, floats, numeric) without quotes, while adding the quotes\n> back to json. This solves the NULL vs 'null'::json problem.\n\nThe patch is working as advertised.\n\nI am kind of unhappy about this change. It seems awkward and undesirable\nso have JSON values decorated with weird quoting in JSON output.\nI understand the motivation, but I bet it's not what will make users\nhappy.\n\nIf you need to disambiguate between SQL NULL and JSON null, my\npreferred solution would be to omit SQL NULL columns from the output\naltogether.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 23 Jan 2024 15:15:11 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "Am Di., 23. Jan. 2024 um 15:15 Uhr schrieb Laurenz Albe\n<[email protected]>:\n> I understand the motivation, but I bet it's not what will make users\n> happy.\n>\n> If you need to disambiguate between SQL NULL and JSON null, my\n> preferred solution would be to omit SQL NULL columns from the output\n> altogether.\n\nI fully support Laurenz's proposal and argumentation. The main use\ncase for such a JSON output feature is further processing somewhere\nelse.\n\n--Stefan\n\nAm Di., 23. Jan. 2024 um 15:15 Uhr schrieb Laurenz Albe\n<[email protected]>:\n>\n> On Mon, 2024-01-22 at 16:19 +0100, Christoph Berg wrote:\n> > What I did now in v3 of this patch is to print boolean and numeric\n> > values (ints, floats, numeric) without quotes, while adding the quotes\n> > back to json. This solves the NULL vs 'null'::json problem.\n>\n> The patch is working as advertised.\n>\n> I am kind of unhappy about this change. It seems awkward and undesirable\n> so have JSON values decorated with weird quoting in JSON output.\n> I understand the motivation, but I bet it's not what will make users\n> happy.\n>\n> If you need to disambiguate between SQL NULL and JSON null, my\n> preferred solution would be to omit SQL NULL columns from the output\n> altogether.\n>\n> Yours,\n> Laurenz Albe\n>\n>\n\n\n",
"msg_date": "Tue, 23 Jan 2024 15:35:17 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Tue, Jan 23, 2024 at 7:35 AM Stefan Keller <[email protected]> wrote:\n\n> Am Di., 23. Jan. 2024 um 15:15 Uhr schrieb Laurenz Albe\n> <[email protected]>:\n> > I understand the motivation, but I bet it's not what will make users\n> > happy.\n> >\n> > If you need to disambiguate between SQL NULL and JSON null, my\n> > preferred solution would be to omit SQL NULL columns from the output\n> > altogether.\n>\n> I fully support Laurenz's proposal and argumentation. The main use\n> case for such a JSON output feature is further processing somewhere\n> else.\n>\n> --Stefan\n>\n> Am Di., 23. Jan. 2024 um 15:15 Uhr schrieb Laurenz Albe\n> <[email protected]>:\n> >\n> > On Mon, 2024-01-22 at 16:19 +0100, Christoph Berg wrote:\n> > > What I did now in v3 of this patch is to print boolean and numeric\n> > > values (ints, floats, numeric) without quotes, while adding the quotes\n> > > back to json. This solves the NULL vs 'null'::json problem.\n> >\n> > The patch is working as advertised.\n> >\n> > I am kind of unhappy about this change. It seems awkward and undesirable\n> > so have JSON values decorated with weird quoting in JSON output.\n> > I understand the motivation, but I bet it's not what will make users\n> > happy.\n> >\n> > If you need to disambiguate between SQL NULL and JSON null, my\n> > preferred solution would be to omit SQL NULL columns from the output\n> > altogether.\n> >\n>\n\nI agree on distinguishing SQL via omission but I do think, almost\nregardless, that the output should include a metadata section that lists\nall of the actual columns in the result, the column position, and since we\nhave the info available, the data type name and possibly OID. Then any\ncolumn name present in the metadata but that isn't a key name for a given\nobject is known to have an SQL NULL as the value of that column in that row.\n\nDavid J.\n\nOn Tue, Jan 23, 2024 at 7:35 AM Stefan Keller <[email protected]> wrote:Am Di., 23. Jan. 2024 um 15:15 Uhr schrieb Laurenz Albe\n<[email protected]>:\n> I understand the motivation, but I bet it's not what will make users\n> happy.\n>\n> If you need to disambiguate between SQL NULL and JSON null, my\n> preferred solution would be to omit SQL NULL columns from the output\n> altogether.\n\nI fully support Laurenz's proposal and argumentation. The main use\ncase for such a JSON output feature is further processing somewhere\nelse.\n\n--Stefan\n\nAm Di., 23. Jan. 2024 um 15:15 Uhr schrieb Laurenz Albe\n<[email protected]>:\n>\n> On Mon, 2024-01-22 at 16:19 +0100, Christoph Berg wrote:\n> > What I did now in v3 of this patch is to print boolean and numeric\n> > values (ints, floats, numeric) without quotes, while adding the quotes\n> > back to json. This solves the NULL vs 'null'::json problem.\n>\n> The patch is working as advertised.\n>\n> I am kind of unhappy about this change. It seems awkward and undesirable\n> so have JSON values decorated with weird quoting in JSON output.\n> I understand the motivation, but I bet it's not what will make users\n> happy.\n>\n> If you need to disambiguate between SQL NULL and JSON null, my\n> preferred solution would be to omit SQL NULL columns from the output\n> altogether.\n>I agree on distinguishing SQL via omission but I do think, almost regardless, that the output should include a metadata section that lists all of the actual columns in the result, the column position, and since we have the info available, the data type name and possibly OID. 
Then any column name present in the metadata but that isn't a key name for a given object is known to have an SQL NULL as the value of that column in that row.David J.",
"msg_date": "Tue, 23 Jan 2024 08:01:46 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "Re: Laurenz Albe\n> I am kind of unhappy about this change. It seems awkward and undesirable\n> so have JSON values decorated with weird quoting in JSON output.\n> I understand the motivation, but I bet it's not what will make users\n> happy.\n\nWell, why stop at JSON, and not represent any array type as a JSON\narray? Compound types could be transformed into JSON objects. Custom\ndata types could add hooks for their own custom JSON representation.\n\nI'd just stop the flood right before it starts...\n\nThe real reason I'd not want to go that route is because I need the\nformat to be round-trip safe for the \"\\gedit\" patch, see the other\nthread. We would have to transform JSON sub-parts back into PG's text\nformat. But there were already complaints that other patch is complex.\nAt the moment it's just strcmp-ing the text value before and after,\nwhich is straighforward.\n\n> If you need to disambiguate between SQL NULL and JSON null, my\n> preferred solution would be to omit SQL NULL columns from the output\n> altogether.\n\nThat works, but only by convention only.\n\n\nRe: David G. Johnston\n> I agree on distinguishing SQL via omission but I do think, almost\n> regardless, that the output should include a metadata section that lists\n> all of the actual columns in the result, the column position, and since we\n> have the info available, the data type name and possibly OID. Then any\n> column name present in the metadata but that isn't a key name for a given\n> object is known to have an SQL NULL as the value of that column in that row.\n\nThere are no comments in JSON.\n\nAdding the info in-band would break the simple use case of using the\ndata as-is for further processing.\n\nChristoph\n\n\n",
"msg_date": "Tue, 23 Jan 2024 16:36:58 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Tue, 2024-01-23 at 08:01 -0700, David G. Johnston wrote:\n> I do think that the output should include a metadata section that lists all of\n> the actual columns in the result, the column position, and since we have the\n> info available, the data type name and possibly OID. Then any column name\n> present in the metadata but that isn't a key name for a given object is known\n> to have an SQL NULL as the value of that column in that row.\n\nSounds attractive, but I'm a bit worried that that additional information\nmight make life harder for some consumers.\n\nMy crystal ball is as cloudy as anybody's when it comes to guessing the\nmost likely use cases for the feature, but I'd rather keep it simple and add\nfeatures like that later, if there is a demand.\n\nIf you import the data into an existing structure, you don't need the metadata.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 23 Jan 2024 16:39:18 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "Re: Laurenz Albe\n> If you import the data into an existing structure, you don't need the metadata.\n\nAlso, since you were running the query yourself, you should know what\ncolumns you were expecting.\n\nChristoph\n\n\n",
"msg_date": "Tue, 23 Jan 2024 16:51:35 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Tue, 2024-01-23 at 16:36 +0100, Christoph Berg wrote:\n> Re: Laurenz Albe\n> > I am kind of unhappy about this change. It seems awkward and undesirable\n> > so have JSON values decorated with weird quoting in JSON output.\n> > I understand the motivation, but I bet it's not what will make users\n> > happy.\n> \n> Well, why stop at JSON, and not represent any array type as a JSON\n> array? Compound types could be transformed into JSON objects. Custom\n> data types could add hooks for their own custom JSON representation.\n> \n> I'd just stop the flood right before it starts...\n\nI'd stop the flood right after json/jsonb.\n\nArrays as database columns are probably too rare to be a real issue.\n\n> The real reason I'd not want to go that route is because I need the\n> format to be round-trip safe for the \"\\gedit\" patch, see the other\n> thread. We would have to transform JSON sub-parts back into PG's text\n> format. But there were already complaints that other patch is complex.\n> At the moment it's just strcmp-ing the text value before and after,\n> which is straighforward.\n\nI don't particularly care about \\gedit, any I think \\gedit shouldn't\nbe the raison d'être for this patch.\n\nI cannot imagine that anybody who wants to move data from PostgreSQL\nto PostgreSQL will use \\gedit, load the output from psql into the\neditor and save. After all, there is COPY and pg_dump.\n\nI'm pretty certain that people are more likely to use psql's JSON\noutput format to move data somewhere else.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 23 Jan 2024 16:54:43 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "Re: Laurenz Albe\n> > I'd just stop the flood right before it starts...\n> \n> I'd stop the flood right after json/jsonb.\n\nNod. I do see a point here, given it's \"json in json\", and not\n\"something else in json\". Will try to make it work with \\gedit.\n\n> Arrays as database columns are probably too rare to be a real issue.\n\nAck.\n\n> I'm pretty certain that people are more likely to use psql's JSON\n> output format to move data somewhere else.\n\nWell, there's also the other patch to add JSON support to COPY, that\nwould be even more suitable.\n\nChristoph\n\n\n",
"msg_date": "Tue, 23 Jan 2024 17:35:01 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "On Tue, Jan 23, 2024 at 11:35 AM Christoph Berg <[email protected]> wrote:\n> Ack.\n\nThe last version of this patch was posted on January 22nd and got a\nbunch of replies, so I'm marking\nhttps://commitfest.postgresql.org/48/4707/ as Returned with Feedback\nfor now. Please feel free to update the status of the patch when the\nsituation changes.\n\nIMHO, the big problem here is that different people want different\ncorner-case behaviors and it's not clear what to do about that. I\ndon't think there's a single vote for \"don't do this at all\". So if\nthere is a desire to take this work forward, the goal probably ought\nto be to try to either (a) figure out one behavior that everyone can\nlive with or (b) figure out a short list of options that can be used\nto customize the behavior to a degree that lets everyone get something\nreasonably close to what they want. For instance, \"what to do if you\nfind a SQL null\" and \"whether to include json values as strings or\njson objects\" seem like they could potentially be customizable. That's\nprobably not a silver bullet because (1) that's more work and (2)\nthere might be more behaviors than we want to code, or maintain the\ncode for, and (3) if it gets too complicated that can itself become a\nsource of objections. But it's an idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 14:00:12 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "Re: Robert Haas\n> IMHO, the big problem here is that different people want different\n> corner-case behaviors and it's not clear what to do about that. I\n> don't think there's a single vote for \"don't do this at all\". So if\n> there is a desire to take this work forward, the goal probably ought\n> to be to try to either (a) figure out one behavior that everyone can\n> live with or (b) figure out a short list of options that can be used\n> to customize the behavior to a degree that lets everyone get something\n> reasonably close to what they want. For instance, \"what to do if you\n> find a SQL null\" and \"whether to include json values as strings or\n> json objects\" seem like they could potentially be customizable. That's\n> probably not a silver bullet because (1) that's more work and (2)\n> there might be more behaviors than we want to code, or maintain the\n> code for, and (3) if it gets too complicated that can itself become a\n> source of objections. But it's an idea.\n\nThanks for summarizing the thread.\n\nThings mentioned in the thread:\n\n1) rendering of SQL NULLs - include or omit the column\n\n2) rendering of JSON values - both \"quoted string\" and \"inline as\n JSON\" make sense\n\n3) not quoting numeric values and booleans\n\n4) no special treatment of other datatypes like arrays or compound\n values, just quote them\n\n5) row format: JSON object or array (array would be close to CSV\n format)\n\n6) overall format: array of rows, or simply print each row separately\n (\"JSON Lines\" format, https://jsonlines.org/)\n\nI think 1, 2 and perhaps 6 make sense to have configurable. Two or\nthree \\pset options (or one option with a list of flags) don't sound\ntoo bad complexity-wise.\n\nOr maybe just default to \"omit NULL columns\" and \"inline JSON\" (and\nrender null as NULL).\n\nThoughts?\n\nChristoph\n\n\n",
"msg_date": "Fri, 17 May 2024 15:42:52 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql JSON output format"
},
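For the same (made-up) two-row result, the overall shapes from points 5) and 6) would look roughly like this, assuming rows are rendered as objects and numbers stay unquoted:

    Array of row objects:
    [
    { "a": "one", "b": 2 },
    { "a": "four", "b": 5 }
    ]

    JSON Lines, one object per row:
    { "a": "one", "b": 2 }
    { "a": "four", "b": 5 }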
{
"msg_contents": "On Fri, May 17, 2024 at 9:42 AM Christoph Berg <[email protected]> wrote:\n> Thanks for summarizing the thread.\n>\n> Things mentioned in the thread:\n>\n> 1) rendering of SQL NULLs - include or omit the column\n>\n> 2) rendering of JSON values - both \"quoted string\" and \"inline as\n> JSON\" make sense\n>\n> 3) not quoting numeric values and booleans\n>\n> 4) no special treatment of other datatypes like arrays or compound\n> values, just quote them\n>\n> 5) row format: JSON object or array (array would be close to CSV\n> format)\n>\n> 6) overall format: array of rows, or simply print each row separately\n> (\"JSON Lines\" format, https://jsonlines.org/)\n>\n> I think 1, 2 and perhaps 6 make sense to have configurable. Two or\n> three \\pset options (or one option with a list of flags) don't sound\n> too bad complexity-wise.\n>\n> Or maybe just default to \"omit NULL columns\" and \"inline JSON\" (and\n> render null as NULL).\n\nIf we're going to just have one option, I agree with making that the\ndefault, and I'd default to an array of row objects. If we're going to\nhave something configurable, I'd at least consider making (4)\nconfigurable.\n\nIt's tempting to just have one option, like \\pset jsonformat\nnullcolumns=omit;inlinevalues=json,array;rowformat=object;resultcontainer=array\nsimply because adding a ton of new options just for this isn't very\nappealing. But looking at how long that is, it's probably not a great\nidea. So I guess separate options is probably better?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2024 10:04:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
},
{
"msg_contents": "pá 17. 5. 2024 v 16:04 odesílatel Robert Haas <[email protected]>\nnapsal:\n\n> On Fri, May 17, 2024 at 9:42 AM Christoph Berg <[email protected]> wrote:\n> > Thanks for summarizing the thread.\n> >\n> > Things mentioned in the thread:\n> >\n> > 1) rendering of SQL NULLs - include or omit the column\n> >\n> > 2) rendering of JSON values - both \"quoted string\" and \"inline as\n> > JSON\" make sense\n> >\n> > 3) not quoting numeric values and booleans\n> >\n> > 4) no special treatment of other datatypes like arrays or compound\n> > values, just quote them\n> >\n> > 5) row format: JSON object or array (array would be close to CSV\n> > format)\n> >\n> > 6) overall format: array of rows, or simply print each row separately\n> > (\"JSON Lines\" format, https://jsonlines.org/)\n> >\n> > I think 1, 2 and perhaps 6 make sense to have configurable. Two or\n> > three \\pset options (or one option with a list of flags) don't sound\n> > too bad complexity-wise.\n> >\n> > Or maybe just default to \"omit NULL columns\" and \"inline JSON\" (and\n> > render null as NULL).\n>\n> If we're going to just have one option, I agree with making that the\n> default, and I'd default to an array of row objects. If we're going to\n> have something configurable, I'd at least consider making (4)\n> configurable.\n>\n> It's tempting to just have one option, like \\pset jsonformat\n>\n> nullcolumns=omit;inlinevalues=json,array;rowformat=object;resultcontainer=array\n> simply because adding a ton of new options just for this isn't very\n> appealing. But looking at how long that is, it's probably not a great\n> idea. So I guess separate options is probably better?\n>\n\n+1 for separate options\n\nlot of these proposed options can be used for XML too\n\nRegards\n\nPavel\n\n\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n>\n\npá 17. 5. 2024 v 16:04 odesílatel Robert Haas <[email protected]> napsal:On Fri, May 17, 2024 at 9:42 AM Christoph Berg <[email protected]> wrote:\n> Thanks for summarizing the thread.\n>\n> Things mentioned in the thread:\n>\n> 1) rendering of SQL NULLs - include or omit the column\n>\n> 2) rendering of JSON values - both \"quoted string\" and \"inline as\n> JSON\" make sense\n>\n> 3) not quoting numeric values and booleans\n>\n> 4) no special treatment of other datatypes like arrays or compound\n> values, just quote them\n>\n> 5) row format: JSON object or array (array would be close to CSV\n> format)\n>\n> 6) overall format: array of rows, or simply print each row separately\n> (\"JSON Lines\" format, https://jsonlines.org/)\n>\n> I think 1, 2 and perhaps 6 make sense to have configurable. Two or\n> three \\pset options (or one option with a list of flags) don't sound\n> too bad complexity-wise.\n>\n> Or maybe just default to \"omit NULL columns\" and \"inline JSON\" (and\n> render null as NULL).\n\nIf we're going to just have one option, I agree with making that the\ndefault, and I'd default to an array of row objects. If we're going to\nhave something configurable, I'd at least consider making (4)\nconfigurable.\n\nIt's tempting to just have one option, like \\pset jsonformat\nnullcolumns=omit;inlinevalues=json,array;rowformat=object;resultcontainer=array\nsimply because adding a ton of new options just for this isn't very\nappealing. But looking at how long that is, it's probably not a great\nidea. 
So I guess separate options is probably better?+1 for separate optionslot of these proposed options can be used for XML tooRegardsPavel\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 17 May 2024 16:14:02 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql JSON output format"
}
] |
[
{
"msg_contents": "Hi,\n\nPFA a patch that attempts to fix the bug that \\. on a line\nby itself is handled incorrectly by COPY FROM ... CSV.\nThis issue has been discussed several times previously,\nfor instance in [1] and [2], and mentioned in the\ndoc for \\copy in commit 42d3125.\n\nThere's one case that works today: when\nthe line is part of a multi-line quoted section,\nand the data is read from a file, not from the client.\nIn other situations, an error is raised or the data is cut at\nthe point of \\. without an error.\n\nThe patch addresses that issue in the server and in psql,\nexcept for the case of inlined data, where \\. cannot be \nboth valid data and an EOF marker at the same time, so\nit keeps treating it as an EOF marker. \n\n\n[1]\nhttps://www.postgresql.org/message-id/[email protected]\n[2]\nhttps://www.postgresql.org/message-id/8aeab305-5e94-4fa5-82bf-6da6baee6e05%40app.fastmail.com\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Mon, 18 Dec 2023 21:35:53 +0100",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "On Tue, 19 Dec 2023 at 02:06, Daniel Verite <[email protected]> wrote:\n>\n> Hi,\n>\n> PFA a patch that attempts to fix the bug that \\. on a line\n> by itself is handled incorrectly by COPY FROM ... CSV.\n> This issue has been discussed several times previously,\n> for instance in [1] and [2], and mentioned in the\n> doc for \\copy in commit 42d3125.\n>\n> There's one case that works today: when\n> the line is part of a multi-line quoted section,\n> and the data is read from a file, not from the client.\n> In other situations, an error is raised or the data is cut at\n> the point of \\. without an error.\n>\n> The patch addresses that issue in the server and in psql,\n> except for the case of inlined data, where \\. cannot be\n> both valid data and an EOF marker at the same time, so\n> it keeps treating it as an EOF marker.\n\nI noticed that these tests are passing without applying patch too:\n+++ b/src/test/regress/sql/copy.sql\n@@ -38,6 +38,17 @@ copy copytest2 from :'filename' csv quote '''' escape E'\\\\';\n\n select * from copytest except select * from copytest2;\n\n+--- test unquoted .\\ as data inside CSV\n+\n+truncate copytest2;\n+\n+insert into copytest2(test) values('line1'), ('\\.'), ('line2');\n+copy (select test from copytest2 order by test collate \"C\") to :'filename' csv;\n+-- get the data back in with copy\n+truncate copytest2;\n+copy copytest2(test) from :'filename' csv;\n+select test from copytest2 order by test collate \"C\";\n\nI was not sure if this was intentional. Can we add a test which fails\nin HEAD and passes with the patch applied.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 19 Dec 2023 14:16:40 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "vignesh C wrote:\n\n> I noticed that these tests are passing without applying patch too:\n\n> +insert into copytest2(test) values('line1'), ('\\.'), ('line2');\n> +copy (select test from copytest2 order by test collate \"C\") to :'filename'\n> csv;\n> +-- get the data back in with copy\n> +truncate copytest2;\n> +copy copytest2(test) from :'filename' csv;\n> +select test from copytest2 order by test collate \"C\";\n> \n> I was not sure if this was intentional. Can we add a test which fails\n> in HEAD and passes with the patch applied.\n\nThanks for checking this out.\nIndeed, that was not intentional. I've been using static files\nin my tests and forgot that if the data was produced with\nCOPY OUT, it would quote backslash-dot so that COPY IN could\nreload it without problem.\n\nPFA an updated version that uses \\qecho to produce the\ndata instead of COPY OUT. This test on unpatched HEAD\nshows that copytest2 is missing 2 rows after COPY IN.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Tue, 19 Dec 2023 12:27:42 +0100",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
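A minimal sketch of the round-trip effect described above (table name and values invented for illustration): in CSV mode, COPY ... TO quotes a backslash-dot value that would otherwise sit alone on a line, so data produced by COPY OUT reloads cleanly even on servers that still treat an unquoted \. line as end-of-data.

    CREATE TABLE copydemo(t text);
    INSERT INTO copydemo VALUES ('line1'), ('\.'), ('line2');
    COPY copydemo TO STDOUT (FORMAT csv);
    -- emits something like:
    --   line1
    --   "\."
    --   line2

That is why a test file generated with COPY OUT cannot reproduce the truncation; the input has to contain an unquoted \. line, as the \qecho-based test now arranges.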
{
"msg_contents": "On Tue, 19 Dec 2023 at 16:57, Daniel Verite <[email protected]> wrote:\n>\n> vignesh C wrote:\n>\n> > I noticed that these tests are passing without applying patch too:\n>\n> > +insert into copytest2(test) values('line1'), ('\\.'), ('line2');\n> > +copy (select test from copytest2 order by test collate \"C\") to :'filename'\n> > csv;\n> > +-- get the data back in with copy\n> > +truncate copytest2;\n> > +copy copytest2(test) from :'filename' csv;\n> > +select test from copytest2 order by test collate \"C\";\n> >\n> > I was not sure if this was intentional. Can we add a test which fails\n> > in HEAD and passes with the patch applied.\n>\n> Thanks for checking this out.\n> Indeed, that was not intentional. I've been using static files\n> in my tests and forgot that if the data was produced with\n> COPY OUT, it would quote backslash-dot so that COPY IN could\n> reload it without problem.\n>\n> PFA an updated version that uses \\qecho to produce the\n> data instead of COPY OUT. This test on unpatched HEAD\n> shows that copytest2 is missing 2 rows after COPY IN.\n\nThanks for the updated patch, any reason why this is handled only in csv.\npostgres=# copy test1 from '/home/vignesh/postgres/inst/bin/copy1.out';\nCOPY 1\npostgres=# select * from test1;\n c1\n-------\n line1\n(1 row)\n\npostgres=# copy test1 from '/home/vignesh/postgres/inst/bin/copy1.out' csv;\nCOPY 1\npostgres=# select * from test1;\n c1\n-------\n line1\n \\.\n line2\n(3 rows)\n\nAs the documentation at [1] says:\nAn end-of-data marker is not necessary when reading from a file, since\nthe end of file serves perfectly well; it is needed only when copying\ndata to or from client applications using pre-3.0 client protocol.\n\n[1] - https://www.postgresql.org/docs/devel/sql-copy.html\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 21 Dec 2023 11:29:24 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "\tvignesh C wrote:\n\n> Thanks for the updated patch, any reason why this is handled only in csv.\n> postgres=# copy test1 from '/home/vignesh/postgres/inst/bin/copy1.out';\n> COPY 1\n> postgres=# select * from test1;\n> c1\n> -------\n> line1\n> (1 row)\n\nI believe it's safer to not change anything to the normal \"non-csv\"\ntext mode.\nThe current doc says that \\. will not be taken as data in this format.\nFrom https://www.postgresql.org/docs/current/sql-copy.html :\n\n Any other backslashed character that is not mentioned in the above\n table will be taken to represent itself. However, beware of adding\n backslashes unnecessarily, since that might accidentally produce a\n string matching the end-of-data marker (\\.) or the null string (\\N\n by default). These strings will be recognized before any other\n backslash processing is done.\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Thu, 21 Dec 2023 20:47:14 +0100",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "On Fri, 22 Dec 2023 at 01:17, Daniel Verite <[email protected]> wrote:\n>\n> vignesh C wrote:\n>\n> > Thanks for the updated patch, any reason why this is handled only in csv.\n> > postgres=# copy test1 from '/home/vignesh/postgres/inst/bin/copy1.out';\n> > COPY 1\n> > postgres=# select * from test1;\n> > c1\n> > -------\n> > line1\n> > (1 row)\n>\n> I believe it's safer to not change anything to the normal \"non-csv\"\n> text mode.\n> The current doc says that \\. will not be taken as data in this format.\n> From https://www.postgresql.org/docs/current/sql-copy.html :\n>\n> Any other backslashed character that is not mentioned in the above\n> table will be taken to represent itself. However, beware of adding\n> backslashes unnecessarily, since that might accidentally produce a\n> string matching the end-of-data marker (\\.) or the null string (\\N\n> by default). These strings will be recognized before any other\n> backslash processing is done.\n\nThanks for the clarification. Then let's keep it as you have implemented.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 23 Dec 2023 09:02:58 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "Hi,\n\nThe CI patch tester fails on this patch, because it has a label\nat the end of a C block, which I'm learning is a C23 feature\nthat happens to be supported by gcc 11 [1], but is not portable.\n\nPFA an update fixing this, plus removing an obsolete chunk\nin the COPY documentation that v2 left out.\n\n\n[1] https://gcc.gnu.org/onlinedocs/gcc/Mixed-Labels-and-Declarations.html\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Sun, 31 Dec 2023 16:32:33 +0100",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 3:36 PM Daniel Verite <[email protected]> wrote:\n> PFA a patch that attempts to fix the bug that \\. on a line\n> by itself is handled incorrectly by COPY FROM ... CSV.\n> This issue has been discussed several times previously,\n> for instance in [1] and [2], and mentioned in the\n> doc for \\copy in commit 42d3125.\n\nThose links unfortunately seem not to be entirely specific to this\nissue. Other, related things seem to be discussed there, and it's not\nobvious that everyone agrees on what to do, or really that anyone\nagrees on what to do. The best link that I found for this exact issue\nis https://www.postgresql.org/message-id/[email protected]\nbut the thread isn't very conclusive and is so old that any\nconclusions reached then might no longer be considered valid today.\n\nAnd I guess the reason I mention is that, supposing your patch were\ntechnically perfect in every way, would everyone agree that it ought\nto be committed? If Alice is a user with a CSV file that might contain\n\\. on a line by itself within a quoted CSV field, then Alice is\ncurrently sad because she can't necessarily load all of her CSV files.\nThe patch would fix that, and make her happy. On the other hand, if\nBob is a user with a CSV-ish file that definitely doesn't contain \\.\non a line by itself within a quoted CSV field but might have been\ntruncated in the middle of a quoted field, maybe Bob will be sad if\nthis patch gets committed, because he will no longer be able to append\n\\n\\.\\n to whatever junk he's got in the file and let the server sort\nout whether to throw an error.\n\nI have to admit that it seems more likely that there are people in the\nworld with Alice's problem rather than people in the world with Bob's\nproblem. We'd probably make more people happy with the change than\nsad. But it is a definitional change, I think, and that's a little\nscary, and maybe somebody will think that's a reason why we should\nchange nothing here. Part of my hesitancy, I suppose, is that I don't\nunderstand why we even have this strange convention of making \\.\nterminate the input in the first place -- I mean, why wouldn't that be\ndone in some kind of out-of-band way, rather than including a special\nmarker in the data?\n\nI can't help feeling a bit nervous about this first documentation hunk:\n\n--- a/doc/src/sgml/ref/copy.sgml\n+++ b/doc/src/sgml/ref/copy.sgml\n@@ -761,11 +761,7 @@ COPY <replaceable class=\"parameter\">count</replaceable>\n format, <literal>\\.</literal>, the end-of-data marker, could also appear\n as a data value. To avoid any misinterpretation, a <literal>\\.</literal>\n data value appearing as a lone entry on a line is automatically\n- quoted on output, and on input, if quoted, is not interpreted as the\n- end-of-data marker. If you are loading a file created by another\n- application that has a single unquoted column and might have a\n- value of <literal>\\.</literal>, you might need to quote that value in the\n- input file.\n+ quoted on output.\n </para>\n\n <note>\n\nIt doesn't feel right to me to just replace all of this text with\nnothing. That leaves us documenting only the behavior on output,\nwhereas the previous text documents both the output behavior (we quote\nit) and the input behavior (it has to be quoted to avoid being taken\nas the EOF marker).\n\n /*\n- * In CSV mode, we only recognize \\. alone on a line. This is because\n- * \\. 
is a valid CSV data value.\n+ * In CSV mode, backslash is a normal character.\n */\n- if (c == '\\\\' && (!cstate->opts.csv_mode || first_char_in_line))\n+ if (c == '\\\\' && !cstate->opts.csv_mode)\n\nI don't think that the update comment accurately describes the\nbehavior. If I understand correctly what you're trying to fix, I'd\nexpect the new behavior to be that we only recognize \\. alone on a\nline and even then only if we're not inside a quoting string, but\nthat's not what the revised comment says. Instead, it claims that\nbackslash is just a normal character, but if that were true, the whole\nif-statement wouldn't exist at all, since its purpose is to provide\nspecial-handling for backslashes -- and indeed the patch does not\nchange that, since the if-statement is still looking for a backslash\nand doing something special if one is found.\n\nHmm. Looking at the rest of the patch, it seems like you're removing\nthe logic that prevents us from interpreting\n\n\\. lksdghksdhgjskdghjs\n\nas an end-of-file while in CSV mode. But I would have thought based on\nwhat problem you're trying to fix that you would have wanted to keep\nthat logic and further restrict it so that it only applies when not\nwithin a quoted string.\n\nMaybe I'm misunderstanding what bug you're trying to fix?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jan 2024 16:05:15 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "\tRobert Haas wrote:\n\n> Part of my hesitancy, I suppose, is that I don't\n> understand why we even have this strange convention of making \\.\n> terminate the input in the first place -- I mean, why wouldn't that be\n> done in some kind of out-of-band way, rather than including a special\n> marker in the data?\n\nThe v3 protocol added the out-of-band method, but the v2 protocol\ndid not have it, and as far as I understand, this is the reason why\nCopyReadLineText() must interpret \\. as an end-of-data marker.\n\nThe v2 protocol was removed in pg14\nhttps://www.postgresql.org/docs/release/14.0/\n<quote>\n Remove server and libpq support for the version 2 wire protocol (Heikki\nLinnakangas)\n This was last used as the default in PostgreSQL 7.3 (released in 2002).\n</quote>\n\nAlso I hadnt' noticed this before, but the current doc has this mention\nthat is relevant to this patch:\n\nhttps://www.postgresql.org/docs/current/protocol-changes.html\n\"Summary of Changes since Protocol 2.0\"\n<quote>\n COPY data is now encapsulated into CopyData and CopyDone\n messages. There is a well-defined way to recover from errors during\n COPY. The special “\\.” last line is not needed anymore, and is not\n sent during COPY OUT. (It is still recognized as a terminator during\n COPY IN, but its use is deprecated and will eventually be removed.)\n</quote>\n\nWhat the present patch does is essentially, for the server-side part,\nstop recognizing \"\\.\" as as terminator, like this paragraph says, but\nit does that for CSV only, not for TEXT.\n\n> Hmm. Looking at the rest of the patch, it seems like you're removing\n> the logic that prevents us from interpreting\n> \n> \\. lksdghksdhgjskdghjs\n> \n> as an end-of-file while in CSV mode. But I would have thought based on\n> what problem you're trying to fix that you would have wanted to keep\n> that logic and further restrict it so that it only applies when not\n> within a quoted string.\n> \n> Maybe I'm misunderstanding what bug you're trying to fix?\n\nThe fix is that \\. is no longer recognized as special in CSV, whether\nalone on a line or not, and whether in a quoted section or not.\nIt's always interpreted as data, like it would have been in\nthe first place, I imagine, if the v2 protocol could have handled\nit. This is why the patch consists mostly of removing code and\nsimplifying comments.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Tue, 16 Jan 2024 15:42:45 +0100",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "\tRobert Haas wrote:\n\n> Those links unfortunately seem not to be entirely specific to this\n> issue. Other, related things seem to be discussed there, and it's not\n> obvious that everyone agrees on what to do, or really that anyone\n> agrees on what to do. The best link that I found for this exact issue\n> is\n> https://www.postgresql.org/message-id/[email protected]\n> but the thread isn't very conclusive and is so old that any\n> conclusions reached then might no longer be considered valid today.\n\nTo refresh the problem statement, 4 cases that need fixing as\nof HEAD can be distinguished:\n\n#1. copy csv from file, single column, no quoting involved.\nCOPY will stop at \\. and ignore the rest of the file without\nany error or warning.\n\n$ cat >/tmp/file.csv <<EOF\nline1\n\\.\nline2\nEOF\n\n$ psql <<EOF\ncreate table contents(t text);\ncopy contents from '/tmp/file.csv' (format csv);\ntable contents;\nEOF\n\nResults in\n t \n-------\n line1\n(1 row)\n\nThe bug is that a single row is imported instead of the 3 rows of the file.\n\n\n#2. Same as the previous case, but with file_fdw\n\n$ psql <<EOF\nCREATE EXTENSION file_fdw;\n\nCREATE FOREIGN TABLE csv_data(line text) SERVER myserver\n OPTIONS (filename '/tmp/file.csv', format 'csv');\n\nTABLE csv_data;\nEOF\n\nResults in:\n\nline \n-------\n line1\n(1 row)\n\nThe bug is that rows 2 and 3 are missing, as in case #1.\n\n#3. \\copy csv from file with \\. inside a quoted multi-line section\n\nThis is the case that the above linked report mentioned,\nresulting in a failure to import.\nIn addition to being legal CSV, these contents can be produced by\nPostgres itself exporting in CSV.\n\n$ cat >/tmp/file-quoted.csv <<EOF\nline1\n\"\n\\.\n\"\nline2\nEOF\n\n$ psql <<EOF\ncreate table contents(t text);\n\\copy contents from '/tmp/file-quoted.csv' csv;\nEOF\n\nResults in an error:\n\nERROR:\tunterminated CSV quoted field\nCONTEXT: COPY contents, line 4: \"\"\n\\.\n\"\n\nThe expected result is that it imports 3 rows without error.\n\n\n#4. \\copy csv from file, single column, no quoting involved\nThis is the same case as #1 except it uses the client-server protocol.\n\n$ cat >/tmp/file.csv <<EOF\nline1\n\\.\nline2\nEOF\n\n$ psql <<EOF\ncreate table contents(t text);\n\\copy contents from '/tmp/file.csv' (format csv);\nTABLE contents;\nEOF\n\nResults in\n t \n-------\n line1\n(1 row)\n\n\nAs in case #1, a single row is imported instead of 3 rows.\n\n\nThe proposed patch addresses these cases by making the sequence\n\\. non-special in CSV (in fact backslash itself is a normal character in\nCSV).\nIt does not address the cases when the data is embedded after\nthe COPY command or typed interactively in psql, since these cases\nrequire an explicit end-of-data marker, and CSV does not have\nthe concept of an end-of-data marker.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 24 Jan 2024 17:01:15 +0100",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "\"Daniel Verite\" <[email protected]> writes:\n> \tRobert Haas wrote:\n>> Those links unfortunately seem not to be entirely specific to this\n>> issue. Other, related things seem to be discussed there, and it's not\n>> obvious that everyone agrees on what to do, or really that anyone\n>> agrees on what to do.\n\n> The proposed patch addresses these cases by making the sequence\n> \\. non-special in CSV (in fact backslash itself is a normal character in\n> CSV).\n> It does not address the cases when the data is embedded after\n> the COPY command or typed interactively in psql, since these cases\n> require an explicit end-of-data marker, and CSV does not have\n> the concept of an end-of-data marker.\n\nI've looked over this patch and I generally agree that this is a\nreasonable solution. While it's barely possible that somebody\nout there is depending on the current behavior of \\. in CSV mode,\nit seems considerably more likely that people who run into it\nwould consider it a bug. In any case, the patch isn't proposed\nfor back-patch, and as cross-version incompatibilities go this\nseems like a pretty small one.\n\nI concur with Robert's doubts about some of the doc changes though.\nIn particular, since we're not changing the behavior for non-CSV\nmode, we shouldn't remove any statements that apply to non-CSV mode.\n\nI'm also wondering why the patch adds a test for\n\"PQprotocolVersion(conn) >= 3\" in handleCopyIn. As already\nnoted upthread, we ripped out all support for protocol 2.0\nsome time ago, so that sure looks like dead code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Apr 2024 15:44:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "Tom Lane wrote:\n\n> I've looked over this patch and I generally agree that this is a\n> reasonable solution.\n\nThanks for reviewing this!\n\n> I'm also wondering why the patch adds a test for\n> \"PQprotocolVersion(conn) >= 3\" in handleCopyIn. \n\nI've removed this in the attached update.\n\n> I concur with Robert's doubts about some of the doc changes though.\n> In particular, since we're not changing the behavior for non-CSV\n> mode, we shouldn't remove any statements that apply to non-CSV mode.\n\nI don't quite understand the issues with the doc changes. Let me\ndetail the changes.\n\nThe first change is under \n <refsect2>\n <title>CSV Format</title>\nso it does no concern non-CSV modes.\n\n--- a/doc/src/sgml/ref/copy.sgml\n+++ b/doc/src/sgml/ref/copy.sgml\n@@ -761,11 +761,7 @@ COPY <replaceable class=\"parameter\">count</replaceable>\n format, <literal>\\.</literal>, the end-of-data marker, could also appear\n as a data value. To avoid any misinterpretation, a <literal>\\.</literal>\n data value appearing as a lone entry on a line is automatically\n- quoted on output, and on input, if quoted, is not interpreted as the\n- end-of-data marker. If you are loading a file created by another\n- application that has a single unquoted column and might have a\n- value of <literal>\\.</literal>, you might need to quote that value in\nthe\n- input file.\n+ quoted on output.\n </para>\n\n\nThe part about quoting output is kept because the code still does that.\n\nAbout this bit: \n \"and on input, if quoted, is not interpreted as the end-of-data marker.\"\n\nSo the patch removes that part. The reason is \\. is now not interpreted as\nEOD in both cases, quoted or unquoted, conforming to spec.\nPreviously, what the reader should have understood by \"if quoted, ...\"\nis that it implies \"if not quoted, then .\\ will be interpreted as EOD\neven though that behavior does not conform to the CSV spec\".\nIf we documented the new behavior, it would be something like:\nwhen quoted, it works as expected, and when unquoted, it works as\nexpected too. Isn't it simpler not to say anything?\n\nAbout the next sentence:\n \"If you are loading a file created by another application\n that has a single unquoted column and might have a value of \\., you\n might need to quote that value in the input file.\"\n\nIt's removed as well because it's not longer necessary\nto do that.\n\nThe other hunk is in psql doc:\n\n--- a/doc/src/sgml/ref/psql-ref.sgml\n+++ b/doc/src/sgml/ref/psql-ref.sgml\n@@ -1119,10 +1119,6 @@ INSERT INTO tbl1 VALUES ($1, $2) \\bind 'first value'\n'second value' \\g\n\t destination, because all data must pass through the client/server\n\t connection. For large amounts of data the <acronym>SQL</acronym>\n\t command might be preferable.\n-\t Also, because of this pass-through method, <literal>\\copy\n-\t ... from</literal> in <acronym>CSV</acronym> mode will erroneously\n-\t treat a <literal>\\.</literal> data value alone on a line as an\n-\t end-of-input marker.\n\nThat behavior no longer happens, so this gets removed as well.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Fri, 05 Apr 2024 14:58:39 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "\"Daniel Verite\" <[email protected]> writes:\n> \tTom Lane wrote:\n>> I've looked over this patch and I generally agree that this is a\n>> reasonable solution.\n\n> Thanks for reviewing this!\n\nWhile testing this, I tried running the tests with an updated server\nand non-updated psql, and not only did the new test case fail, but\nso did a bunch of existing ones. That's because existing psql will\nsend the trailing \"\\.\" of inlined data to the server, and the updated\nserver will now think that's data if it's in CSV mode.\n\nSo this means that the patch introduces a rather serious cross-version\ncompatibility problem. I doubt we can consider inlined CSV data to be\na niche case that few people use; but it will fail every time if your\npsql is older than your server.\n\nNot sure what to do here. One idea is to install just the psql-side\nfix, which should break nothing now that version-2 protocol is dead,\nand then wait a few years before introducing the server-side change.\nThat seems kind of sad though.\n\nAn argument for not waiting is that psql may not be the only client\nthat needs this behavioral adjustment, and if so there's going to\nbe breakage anyway when we change the server; so we might as well\nget it over with.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Apr 2024 12:02:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "I wrote:\n> So this means that the patch introduces a rather serious cross-version\n> compatibility problem. I doubt we can consider inlined CSV data to be\n> a niche case that few people use; but it will fail every time if your\n> psql is older than your server.\n\nOn third thought, that may not be as bad as I was thinking.\nWe don't blink at the idea that an older psql's \\d commands may\nmalfunction with a newer server, and I think most users have\ninternalized the idea that they want psql >= server. If the\npatch created an incompatibility with that direction, it'd be\na problem, but I don't think it does.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Apr 2024 12:27:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "\tTom Lane wrote:\n\n> Not sure what to do here. One idea is to install just the psql-side\n> fix, which should break nothing now that version-2 protocol is dead,\n> and then wait a few years before introducing the server-side change.\n> That seems kind of sad though.\n\nWouldn't backpatching solve this? Then only the users who don't\napply the minor updates would have non-matching server and psql.\nInitially I though that obsoleting the v2 protocol was a recent\nmove, but reading older messages from the list I've got the\nimpression that it was more or less in the pipeline since way\nbefore version 10.\n\nAlso one of the cases the patch fixes, the one when imported\ndata are silently truncated at the point of \\., is quite nasty\nIMO.\nI can imagine an app where user-supplied data would be\nappended row-by-row into a CSV file, and would be processed\nperiodically by batch. Under some conditions, in particular\nif newlines in the first column are allowed, a malevolent user could\nsubmit a \\. sequence to cause the batch to miss the rest of the data\nwithout any error being raised.\n\n\n[1]\nhttps://www.postgresql.org/message-id/11648.1403147417%40sss.pgh.pa.us\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Fri, 05 Apr 2024 18:54:46 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "\"Daniel Verite\" <[email protected]> writes:\n> \tTom Lane wrote:\n>> Not sure what to do here. One idea is to install just the psql-side\n>> fix, which should break nothing now that version-2 protocol is dead,\n>> and then wait a few years before introducing the server-side change.\n>> That seems kind of sad though.\n\n> Wouldn't backpatching solve this?\n\nNo, it'd just reduce the surface area a bit. People on less-than-\nthe-latest-minor-release would still have the issue. In any case\nback-patching further than v14 would be a nonstarter, because we\ndidn't remove protocol v2 support till then.\n\nHowever, the analogy to \"\\d commands might fail against a newer\nserver\" reduces my level of concern quite a lot: it's hard to\ndraw much of a line separating that kind of issue from \"inline\nCOPY CSV will fail against a newer server\". It's not like such\nfailures won't be obvious and fairly easy to diagnose.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Apr 2024 15:12:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "After some more poking at this topic, I realize that there is already\nvery strange and undocumented behavior for backslash-dot even in\nnon-CSV mode. Create a file like this:\n\n$ cat eofdata\nfoobar\nfoobaz\\.\nmore\n\\.\nyet more\n\nand try importing it with COPY:\n\nregression=# create table eofdata(f1 text);\nCREATE TABLE\nregression=# copy eofdata from '/home/tgl/pgsql/eofdata';\nCOPY 2\nregression=# table eofdata;\n f1 \n--------\n foobar\n foobaz\n(2 rows)\n\nThat's what you get in 9.0 and earlier versions, and it's already\nnot-as-documented, because we claim that only \\. alone on a line is an\nEOF marker; we certainly don't suggest that what's in front of it will\nbe taken as valid data. However, somebody broke it some more in 9.1,\nbecause 9.1 up to HEAD produce this result:\n\nregression=# create table eofdata(f1 text);\nCREATE TABLE\nregression=# copy eofdata from '/home/tgl/pgsql/eofdata';\nCOPY 3\nregression=# table eofdata;\n f1 \n--------\n foobar\n foobaz\n more\n(3 rows)\n\nSo the current behavior is that \\. that is on the end of a line,\nbut is not the whole line, is silently discarded and we keep going.\n\nAll versions throw \"end-of-copy marker corrupt\" if there is\nsomething after \\. on the same line.\n\nThis is sufficiently weird that I'm starting to come around to\nDaniel's original proposal that we just drop the server's recognition\nof \\. altogether (which would allow removal of some dozens of lines of\ncomplicated and now known-buggy code). Alternatively, we could fix it\nso that \\. at the end of a line draws \"end-of-copy marker corrupt\",\nwhich would at least make things consistent, but I'm not sure that has\nany great advantage. I surely don't want to document the current\nbehavioral details as being the right thing that we're going to keep\ndoing.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Apr 2024 16:34:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "I wrote:\n> So the current behavior is that \\. that is on the end of a line,\n> but is not the whole line, is silently discarded and we keep going.\n\n> All versions throw \"end-of-copy marker corrupt\" if there is\n> something after \\. on the same line.\n\n> This is sufficiently weird that I'm starting to come around to\n> Daniel's original proposal that we just drop the server's recognition\n> of \\. altogether (which would allow removal of some dozens of lines of\n> complicated and now known-buggy code).\n\nI experimented with that and soon ran into a nasty roadblock: it\nbreaks dump/restore, because pg_dump includes a \"\\.\" line after\nCOPY data whether or not it really needs one. Worse, that's\nimplemented by including the \"\\.\" line into the archive format,\nso that existing dump files contain it. Getting rid of it would\nrequire an archive format version bump, plus some hackery to allow\nremoval of the line when reading old dump files.\n\nWhile that's surely doable with enough effort, it's not the kind\nof thing to be undertaking with less than 2 days to feature freeze.\nNot to mention that I'm not sure we have consensus to do it at all.\n\nMore fun stuff: PQgetline actually invents a \"\\.\" line when it\nsees server end-of-copy, and we tell users of that function to\ncheck for that not an out-of-band return value to detect EOF.\nIt looks like we have no callers of that in the core distro,\nbut do we want to deprecate it completely?\n\nSo I feel like we need to put this patch on the shelf for the moment\nand come back to it early in v18. Although it seems reasonably clear\nwhat to do on the CSV side of things, it's very much less clear what\nto do about text-format handling of EOD markers, and I don't want to\nchange one of those things in v17 and the other in v18. Also it\nseems like there are more dependencies on \"\\.\" than we realized.\n\nThere could be an argument for applying just the psql change now,\nto remove its unnecessary sending of \"\\.\". That won't break\nanything and it would give us at least one year's leg up on\ncompatibility issues.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Apr 2024 13:03:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
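For context, an abbreviated sketch of the table-data section that plain-format pg_dump emits (table and rows invented for illustration; columns are tab-separated in real output); the trailing \. line after the data is what existing dump files, and the archive format itself, rely on:

    COPY public.t (id, val) FROM stdin;
    1	one
    2	two
    \.

Dropping server-side recognition of the marker therefore touches dump/restore and the archive format, not just COPY's input parsing.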
{
"msg_contents": "\tTom Lane wrote:\n\n> This is sufficiently weird that I'm starting to come around to\n> Daniel's original proposal that we just drop the server's recognition\n> of \\. altogether (which would allow removal of some dozens of lines of\n> complicated and now known-buggy code)\n\nFWIW my plan was to not change anything in the TEXT mode,\nbut I wasn't aware it had this issue that you found when\n\\. is not in a line by itself.\n\n> Alternatively, we could fix it so that \\. at the end of a line draws\n> \"end-of-copy marker corrupt\"\n> which would at least make things consistent, but I'm not sure that has\n> any great advantage. I surely don't want to document the current\n> behavioral details as being the right thing that we're going to keep\n> doing.\n\nAgreed we don't want to document that, but also why doesn't \\. in the\ncontents represent just a dot (as opposed to being an error),\njust like \\a is a?\n\nI mean if eofdata contains\n\n foobar\\a\n foobaz\\aother\n\nthen we get after import:\n f1 \n--------------\n foobara\n foobazaother\n(2 rows)\n\nReading the current doc on the text format, I can't see why\nimporting:\n\n foobar\\.\n foobar\\.other\n\nis not supposed to produce\n f1 \n--------------\n foobar.\n foobaz.other\n(2 rows)\n\n\nI see these rules in [1] about backslash:\n\n#1. \n \"End of data can be represented by a single line containing just\n backslash-period (\\.).\"\n\nfoobar\\. and foobar\\.other do not match that so #1 does not describe\nhow they're interpreted.\n\n#2.\n \"Backslash characters (\\) can be used in the COPY data to quote data\n characters that might otherwise be taken as row or column\n delimiters.\"\n\nDot is not a column delimiter (it's forbidden anyway), so #2 does\nnot apply.\n\n#3.\n \"In particular, the following characters must be preceded by a\n backslash if they appear as part of a column value: backslash itself,\n newline, carriage return, and the current delimiter character\"\n\nDot is not in that list so #3 does not apply.\n\n#4.\n \"The following special backslash sequences are recognized by COPY\n FROM:\" (followed by the table with \\b \\f, ...,)\n\nDot is not mentioned.\n\n#5.\n \"Any other backslashed character that is not mentioned in the above\n table will be taken to represent itself\"\n\nHere we say that backslash dot represents a dot (unless other\nrules apply)\n\n foobar\\. => foobar. \n foobar\\.other => foobar.other\n\n#6.\n \"However, beware of adding backslashes unnecessarily, since that\n might accidentally produce a string matching the end-of-data marker\n (\\.) or the null string (\\N by default).\"\n\nSo we *recommend* not to use \\. but as I understand it, the warning\nwith the EOD marker is about accidentally creating a line that matches #1,\nthat is, \\. alone on a line.\n\n#7\n \"These strings will be recognized before any other backslash\n processing is done.\"\n\nTBH I don't understand what #7 implies. The order in backslash\nprocessing looks like an implementation detail that should not\nmatter in understanding the format?\n\n\nConsidering this, it seems to me that #5 says that\nbackslash-dot represents a dot unless #1 applies, and the\nother #2 #3 #4 #6 #7 rules do not state anything that would\ncontradict that.\n\n\n[1] https://www.postgresql.org/docs/current/sql-copy.html\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Sun, 07 Apr 2024 00:00:25 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 12:00:25AM +0200, Daniel Verite wrote:\n> \tTom Lane wrote:\n> \n> > This is sufficiently weird that I'm starting to come around to\n> > Daniel's original proposal that we just drop the server's recognition\n> > of \\. altogether (which would allow removal of some dozens of lines of\n> > complicated and now known-buggy code)\n> \n> FWIW my plan was to not change anything in the TEXT mode,\n> but I wasn't aware it had this issue that you found when\n> \\. is not in a line by itself.\n> \n> > Alternatively, we could fix it so that \\. at the end of a line draws\n> > \"end-of-copy marker corrupt\"\n> > which would at least make things consistent, but I'm not sure that has\n> > any great advantage. I surely don't want to document the current\n> > behavioral details as being the right thing that we're going to keep\n> > doing.\n> \n> Agreed we don't want to document that, but also why doesn't \\. in the\n> contents represent just a dot (as opposed to being an error),\n> just like \\a is a?\n\nI looked into this and started to realize that \\. is the only copy\nbackslash command where we define the behavior only alone at the\nbeginning of a line, and not in other circumstances. The \\a example\nabove suggests \\. should be period in all other cases, as suggested, but\nI don't know the ramifications if that.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Sat, 6 Apr 2024 22:55:04 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Sun, Apr 7, 2024 at 12:00:25AM +0200, Daniel Verite wrote:\n>> Agreed we don't want to document that, but also why doesn't \\. in the\n>> contents represent just a dot (as opposed to being an error),\n>> just like \\a is a?\n\n> I looked into this and started to realize that \\. is the only copy\n> backslash command where we define the behavior only alone at the\n> beginning of a line, and not in other circumstances. The \\a example\n> above suggests \\. should be period in all other cases, as suggested, but\n> I don't know the ramifications if that.\n\nHere's the problem: if some client-side code thinks it's okay to\nquote \".\" as \"\\.\", what is likely to happen when it's presented\na data value consisting only of \".\"? It could very easily fall\ninto the trap of producing an end-of-data marker.\n\nIf we get rid of the whole concept of end-of-data markers, then\nit'd be totally reasonable to accept \"\\.\" as \".\". But as long\nas we still have end-of-data markers, I think it's unwise to allow\n\"\\.\" to appear as anything but an end-of-data marker. It'd just\nadd camouflage to the booby trap.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2024 00:07:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
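A concrete sketch of that trap (values invented): suppose client code decides to escape a data value consisting only of a dot as \. when writing text-format COPY input, producing a file like

    a
    \.
    b

In text mode the second line is taken as the end-of-data marker, so the intended "." row and the "b" row are silently dropped when the file is read; letting \. also stand for a literal dot would only camouflage that failure further.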
{
"msg_contents": "Hi,\n\nI'm reviewing patches in Commitfest 2024-07 from top to bottom:\nhttps://commitfest.postgresql.org/48/\n\nThis is the 4th patch:\nhttps://commitfest.postgresql.org/48/4710/\n\nFYI: https://commitfest.postgresql.org/48/4681/ is my patch.\n\nIn <[email protected]>\n \"Re: Fixing backslash dot for COPY FROM...CSV\" on Sun, 07 Apr 2024 00:07:13 -0400,\n Tom Lane <[email protected]> wrote:\n\n> Here's the problem: if some client-side code thinks it's okay to\n> quote \".\" as \"\\.\", what is likely to happen when it's presented\n> a data value consisting only of \".\"? It could very easily fall\n> into the trap of producing an end-of-data marker.\n> \n> If we get rid of the whole concept of end-of-data markers, then\n> it'd be totally reasonable to accept \"\\.\" as \".\". But as long\n> as we still have end-of-data markers, I think it's unwise to allow\n> \"\\.\" to appear as anything but an end-of-data marker. It'd just\n> add camouflage to the booby trap.\n\nI read through this thread. It seems that this thread\ndiscuss 2 things:\n\n1. \\. in CSV mode\n2. \\. in non-CSV mode\n\nRecent messages discussed mainly 2. but how about create a\nseparated thread for 2.? Because the original mail focused\non 1. and it seems that we can handle them separately.\n\n\nHere are comments for the latest v4 patch:\n\n----\ndiff --git a/src/bin/psql/copy.c b/src/bin/psql/copy.c\nindex 961ae32694..a39818b193 100644\n--- a/src/bin/psql/copy.c\n+++ b/src/bin/psql/copy.c\n@@ -620,20 +620,32 @@ handleCopyIn(PGconn *conn, FILE *copystream, bool isbinary, PGresult **res)\n...\n+\t\t\t\t\t\t\tif (copystream == pset.cur_cmd_source)\n+\t\t\t\t\t\t\t{\n+\t\t\t\t\t\t\t\t*fgresult='\\0';\n+\t\t\t\t\t\t\t\tbuflen -= linelen;\n----\n\nSpaces around \"=\" are missing for the fgresult line.\n\nBTW, here is a diff after pgindent:\n\n----\ndiff --git a/src/bin/psql/copy.c b/src/bin/psql/copy.c\nindex a39818b1933..4ee8481998a 100644\n--- a/src/bin/psql/copy.c\n+++ b/src/bin/psql/copy.c\n@@ -621,13 +621,13 @@ handleCopyIn(PGconn *conn, FILE *copystream, bool isbinary, PGresult **res)\n \t\t\t\tif (buf[buflen - 1] == '\\n')\n \t\t\t\t{\n \t\t\t\t\t/*\n-\t\t\t\t\t * When at the beginning of the line, check for EOF marker.\n-\t\t\t\t\t * If the marker is found and the data is inlined,\n+\t\t\t\t\t * When at the beginning of the line, check for EOF\n+\t\t\t\t\t * marker. If the marker is found and the data is inlined,\n \t\t\t\t\t * we must stop at this point. If not, the \\. line can be\n-\t\t\t\t\t * sent to the server, and we let it decide whether\n-\t\t\t\t\t * it's an EOF or not depending on the format: in\n-\t\t\t\t\t * basic TEXT, \\. is going to be interpreted as an EOF, in\n-\t\t\t\t\t * CSV, it will not.\n+\t\t\t\t\t * sent to the server, and we let it decide whether it's\n+\t\t\t\t\t * an EOF or not depending on the format: in basic TEXT,\n+\t\t\t\t\t * \\. is going to be interpreted as an EOF, in CSV, it\n+\t\t\t\t\t * will not.\n \t\t\t\t\t */\n \t\t\t\t\tif (at_line_begin && copystream == pset.cur_cmd_source)\n \t\t\t\t\t{\n@@ -635,15 +635,16 @@ handleCopyIn(PGconn *conn, FILE *copystream, bool isbinary, PGresult **res)\n \t\t\t\t\t\t\t(linelen == 4 && memcmp(fgresult, \"\\\\.\\r\\n\", 4) == 0))\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\tcopydone = true;\n+\n \t\t\t\t\t\t\t/*\n-\t\t\t\t\t\t\t * Remove the EOF marker from the data sent.\n-\t\t\t\t\t\t\t * In the case of CSV, the EOF marker must be\n+\t\t\t\t\t\t\t * Remove the EOF marker from the data sent. 
In\n+\t\t\t\t\t\t\t * the case of CSV, the EOF marker must be\n \t\t\t\t\t\t\t * removed, otherwise it would be interpreted by\n \t\t\t\t\t\t\t * the server as valid data.\n \t\t\t\t\t\t\t */\n \t\t\t\t\t\t\tif (copystream == pset.cur_cmd_source)\n \t\t\t\t\t\t\t{\n-\t\t\t\t\t\t\t\t*fgresult='\\0';\n+\t\t\t\t\t\t\t\t*fgresult = '\\0';\n \t\t\t\t\t\t\t\tbuflen -= linelen;\n \t\t\t\t\t\t\t}\n \t\t\t\t\t\t}\n----\n\n\nI also confirmed that the updated server and non-updated\npsql compatibility problem (the end-of-data \"\\.\" is\ninserted). It seems that it's difficult to solve without\nintroducing incompatibility.\n\nHow about introducing a new COPY option that controls\nwhether \"\\.\" is ignored or not instead of this approach?\nHere is a migration idea:\n\n1. Add the new COPY option to v18\n2. psql detects whether a server has the new COPY option\n 2.1. If it's available, psql uses the option and doesn't send \"\\.\"\n 2.2. If it's not available, psql doesn't use the option and sends \"\\.\"\n3. psql always doesn't sends \"\\.\" after v17 or earlier reach EOL\n\n\nThanks,\n-- \nkou\n\n\n",
"msg_date": "Fri, 19 Jul 2024 15:10:31 +0900 (JST)",
"msg_from": "Sutou Kouhei <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "Sutou Kouhei <[email protected]> writes:\n> I read through this thread. It seems that this thread\n> discuss 2 things:\n> 1. \\. in CSV mode\n> 2. \\. in non-CSV mode\n> Recent messages discussed mainly 2. but how about create a\n> separated thread for 2.? Because the original mail focused\n> on 1. and it seems that we can handle them separately.\n\nWell, we don't want to paint ourselves into a corner by considering\nonly part of the problem.\n\nWhat I'm currently thinking is that we can't remove the special\ntreatment of \\. in text mode. It's not arguably a bug, because\nit's been part of the specification since day one; and there\nare too many compatibility risks in pg_dump and elsewhere.\nI think we should fix it so that \\. that's not alone on a line\nthrows an error, but I wouldn't go further than that.\n\n> How about introducing a new COPY option that controls\n> whether \"\\.\" is ignored or not instead of this approach?\n\nNo thanks. Once we create such an option we'll never be\nable to get rid of it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2024 13:36:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "Sutou Kouhei wrote:\n\n> BTW, here is a diff after pgindent:\n\nPFA a v5 with the cosmetic changes applied.\n\n> I also confirmed that the updated server and non-updated\n> psql compatibility problem (the end-of-data \"\\.\" is\n> inserted). It seems that it's difficult to solve without\n> introducing incompatibility.\n\nTo clarify the compatibility issue, the attached bash script\ncompares pre-patch and post-patch client/server combinations with\ndifferent cases, submitted with different copy variants.\n\ncase A: quoted backslash-dot sequence in CSV\ncase B: unquoted backslash-dot sequence in CSV\ncase C: CSV without backslash-dot\ncase D: text with backslash-dot at the end\ncase E: text without backslash-dot at the end\n\nThe different ways to submit the data:\ncopy from file\n\\copy from file\n\\copy from pstdin\ncopy from stdin with embedded data after the command\n\nAlso attaching the tables of results with the patch as it stands.\n\"Failed\" is when psql reports an error and \"Data mismatch\"\nis when it succeeds but with copied data differing from what was\nexpected.\n\nDoes your test has an equivalent in these results or is it a different\ncase?\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n=======================",
"msg_date": "Wed, 31 Jul 2024 15:42:41 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Fixing backslash dot for COPY FROM...CSV\" on Wed, 31 Jul 2024 15:42:41 +0200,\n \"Daniel Verite\" <[email protected]> wrote:\n\n>> BTW, here is a diff after pgindent:\n> \n> PFA a v5 with the cosmetic changes applied.\n\nThanks for updating the patch.\n\n>> I also confirmed that the updated server and non-updated\n>> psql compatibility problem (the end-of-data \"\\.\" is\n>> inserted). It seems that it's difficult to solve without\n>> introducing incompatibility.\n> \n> To clarify the compatibility issue, the attached bash script\n> compares pre-patch and post-patch client/server combinations with\n> different cases, submitted with different copy variants.\n> \n> case A: quoted backslash-dot sequence in CSV\n> case B: unquoted backslash-dot sequence in CSV\n> case C: CSV without backslash-dot\n> case D: text with backslash-dot at the end\n> case E: text without backslash-dot at the end\n> \n> The different ways to submit the data:\n> copy from file\n> \\copy from file\n> \\copy from pstdin\n> copy from stdin with embedded data after the command\n> \n> Also attaching the tables of results with the patch as it stands.\n> \"Failed\" is when psql reports an error and \"Data mismatch\"\n> is when it succeeds but with copied data differing from what was\n> expected.\n> \n> Does your test has an equivalent in these results or is it a different\n> case?\n\nSorry for not clarify my try. My try was:\n\nCase C\n+----------+----------------+\n| method | patched-server+|\n| | unpatched-psql |\n+----------+----------------+\n| embedded | Data mismatch |\n+----------+----------------+\n\nI confirmed that this case is \"Data mismatch\".\n\n\n\nThanks,\n-- \nkou\n\n\n",
"msg_date": "Thu, 01 Aug 2024 06:49:49 +0900 (JST)",
"msg_from": "Sutou Kouhei <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "On Wed, 31 Jul 2024 at 15:42, Daniel Verite <[email protected]> wrote:\n>\n> Sutou Kouhei wrote:\n>\n> > BTW, here is a diff after pgindent:\n>\n> PFA a v5 with the cosmetic changes applied.\n\nThank you Daniel for working on it. I've tested the patch and it seems\nit works as expected.\n\nI have a couple of minor comments.\n\nIt seems it isn't necessary to handle \"\\.\" within\n\"CopyAttributeOutCSV()\" (file \"src/backend/commands/copyto.c\")\nanymore.\n\n /*\n * Because '\\.' can be a data value, quote it if it appears alone on a\n * line so it is not interpreted as the end-of-data marker.\n */\n if (single_attr && strcmp(ptr, \"\\\\.\") == 0)\n use_quote = true;\n\nYou might see the difference in the test \"cooy2.sql\". Without changes\nthe output is:\n\n =# COPY testeoc TO stdout CSV;\n a\\.\n \\.b\n c\\.d\n \"\\.\"\n\nAnother thing is that the comparison \"copystream ==\npset.cur_cmd_source\" happens twice within \"handleCopyIn()\". TBH it is\na bit confusing to me what is the real purpose of that check, but one\nof the comparisons looks unnecessary.\n\n-- \nKind regards,\nArtur\nSupabase\n\n\n",
"msg_date": "Mon, 23 Sep 2024 17:38:56 +0200",
"msg_from": "Artur Zakirov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "\"Daniel Verite\" <[email protected]> writes:\n> To clarify the compatibility issue, the attached bash script\n> compares pre-patch and post-patch client/server combinations with\n> different cases, submitted with different copy variants.\n> ...\n> Also attaching the tables of results with the patch as it stands.\n> \"Failed\" is when psql reports an error and \"Data mismatch\"\n> is when it succeeds but with copied data differing from what was\n> expected.\n\nThanks for doing that --- that does indeed clarify matters.\nObviously, the cases we need to worry about most are the \"Data\nmismatch\" ones, since those might lead to silent misbehavior.\nI modified your script to show me the exact nature of the mismatches.\nThe results are:\n\n* For case B, unpatched both sides, the \"mismatch\" is that the copy\nstops at the \\. line. We should really call this the \"old expected\nbehavior\", since it's not impossible that somebody is relying on it.\nWe're okay with breaking that expectation, but the question is whether\nthere might be other surprises.\n\n* For case B, unpatched-server+patched-psql, the mismatches are\nexactly the same, i.e. the user sees the old behavior. That's OK.\n\n* For case B, patched-server+unpatched-psql, the mismatches are\ndifferent: the copy absorbs the \\. as a data line and then stops.\nSo that's surprising and bad, as it's neither the old expected\nbehavior nor the new intended behavior. However, there are two\nthings that make me less worried about this than I might be.\nFirst, it's pretty likely that the server will not find \"\\.\"\nalone on a line to be valid input data for the COPY, so that the\nresult will be an error not silently wrong data. Second, the case\nof patched-server+unpatched-psql is fairly uncommon, and people\nknow to expect that an old psql might not behave perfectly against\na newer server. The other direction of version mismatch is of far\nmore concern: people do expect a new psql to work with an old server.\n\n* For case C, patched-server+unpatched-psql, the mismatch for embedded\ndata is again that the copy absorbs the \\. as a data line and then\nstops. Again, this isn't great but the mitigating factors are the\nsame as above.\n\nIn all other cases, the results are the same or strictly better than\nbefore. So on the whole I think we are good on the compatibility\nfront and I'm ready to press ahead with the v5 behavior.\n\nHowever, I've not looked at Artur's coding suggestions downthread.\nDo you want to review that and see if you want to adopt any of\nthose changes?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Sep 2024 16:11:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "Artur Zakirov wrote:\n\n> I've tested the patch and it seems it works as expected.\n\nThanks for looking at this!\n\n> It seems it isn't necessary to handle \"\\.\" within\n> \"CopyAttributeOutCSV()\" (file \"src/backend/commands/copyto.c\")\n> anymore.\n\nIt's still useful to produce CSV data that can be safely\nreloaded by previous versions.\nUntil these versions are EOL'ed, I assume we'd better\ncontinue to quote \"\\.\"\n\n> Another thing is that the comparison \"copystream ==\n> pset.cur_cmd_source\" happens twice within \"handleCopyIn()\". TBH it is\n> a bit confusing to me what is the real purpose of that check, but one\n> of the comparisons looks unnecessary.\n\nIndeed, good catch. The redundant comparison is removed in the\nattached v6.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Mon, 30 Sep 2024 21:15:44 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
},
{
"msg_contents": "\"Daniel Verite\" <[email protected]> writes:\n> [ v6-0001-Support-backslash-dot-on-a-line-by-itself-as-vali.patch ]\n\nI did some more work on the docs and comments, and pushed that.\n\nReturning to my upthread thought that\n\n>>> I think we should fix it so that \\. that's not alone on a line\n>>> throws an error, but I wouldn't go further than that.\n\nhere's a quick follow-on patch to make that happen. It could\nprobably do with a test case to demonstrate the error, but\nI didn't bother yet pending approval that we want to do this.\n(This passes check-world as it stands, indicating we have no\nexisting test that expects this case to work.)\n\nAlso, I used the same error message \"end-of-copy marker corrupt\"\nthat we have for the case of junk following the marker, but\nI can't say that I think much of that phrasing. What do people\nthink of \"end-of-copy marker is not alone on its line\", instead?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 30 Sep 2024 18:45:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing backslash dot for COPY FROM...CSV"
}
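A rough sketch of the input-side case the compatibility matrix above exercises (table and file names are invented):

-- /tmp/copytest.csv contains three lines: one, \. and two
CREATE TABLE copytest (v text);
COPY copytest FROM '/tmp/copytest.csv' WITH (FORMAT csv);
-- an unpatched server stops reading at the backslash-dot line (the "old
-- expected behavior"); with the patch, that line is read as the ordinary
-- data value \. and all three lines end up in the table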
] |
[
{
"msg_contents": "By chance I discovered that the comment for the typedefs of \"double\"s\ndoes not cover Cardinality. Should we update that comment accordingly,\nmaybe something like below?\n\n- * Typedefs for identifying qualifier selectivities and plan costs as such.\n- * These are just plain \"double\"s, but declaring a variable as Selectivity\n- * or Cost makes the intent more obvious.\n+ * Typedefs for identifying qualifier selectivities, plan costs and\n+ * estimated rows or other count as such. These are just plain \"double\"s,\n+ * but declaring a variable as Selectivity, Cost or Cardinality makes the\n+ * intent more obvious.\n\nThanks\nRichard",
"msg_date": "Tue, 19 Dec 2023 14:23:25 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Update the comment in nodes.h to cover Cardinality"
},
{
"msg_contents": "On 19.12.23 07:23, Richard Guo wrote:\n> By chance I discovered that the comment for the typedefs of \"double\"s\n> does not cover Cardinality. Should we update that comment accordingly,\n> maybe something like below?\n> \n> - * Typedefs for identifying qualifier selectivities and plan costs as such.\n> - * These are just plain \"double\"s, but declaring a variable as Selectivity\n> - * or Cost makes the intent more obvious.\n> + * Typedefs for identifying qualifier selectivities, plan costs and\n> + * estimated rows or other count as such. These are just plain \"double\"s,\n> + * but declaring a variable as Selectivity, Cost or Cardinality makes the\n> + * intent more obvious.\n\nFixed, thanks.\n\n\n\n",
"msg_date": "Tue, 19 Dec 2023 15:50:37 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update the comment in nodes.h to cover Cardinality"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 10:50 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 19.12.23 07:23, Richard Guo wrote:\n> > By chance I discovered that the comment for the typedefs of \"double\"s\n> > does not cover Cardinality. Should we update that comment accordingly,\n> > maybe something like below?\n> >\n> > - * Typedefs for identifying qualifier selectivities and plan costs as\n> such.\n> > - * These are just plain \"double\"s, but declaring a variable as\n> Selectivity\n> > - * or Cost makes the intent more obvious.\n> > + * Typedefs for identifying qualifier selectivities, plan costs and\n> > + * estimated rows or other count as such. These are just plain\n> \"double\"s,\n> > + * but declaring a variable as Selectivity, Cost or Cardinality makes\n> the\n> > + * intent more obvious.\n>\n> Fixed, thanks.\n\n\nThanks for the fix!\n\nThanks\nRichard\n\nOn Tue, Dec 19, 2023 at 10:50 PM Peter Eisentraut <[email protected]> wrote:On 19.12.23 07:23, Richard Guo wrote:\n> By chance I discovered that the comment for the typedefs of \"double\"s\n> does not cover Cardinality. Should we update that comment accordingly,\n> maybe something like below?\n> \n> - * Typedefs for identifying qualifier selectivities and plan costs as such.\n> - * These are just plain \"double\"s, but declaring a variable as Selectivity\n> - * or Cost makes the intent more obvious.\n> + * Typedefs for identifying qualifier selectivities, plan costs and\n> + * estimated rows or other count as such. These are just plain \"double\"s,\n> + * but declaring a variable as Selectivity, Cost or Cardinality makes the\n> + * intent more obvious.\n\nFixed, thanks.Thanks for the fix!ThanksRichard",
"msg_date": "Wed, 20 Dec 2023 08:39:52 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update the comment in nodes.h to cover Cardinality"
}
] |
[
{
"msg_contents": "Hi,\nIs it possible to visualize the DDL with the pg_waldump tool. I created a\npostgres user but I cannot find the creation command in the wals\n\nThanks for help\n\nFabrice\n\nHi,Is it possible to visualize the DDL with the pg_waldump tool. I created a postgres user but I cannot find the creation command in the walsThanks for helpFabrice",
"msg_date": "Tue, 19 Dec 2023 12:26:54 +0100",
"msg_from": "Fabrice Chapuis <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_waldump"
},
{
"msg_contents": "Hello\n\nIt is something like\n\nrmgr: Heap len (rec/tot): 143/ 143, tx: 748, lsn: 0/01530AF0, prev 0/01530AB8, desc: INSERT off: 17, flags: 0x00, blkref #0: rel 1664/0/1260 blk 0\n\njust insertion into pg_authid (oid=1260)\n\nregards, Sergei\n\n\n",
"msg_date": "Tue, 19 Dec 2023 15:07:07 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:pg_waldump"
},
{
"msg_contents": "On Tue, 19 Dec 2023, 12:27 Fabrice Chapuis, <[email protected]> wrote:\n>\n> Hi,\n> Is it possible to visualize the DDL with the pg_waldump tool. I created a postgres user but I cannot find the creation command in the wals\n\nNot really, no. PostgreSQL does not log DDL or DML as such in WAL.\nEssentially all catalog updates are logged only as changes on a\ncertain page in some file: a new user getting inserted would be\napproximately \"Insert tuple [user's pg_role row data] on page X in\nfile [the file corresponding to the pg_role table]\".\n\nYou could likely derive most DDL commands from Heap/Insert,\nHeap/Delete, and Heap/Update records (after cross-referencing the\ndatabase's relfilemap), as most DDL is \"just\" a lot of in-memory\noperations plus some record insertions/updates/deletes in catalog\ntables. You'd also need to keep track of any relfilemap changes while\nprocessing the WAL, as VACUUM FULL on the catalog tables would change\nthe file numbering of catalog tables...\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 19 Dec 2023 14:00:07 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump"
},
{
"msg_contents": "Ok thanks for all these precisions\nRegards\nFabrice\n\nOn Tue, Dec 19, 2023 at 2:00 PM Matthias van de Meent <\[email protected]> wrote:\n\n> On Tue, 19 Dec 2023, 12:27 Fabrice Chapuis, <[email protected]>\n> wrote:\n> >\n> > Hi,\n> > Is it possible to visualize the DDL with the pg_waldump tool. I created\n> a postgres user but I cannot find the creation command in the wals\n>\n> Not really, no. PostgreSQL does not log DDL or DML as such in WAL.\n> Essentially all catalog updates are logged only as changes on a\n> certain page in some file: a new user getting inserted would be\n> approximately \"Insert tuple [user's pg_role row data] on page X in\n> file [the file corresponding to the pg_role table]\".\n>\n> You could likely derive most DDL commands from Heap/Insert,\n> Heap/Delete, and Heap/Update records (after cross-referencing the\n> database's relfilemap), as most DDL is \"just\" a lot of in-memory\n> operations plus some record insertions/updates/deletes in catalog\n> tables. You'd also need to keep track of any relfilemap changes while\n> processing the WAL, as VACUUM FULL on the catalog tables would change\n> the file numbering of catalog tables...\n>\n> Kind regards,\n>\n> Matthias van de Meent\n> Neon (https://neon.tech)\n>\n\nOk thanks for all these precisionsRegards FabriceOn Tue, Dec 19, 2023 at 2:00 PM Matthias van de Meent <[email protected]> wrote:On Tue, 19 Dec 2023, 12:27 Fabrice Chapuis, <[email protected]> wrote:\n>\n> Hi,\n> Is it possible to visualize the DDL with the pg_waldump tool. I created a postgres user but I cannot find the creation command in the wals\n\nNot really, no. PostgreSQL does not log DDL or DML as such in WAL.\nEssentially all catalog updates are logged only as changes on a\ncertain page in some file: a new user getting inserted would be\napproximately \"Insert tuple [user's pg_role row data] on page X in\nfile [the file corresponding to the pg_role table]\".\n\nYou could likely derive most DDL commands from Heap/Insert,\nHeap/Delete, and Heap/Update records (after cross-referencing the\ndatabase's relfilemap), as most DDL is \"just\" a lot of in-memory\noperations plus some record insertions/updates/deletes in catalog\ntables. You'd also need to keep track of any relfilemap changes while\nprocessing the WAL, as VACUUM FULL on the catalog tables would change\nthe file numbering of catalog tables...\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)",
"msg_date": "Tue, 19 Dec 2023 16:11:54 +0100",
"msg_from": "Fabrice Chapuis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump"
}
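To cross-reference a blkref such as "rel 1664/0/1260" from the thread above, the (tablespace, filenode) pair can be mapped back to a relation name in SQL. A sketch; 1664 is pg_global, and 1260 is pg_authid's filenode only as long as the shared catalogs have not been rewritten:

-- map a WAL record's tablespace/filenode back to the relation it touches
SELECT pg_filenode_relation(1664, 1260);
-- typically returns pg_authid, the shared catalog behind CREATE ROLE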
] |
[
{
"msg_contents": "Hi hacker,\n\nAt the moment the only requirement for custom parameter names is that they\nshould have one or more dots.\nFor example:\ntest5=# set a.b.c.d = 1;\nSET\ntest5=# show a.b.c.d;\n a.b.c.d\n---------\n 1\n(1 row)\n\nBut, Postgres fails to start if such parameters are set in the\nconfiguration file with the following error:\nLOG: syntax error in file \"/tmp/cluster/postgresql.auto.conf\" line 8, near\ntoken \"a.b.c.d\"\nFATAL: configuration file \"postgresql.auto.conf\" contains errors\n\nWhat is more fun, ALTER SYSTEM allows writing such parameters to the\npostgresql.auto.conf file if they are defined in a session:\ntest5=# show a.b.c.d;\nERROR: unrecognized configuration parameter \"a.b.c.d\"\ntest5=# alter system set a.b.c.d = 1;\nERROR: unrecognized configuration parameter \"a.b.c.d\"\ntest5=# set a.b.c.d = 1;\nSET\ntest5=# alter system set a.b.c.d = 1;\nALTER SYSTEM\n\nIn my opinion it would be fair to make parsing of config files with the\nrest of the code responsible for GUC handling by allowing custom parameters\ncontaining more than one dot.\nThe fix is rather simple, please find the patch attached.\n\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Tue, 19 Dec 2023 13:25:08 +0100",
"msg_from": "Alexander Kukushkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Allow custom parameters with more than one dot in config files."
},
{
"msg_contents": "Alexander Kukushkin <[email protected]> writes:\n> At the moment the only requirement for custom parameter names is that they\n> should have one or more dots.\n> ...\n> But, Postgres fails to start if such parameters are set in the\n> configuration file with the following error:\n\nHmm.\n\n> In my opinion it would be fair to make parsing of config files with the\n> rest of the code responsible for GUC handling by allowing custom parameters\n> containing more than one dot.\n\nI wonder if we wouldn't be better advised to require exactly one dot.\nThis isn't a feature that we really encourage users to use, and the\nfurther we move the goalposts for it, the harder it will be to replace\nit. In particular I doubt the long-stalled session-variables patch\ncould support such names, since it needs the names to conform to\nnormal SQL rules.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Dec 2023 10:13:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow custom parameters with more than one dot in config files."
},
{
"msg_contents": "On Tue, 19 Dec 2023 at 16:13, Tom Lane <[email protected]> wrote:\n\n>\n> I wonder if we wouldn't be better advised to require exactly one dot.\n> This isn't a feature that we really encourage users to use, and the\n> further we move the goalposts for it, the harder it will be to replace\n> it. In particular I doubt the long-stalled session-variables patch\n> could support such names, since it needs the names to conform to\n> normal SQL rules.\n>\n\nIf I understand correctly, session variables don't even intersect with\ncustom parameters.\nAlso, they seems to work fine with FQDN:\npostgres=# create database testdb;\nCREATE DATABASE\npostgres=# \\c testdb\nYou are now connected to database \"testdb\" as user \"akukushkin\".\ntestdb=# create schema testschema;\nCREATE SCHEMA\ntestdb=# create variable testdb.testschema.testvar int;\nCREATE VARIABLE\ntestdb=# let testdb.testschema.testvar = 1;\nLET\ntestdb=# select testdb.testschema.testvar;\n testvar\n---------\n 1\n(1 row)\n\nRegards,\n--\nAlexander Kukushkin\n\nOn Tue, 19 Dec 2023 at 16:13, Tom Lane <[email protected]> wrote:\nI wonder if we wouldn't be better advised to require exactly one dot.\nThis isn't a feature that we really encourage users to use, and the\nfurther we move the goalposts for it, the harder it will be to replace\nit. In particular I doubt the long-stalled session-variables patch\ncould support such names, since it needs the names to conform to\nnormal SQL rules.If I understand correctly, session variables don't even intersect with custom parameters.Also, they seems to work fine with FQDN:postgres=# create database testdb;CREATE DATABASEpostgres=# \\c testdb You are now connected to database \"testdb\" as user \"akukushkin\".testdb=# create schema testschema;CREATE SCHEMAtestdb=# create variable testdb.testschema.testvar int;CREATE VARIABLEtestdb=# let testdb.testschema.testvar = 1;LETtestdb=# select testdb.testschema.testvar; testvar --------- 1(1 row) Regards,--Alexander Kukushkin",
"msg_date": "Wed, 20 Dec 2023 11:29:24 +0100",
"msg_from": "Alexander Kukushkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow custom parameters with more than one dot in config files."
}
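For reference, a small sketch of how such custom parameters behave from SQL today (the parameter names are invented):

-- a name with two dots is accepted interactively, as described above
SET myapp.feature.enabled = 'on';
SELECT current_setting('myapp.feature.enabled');
-- with missing_ok = true, an unset custom parameter returns NULL
-- instead of raising an error
SELECT current_setting('myapp.other.flag', true);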
] |
[
{
"msg_contents": "Hi hackers,\n\nI noticed that the documentation of extension building infrastructure [1]\nhas a paragraph about isolation tests since v12, but they actually can be\nrun since v14 only.\n\nActual isolation tests support for extensions built with PGXS was added\nin [2] and it was agreed there that it should not be backpatched. How\nabout fixing the documentation for v12 and v13 then?\n\n\n[1]: https://www.postgresql.org/docs/12/extend-pgxs.html\n[2]:\nhttps://www.postgresql.org/message-id/flat/CAMsr%2BYFsCMH3B4uOPFE%2B2qWM6k%3Do%3Dhf9LGiPNCfwqKdUPz_BsQ%40mail.gmail.com\n\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/\n\nHi hackers,I noticed that the documentation of extension building infrastructure [1]has a paragraph about isolation tests since v12, but they actually can berun since v14 only.Actual isolation tests support for extensions built with PGXS was addedin [2] and it was agreed there that it should not be backpatched. Howabout fixing the documentation for v12 and v13 then?[1]: https://www.postgresql.org/docs/12/extend-pgxs.html[2]: https://www.postgresql.org/message-id/flat/CAMsr%2BYFsCMH3B4uOPFE%2B2qWM6k%3Do%3Dhf9LGiPNCfwqKdUPz_BsQ%40mail.gmail.comBest regards,Karina LitskevichPostgres Professional: http://postgrespro.com/",
"msg_date": "Tue, 19 Dec 2023 16:48:14 +0300",
"msg_from": "Karina Litskevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Correct the documentation of extension building infrastructure"
}
] |
[
{
"msg_contents": "Hello PG developers!\n\nI would like to introduce an updated take on libpq protocol-level\ncompression, building off off the work in\nhttps://www.postgresql.org/message-id/[email protected]\nand the followon work in\nhttps://www.postgresql.org/message-id/[email protected]\nalong with all of the (nice and detailed) feedback and discussion therein.\n\nThe first patch in the stack replaces `secure_read` and `secure_write`\n(and their frontend counterparts) with an \"IO stream\" abstraction,\nwhich should address a lot of the concerns from all parties around\nsecure_read. The fundamental idea of an IO stream is that it is a\nlinked list of \"IoStreamProcessor\"s. The \"base\" processor is the\nactual socket-backed API, and then SSL, GSSAPI, and compression can\nall add layers on top of that base that add functionality and rely on\nthe layer below to read/write data. This structure makes it easy to\nadd compression on top of either plain or encrypted sockets through a\nunified, unconditional API, and also makes it easy for callers to use\nplain, plain-compressed, secure, and secure-compressed communication\nchannels equivalently.\n\nThe second patch is the refactored implementation of compression\nitself, with ZSTD support merged into the main patch because the\nconfiguration-level work is now already merged in master. There was a\ngood bit of rebasing, housekeeping, and bugfixing (including fixing\nlz4 by making it now be explicitly buffered inside ZStream), along\nwith taking into account a lot of the feedback from this mailing list.\nI reworked the API to use the general compression processing types and\nmethods from `common/compression`. This change also refactors the\nprotocol to require the minimum amount of new message types and\nexchanges possible, while also enabling one-directional compression.\nThe compression \"handshaking\" process now looks as follows:\n1. Client sends startup packet with `_pq_.libpq_compression = alg1;alg2`\n2. At this point, the server can immediately begin compressing packets\nto the client with any of the specified algorithms it supports if it\nso chooses\n3. Server includes `libpq_compression` in the automatically sent\n`ParameterStatus` messages before handshaking\n4. At this point, the client can immediately begin compressing packets\nto the server with any of the supported algorithms\nBoth the server and client will prefer to compress using the first\nalgorithm in their list that the other side supports, and we\nexplicitly support `none` in the algorithm list. This allows e.g. a\nclient to use `none;gzip` and a server to use `zstd;gzip;lz4`, and\nthen the client will not compress its data but the server will send\nits data using gzip. Each side uses its own configured compression\nlevel (if set), since compression levels affect compression effort\nmuch more than decompression effort. This change also allows\nconnections to succeed if compression was requested but not available\n(most of the time, I imagine that a client would prefer to just not\nuse compression if the server doesn't support it; unlike SSL, it's a\nnice to have not an essential. If a client application actually\nreally *needs* compression, that can still be facilitated by\nexplicitly checking the negotiated compression methods.)\n\nThe third patch adds the traffic monitoring statistics that had been\nin the main patch of the previous series. 
I've renamed them and\nchanged slightly where to measure the actual raw network bytes and the\n\"logical\" protocol bytes, which also means this view can measure\nSSL/GSSAPI overhead (not that that's a particularly *important* thing\nto measure, but it's worth nothing what the view will actually\nmeasure.\n\nThe fourth patch adds a TAP test that validates all of the compression\nmethods and compression negotiation. Ideally it would probably be\npart of patch #2, but it uses the monitoring from #3 to be able to\nvalidate that compression is actually working.\n\nThe fifth patch is just a placeholder to allow running the test suite\nwith compression maximally enabled to work out any kinks.\n\nI believe this patch series is ready for detailed review/testing, with\none caveat: as can be seen here\nhttps://cirrus-ci.com/build/6732518292979712 , the build is passing on\nall platforms and all tests except for the primary SSL test on\nWindows. After some network-level debugging, it appears that we are\nbumping into a version of the issues seen here\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com\n, where on Windows some SSL error messages end up getting swallowed up\nby the the process exiting and closing the socket with a RST rather\nthan a nice clean shutdown. I may have the cause/effect wrong here,\nbut the issues appear before the compression is actually fully set up\nin the client used and would appear to be a side effect of timing\ndifferences and/or possibly size differences in the startup packet.\nAny pointers on how to resolve this would be appreciated. It does\nreproduce on Windows fairly readily, though any one particular test\nstill sometimes succeeds, and the relevant SSL connection failure\nmessage reliably shows up in Wireshark.\n\nAlso please let me know if I have made any notable mailing list/patch\netiquette/format/structure errors. This is my first time submitting a\npatch to a mailing-list driven open source project and while I have\ntried to carefully review the various wiki guides I'm sure I didn't\ntake everything in perfectly.\n\nThanks,\nJacob Burroughs",
"msg_date": "Tue, 19 Dec 2023 10:40:34 -0600",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpq compression (part 3)"
},
{
"msg_contents": "> I believe this patch series is ready for detailed review/testing, with one caveat: as can be seen here https://cirrus-ci.com/build/6732518292979712 , the build is passing on all platforms and all tests except for the primary SSL test on Windows.\n\nOne correction: I apparently missed a kerberos timeout failure on\nfreebsd with compression enabled (being color blind the checkmark and\nstill running colors are awfully similar, and I misread what I saw).\nI haven't yet successfully reproduced that one, so I may or may not\nneed some pointers to sort it out, but I think whatever it is the fix\nwill be small enough that the patch overall is still reviewable.\n\n\n",
"msg_date": "Tue, 19 Dec 2023 11:02:41 -0600",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "> One correction: I apparently missed a kerberos timeout failure on\n> freebsd with compression enabled (being color blind the checkmark and\n> still running colors are awfully similar, and I misread what I saw).\n> I haven't yet successfully reproduced that one, so I may or may not\n> need some pointers to sort it out, but I think whatever it is the fix\n> will be small enough that the patch overall is still reviewable.\n\nI have now sorted out all of the non-Windows build issues (and removed\nmy stray misguided attempt at fixing the Windows issue that I hadn't\nintended to post the first time around). The build that is *actually*\npassing every platform except for the one Windows SSL test mentioned\nin my original message can be seen here:\nhttps://cirrus-ci.com/build/5924321042890752",
"msg_date": "Wed, 20 Dec 2023 13:39:31 -0600",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 11:41 AM Jacob Burroughs\n<[email protected]> wrote:\n> The compression \"handshaking\" process now looks as follows:\n> 1. Client sends startup packet with `_pq_.libpq_compression = alg1;alg2`\n> 2. At this point, the server can immediately begin compressing packets\n> to the client with any of the specified algorithms it supports if it\n> so chooses\n> 3. Server includes `libpq_compression` in the automatically sent\n> `ParameterStatus` messages before handshaking\n> 4. At this point, the client can immediately begin compressing packets\n> to the server with any of the supported algorithms\n> Both the server and client will prefer to compress using the first\n> algorithm in their list that the other side supports, and we\n> explicitly support `none` in the algorithm list. This allows e.g. a\n> client to use `none;gzip` and a server to use `zstd;gzip;lz4`, and\n> then the client will not compress its data but the server will send\n> its data using gzip.\n\nI'm having difficulty understanding the details of this handshaking\nalgorithm from this description. It seems good that the handshake\nproceeds in each direction somewhat separately from the other, but I\ndon't quite understand how the whole thing fits together. If the\nclient tells the server that 'none,gzip' is supported, and I elect to\nstart using gzip, how does the client know that I picked gzip rather\nthan none? Are the compressed packets self-identifying?\n\nIt's also slightly odd to me that the same parameter seems to specify\nboth what we want to send, and what we're able to receive. I'm not\nreally sure we should have separate parameters for those things, but I\ndon't quite understand how this works without it. The \"none\" thing\nseems like a bit of a hack. It lets you say \"I'd like to receive\ncompressed data but send uncompressed data\" ... but what about the\nreverse? How do you say \"don't bother compressing what you receive\nfrom the server, but please lz4 everything you send to the server\"? Or\nhow do you say \"use zstd from server to client, but lz4 from client to\nserver\"? It seems like you can't really say that kind of thing.\n\nWhat if we had, on the server side, a GUC saying what compression to\naccept and a GUC saying what compression to be willing to do? And then\nlet the client request whatever it wants for each direction.\n\n> Also please let me know if I have made any notable mailing list/patch\n> etiquette/format/structure errors. This is my first time submitting a\n> patch to a mailing-list driven open source project and while I have\n> tried to carefully review the various wiki guides I'm sure I didn't\n> take everything in perfectly.\n\nSeems fine to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Dec 2023 15:49:37 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "> I'm having difficulty understanding the details of this handshaking\n> algorithm from this description. It seems good that the handshake\n> proceeds in each direction somewhat separately from the other, but I\n> don't quite understand how the whole thing fits together. If the\n> client tells the server that 'none,gzip' is supported, and I elect to\n> start using gzip, how does the client know that I picked gzip rather\n> than none? Are the compressed packets self-identifying?\n\nI agree I could have spelled this out more clearly. I forgot to\nmention that I added a byte to the CompressedMessage message type that\nspecifies the chosen algorithm. So if the server receives\n'none,gzip', it can either keep sending uncompressed regular messages,\nor it can compress them in CompressedMessage packets which now look\nlike \"z{len}{format}{data}\" (format just being a member of the\npg_compress_algorithm enum, so `1` in the case of gzip). Overall the\nintention is that both the client and the server can just start\nsending CompressedMessages once they receive the list of ones other\nparty supports without any negotiation or agreement needed and without\nan extra message type to first specify the compression algorithm. (One\nbyte per message seemed to me like a reasonable overhead for the\nsimplicity, but it wouldn't be hard to bring back SetCompressionMethod\nif we prefer.)\n\n> It's also slightly odd to me that the same parameter seems to specify\n> both what we want to send, and what we're able to receive. I'm not\n> really sure we should have separate parameters for those things, but I\n> don't quite understand how this works without it. The \"none\" thing\n> seems like a bit of a hack. It lets you say \"I'd like to receive\n> compressed data but send uncompressed data\" ... but what about the\n> reverse? How do you say \"don't bother compressing what you receive\n> from the server, but please lz4 everything you send to the server\"? Or\n> how do you say \"use zstd from server to client, but lz4 from client to\n> server\"? It seems like you can't really say that kind of thing.\n\nWhen I came up with the protocol I was imagining that basically both\nserver admins and clients might want a decent bit more control over\nthe compression they do rather than the decompression they do, since\ncompression is generally much more computationally expensive than\ndecompression. Now that you point it out though, I don't think that\nactually makes that much sense.\n\n> What if we had, on the server side, a GUC saying what compression to\n> accept and a GUC saying what compression to be willing to do? And then\n> let the client request whatever it wants for each direction.\n\nHere's two proposals:\nOption 1:\nGUCs:\nlibpq_compression (default \"off\")\nlibpq_decompression (default \"auto\", which is defined to be equal to\nlibpq_compression)\nConnection parameters:\ncompression (default \"off\")\ndecompression (default \"auto\", which is defined to be equal to compression)\n\nI think we would only send the decompression fields over the wire to\nthe other side, to be used to filter for the first chosen compression\nfield. We would send the `_pq_.libpq_decompression` protocol\nextension even if only compression was enabled and not decompression\nso that the server knows to enable compression processing for the\nconnection (I think this would be the only place we would still use\n`none`, and not as part of a list in this case.) 
I think we also\nwould want to add libpq functions to allow a client to check the\nlast-used compression algorithm in each direction for any\nmonitoring/inspection purposes (actually that's probably a good idea\nregardless, so a client application that cares doesn't need to/try to\nimplement the intersection and assumption around choosing the first\nalgorithm in common). Also I'm open to better names than \"auto\", I\njust would like it to avoid unnecessary verbosity for the common case\nof \"I just want to enable bidirectional compression with whatever\nalgorithms are available with default parameters\".\n\nOption 2:\nThis one is even cleaner in the common case but a bit worse in the\nuncommon case: just use one parameter and have\ncompression/decompression enabling be part of the compression detail\n(e.g. \"libpq_compression='gzip:no_decompress;lz4:level=2,no_compress;zstd'\"\nor something like that, in which case the \"none,gzip\" case would\nbecome \"'libpq_compression=gzip:no_compress'\"). See\nhttps://www.postgresql.org/docs/current/app-pgbasebackup.html ,\nspecifically the `--compress` flag, for how specifying compression\nalgorithms and details works.\n\nI'm actually not sure which of the two I prefer; opinions are welcome :)\n\nThanks,\nJacob\n\n\n",
"msg_date": "Wed, 20 Dec 2023 15:48:13 -0600",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "Thanks for working on this!\n\nOne thing I'm wondering: should it be possible for the client to change the\ncompression it wants mid-connection? I can think of some scenarios where\nthat would be useful to connection poolers: if a pooler does plain\nforwarding of the compressed messages, then it would need to be able to\ndisable/enable compression if it wants to multiplex client connections with\ndifferent compression settings over the same server connection.\n\nThanks for working on this!One thing I'm wondering: should it be possible for the client to change the compression it wants mid-connection? I can think of some scenarios where that would be useful to connection poolers: if a pooler does plain forwarding of the compressed messages, then it would need to be able to disable/enable compression if it wants to multiplex client connections with different compression settings over the same server connection.",
"msg_date": "Thu, 21 Dec 2023 01:30:54 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "\n\n> On 21 Dec 2023, at 05:30, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> One thing I'm wondering: should it be possible for the client to change the compression it wants mid-connection?\n\nThis patchset allows sending CompressionMethod message, which allows to set another codec\\level picked from the set of negotiated codec sets (during startup).\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 29 Dec 2023 15:02:48 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Fri, 29 Dec 2023 at 11:02, Andrey M. Borodin <[email protected]> wrote:\n> This patchset allows sending CompressionMethod message, which allows to set another codec\\level picked from the set of negotiated codec sets (during startup).\n\nDid you mean to attach a patchset? I don't see the CompressionMethod\nmessage in the v2 patchset. Only a CompressedData one.\n\n\n",
"msg_date": "Fri, 29 Dec 2023 17:15:17 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "> One thing I'm wondering: should it be possible for the client to change the compression it wants mid-connection? I can think of some scenarios where that would be useful to connection poolers: if a pooler does plain forwarding of the compressed messages, then it would need to be able to disable/enable compression if it wants to multiplex client connections with different compression settings over the same server connection.\n\nI have reworked this patch series to make it easier to extend to\nrestart compression mid-connection once something in the vein of the\ndiscussion in \"Add new protocol message to change GUCs for usage with\nfuture protocol-only GUCs\" [1] happens. In particular, I have changed\nthe `CompressedMessage` protocol message to signal the current\ncompression algorithm any time the client should restart its streaming\ndecompressor and otherwise implicitly use whatever compression\nalgorithm and decompressor was used for previous `CompressedMessage` ,\nwhich future work can leverage to trigger such a restart on update of\nthe client-supported compression algorithms.\n\n> Option 2:\n> This one is even cleaner in the common case but a bit worse in the\n> uncommon case: just use one parameter and have\n> compression/decompression enabling be part of the compression detail\n> (e.g. \"libpq_compression='gzip:no_de\n> compress;lz4:level=2,no_compress;zstd'\"\n> or something like that, in which case the \"none,gzip\" case would\n> become \"'libpq_compression=gzip:no_compress'\"). See\n> https://www.postgresql.org/docs/current/app-pgbasebackup.html ,\n> specifically the `--compress` flag, for how specifying compression\n> algorithms and details works.\n\nI ended up reworking this to use a version of this option in place of\nthe `none` hackery, but naming the parameters `compress` and\n`decompress, so to disable compression but allow decompression you\nwould specify `libpq_compression=gzip:compress=off`.\n\nAlso my windows SSL test failures seem to have resolved themselves\nwith either these changes or a rebase, so I think things are truly in\na reviewable state now.",
"msg_date": "Sun, 31 Dec 2023 01:32:19 -0600",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Sun, Dec 31, 2023 at 2:32 AM Jacob Burroughs\n<[email protected]> wrote:\n> I ended up reworking this to use a version of this option in place of\n> the `none` hackery, but naming the parameters `compress` and\n> `decompress, so to disable compression but allow decompression you\n> would specify `libpq_compression=gzip:compress=off`.\n\nI'm still a bit befuddled by this interface.\nlibpq_compression=gzip:compress=off looks a lot like it's saying \"do\nit, except don't\". I guess that you're using \"compress\" and\n\"decompress\" to distinguish the two directions - i.e. server to client\nand client to server - but of course in both directions the sender\ncompresses and the receiver decompresses, so I don't find that very\nclear.\n\nI wonder if we could use \"upstream\" and \"downstream\" to be clearer? Or\nsome other terminology?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 15:45:43 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "> I wonder if we could use \"upstream\" and \"downstream\" to be clearer? Or\n> some other terminology?\n\nWhat about `send` and `receive`?\n\n\n",
"msg_date": "Fri, 12 Jan 2024 15:02:47 -0600",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 4:02 PM Jacob Burroughs\n<[email protected]> wrote:\n> > I wonder if we could use \"upstream\" and \"downstream\" to be clearer? Or\n> > some other terminology?\n>\n> What about `send` and `receive`?\n\nI think that would definitely be better than \"compress\" and\n\"decompress,\" but I was worried that it might be unclear to the user\nwhether the parameter that they specified was from the point of view\nof the client or the server. Perhaps that's a dumb thing to worry\nabout, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 16:11:19 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 4:11 PM Robert Haas <[email protected]> wrote:\n> I think that would definitely be better than \"compress\" and\n> \"decompress,\" but I was worried that it might be unclear to the user\n> whether the parameter that they specified was from the point of view\n> of the client or the server. Perhaps that's a dumb thing to worry\n> about, though.\n\nAccording to https://commitfest.postgresql.org/48/4746/ this patch set\nneeds review, but:\n\n1. Considering that there have been no updates for 5 months, maybe\nit's actually dead?\n\nand\n\n2. I still think it needs to be more clear how the interface is\nsupposed to work. I do not want to spend time reviewing a patch to see\nwhether it works without understanding how it is intended to work --\nand I also think that reviewing the patch in detail before we've got\nthe user interface right makes a whole lot of sense.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2024 12:08:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Tue, May 14, 2024 at 11:08 AM Robert Haas <[email protected]> wrote:\n>\n> According to https://commitfest.postgresql.org/48/4746/ this patch set\n> needs review, but:\n>\n> 1. Considering that there have been no updates for 5 months, maybe\n> it's actually dead?\n\nI've withdrawn this patch from the commitfest. I had been waiting for\nsome resolution on \"Add new protocol message to change GUCs for usage\nwith future protocol-only GUCs\" before I rebased/refactored this one,\nbecause this would be introducing the first protocol extension so far,\nand that discussion appeared to be working out some meaningful issues\non how GUCs and protocol parameters should interact. If you think it\nis worthwhile to proceed here though, I am very happy to do so. (I\nwould love to see this feature actually make it into postgres; it\nwould almost certainly be a big efficiency and cost savings win for\nhow my company deploys postgres internally :) )\n\n> 2. I still think it needs to be more clear how the interface is\n> supposed to work. I do not want to spend time reviewing a patch to see\n> whether it works without understanding how it is intended to work --\n> and I also think that reviewing the patch in detail before we've got\n> the user interface right makes a whole lot of sense.\n\nRegarding the interface, what I had originally gone for was the idea\nthat the naming of the options was from the perspective of the side\nyou were setting them on. Therefore, when setting `libpq_compression`\nas a server-level GUC, `compress` would control if the server would\ncompress (send compressed data) with the given algorithm, and\n`decompress` would control if the the server would decompress (receive\ncompressed data) with the given algorithm. And likewise on the client\nside, when setting `compression` as a connection config option,\n`compress` would control if the *client* would compress (send\ncompressed data) with the given algorithm, and `decompress` would\ncontrol if the the *client* would decompress (receive compressed data)\nwith the given algorithm. So for a client to pick what to send, it\nwould choose from the intersection of its own `compress=true` and the\nserver's `decompress=true` algorithms sent in the `ParameterStatus`\nmessage with `libpq_compression`. And likewise on the server side, it\nwould choose from the intersection of the server's `compress=true`\nalgorithms and the client's `decompress=true` algorithms sent in the\n`_pq_.libpq_compression` startup option. If you have a better\nsuggestion I am very open to it though.\n\n\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Tue, 14 May 2024 11:30:04 -0500",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Tue, May 14, 2024 at 12:30 PM Jacob Burroughs\n<[email protected]> wrote:\n> I've withdrawn this patch from the commitfest. I had been waiting for\n> some resolution on \"Add new protocol message to change GUCs for usage\n> with future protocol-only GUCs\" before I rebased/refactored this one,\n> because this would be introducing the first protocol extension so far,\n> and that discussion appeared to be working out some meaningful issues\n> on how GUCs and protocol parameters should interact. If you think it\n> is worthwhile to proceed here though, I am very happy to do so. (I\n> would love to see this feature actually make it into postgres; it\n> would almost certainly be a big efficiency and cost savings win for\n> how my company deploys postgres internally :) )\n\nI don't think you should wait for that to be resolved; IMHO, this\npatch needs to inform that discussion more than the other way around.\n\n> > 2. I still think it needs to be more clear how the interface is\n> > supposed to work. I do not want to spend time reviewing a patch to see\n> > whether it works without understanding how it is intended to work --\n> > and I also think that reviewing the patch in detail before we've got\n> > the user interface right makes a whole lot of sense.\n>\n> Regarding the interface, what I had originally gone for was the idea\n> that the naming of the options was from the perspective of the side\n> you were setting them on. Therefore, when setting `libpq_compression`\n> as a server-level GUC, `compress` would control if the server would\n> compress (send compressed data) with the given algorithm, and\n> `decompress` would control if the the server would decompress (receive\n> compressed data) with the given algorithm. And likewise on the client\n> side, when setting `compression` as a connection config option,\n> `compress` would control if the *client* would compress (send\n> compressed data) with the given algorithm, and `decompress` would\n> control if the the *client* would decompress (receive compressed data)\n> with the given algorithm. So for a client to pick what to send, it\n> would choose from the intersection of its own `compress=true` and the\n> server's `decompress=true` algorithms sent in the `ParameterStatus`\n> message with `libpq_compression`. And likewise on the server side, it\n> would choose from the intersection of the server's `compress=true`\n> algorithms and the client's `decompress=true` algorithms sent in the\n> `_pq_.libpq_compression` startup option. If you have a better\n> suggestion I am very open to it though.\n\nWell, in my last response before the thread died, I complained that\nlibpq_compression=gzip:compress=off was confusing, and I stand by\nthat, because \"compress\" is used both in the name of the parameter and\nas an option within the value of that parameter. I think there's more\nthan one acceptable way to resolve that problem, but I think leaving\nit like that is unacceptable.\n\nEven more broadly, I think there have been a couple of versions of\nthis patch now where I read the documentation and couldn't understand\nhow the feature was supposed to work, and I'm not really willing to\nspend time trying to review a complex patch for conformity with a\ndesign that I can't understand in the first place. 
I don't want to\npretend like I'm the smartest person on this mailing list, and in fact\nI know that I'm definitely not, but I think I'm smart enough and\nexperienced enough with PostgreSQL that if I look at the description\nof a parameter and say \"I don't understand how the heck this is\nsupposed to work\", probably a lot of users are going to have the same\nreaction. That lack of understanding on my part my come either from\nthe explanation of the parameter not being as good as it needs to be,\nor from the design itself not being as good as it needs to be, or from\nsome combination of the two, but whichever is the case, IMHO you or\nsomebody else has got to figure out how to fix it.\n\nI do also admit that there is a possibility that everything is totally\nfine and I've just been kinda dumb on the days when I've looked at the\npatch. If a chorus of other hackers shows up and gives me a few whacks\nwith the cluestick and after that I look at the proposed options and\ngo \"oh, yeah, these totally make sense, I was just being stupid,\" fair\nenough! But right now that's not where I'm at. I don't want you to\nexplain to me how it works; I want you to change it in some way so\nthat when I or some end user looks at it, they go \"I don't need an\nexplanation of how that works because it's extremely clear to me\nalready,\" or at least \"hmm, this is a bit complicated but after a\nquick glance at the documentation it makes sense\".\n\nI would really like to see this patch go forward, but IMHO these UI\nquestions are blockers.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2024 14:35:20 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Tue, May 14, 2024 at 1:35 PM Robert Haas <[email protected]> wrote:\n>\n> Well, in my last response before the thread died, I complained that\n> libpq_compression=gzip:compress=off was confusing, and I stand by\n> that, because \"compress\" is used both in the name of the parameter and\n> as an option within the value of that parameter. I think there's more\n> than one acceptable way to resolve that problem, but I think leaving\n> it like that is unacceptable.\n\nWhat if we went with:\nServer side:\n* `libpq_compression=on` (I just want everything the server supports\navailable; probably the most common case)\n* `libpq_compression=off` (I don't want any compression ever with this server)\n* `libpq_compression=lzma;gzip` (I only want these algorithms for\nwhatever reason)\n* `libpq_compression=lzma:client_to_server=off;gzip:server_to_client=off`\n(I only want to send data with lzma and receive data with gzip)\nClient side:\n*`compression=on` (I just want compression; pick sane defaults\nautomatically for me; probably the most common case)\n* `compression=off` (I don't want any compression)\n* `compression=lzma;gzip` (I only want these algorithms for whatever reason)\n* `compression=lzma:client_to_server=off;gzip:server_to_client=off` (I\nonly want to receive data with lzma and send data with gzip)\n\n`client_to_server`/`server_to_client` is a bit verbose, but it's very\nexplicit in what it means, so you don't need to reason about who is\nsending/receiving/etc in a given context, and a given config string\napplied to the server or the client side has the same effect on the\nconnection.\n\n> Even more broadly, I think there have been a couple of versions of\n> this patch now where I read the documentation and couldn't understand\n> how the feature was supposed to work, and I'm not really willing to\n> spend time trying to review a complex patch for conformity with a\n> design that I can't understand in the first place. I don't want to\n> pretend like I'm the smartest person on this mailing list, and in fact\n> I know that I'm definitely not, but I think I'm smart enough and\n> experienced enough with PostgreSQL that if I look at the description\n> of a parameter and say \"I don't understand how the heck this is\n> supposed to work\", probably a lot of users are going to have the same\n> reaction. That lack of understanding on my part my come either from\n> the explanation of the parameter not being as good as it needs to be,\n> or from the design itself not being as good as it needs to be, or from\n> some combination of the two, but whichever is the case, IMHO you or\n> somebody else has got to figure out how to fix it.\n\nIf the above proposal seems better to you I'll both rework the patch\nand then also try to rewrite the relevant bits of documentation to\nseparate out \"what knobs are there\" and \"how do I specify the flags to\nturn the knobs\", because I think those two being integrated is making\nthe parameter documentation less readable/followable.\n\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Tue, 14 May 2024 14:22:01 -0500",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Tue, May 14, 2024 at 3:22 PM Jacob Burroughs\n<[email protected]> wrote:\n> What if we went with:\n> Server side:\n> * `libpq_compression=on` (I just want everything the server supports\n> available; probably the most common case)\n> * `libpq_compression=off` (I don't want any compression ever with this server)\n> * `libpq_compression=lzma;gzip` (I only want these algorithms for\n> whatever reason)\n> * `libpq_compression=lzma:client_to_server=off;gzip:server_to_client=off`\n> (I only want to send data with lzma and receive data with gzip)\n> Client side:\n> *`compression=on` (I just want compression; pick sane defaults\n> automatically for me; probably the most common case)\n> * `compression=off` (I don't want any compression)\n> * `compression=lzma;gzip` (I only want these algorithms for whatever reason)\n> * `compression=lzma:client_to_server=off;gzip:server_to_client=off` (I\n> only want to receive data with lzma and send data with gzip)\n>\n> `client_to_server`/`server_to_client` is a bit verbose, but it's very\n> explicit in what it means, so you don't need to reason about who is\n> sending/receiving/etc in a given context, and a given config string\n> applied to the server or the client side has the same effect on the\n> connection.\n\nIMHO, that's a HUGE improvement. But:\n\n* I would probably change is the name \"libpq_compression\", because\neven though we have src/backend/libpq, we typically use libpq to refer\nto the client library, not the server's implementation of the wire\nprotocol. I think we could call it connection_encryption or\nwire_protocol_encryption or something like that, but I'm not a huge\nfan of libpq_compression.\n\n* I would use commas, not semicolons, to separate items in a list,\ni.e. lzma,gzip not lzma;gzip. I think that convention is nearly\nuniversal in PostgreSQL, but feel free to point out counterexamples if\nyou were modelling this on something.\n\n* libpq_compression=lzma:client_to_server=off;gzip:server_to_client=off\nreads strangely to me. How about making it so that the syntax is like\nthis:\n\nlibpq_compression=DEFAULT_VALUE_FOR_BOTH_DIRECTIONS:client_to_server=OVERRIDE_FOR_THIS_DIRECTION:servert_to_client=OVERRIDE_FOR_THIS_DIRECTION\n\nWith all components being optional. So this example could be written\nin any of these ways:\n\nlibpq_compression=lzma;server_to_client=gzip\nlibpq_compression=gzip;client_to_server=lzma\nlibpq_compression=server_to_client=gzip;client_to_server=lzma\nlibpq_compression=client_to_server=lzma;client_to_server=gzip\n\nAnd if I wrote libpq_compression=server_to_client=gzip that would mean\nsend data to the client using gzip and in the other direction use\nwhatever the default is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2024 16:23:53 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Tue, May 14, 2024 at 3:24 PM Robert Haas <[email protected]> wrote:\n>\n> IMHO, that's a HUGE improvement. But:\n>\n> * I would probably change is the name \"libpq_compression\", because\n> even though we have src/backend/libpq, we typically use libpq to refer\n> to the client library, not the server's implementation of the wire\n> protocol. I think we could call it connection_encryption or\n> wire_protocol_encryption or something like that, but I'm not a huge\n> fan of libpq_compression.\n>\nI think connection_compression would seem like a good name to me.\n\n> * I would use commas, not semicolons, to separate items in a list,\n> i.e. lzma,gzip not lzma;gzip. I think that convention is nearly\n> universal in PostgreSQL, but feel free to point out counterexamples if\n> you were modelling this on something.\n>\n> * libpq_compression=lzma:client_to_server=off;gzip:server_to_client=off\n> reads strangely to me. How about making it so that the syntax is like\n> this:\n>\n> libpq_compression=DEFAULT_VALUE_FOR_BOTH_DIRECTIONS:client_to_server=OVERRIDE_FOR_THIS_DIRECTION:servert_to_client=OVERRIDE_FOR_THIS_DIRECTION\n>\n> With all components being optional. So this example could be written\n> in any of these ways:\n>\n> libpq_compression=lzma;server_to_client=gzip\n> libpq_compression=gzip;client_to_server=lzma\n> libpq_compression=server_to_client=gzip;client_to_server=lzma\n> libpq_compression=client_to_server=lzma;client_to_server=gzip\n>\n> And if I wrote libpq_compression=server_to_client=gzip that would mean\n> send data to the client using gzip and in the other direction use\n> whatever the default is.\n\nThe reason for both the semicolons and for not doing this is related\nto using the same specification structure as here:\nhttps://www.postgresql.org/docs/current/app-pgbasebackup.html\n(specifically the --compress argument). Reusing that specification\nrequires that we use commas to separate the flags for a compression\nmethod, and therefore left me with semicolons as the leftover\nseparator character. I think we could go with something like your\nproposal, and in a lot of ways I like it, but there's still the\npossibility of e.g.\n`libpq_compression=client_to_server=zstd:level=10,long=true,gzip;client_to_server=gzip`\nand I'm not quite sure how to make the separator characters work\ncoherently if we need to treat `zstd:level=10,long=true` as a unit.\nAlternatively, we could have `connection_compression`,\n`connection_compression_server_to_client`, and\n`connection_compression_client_to_server` as three separate GUCs (and\non the client side `compression`, `compression_server_to_client`, and\n`compression_client_to_server` as three separate connection\nparameters), where we would treat `connection_compression` as a\ndefault that could be overridden by an explicit\nclient_to_server/server_to_client. That creates the slightly funky\ncase where if you specify all three then the base one ends up unused\nbecause the two more specific ones are being used instead, but that\nisn't necessarily terrible. On the server side we *could* go with\njust the server_to_client and client_to_server ones, but I think we\nwant it to be easy to use this feature in the simple case with a\nsingle libpq parameter.\n\n--\nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Tue, 14 May 2024 16:21:40 -0500",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: libpq compression (part 3)"
},
{
"msg_contents": "On Tue, May 14, 2024 at 5:21 PM Jacob Burroughs\n<[email protected]> wrote:\n> The reason for both the semicolons and for not doing this is related\n> to using the same specification structure as here:\n> https://www.postgresql.org/docs/current/app-pgbasebackup.html\n> (specifically the --compress argument).\n\nI agree with that goal, but I'm somewhat confused by how your proposal\nachieves it. You had\nlibpq_compression=lzma:client_to_server=off;gzip:server_to_client=off,\nso how do we parse that? Is that two completely separate\nspecifications, one for lzma and one for gzip, and each of those has\none option which is set to off? And then they are separated from each\nother by a semicolon? That actually does make sense, and I think it\nmay do a better job allowing for compression options than my proposal,\nbut it also seems a bit weird, because client_to_server and\nserver_to_client are not really compression options at all. They're\nframing when this compression specification applies, rather than what\nit does when it applies. In a way it's a bit like the fact that you\ncan prefix a pg_basebackup's --compress option with client- or server-\nto specify where the compression should happen. But we can't quite\nreuse that idea here, because in that case there's no question of\ndoing it in both places, whereas here, you might want one thing for\nupstream and another thing for downstream.\n\n> Alternatively, we could have `connection_compression`,\n> `connection_compression_server_to_client`, and\n> `connection_compression_client_to_server` as three separate GUCs (and\n> on the client side `compression`, `compression_server_to_client`, and\n> `compression_client_to_server` as three separate connection\n> parameters), where we would treat `connection_compression` as a\n> default that could be overridden by an explicit\n> client_to_server/server_to_client. That creates the slightly funky\n> case where if you specify all three then the base one ends up unused\n> because the two more specific ones are being used instead, but that\n> isn't necessarily terrible. On the server side we *could* go with\n> just the server_to_client and client_to_server ones, but I think we\n> want it to be easy to use this feature in the simple case with a\n> single libpq parameter.\n\nI'm not a fan of three settings; I could go with two settings, one for\neach direction, and if you want both you have to set both. Or, another\nidea, what if we just separated the two directions with a slash,\nSEND/RECEIVE, and if there's no slash, then it applies to both\ndirections. So you could say\nconnection_compression='gzip:level=9/lzma' or whatever.\n\nBut now I'm wondering whether these options should really be symmetric\non the client and server sides? Isn't it for the server just to\nspecify a list of acceptable algorithms, and the client to set the\ncompression options? If both sides are trying to set the compression\nlevel, for example, who wins?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 09:38:34 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Wed, May 15, 2024 at 8:38 AM Robert Haas <[email protected]> wrote:\n>\n> I agree with that goal, but I'm somewhat confused by how your proposal\n> achieves it. You had\n> libpq_compression=lzma:client_to_server=off;gzip:server_to_client=off,\n> so how do we parse that? Is that two completely separate\n> specifications, one for lzma and one for gzip, and each of those has\n> one option which is set to off? And then they are separated from each\n> other by a semicolon? That actually does make sense, and I think it\n> may do a better job allowing for compression options than my proposal,\n> but it also seems a bit weird, because client_to_server and\n> server_to_client are not really compression options at all. They're\n> framing when this compression specification applies, rather than what\n> it does when it applies. In a way it's a bit like the fact that you\n> can prefix a pg_basebackup's --compress option with client- or server-\n> to specify where the compression should happen. But we can't quite\n> reuse that idea here, because in that case there's no question of\n> doing it in both places, whereas here, you might want one thing for\n> upstream and another thing for downstream.\n\nYour interpretation is correct, but I don't disagree that it ends up\nfeeling confusing.\n\n> I'm not a fan of three settings; I could go with two settings, one for\n> each direction, and if you want both you have to set both. Or, another\n> idea, what if we just separated the two directions with a slash,\n> SEND/RECEIVE, and if there's no slash, then it applies to both\n> directions. So you could say\n> connection_compression='gzip:level=9/lzma' or whatever.\n>\n> But now I'm wondering whether these options should really be symmetric\n> on the client and server sides? Isn't it for the server just to\n> specify a list of acceptable algorithms, and the client to set the\n> compression options? If both sides are trying to set the compression\n> level, for example, who wins?\n\nCompression options really only ever apply to the side doing the\ncompressing, and at least as I had imagined things each party\n(client/server) only used its own level/other compression params.\nThat leaves me thinking, maybe we really want two independent GUCs,\none for \"what algorithms are enabled/negotiable\" and one for \"how\nshould I configure my compressors\" and then we reduce the dimensions\nwe are trying to shove into one GUC and each one ends up with a very\nclear purpose:\nconnection_compression=(yes|no|alg1,alg2:server_to_client=alg1,alg2:client_to_server=alg3)\nconnection_compression_opts=gzip:level=2\n\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Wed, 15 May 2024 11:24:27 -0500",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Wed, May 15, 2024 at 12:24 PM Jacob Burroughs\n<[email protected]> wrote:\n> > But now I'm wondering whether these options should really be symmetric\n> > on the client and server sides? Isn't it for the server just to\n> > specify a list of acceptable algorithms, and the client to set the\n> > compression options? If both sides are trying to set the compression\n> > level, for example, who wins?\n>\n> Compression options really only ever apply to the side doing the\n> compressing, and at least as I had imagined things each party\n> (client/server) only used its own level/other compression params.\n> That leaves me thinking, maybe we really want two independent GUCs,\n> one for \"what algorithms are enabled/negotiable\" and one for \"how\n> should I configure my compressors\" and then we reduce the dimensions\n> we are trying to shove into one GUC and each one ends up with a very\n> clear purpose:\n> connection_compression=(yes|no|alg1,alg2:server_to_client=alg1,alg2:client_to_server=alg3)\n> connection_compression_opts=gzip:level=2\n\n From my point of view, it's the client who knows what it wants to do\nwith the connection. If the client plans to read a lot of data, it\nmight want the server to compress that data, especially if it knows\nthat it's on a slow link. If the client plans to send a lot of data --\nbasically COPY, I'm not thinking this is going to matter much\notherwise -- then it might want to compress that data before sending\nit, again, especially if it knows that it's on a slow link.\n\nBut what does the server know, really? If some client connects and\nsends a SELECT query, the server can't guess whether that query is\ngoing to return 1 row or 100 million rows, so it has no idea of\nwhether compression is likely to make sense or not. It is entitled to\ndecide, as a matter of policy, that it's not willing to perform\ncompression, either because of CPU consumption or security concerns or\nwhatever, but it has no knowledge of what the purpose of this\nparticular connection is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 12:31:29 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Wed, May 15, 2024 at 11:31 AM Robert Haas <[email protected]> wrote:\n> From my point of view, it's the client who knows what it wants to do\n> with the connection. If the client plans to read a lot of data, it\n> might want the server to compress that data, especially if it knows\n> that it's on a slow link. If the client plans to send a lot of data --\n> basically COPY, I'm not thinking this is going to matter much\n> otherwise -- then it might want to compress that data before sending\n> it, again, especially if it knows that it's on a slow link.\n>\n> But what does the server know, really? If some client connects and\n> sends a SELECT query, the server can't guess whether that query is\n> going to return 1 row or 100 million rows, so it has no idea of\n> whether compression is likely to make sense or not. It is entitled to\n> decide, as a matter of policy, that it's not willing to perform\n> compression, either because of CPU consumption or security concerns or\n> whatever, but it has no knowledge of what the purpose of this\n> particular connection is.\n\nI think I would agree with that. That said, I don't think the client\nshould be in the business of specifying what configuration of the\ncompression algorithm the server should use. The server administrator\n(or really most of the time, the compression library developer's\ndefaults) gets to pick the compression/compute tradeoff for\ncompression that runs on the server (which I would imagine would be\nthe vast majority of it), and the client gets to pick those same\nparameters for any compression that runs on the client machine\n(probably indeed in practice only for large COPYs). The *algorithm*\nneeds to actually be communicated/negotiated since different\nclient/server pairs may be built with support for different\ncompression libraries, but I think it is reasonable to say that the\nside that actually has to do the computationally expensive part owns\nthe configuration of that part too. Maybe I'm missing a good reason\nthat we want to allow clients to choose compression levels for the\nserver though?\n\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Wed, 15 May 2024 11:50:40 -0500",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Wed, May 15, 2024 at 12:50 PM Jacob Burroughs\n<[email protected]> wrote:\n> I think I would agree with that. That said, I don't think the client\n> should be in the business of specifying what configuration of the\n> compression algorithm the server should use. The server administrator\n> (or really most of the time, the compression library developer's\n> defaults) gets to pick the compression/compute tradeoff for\n> compression that runs on the server (which I would imagine would be\n> the vast majority of it), and the client gets to pick those same\n> parameters for any compression that runs on the client machine\n> (probably indeed in practice only for large COPYs). The *algorithm*\n> needs to actually be communicated/negotiated since different\n> client/server pairs may be built with support for different\n> compression libraries, but I think it is reasonable to say that the\n> side that actually has to do the computationally expensive part owns\n> the configuration of that part too. Maybe I'm missing a good reason\n> that we want to allow clients to choose compression levels for the\n> server though?\n\nWell, I mean, I don't really know what the right answer is here, but\nright now I can say pg_dump --compress=gzip to compress the dump with\ngzip, or pg_dump --compress=gzip:9 to compress with gzip level 9. Now,\nsay that instead of compressing the output, I want to compress the\ndata sent to me over the connection. So I figure I should be able to\nsay pg_dump 'compress=gzip' or pg_dump 'compress=gzip:9'. I think you\nwant to let me do the first of those but not the second. But, to turn\nyour question on its head, what would be the reasoning behind such a\nrestriction?\n\nNote also the precedent of pg_basebackup. I can say pg_basebackup\n--compress=server-gzip:9 to ask the server to compress the backup with\ngzip at level 9. In that case, what I request from the server changes\nthe actual output that I get, which is not the case here. Even so, I\ndon't really understand what the justification would be for refusing\nto let the client ask for a specific compression level.\n\nAnd on the flip side, I also don't understand why the server would\nwant to mandate a certain compression level. If compression is very\nexpensive for a certain algorithm when the level is above some\nthreshold X, we could have a GUC to limit the maximum level that the\nclient can request. But, given that the gzip compression level\ndefaults to 6 in every other context, why would the administrator of a\nparticular server want to say, well, the default for my server is 3 or\n9 or whatever?\n\n(This is of course all presuming you want to use gzip at all, which\nyou probably don't, because gzip is crazy slow. Use lz4 or zstd! But\nit makes the point.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 May 2024 09:28:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
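The server-side cap on requested compression levels floated above could look roughly like the sketch below (the GUC name and helper are hypothetical; the point is only that a too-high client request gets clamped rather than rejected):

```c
/*
 * Illustrative only: clamp a client-requested compression level to a
 * hypothetical server GUC (say, max_compression_level), falling back to the
 * algorithm's default when the client did not ask for anything specific.
 */
static int
clamp_compression_level(int requested, int server_max, int algorithm_default)
{
    if (requested <= 0)
        return algorithm_default;
    if (requested > server_max)
        return server_max;      /* silently downgrade the request */
    return requested;
}
```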
{
"msg_contents": "On Thu, May 16, 2024 at 3:28 AM Robert Haas <[email protected]> wrote:\n>\n> Well, I mean, I don't really know what the right answer is here, but\n> right now I can say pg_dump --compress=gzip to compress the dump with\n> gzip, or pg_dump --compress=gzip:9 to compress with gzip level 9. Now,\n> say that instead of compressing the output, I want to compress the\n> data sent to me over the connection. So I figure I should be able to\n> say pg_dump 'compress=gzip' or pg_dump 'compress=gzip:9'. I think you\n> want to let me do the first of those but not the second. But, to turn\n> your question on its head, what would be the reasoning behind such a\n> restriction?\n\nI think I was more thinking that trying to let both parties control\nthe parameter seemed like a recipe for confusion and sadness, and so\nthe choice that felt most natural to me was to let the sender control\nit, but I'm definitely open to changing that the other way around.\n\n> Note also the precedent of pg_basebackup. I can say pg_basebackup\n> --compress=server-gzip:9 to ask the server to compress the backup with\n> gzip at level 9. In that case, what I request from the server changes\n> the actual output that I get, which is not the case here. Even so, I\n> don't really understand what the justification would be for refusing\n> to let the client ask for a specific compression level.\n>\n> And on the flip side, I also don't understand why the server would\n> want to mandate a certain compression level. If compression is very\n> expensive for a certain algorithm when the level is above some\n> threshold X, we could have a GUC to limit the maximum level that the\n> client can request. But, given that the gzip compression level\n> defaults to 6 in every other context, why would the administrator of a\n> particular server want to say, well, the default for my server is 3 or\n> 9 or whatever?\n>\n> (This is of course all presuming you want to use gzip at all, which\n> you probably don't, because gzip is crazy slow. Use lz4 or zstd! But\n> it makes the point.)\n\nNew proposal, predicated on the assumption that if you enable\ncompression you are ok with the client picking whatever level they\nwant. At least with the currently enabled algorithms I don't think\nany of them are so insane that they would knock over a server or\nanything, and in general postgres servers are usually connected to by\nclients that the server admin has some channel to talk to (after all\nthey somehow had to get access to log in to the server in the first\nplace) if they are doing something wasteful, given that a client can\ndo a lot worse things than enable aggressive compression by writing\nbad queries.\n\nOn the server side, we use slash separated sets of options\nconnection_compression=DEFAULT_VALUE_FOR_BOTH_DIRECTIONS/client_to_server=OVERRIDE_FOR_THIS_DIRECTION/server_to_client=OVERRIDE_FOR_THIS_DIRECTION\nwith the values being semicolon separated compression algorithms.\nOn the client side, you can specify\ncompression=<same_specification_as_above>,\nbut on the client side you can actually specify compression options,\nwhich the server will use if provided, and otherwise it will fall back\nto defaults.\n\nIf we think we need to, we could let the server specify defaults for\nserver-side compression. 
My overall thought though is that having an\nexcessive number of knobs increases the surface area for testing and\nbugs while also increasing potential user confusion and that allowing\nconfiguration on *both* sides doesn't seem sufficiently useful to be\nworth adding that complexity.\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Fri, 17 May 2024 10:53:28 -1000",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Fri, May 17, 2024 at 1:53 PM Jacob Burroughs\n<[email protected]> wrote:\n> New proposal, predicated on the assumption that if you enable\n> compression you are ok with the client picking whatever level they\n> want. At least with the currently enabled algorithms I don't think\n> any of them are so insane that they would knock over a server or\n> anything, and in general postgres servers are usually connected to by\n> clients that the server admin has some channel to talk to (after all\n> they somehow had to get access to log in to the server in the first\n> place) if they are doing something wasteful, given that a client can\n> do a lot worse things than enable aggressive compression by writing\n> bad queries.\n\nWe're talking about a transport-level option, though -- I thought the\nproposal enabled compression before authentication completed? Or has\nthat changed?\n\n(I'm suspicious of arguments that begin \"well you can already do bad\nthings\", anyway... It seems like there's a meaningful difference\nbetween consuming resources running a parsed query and consuming\nresources trying to figure out what the parsed query is. I don't know\nif the solution is locking in a compression level, or something else;\nmaybe they're both reasonably mitigated in the same way. I haven't\nreally looked into zip bombs much.)\n\n--Jacob\n\n\n",
"msg_date": "Fri, 17 May 2024 14:10:36 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Fri, May 17, 2024 at 4:53 PM Jacob Burroughs\n<[email protected]> wrote:\n> I think I was more thinking that trying to let both parties control\n> the parameter seemed like a recipe for confusion and sadness, and so\n> the choice that felt most natural to me was to let the sender control\n> it, but I'm definitely open to changing that the other way around.\n\nTo be clear, I am not arguing that it should be the receiver's choice.\nI'm arguing it should be the client's choice, which means the client\ndecides what it sends and also tells the server what to send to it.\nI'm open to counter-arguments, but as I've thought about this more,\nI've come to the conclusion that letting the client control the\nbehavior is the most likely to be useful and the most consistent with\nexisting facilities. I think we're on the same page based on the rest\nof your email: I'm just clarifying.\n\n> On the server side, we use slash separated sets of options\n> connection_compression=DEFAULT_VALUE_FOR_BOTH_DIRECTIONS/client_to_server=OVERRIDE_FOR_THIS_DIRECTION/server_to_client=OVERRIDE_FOR_THIS_DIRECTION\n> with the values being semicolon separated compression algorithms.\n> On the client side, you can specify\n> compression=<same_specification_as_above>,\n> but on the client side you can actually specify compression options,\n> which the server will use if provided, and otherwise it will fall back\n> to defaults.\n\nI have some quibbles with the syntax but I agree with the concept.\nWhat I'd probably do is separate the server side thing into two GUCs,\neach with a list of algorithms, comma-separated, like we do for other\nlists in postgresql.conf. Maybe make the default 'all' meaning\n\"everything this build of the server supports\". On the client side,\nI'd allow both things to be specified using a single option, because\nwanting to do the same thing in both directions will be common, and\nyou actually have to type in connection strings sometimes, so\nverbosity matters more.\n\nAs far as the format of the value for that keyword, what do you think\nabout either compression=DO_THIS_BOTH_WAYS or\ncompression=DO_THIS_WHEN_SENDING/DO_THIS_WHEN_RECEIVING, with each \"do\nthis\" being a specification of the same form already accepted for\nserver-side compression e.g. gzip or gzip:level=9? If you don't like\nthat, why do you think the proposal you made above is better, and why\nis that one now punctuated with slashes instead of semicolons?\n\n> If we think we need to, we could let the server specify defaults for\n> server-side compression. My overall thought though is that having an\n> excessive number of knobs increases the surface area for testing and\n> bugs while also increasing potential user confusion and that allowing\n> configuration on *both* sides doesn't seem sufficiently useful to be\n> worth adding that complexity.\n\nI agree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2024 17:40:44 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
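As a rough illustration of the send/receive syntax proposed above, a toy parser might split a value such as "gzip:level=9/lzma" on the slash (a minimal sketch, assuming that syntax; the function and buffer handling are illustrative, not taken from the patch):

```c
#include <stdio.h>
#include <string.h>

/*
 * Toy illustration of the proposed client option:
 *   compression=SPEC          -> same spec for both directions
 *   compression=SEND/RECEIVE  -> different spec per direction
 * where each SPEC looks like an existing compression specification,
 * e.g. "gzip:level=9".
 */
static void
split_compression_option(const char *value, char *send_spec,
                         char *recv_spec, size_t buflen)
{
    const char *slash = strchr(value, '/');

    if (slash == NULL)
    {
        /* One spec applies to both directions. */
        snprintf(send_spec, buflen, "%s", value);
        snprintf(recv_spec, buflen, "%s", value);
    }
    else
    {
        snprintf(send_spec, buflen, "%.*s", (int) (slash - value), value);
        snprintf(recv_spec, buflen, "%s", slash + 1);
    }
}

int
main(void)
{
    char snd[64], rcv[64];

    split_compression_option("gzip:level=9/lzma", snd, rcv, sizeof(snd));
    printf("send with: %s, receive with: %s\n", snd, rcv);
    return 0;
}
```

Under that reading, "gzip:level=9/lzma" would compress outgoing data with gzip at level 9 and ask the peer to send lzma, while a plain "gzip" applies to both directions.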
{
"msg_contents": "On Fri, 17 May 2024 at 23:40, Robert Haas <[email protected]> wrote:\n> To be clear, I am not arguing that it should be the receiver's choice.\n> I'm arguing it should be the client's choice, which means the client\n> decides what it sends and also tells the server what to send to it.\n> I'm open to counter-arguments, but as I've thought about this more,\n> I've come to the conclusion that letting the client control the\n> behavior is the most likely to be useful and the most consistent with\n> existing facilities. I think we're on the same page based on the rest\n> of your email: I'm just clarifying.\n\n+1\n\n> I have some quibbles with the syntax but I agree with the concept.\n> What I'd probably do is separate the server side thing into two GUCs,\n> each with a list of algorithms, comma-separated, like we do for other\n> lists in postgresql.conf. Maybe make the default 'all' meaning\n> \"everything this build of the server supports\". On the client side,\n> I'd allow both things to be specified using a single option, because\n> wanting to do the same thing in both directions will be common, and\n> you actually have to type in connection strings sometimes, so\n> verbosity matters more.\n>\n> As far as the format of the value for that keyword, what do you think\n> about either compression=DO_THIS_BOTH_WAYS or\n> compression=DO_THIS_WHEN_SENDING/DO_THIS_WHEN_RECEIVING, with each \"do\n> this\" being a specification of the same form already accepted for\n> server-side compression e.g. gzip or gzip:level=9? If you don't like\n> that, why do you think the proposal you made above is better, and why\n> is that one now punctuated with slashes instead of semicolons?\n\n+1\n\n\n",
"msg_date": "Sat, 18 May 2024 00:54:03 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Fri, 17 May 2024 at 23:10, Jacob Champion\n<[email protected]> wrote:\n> We're talking about a transport-level option, though -- I thought the\n> proposal enabled compression before authentication completed? Or has\n> that changed?\n\nI think it would make sense to only compress messages after\nauthentication has completed. The gain of compressing authentication\nrelated packets seems pretty limited.\n\n> (I'm suspicious of arguments that begin \"well you can already do bad\n> things\"\n\nOnce logged in it's really easy to max out a core of the backend\nyou're connected as. There's many trivial queries you can use to do\nthat. An example would be:\nSELECT sum(i) from generate_series(1, 1000000000) i;\n\nSo I don't think it makes sense to worry about an attacker using a\nhigh compression level as a means to DoS the server. Sending a few of\nthe above queries seems much easier.\n\n\n",
"msg_date": "Sat, 18 May 2024 01:02:59 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Fri, May 17, 2024 at 11:40 AM Robert Haas <[email protected]> wrote:\n>\n> To be clear, I am not arguing that it should be the receiver's choice.\n> I'm arguing it should be the client's choice, which means the client\n> decides what it sends and also tells the server what to send to it.\n> I'm open to counter-arguments, but as I've thought about this more,\n> I've come to the conclusion that letting the client control the\n> behavior is the most likely to be useful and the most consistent with\n> existing facilities. I think we're on the same page based on the rest\n> of your email: I'm just clarifying.\n\nThis is what I am imagining too\n\n> I have some quibbles with the syntax but I agree with the concept.\n> What I'd probably do is separate the server side thing into two GUCs,\n> each with a list of algorithms, comma-separated, like we do for other\n> lists in postgresql.conf. Maybe make the default 'all' meaning\n> \"everything this build of the server supports\". On the client side,\n> I'd allow both things to be specified using a single option, because\n> wanting to do the same thing in both directions will be common, and\n> you actually have to type in connection strings sometimes, so\n> verbosity matters more.\n>\n> As far as the format of the value for that keyword, what do you think\n> about either compression=DO_THIS_BOTH_WAYS or\n> compression=DO_THIS_WHEN_SENDING/DO_THIS_WHEN_RECEIVING, with each \"do\n> this\" being a specification of the same form already accepted for\n> server-side compression e.g. gzip or gzip:level=9? If you don't like\n> that, why do you think the proposal you made above is better, and why\n> is that one now punctuated with slashes instead of semicolons?\n\nI like this more than what I proposed, and will update the patches to\nreflect this proposal. (I've gotten them locally back into a state of\napplying cleanly and dealing with the changes needed to support direct\nSSL connections, so refactoring the protocol layer shouldn't be too\nhard now.)\n\nOn Fri, May 17, 2024 at 11:10 AM Jacob Champion\n<[email protected]> wrote:\n> We're talking about a transport-level option, though -- I thought the\n> proposal enabled compression before authentication completed? Or has\n> that changed?\nOn Fri, May 17, 2024 at 1:03 PM Jelte Fennema-Nio <[email protected]> wrote:\n> I think it would make sense to only compress messages after\n> authentication has completed. The gain of compressing authentication\n> related packets seems pretty limited.\n\nAt the protocol level, compressed data is a message type that can be\nused to wrap arbitrary data as soon as the startup packet is\nprocessed. However, as an implementation detail that clients should\nnot rely on but that we can rely on in thinking about the\nimplications, the only message types that are compressed (except in\nthe 0005 CI patch for test running only) are PqMsg_CopyData,\nPqMsg_DataRow, and PqMsg_Query, all of which aren't sent before\nauthentication.\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Fri, 17 May 2024 19:18:00 -1000",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
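The "only certain message types are compressed" behaviour described above amounts to a whitelist check along these lines (a sketch, not the patch's actual code; the one-byte type codes come from the documented wire protocol):

```c
#include <stdbool.h>

/*
 * Sketch: only a small whitelist of message types is ever wrapped in
 * compressed framing.  'd' = CopyData, 'D' = DataRow, 'Q' = Query.
 */
static bool
message_type_is_compressible(char msgtype)
{
    switch (msgtype)
    {
        case 'd':               /* CopyData */
        case 'D':               /* DataRow */
        case 'Q':               /* Query */
            return true;
        default:
            return false;       /* startup, auth and everything else stay uncompressed */
    }
}
```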
{
"msg_contents": "On Fri, May 17, 2024 at 4:03 PM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Fri, 17 May 2024 at 23:10, Jacob Champion\n> <[email protected]> wrote:\n> > We're talking about a transport-level option, though -- I thought the\n> > proposal enabled compression before authentication completed? Or has\n> > that changed?\n>\n> I think it would make sense to only compress messages after\n> authentication has completed. The gain of compressing authentication\n> related packets seems pretty limited.\n\nOkay. But if we're relying on that for its security properties, it\nneeds to be enforced by the server.\n\n> > (I'm suspicious of arguments that begin \"well you can already do bad\n> > things\"\n>\n> Once logged in it's really easy to max out a core of the backend\n> you're connected as. There's many trivial queries you can use to do\n> that. An example would be:\n> SELECT sum(i) from generate_series(1, 1000000000) i;\n\nThis is just restating the \"you can already do bad things\" argument. I\nunderstand that if your query gets executed, it's going to consume\nresources on the thing that's executing it (for the record, though,\nthere are people working on constraining that). But introducing\ndisproportionate resource consumption into all traffic-inspecting\nsoftware, like pools and bouncers, seems like a different thing to me.\nMany use cases are going to be fine with it, of course, but I don't\nthink it should be hand-waved.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 20 May 2024 07:14:50 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Sat, May 18, 2024 at 1:18 AM Jacob Burroughs\n<[email protected]> wrote:\n> I like this more than what I proposed, and will update the patches to\n> reflect this proposal. (I've gotten them locally back into a state of\n> applying cleanly and dealing with the changes needed to support direct\n> SSL connections, so refactoring the protocol layer shouldn't be too\n> hard now.)\n\nSounds good!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 May 2024 11:17:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Mon, May 20, 2024 at 10:15 AM Jacob Champion\n<[email protected]> wrote:\n> This is just restating the \"you can already do bad things\" argument. I\n> understand that if your query gets executed, it's going to consume\n> resources on the thing that's executing it (for the record, though,\n> there are people working on constraining that). But introducing\n> disproportionate resource consumption into all traffic-inspecting\n> software, like pools and bouncers, seems like a different thing to me.\n> Many use cases are going to be fine with it, of course, but I don't\n> think it should be hand-waved.\n\nI can't follow this argument.\n\nI think it's important that the startup message is always sent\nuncompressed, because it's a strange exception to our usual\nmessage-formatting rules, and because it's so security-critical. I\ndon't think we should do anything to allow more variation there,\nbecause any benefit will be small and the chances of introducing\nsecurity vulnerabilities seems non-trivial.\n\nBut if the client says in the startup message that it would like to\nsend and receive compressed data and the server is happy with that\nrequest, I don't see why we need to postpone implementing that request\nuntil after the authentication exchange is completed. I think that\nwill make the code more complicated and I don't see a security\nbenefit. If the use of a particular compression algorithm is going to\nimpose too much load, the server, or the pooler, is free to refuse it,\nand should. Deferring the use of the compression method until after\nauthentication doesn't really solve any problem here, at least not\nthat I can see.\n\nIt does occur to me that if some compression algorithm has a buffer\noverrun bug, restricting its use until after authentication might\nreduce the score of the resulting CVE, because now you have to be able\nto authenticate to make an exploit work. Perhaps that's an argument\nfor imposing a restriction here, but it doesn't seem to be the\nargument that you're making.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 May 2024 11:29:07 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Mon, May 20, 2024 at 8:29 AM Robert Haas <[email protected]> wrote:\n> It does occur to me that if some compression algorithm has a buffer\n> overrun bug, restricting its use until after authentication might\n> reduce the score of the resulting CVE, because now you have to be able\n> to authenticate to make an exploit work. Perhaps that's an argument\n> for imposing a restriction here, but it doesn't seem to be the\n> argument that you're making.\n\nIt wasn't my argument; Jacob B said above:\n\n> in general postgres servers are usually connected to by\n> clients that the server admin has some channel to talk to (after all\n> they somehow had to get access to log in to the server in the first\n> place) if they are doing something wasteful, given that a client can\n> do a lot worse things than enable aggressive compression by writing\n> bad queries.\n\n...and my response was that, no, the proposal doesn't seem to be\nrequiring that authentication take place before compression is done.\n(As evidenced by your email. :D) If the claim is that there are no\nsecurity problems with letting unauthenticated clients force\ndecompression, then I can try to poke holes in that; or if the claim\nis that we don't need to worry about that at all because we'll wait\nuntil after authentication, then I can poke holes in that too. My\nrequest is just that we choose one.\n\n> But if the client says in the startup message that it would like to\n> send and receive compressed data and the server is happy with that\n> request, I don't see why we need to postpone implementing that request\n> until after the authentication exchange is completed. I think that\n> will make the code more complicated and I don't see a security\n> benefit.\n\nI haven't implemented compression bombs before to know lots of\ndetails, but I think the general idea is to take up resources that are\nvastly disproportionate to the effort expended by the client. The\nsystemic risk is then more or less multiplied by the number of\nintermediaries that need to do the decompression. Maybe all three of\nour algorithms are hardened against malicious compression techniques;\nthat'd be great. But if we've never had a situation where a completely\nuntrusted peer can hand a blob to the server and say \"here, decompress\nthis for me\", maybe we'd better check?\n\n--Jacob\n\n\n",
"msg_date": "Mon, 20 May 2024 09:49:46 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Mon, May 20, 2024 at 12:49 PM Jacob Champion\n<[email protected]> wrote:\n> ...and my response was that, no, the proposal doesn't seem to be\n> requiring that authentication take place before compression is done.\n> (As evidenced by your email. :D) If the claim is that there are no\n> security problems with letting unauthenticated clients force\n> decompression, then I can try to poke holes in that;\n\nI would prefer this approach, so I suggest trying to poke holes here\nfirst. If you find big enough holes then...\n\n> or if the claim\n> is that we don't need to worry about that at all because we'll wait\n> until after authentication, then I can poke holes in that too. My\n> request is just that we choose one.\n\n...we can fall back to this and you can try to poke holes here.\n\nI really hope that you can't poke big enough holes to kill the feature\nentirely, though. Because that sounds sad.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 May 2024 13:01:38 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Mon, May 20, 2024 at 10:01 AM Robert Haas <[email protected]> wrote:\n> I really hope that you can't poke big enough holes to kill the feature\n> entirely, though. Because that sounds sad.\n\nEven if there are holes, I don't think the situation's going to be bad\nenough to tank everything; otherwise no one would be able to use\ndecompression on the Internet. :D And I expect the authors of the\nnewer compression methods to have thought about these things [1].\n\nI hesitate to ask as part of the same email, but what were the plans\nfor compression in combination with transport encryption? (Especially\nif you plan to compress the authentication exchange, since mixing your\nLDAP password into the compression context seems like it might be a\nbad idea if you don't want to leak it.)\n\n--Jacob\n\n[1] https://datatracker.ietf.org/doc/html/rfc8878#name-security-considerations\n\n\n",
"msg_date": "Mon, 20 May 2024 10:22:55 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Mon, May 20, 2024 at 1:23 PM Jacob Champion\n<[email protected]> wrote:\n> On Mon, May 20, 2024 at 10:01 AM Robert Haas <[email protected]> wrote:\n> > I really hope that you can't poke big enough holes to kill the feature\n> > entirely, though. Because that sounds sad.\n>\n> Even if there are holes, I don't think the situation's going to be bad\n> enough to tank everything; otherwise no one would be able to use\n> decompression on the Internet. :D And I expect the authors of the\n> newer compression methods to have thought about these things [1].\n>\n> I hesitate to ask as part of the same email, but what were the plans\n> for compression in combination with transport encryption? (Especially\n> if you plan to compress the authentication exchange, since mixing your\n> LDAP password into the compression context seems like it might be a\n> bad idea if you don't want to leak it.)\n\nSo, the data would be compressed first, with framing around that, and\nthen transport encryption would happen afterwards. I don't see how\nthat would leak your password, but I have a feeling that might be a\nsign that I'm about to learn some unpleasant truths.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 May 2024 13:48:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "\n\n> On 20 May 2024, at 22:48, Robert Haas <[email protected]> wrote:\n> \n> On Mon, May 20, 2024 at 1:23 PM Jacob Champion\n> <[email protected]> wrote:\n>> On Mon, May 20, 2024 at 10:01 AM Robert Haas <[email protected]> wrote:\n>>> I really hope that you can't poke big enough holes to kill the feature\n>>> entirely, though. Because that sounds sad.\n>> \n>> Even if there are holes, I don't think the situation's going to be bad\n>> enough to tank everything; otherwise no one would be able to use\n>> decompression on the Internet. :D And I expect the authors of the\n>> newer compression methods to have thought about these things [1].\n>> \n>> I hesitate to ask as part of the same email, but what were the plans\n>> for compression in combination with transport encryption? (Especially\n>> if you plan to compress the authentication exchange, since mixing your\n>> LDAP password into the compression context seems like it might be a\n>> bad idea if you don't want to leak it.)\n> \n> So, the data would be compressed first, with framing around that, and\n> then transport encryption would happen afterwards. I don't see how\n> that would leak your password, but I have a feeling that might be a\n> sign that I'm about to learn some unpleasant truths.\n\nCompression defeats encryption. That's why it's not in TLS anymore.\nThe thing is compression codecs use data self correlation. And if you mix secret data with user's data, user might guess how correlated they are.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 20 May 2024 23:05:22 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Mon, May 20, 2024 at 2:05 PM Andrey M. Borodin <[email protected]> wrote:\n> Compression defeats encryption. That's why it's not in TLS anymore.\n> The thing is compression codecs use data self correlation. And if you mix secret data with user's data, user might guess how correlated they are.\n\nYeah, I'm aware that there are some problems like this. For example,\nsuppose the bad guy can both supply some of the data sent over the\nconnection (e.g. by typing search queries into a web page) and also\nobserve the traffic between the web application and the database. Then\nthey could supply data and try to guess how correlated that is with\nother data sent over the same connection. But if that's a practical\nattack, preventing compression prior to the authentication exchange\nprobably isn't good enough: the user could also try to guess what\nqueries are being sent on behalf of other users through the same\npooled connection, or they could try to use the bits of the query that\nthey can control to guess what the other bits of the query that they\ncan't see look like.\n\nBut, does this mean that we should just refuse to offer compression as\na feature? This kind of attack isn't a threat in every environment,\nand in some environments, compression could be pretty useful. For\ninstance, you might need to pull down a lot of data from the database\nover a slow connection. Perhaps you're the only user of the database,\nand you wrote all of the queries yourself in a locked vault, accepting\nno untrusted inputs. In that case, these kinds of attacks aren't\npossible, or at least I don't see how, but you might want both\ncompression and encryption. I guess I don't understand why TLS removed\nsupport for encryption entirely instead of disclaiming its use in some\nappropriate way.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 May 2024 14:37:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "\n\n\n> On 20 May 2024, at 23:37, Robert Haas <[email protected]> wrote:\n> \n> But if that's a practical\n> attack, preventing compression prior to the authentication exchange\n> probably isn't good enough: the user could also try to guess what\n> queries are being sent on behalf of other users through the same\n> pooled connection, or they could try to use the bits of the query that\n> they can control to guess what the other bits of the query that they\n> can't see look like.\n\nAll these attacks can be practically exploited in a controlled environment.\nThat's why previous incarnation of this patchset [0] contained a way to reset compression context. And Odyssey AFAIR did it (Dan, coauthor of that patch, implemented the compression in Odyssey).\nBut attacking authentication is much more straightforward and viable.\n\n> On 20 May 2024, at 23:37, Robert Haas <[email protected]> wrote:\n> \n> But, does this mean that we should just refuse to offer compression as\n> a feature?\n\nNo, absolutely, we need the feature.\n\n> I guess I don't understand why TLS removed\n> support for encryption entirely instead of disclaiming its use in some\n> appropriate way.\n\nI think, the scope of TLS is too broad. HTTPS in turn has a compression. But AFAIK it never compress headers.\nIMO we should try to avoid compressing authentication information.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/38/3499/\n\n",
"msg_date": "Tue, 21 May 2024 00:09:42 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
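The "reset the compression context" mechanism mentioned above can be pictured with zlib: a full flush at a message boundary forgets the history window, so later attacker-visible data cannot back-reference earlier secret bytes. A minimal sketch, with error handling omitted and the function name invented for illustration:

```c
#include <zlib.h>

/*
 * Flush and reset the deflate dictionary at a message boundary, so the next
 * message cannot reuse matches against this one's contents.
 */
static void
send_with_context_reset(z_stream *strm, const unsigned char *msg, size_t len,
                        unsigned char *out, size_t outlen)
{
    strm->next_in = (unsigned char *) msg;
    strm->avail_in = (uInt) len;
    strm->next_out = out;
    strm->avail_out = (uInt) outlen;

    /* Z_FULL_FLUSH: emit all pending output and reset the history window. */
    deflate(strm, Z_FULL_FLUSH);
}
```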
{
"msg_contents": "On Mon, May 20, 2024 at 11:05 AM Andrey M. Borodin <[email protected]> wrote:\n> > So, the data would be compressed first, with framing around that, and\n> > then transport encryption would happen afterwards. I don't see how\n> > that would leak your password, but I have a feeling that might be a\n> > sign that I'm about to learn some unpleasant truths.\n>\n> Compression defeats encryption. That's why it's not in TLS anymore.\n> The thing is compression codecs use data self correlation. And if you mix secret data with user's data, user might guess how correlated they are.\n\nI'm slow on the draw, but I hacked up a sample client to generate\ntraffic against the compression-enabled server, to try to illustrate.\n\nIf my client sends an LDAP password of \"hello\", followed by the query\n`SELECT 'world'`, as part of the same gzip stream, I get two encrypted\npackets on the wire: lengths 42 and 49 bytes. If the client instead\nsends the query `SELECT 'hello'`, I get lengths 42 and 46. We lost\nthree bytes, and there's only been one packet on the stream before the\nquery; if the observer controlled the query, it's pretty obvious that\nthe self-similarity has to have come from the PasswordMessage. Rinse\nand repeat.\n\nThat doesn't cover the case where the password itself is low-entropy,\neither. \"hellohellohellohello\" at least has length, but once you\ncompress it that collapses. So an attacker can passively monitor for\nshorter password packets and know which user to target first.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 20 May 2024 12:39:49 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
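A standalone way to reproduce the effect measured above is to feed a "secret" and then an attacker-chosen query through one deflate stream and compare sizes (a sketch using zlib directly rather than the patched server; absolute numbers will differ from the on-the-wire lengths, but the gap between a right and a wrong guess is the point):

```c
#include <stdio.h>
#include <string.h>
#include <zlib.h>

/* Compress two "messages" in one stream and report the total output size. */
static size_t
compressed_size(const char *secret, const char *query)
{
    unsigned char out[512];
    z_stream strm = {0};
    size_t produced;

    deflateInit(&strm, Z_DEFAULT_COMPRESSION);
    strm.next_out = out;
    strm.avail_out = (uInt) sizeof(out);

    strm.next_in = (unsigned char *) secret;
    strm.avail_in = (uInt) strlen(secret);
    deflate(&strm, Z_SYNC_FLUSH);           /* first "message" (e.g. a password) */

    strm.next_in = (unsigned char *) query;
    strm.avail_in = (uInt) strlen(query);
    deflate(&strm, Z_FINISH);               /* second "message" (e.g. a query) */

    produced = strm.total_out;
    deflateEnd(&strm);
    return produced;
}

int
main(void)
{
    /* A guess that matches the secret usually compresses a few bytes better. */
    printf("wrong guess: %zu bytes\n", compressed_size("hello", "SELECT 'world'"));
    printf("right guess: %zu bytes\n", compressed_size("hello", "SELECT 'hello'"));
    return 0;
}
```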
{
"msg_contents": "On Mon, May 20, 2024 at 11:37 AM Robert Haas <[email protected]> wrote:\n> But if that's a practical\n> attack, preventing compression prior to the authentication exchange\n> probably isn't good enough\n\nI mean... you said it, not me. I'm trying not to rain on the parade\ntoo much, because compression is clearly very valuable. But it makes\nme really uncomfortable that we're reintroducing the compression\noracle (especially over the authentication exchange, which is\ngenerally more secret than the rest of the traffic).\n\n> But, does this mean that we should just refuse to offer compression as\n> a feature? This kind of attack isn't a threat in every environment,\n> and in some environments, compression could be pretty useful. For\n> instance, you might need to pull down a lot of data from the database\n> over a slow connection. Perhaps you're the only user of the database,\n> and you wrote all of the queries yourself in a locked vault, accepting\n> no untrusted inputs. In that case, these kinds of attacks aren't\n> possible, or at least I don't see how, but you might want both\n> compression and encryption.\n\nRight, I think it's reasonable to let a sufficiently\ndetermined/informed user lift the guardrails, but first we have to\nchoose to put guardrails in place... and then we have to somehow\nsufficiently inform the users when it's okay to lift them.\n\n> I guess I don't understand why TLS removed\n> support for encryption entirely instead of disclaiming its use in some\n> appropriate way.\n\nOne of the IETF conversations was at [1] (there were dissenters on the\nlist, as you might expect). My favorite summary is this one from\nAlyssa Rowan:\n\n> Compression is usually best performed as \"high\" as possible; transport layer is blind to what's being compressed, which is (as we now know) was definitely too low and was in retrospect a mistake.\n>\n> Any application layer protocol needs to know - if compression is supported - to separate compression contexts for attacker-chosen plaintext and attacker-sought unknown secrets. (As others have stated, HTTPbis covers this.)\n\nBut for SQL, where's the dividing line between attacker-chosen and\nattacker-sought? To me, it seems like only the user knows; the server\nhas no clue. I think that puts us \"lower\" in Alyssa's model than HTTP\nis.\n\nAs Andrey points out, there was prior work done that started to take\nthis into account. I haven't reviewed it to see how good it is -- and\nI think there are probably many use cases in which queries and tables\ncontain both private and attacker-controlled information -- but if we\nagree that they have to be separated, then the strategy can at least\nbe improved upon.\n\n--Jacob\n\n[1] https://mailarchive.ietf.org/arch/msg/tls/xhMLf8j4pq8W_ZGXUUU1G_m6r1c/\n\n\n",
"msg_date": "Mon, 20 May 2024 12:42:13 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Mon, May 20, 2024 at 9:09 PM Andrey M. Borodin <[email protected]>\nwrote:\n\n>\n>\n>\n> > On 20 May 2024, at 23:37, Robert Haas <[email protected]> wrote:\n> >\n> > But, does this mean that we should just refuse to offer compression as\n> > a feature?\n>\n> No, absolutely, we need the feature.\n>\n> > I guess I don't understand why TLS removed\n> > support for encryption entirely instead of disclaiming its use in some\n> > appropriate way.\n>\n> I think, the scope of TLS is too broad. HTTPS in turn has a compression.\n> But AFAIK it never compress headers.\n> IMO we should try to avoid compressing authentication information.\n>\n\nThat used to be the case in HTTP/1. But header compression was one of the\nheadline features of HTTP/2, which isn't exactly new anymore. But there's a\nspecial algorithm, HPACK, for it. And then http/3 uses QPACK.\nCloudflare has a pretty decent blog post explaining why and how:\nhttps://blog.cloudflare.com/hpack-the-silent-killer-feature-of-http-2/, or\nrfc7541 for all the details.\n\ntl;dr; is yes, let's be careful not to expose headers to a CRIME-style\nattack. And I doubt our connections has as much to gain by compressing\n\"header style\" fields as http, so we are probably better off just not\ncompressing those parts.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, May 20, 2024 at 9:09 PM Andrey M. Borodin <[email protected]> wrote:\n\n> On 20 May 2024, at 23:37, Robert Haas <[email protected]> wrote:\n> \n> But, does this mean that we should just refuse to offer compression as\n> a feature?\n\nNo, absolutely, we need the feature.\n\n> I guess I don't understand why TLS removed\n> support for encryption entirely instead of disclaiming its use in some\n> appropriate way.\n\nI think, the scope of TLS is too broad. HTTPS in turn has a compression. But AFAIK it never compress headers.\nIMO we should try to avoid compressing authentication information.That used to be the case in HTTP/1. But header compression was one of the headline features of HTTP/2, which isn't exactly new anymore. But there's a special algorithm, HPACK, for it. And then http/3 uses QPACK. Cloudflare has a pretty decent blog post explaining why and how: https://blog.cloudflare.com/hpack-the-silent-killer-feature-of-http-2/, or rfc7541 for all the details.tl;dr; is yes, let's be careful not to expose headers to a CRIME-style attack. And I doubt our connections has as much to gain by compressing \"header style\" fields as http, so we are probably better off just not compressing those parts.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 20 May 2024 22:11:43 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Mon, May 20, 2024 at 4:12 PM Magnus Hagander <[email protected]> wrote:\n> That used to be the case in HTTP/1. But header compression was one of the headline features of HTTP/2, which isn't exactly new anymore. But there's a special algorithm, HPACK, for it. And then http/3 uses QPACK. Cloudflare has a pretty decent blog post explaining why and how: https://blog.cloudflare.com/hpack-the-silent-killer-feature-of-http-2/, or rfc7541 for all the details.\n>\n> tl;dr; is yes, let's be careful not to expose headers to a CRIME-style attack. And I doubt our connections has as much to gain by compressing \"header style\" fields as http, so we are probably better off just not compressing > Work: https://www.redpill-linpro.com/\n\nWhat do you think constitutes a header in the context of the\nPostgreSQL wire protocol?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 May 2024 08:32:56 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Mon, May 20, 2024 at 2:42 PM Jacob Champion\n<[email protected]> wrote:\n>\n> I mean... you said it, not me. I'm trying not to rain on the parade\n> too much, because compression is clearly very valuable. But it makes\n> me really uncomfortable that we're reintroducing the compression\n> oracle (especially over the authentication exchange, which is\n> generally more secret than the rest of the traffic).\n\nAs currently implemented, the compression only applies to\nCopyData/DataRow/Query messages, none of which should be involved in\nauthentication, unless I've really missed something in my\nunderstanding.\n\n> Right, I think it's reasonable to let a sufficiently\n> determined/informed user lift the guardrails, but first we have to\n> choose to put guardrails in place... and then we have to somehow\n> sufficiently inform the users when it's okay to lift them.\n\nMy thought would be that compression should be opt-in on the client\nside, with documentation around the potential security pitfalls. (I\ncould be convinced it should be opt-in on the server side, but overall\nI think opt-in on the client side generally protects against footguns\nwithout excessively getting in the way and if an attacker controls the\nclient, they can just get the information they want directly-they\ndon't need compression sidechannels to get that information.)\n\n> But for SQL, where's the dividing line between attacker-chosen and\n> attacker-sought? To me, it seems like only the user knows; the server\n> has no clue. I think that puts us \"lower\" in Alyssa's model than HTTP\n> is.\n>\n> As Andrey points out, there was prior work done that started to take\n> this into account. I haven't reviewed it to see how good it is -- and\n> I think there are probably many use cases in which queries and tables\n> contain both private and attacker-controlled information -- but if we\n> agree that they have to be separated, then the strategy can at least\n> be improved upon.\n\nWithin SQL-level things, I don't think we can reasonably differentiate\nbetween private and attacker-controlled information at the\nlibpq/server level. We can reasonably differentiate between message\ntypes that *definitely* are private and ones that could have\neither/both data in them, but that's not nearly as useful. I think\nnot compressing auth-related packets plus giving a mechanism to reset\nthe compression stream for clients (plus guidance on the tradeoffs\ninvolved in turning on compression) is about as good as we can get.\nThat said, I *think* the feature is reasonable to be\nreviewed/committed without the reset functionality as long as the\ncompressed data already has the mechanism built in (as it does) to\nsignal when a decompressor should restart its streaming. The actual\nsignaling protocol mechanism/necessary libpq API can happen in\nfollowon work.\n\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Tue, 21 May 2024 10:23:36 -0500",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Mon, 20 May 2024 at 21:42, Jacob Champion\n<[email protected]> wrote:\n> As Andrey points out, there was prior work done that started to take\n> this into account. I haven't reviewed it to see how good it is -- and\n> I think there are probably many use cases in which queries and tables\n> contain both private and attacker-controlled information -- but if we\n> agree that they have to be separated, then the strategy can at least\n> be improved upon.\n\n\nTo help get everyone on the same page I wanted to list all the\nsecurity concerns in one place:\n\n1. Triggering excessive CPU usage before authentication, by asking for\nvery high compression levels\n2. Triggering excessive memory/CPU usage before authentication, by\nsending a client sending a zipbomb\n3. Triggering excessive CPU after authentication, by asking for a very\nhigh compression level\n4. Triggering excessive memory/CPU after authentication due to\nzipbombs (i.e. small amount of data extracting to lots of data)\n5. CRIME style leakage of information about encrypted data\n\n1 & 2 can easily be solved by not allowing any authentication packets\nto be compressed. This also has benefits for 5.\n\n3 & 4 are less of a concern than 1&2 imho. Once authenticated a client\ndeserves some level of trust. But having knobs to limit impact\ndefinitely seems useful.\n\n3 can be solved in two ways afaict:\na. Allow the server to choose the maximum compression level for each\ncompression method (using some GUC), and downgrade the level\ntransparently when a higher level is requested\nb. Don't allow the client to choose the compression level that the server uses.\n\nI'd prefer option a\n\n4 would require some safety limits on the amount of data that a\n(small) compressed message can be decompressed to, and stop\ndecompression of that message once that limit is hit. What that limit\nshould be seems hard to choose though. A few ideas:\na. The size of the message reported by the uncompressed header. This\nwould mean that at most the 4GB will be uncompressed, since maximum\nmessage length is 4GB (limited by 32bit message length field)\nb. Allow servers to specify maximum client decompressed message length\nlower than this 4GB, e.g. messages of more than 100MB of uncompressed\nsize should not be allowed.\n\nI think 5 is the most complicated to deal with, especially as it\ndepends on the actual usage to know what is safe. I believe we should\nlet users have the freedom to make their own security tradeoffs, but\nwe should protect them against some of the most glaring issues\n(especially ones that benefit little from compression anyway). As\nalready shown by Andrey, sending LDAP passwords in a compressed way\nseems extremely dangerous. So I think we should disallow compressing\nany authentication related packets. To reduce similar risks further we\ncan choose to compress only the message types that we expect to\nbenefit most from compression. IMHO those are the following (marked\nwith (B)ackend or (F)rontend to show who sends them):\n- Query (F)\n- Parse (F)\n- Describe (F)\n- Bind (F)\n- RowDescription (B)\n- DataRow (B)\n- CopyData (B/F)\n\nThen I think we should let users choose how they want to compress and\nwhere they want their compression stream to restart. Something like\nthis:\na. compression_restart=query: Restart the stream after every query.\nRecommended if queries across the same connection are triggered by\ndifferent end-users. I think this would be a sane default\nb. 
compression_restart=message: Restart the stream for every message.\nRecommended if the amount of correlation between rows of the same\nquery is a security concern.\nc. compression_restart=manual: Don't restart the stream automatically,\nbut only when the client user calls a specific function. Recommended\nonly if the user can make trade-offs, or if no encryption is used\nanyway.\n\n\n",
"msg_date": "Tue, 21 May 2024 17:42:19 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
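(A minimal sketch of the allowlist idea in the message above, for illustration only: the function name and the direction flag are invented and this is not code from any posted patch; the type bytes are the standard wire-protocol ones for the messages listed.)

#include <stdbool.h>

/*
 * Sketch only: decide whether a protocol message type is worth compressing,
 * following the proposed allowlist.  "is_frontend" disambiguates the type
 * bytes that differ by direction ('D' is Describe from the frontend but
 * DataRow from the backend).
 */
static bool
message_type_is_compressible(char msgtype, bool is_frontend)
{
    switch (msgtype)
    {
        case 'Q':               /* Query (F) */
        case 'P':               /* Parse (F) */
        case 'B':               /* Bind (F) */
            return is_frontend;
        case 'T':               /* RowDescription (B) */
            return !is_frontend;
        case 'D':               /* Describe (F) or DataRow (B) */
        case 'd':               /* CopyData (B/F) */
            return true;
        default:
            /* in particular, never compress authentication-related packets */
            return false;
    }
}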
{
"msg_contents": "On Tue, May 21, 2024 at 10:43 AM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> To help get everyone on the same page I wanted to list all the\n> security concerns in one place:\n>\n> 1. Triggering excessive CPU usage before authentication, by asking for\n> very high compression levels\n> 2. Triggering excessive memory/CPU usage before authentication, by\n> sending a client sending a zipbomb\n> 3. Triggering excessive CPU after authentication, by asking for a very\n> high compression level\n> 4. Triggering excessive memory/CPU after authentication due to\n> zipbombs (i.e. small amount of data extracting to lots of data)\n> 5. CRIME style leakage of information about encrypted data\n>\n> 1 & 2 can easily be solved by not allowing any authentication packets\n> to be compressed. This also has benefits for 5.\n\nThis is already addressed by only compressing certain message types.\nIf we think it is important that the server reject compressed packets\nof other types I can add that, but it seemed reasonable to just make\nthe client never send such packets compressed.\n\n> 3 & 4 are less of a concern than 1&2 imho. Once authenticated a client\n> deserves some level of trust. But having knobs to limit impact\n> definitely seems useful.\n>\n> 3 can be solved in two ways afaict:\n> a. Allow the server to choose the maximum compression level for each\n> compression method (using some GUC), and downgrade the level\n> transparently when a higher level is requested\n> b. Don't allow the client to choose the compression level that the server uses.\n>\n> I'd prefer option a\n\n3a would seem preferable given discussion upthread. It would probably\nbe worth doing some measurement to check how much of an actual\ndifference in compute effort the max vs the default for all 3\nalgorithms adds, because I would really prefer to avoid needing to add\neven more configuration knobs if the max compression level for the\nstreaming data usecase is sufficiently performant.\n\n> 4 would require some safety limits on the amount of data that a\n> (small) compressed message can be decompressed to, and stop\n> decompression of that message once that limit is hit. What that limit\n> should be seems hard to choose though. A few ideas:\n> a. The size of the message reported by the uncompressed header. This\n> would mean that at most the 4GB will be uncompressed, since maximum\n> message length is 4GB (limited by 32bit message length field)\n> b. Allow servers to specify maximum client decompressed message length\n> lower than this 4GB, e.g. messages of more than 100MB of uncompressed\n> size should not be allowed.\n\nBecause we are using streaming decompression, this is much less of an\nissue than for things that decompress wholesale onto disk/into memory.\nWe only read PQ_RECV_BUFFER_SIZE (8k) bytes off the stream at once,\nand when reading a packet we already have a `maxmsglen` that is\nPQ_LARGE_MESSAGE_LIMIT (1gb) already, and \"We abort the connection (by\nreturning EOF) if client tries to send more than that.)\". Therefore,\nwe effectively already have a limit of 1gb that applies to regular\nmessages too, and I think we should rely on this mechanism for\ncompressed data too (if we really think we need to make that number\nconfigurable we probably could, but again the fewer new knobs we need\nto add the better.\n\n\n> I think 5 is the most complicated to deal with, especially as it\n> depends on the actual usage to know what is safe. 
I believe we should\n> let users have the freedom to make their own security tradeoffs, but\n> we should protect them against some of the most glaring issues\n> (especially ones that benefit little from compression anyway). As\n> already shown by Andrey, sending LDAP passwords in a compressed way\n> seems extremely dangerous. So I think we should disallow compressing\n> any authentication related packets. To reduce similar risks further we\n> can choose to compress only the message types that we expect to\n> benefit most from compression. IMHO those are the following (marked\n> with (B)ackend or (F)rontend to show who sends them):\n> - Query (F)\n> - Parse (F)\n> - Describe (F)\n> - Bind (F)\n> - RowDescription (B)\n> - DataRow (B)\n> - CopyData (B/F)\n\nThat seems like a reasonable list (current implementation is just\nCopyData/DataRow/Query, but I really just copied that fairly blindly\nfrom the previous incarnation of this effort.) See also my comment\nbelow 1&2 for if we think we need to block decompressing them too.\n\n> Then I think we should let users choose how they want to compress and\n> where they want their compression stream to restart. Something like\n> this:\n> a. compression_restart=query: Restart the stream after every query.\n> Recommended if queries across the same connection are triggered by\n> different end-users. I think this would be a sane default\n> b. compression_restart=message: Restart the stream for every message.\n> Recommended if the amount of correlation between rows of the same\n> query is a security concern.\n> c. compression_restart=manual: Don't restart the stream automatically,\n> but only when the client user calls a specific function. Recommended\n> only if the user can make trade-offs, or if no encryption is used\n> anyway.\n\nI reasonably like this idea, though I think maybe we should also\n(instead of query?) add per-transaction on the backend side. I'm\ncurious what other people think of this.\n\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Tue, 21 May 2024 11:13:57 -0500",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
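(To illustrate the streaming-decompression cap discussed above: a self-contained sketch, not code from the patch. zlib is used purely as an example codec, and every name and constant here is made up for the example.)

#include <stdbool.h>
#include <stddef.h>
#include <zlib.h>

/* Hypothetical cap on how far one compressed message may expand. */
#define MAX_DECOMPRESSED_MSG_LEN ((size_t) 100 * 1024 * 1024)

/*
 * Sketch: inflate one message's worth of compressed bytes, giving up as soon
 * as the output would exceed the cap, so a small zip bomb cannot force
 * unbounded memory/CPU use.  "strm" must already have been set up with
 * inflateInit().
 */
static bool
inflate_with_cap(z_stream *strm, const unsigned char *in, size_t in_len,
                 unsigned char *out, size_t out_cap, size_t *out_len)
{
    if (out_cap > MAX_DECOMPRESSED_MSG_LEN)
        out_cap = MAX_DECOMPRESSED_MSG_LEN;

    strm->next_in = (unsigned char *) in;
    strm->avail_in = (uInt) in_len;
    *out_len = 0;

    while (strm->avail_in > 0)
    {
        size_t      room = out_cap - *out_len;
        int         rc;

        if (room == 0)
            return false;       /* would exceed the cap: reject the message */

        strm->next_out = out + *out_len;
        strm->avail_out = (uInt) room;

        rc = inflate(strm, Z_NO_FLUSH);
        *out_len += room - strm->avail_out;

        if (rc == Z_STREAM_END)
            break;
        if (rc != Z_OK)
            return false;       /* corrupt or stalled input */
    }

    return true;
}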
{
"msg_contents": "On Tue, May 21, 2024 at 8:23 AM Jacob Burroughs\n<[email protected]> wrote:\n> As currently implemented, the compression only applies to\n> CopyData/DataRow/Query messages, none of which should be involved in\n> authentication, unless I've really missed something in my\n> understanding.\n\nRight, but Robert has argued that we should compress it all, and I'm\nresponding to that proposal.\n\nSorry for introducing threads within threads. But I think it's\nvaluable to pin down both 1) the desired behavior, and 2) how the\ncurrent proposal behaves, as two separate things. I'll try to do a\nbetter job of communicating which I'm talking about.\n\n> > Right, I think it's reasonable to let a sufficiently\n> > determined/informed user lift the guardrails, but first we have to\n> > choose to put guardrails in place... and then we have to somehow\n> > sufficiently inform the users when it's okay to lift them.\n>\n> My thought would be that compression should be opt-in on the client\n> side, with documentation around the potential security pitfalls. (I\n> could be convinced it should be opt-in on the server side, but overall\n> I think opt-in on the client side generally protects against footguns\n> without excessively getting in the way\n\nWe absolutely have to document the risks and allow clients to be\nwritten safely. But I think server-side controls on risky behavior\nhave proven to be generally more valuable, because the server\nadministrator is often in a better spot to see the overall risks to\nthe system. (\"No, you will not use deprecated ciphersuites. No, you\nwill not access this URL over plaintext. No, I will not compress this\nresponse containing customer credit card numbers, no matter how nicely\nyou ask.\") There are many more clients than servers, so it's less\nrisky for the server to enforce safety than to hope that every client\nis safe.\n\nDoes your database and access pattern regularly mingle secrets with\npublic data? Would auditing correct client use of compression be a\nlogistical nightmare? Do your app developers keep indicating in\nconversations that they don't understand the risks at all? Cool, just\nset `encrypted_compression = nope_nope_nope` on the server and sleep\nsoundly at night. (Ideally we would default to that.)\n\n> and if an attacker controls the\n> client, they can just get the information they want directly-they\n> don't need compression sidechannels to get that information.)\n\nSure, but I don't think that's relevant to the threats being discussed.\n\n> Within SQL-level things, I don't think we can reasonably differentiate\n> between private and attacker-controlled information at the\n> libpq/server level.\n\nAnd by the IETF line of argument -- or at least the argument I quoted\nabove -- that implies that we really have no business introducing\ncompression when confidentiality is requested. A stronger approach\nwould require us to prove, or the user to indicate, safety before\ncompressing.\n\nTake a look at the security notes for QPACK [1] -- keeping in mind\nthat they know _more_ about what's going on at the protocol level than\nwe do, due to the header design. And they still say things like \"an\nencoder might choose not to index values with low entropy\" and \"these\ncriteria ... will evolve over time as new attacks are discovered.\" A\nhuge amount is left as an exercise for the reader. 
This stuff is\nreally hard.\n\n> We can reasonably differentiate between message\n> types that *definitely* are private and ones that could have\n> either/both data in them, but that's not nearly as useful. I think\n> not compressing auth-related packets plus giving a mechanism to reset\n> the compression stream for clients (plus guidance on the tradeoffs\n> involved in turning on compression) is about as good as we can get.\n\nThe concept of stream reset seems necessary but insufficient at the\napplication level, which bleeds over into Jelte's compression_restart\nproposal. (At the protocol level, I think it may be sufficient?)\n\nIf I write a query where one of the WHERE clauses is\nattacker-controlled and the other is a secret, I would really like to\nnot compress that query on the client side. If I join a table of user\nIDs against a table of user-provided addresses and a table of\napplication tokens for that user, compressing even a single row leaks\ninformation about those tokens -- at a _very_ granular level -- and I\nwould really like the server not to do that.\n\nSo if I'm building sand castles... I think maybe it'd be nice to mark\ntables (and/or individual columns?) as safe for compression under\nencryption, whether by row or in aggregate. And maybe libpq and psql\nshould be able to turn outgoing compression on and off at will.\n\nAnd I understand those would balloon the scope of the feature. I'm\nworried I'm doing the security-person thing and sucking all the air\nout of the room. I know not everybody uses transport encryption; for\nthose people, compress-it-all is probably a pretty winning strategy,\nand there's no need to reset the compression context ever. And the\npg_dump-style, \"give me everything\" use case seems like it could maybe\nbe okay, but I really don't know how to assess the risk there, at all.\n\n> That said, I *think* the feature is reasonable to be\n> reviewed/committed without the reset functionality as long as the\n> compressed data already has the mechanism built in (as it does) to\n> signal when a decompressor should restart its streaming. The actual\n> signaling protocol mechanism/necessary libpq API can happen in\n> followon work.\n\nWell... working out the security minutiae _after_ changing the\nprotocol is not historically a winning strategy, I think. Better to do\nit as a vertical stack.\n\nThanks,\n--Jacob\n\n[1] https://www.rfc-editor.org/rfc/rfc9204.html#name-security-considerations\n\n\n",
"msg_date": "Tue, 21 May 2024 11:23:52 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Tue, May 21, 2024 at 9:14 AM Jacob Burroughs\n<[email protected]> wrote:\n> On Tue, May 21, 2024 at 10:43 AM Jelte Fennema-Nio <[email protected]> wrote:\n> > To help get everyone on the same page I wanted to list all the\n> > security concerns in one place:\n> >\n> > 1. Triggering excessive CPU usage before authentication, by asking for\n> > very high compression levels\n> > 2. Triggering excessive memory/CPU usage before authentication, by\n> > sending a client sending a zipbomb\n> > 3. Triggering excessive CPU after authentication, by asking for a very\n> > high compression level\n> > 4. Triggering excessive memory/CPU after authentication due to\n> > zipbombs (i.e. small amount of data extracting to lots of data)\n> > 5. CRIME style leakage of information about encrypted data\n> >\n> > 1 & 2 can easily be solved by not allowing any authentication packets\n> > to be compressed. This also has benefits for 5.\n>\n> This is already addressed by only compressing certain message types.\n> If we think it is important that the server reject compressed packets\n> of other types I can add that, but it seemed reasonable to just make\n> the client never send such packets compressed.\n\nIf the server doesn't reject compressed packets pre-authentication,\nthen case 2 isn't mitigated. (I haven't proven how risky that case is\nyet, to be clear.) In other words: if the threat model is that a\nclient can attack us, we shouldn't assume that it will attack us\npolitely.\n\n> > 4 would require some safety limits on the amount of data that a\n> > (small) compressed message can be decompressed to, and stop\n> > decompression of that message once that limit is hit. What that limit\n> > should be seems hard to choose though. A few ideas:\n> > a. The size of the message reported by the uncompressed header. This\n> > would mean that at most the 4GB will be uncompressed, since maximum\n> > message length is 4GB (limited by 32bit message length field)\n> > b. Allow servers to specify maximum client decompressed message length\n> > lower than this 4GB, e.g. messages of more than 100MB of uncompressed\n> > size should not be allowed.\n>\n> Because we are using streaming decompression, this is much less of an\n> issue than for things that decompress wholesale onto disk/into memory.\n\n(I agree in general, but since you're designing a protocol extension,\nIMO it's not enough that your implementation happens to mitigate\nrisks. We more or less have to bake those mitigations into the\nspecification of the extension, because things that aren't servers\nhave to decompress now. Similar to RFC security considerations.)\n\n--Jacob\n\n\n",
"msg_date": "Tue, 21 May 2024 11:38:51 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Tue, May 21, 2024 at 1:39 PM Jacob Champion\n<[email protected]> wrote:\n>\n> If the server doesn't reject compressed packets pre-authentication,\n> then case 2 isn't mitigated. (I haven't proven how risky that case is\n> yet, to be clear.) In other words: if the threat model is that a\n> client can attack us, we shouldn't assume that it will attack us\n> politely.\n\nI think I thought I was writing about something else when I wrote that\n:sigh:. I think what I really should have written was a version of\nthe part below, which is that we use streaming decompression, only\ndecompress 8kb at a time, and for pre-auth messages limit them to\n`PG_MAX_AUTH_TOKEN_LENGTH` (65535 bytes), which isn't really enough\ndata to actually cause any real-world pain by needing to decompress vs\nthe equivalent pain of sending invalid uncompressed auth packets.\n\n> > Because we are using streaming decompression, this is much less of an\n> > issue than for things that decompress wholesale onto disk/into memory.\n>\n> (I agree in general, but since you're designing a protocol extension,\n> IMO it's not enough that your implementation happens to mitigate\n> risks. We more or less have to bake those mitigations into the\n> specification of the extension, because things that aren't servers\n> have to decompress now. Similar to RFC security considerations.)\n\nWe own both the canonical client and server, so those are both covered\nhere. I would think it would be the responsibility of any other\nsystem that maintains its own implementation of the postgres protocol\nand chooses to support the compression protocol to perform its own\nmitigations against potential compression security issues. Should we\nput the fixed message size limits (that have de facto been part of the\nprotocol since 2021, even if they weren't documented as such) into the\nprotocol documentation? That would give implementers of the protocol\nnumbers that they could actually rely on when implementing the\nappropriate safeguards because they would be able to actually have\nexplicit guarantees about the size of messages. I think it would make\nmore sense to put the limits on the underlying messages rather than\nadding an additional limit that only applies to compressed messages.\n( I don't really see how one could implement other tooling that used\npg compression without using streaming compression, as the protocol\nnever hands over a standalone blob of compressed data: all compressed\ndata is always part of a stream, but even with streaming decompression\nyou still need some kind of limits or you will just chew up memory.)\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Tue, 21 May 2024 14:08:37 -0500",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Tue, May 21, 2024 at 1:24 PM Jacob Champion\n<[email protected]> wrote:\n>\n> We absolutely have to document the risks and allow clients to be\n> written safely. But I think server-side controls on risky behavior\n> have proven to be generally more valuable, because the server\n> administrator is often in a better spot to see the overall risks to\n> the system. (\"No, you will not use deprecated ciphersuites. No, you\n> will not access this URL over plaintext. No, I will not compress this\n> response containing customer credit card numbers, no matter how nicely\n> you ask.\") There are many more clients than servers, so it's less\n> risky for the server to enforce safety than to hope that every client\n> is safe.\n>\n> Does your database and access pattern regularly mingle secrets with\n> public data? Would auditing correct client use of compression be a\n> logistical nightmare? Do your app developers keep indicating in\n> conversations that they don't understand the risks at all? Cool, just\n> set `encrypted_compression = nope_nope_nope` on the server and sleep\n> soundly at night. (Ideally we would default to that.)\n\nThinking about this more (and adding a encrypted_compression GUC or\nwhatever), I think my inclination would on the server-side default\nenable compression for insecure connections but default disable for\nencrypted connections, but both would be config parameters that can be\nchanged as desired.\n\n> The concept of stream reset seems necessary but insufficient at the\n> application level, which bleeds over into Jelte's compression_restart\n> proposal. (At the protocol level, I think it may be sufficient?)\n>\n> If I write a query where one of the WHERE clauses is\n> attacker-controlled and the other is a secret, I would really like to\n> not compress that query on the client side. If I join a table of user\n> IDs against a table of user-provided addresses and a table of\n> application tokens for that user, compressing even a single row leaks\n> information about those tokens -- at a _very_ granular level -- and I\n> would really like the server not to do that.\n>\n> So if I'm building sand castles... I think maybe it'd be nice to mark\n> tables (and/or individual columns?) as safe for compression under\n> encryption, whether by row or in aggregate. And maybe libpq and psql\n> should be able to turn outgoing compression on and off at will.\n>\n> And I understand those would balloon the scope of the feature. I'm\n> worried I'm doing the security-person thing and sucking all the air\n> out of the room. I know not everybody uses transport encryption; for\n> those people, compress-it-all is probably a pretty winning strategy,\n> and there's no need to reset the compression context ever. And the\n> pg_dump-style, \"give me everything\" use case seems like it could maybe\n> be okay, but I really don't know how to assess the risk there, at all.\n\nI would imagine that a large volume of uses of postgres are in\ncontexts (e.g. internal networks) where either no encryption is used\nor even when encryption is used the benefit of compression vs the risk\nof someone being a position to perform a BREACH-style sidechannel\nattack against DB traffic is sufficiently high that compress-it-all\nwould be be quite useful in many cases. Would some sort of\nper-table/column marking be useful for some cases? 
Probably, but that\ndoesn't seem to me like it needs to be in v1 of this feature as long\nas the protocol layer itself is designed such that parties can\narbitrarily alternate between transmitting compressed and uncompressed\ndata. Then if we build such a feature down the road we just add logic\naround *when* we compress but the protocol layer doesn't change.\n\n> Well... working out the security minutiae _after_ changing the\n> protocol is not historically a winning strategy, I think. Better to do\n> it as a vertical stack.\n\nThinking about it more, I agree that we probably should work out the\nprotocol level mechanism for resetting compression\ncontext/enabling/disabling/reconfiguring compression as part of this\nwork. I don't think that we need to have all the ways that the\napplication layer might choose to use such things done here, but we\nshould have all the necessary primitives worked out.\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Tue, 21 May 2024 14:26:32 -0500",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression (part 3)"
},
{
"msg_contents": "On Tue, May 21, 2024 at 12:08 PM Jacob Burroughs\n<[email protected]> wrote:\n> I think I thought I was writing about something else when I wrote that\n> :sigh:. I think what I really should have written was a version of\n> the part below, which is that we use streaming decompression, only\n> decompress 8kb at a time, and for pre-auth messages limit them to\n> `PG_MAX_AUTH_TOKEN_LENGTH` (65535 bytes), which isn't really enough\n> data to actually cause any real-world pain by needing to decompress vs\n> the equivalent pain of sending invalid uncompressed auth packets.\n\nOkay. So it sounds like your position is similar to Robert's from\nearlier: prefer allowing unauthenticated compressed packets for\nsimplicity, as long as we think it's safe for the server. (Personally\nI still think a client that compresses its password packets is doing\nit wrong, and we could help them out by refusing that.)\n\n> We own both the canonical client and server, so those are both covered\n> here. I would think it would be the responsibility of any other\n> system that maintains its own implementation of the postgres protocol\n> and chooses to support the compression protocol to perform its own\n> mitigations against potential compression security issues.\n\nSure, but if our official documentation is \"here's an extremely\nsecurity-sensitive feature, figure it out!\" then we've done a\ndisservice to the community.\n\n> Should we\n> put the fixed message size limits (that have de facto been part of the\n> protocol since 2021, even if they weren't documented as such) into the\n> protocol documentation?\n\nPossibly? I don't know if the other PG-compatible implementations use\nthe same limits. It might be better to say \"limits must exist\".\n\n> ( I don't really see how one could implement other tooling that used\n> pg compression without using streaming compression, as the protocol\n> never hands over a standalone blob of compressed data: all compressed\n> data is always part of a stream, but even with streaming decompression\n> you still need some kind of limits or you will just chew up memory.)\n\nWell, that's a good point; I wasn't thinking about the streaming APIs\nthemselves. If the easiest way to implement decompression requires the\nuse of an API that shouts \"hey, give me guardrails!\", then that helps\nquite a bit. I really need to look into the attack surface of the\nthree algorithms.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 21 May 2024 12:42:57 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq compression (part 3)"
}
]
[
{
"msg_contents": "Move src/bin/pg_verifybackup/parse_manifest.c into src/common.\n\nThis makes it possible for the code to be easily reused by other\nclient-side tools, and/or by the server.\n\nPatch by me. Review of this patch in particular by at least Peter\nEisentraut; reviewers for the patch series in general include Dilip\nKumar, Andres Fruend, David Steele, Álvaro Herrera, and Jakub Wartak.\n\nDiscussion: http://postgr.es/m/CA+TgmoZ6UGZVnSy5iak6s6+AXu_DewXovDjhLs3-su6nmU_x_g@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/aafc07c7a191bc807c77fe2a044006a5db07faba\n\nModified Files\n--------------\nsrc/bin/pg_verifybackup/Makefile | 1 -\nsrc/bin/pg_verifybackup/meson.build | 1 -\nsrc/bin/pg_verifybackup/nls.mk | 4 ++--\nsrc/bin/pg_verifybackup/pg_verifybackup.c | 2 +-\nsrc/common/Makefile | 1 +\nsrc/common/meson.build | 1 +\nsrc/{bin/pg_verifybackup => common}/parse_manifest.c | 4 ++--\nsrc/{bin/pg_verifybackup => include/common}/parse_manifest.h | 2 +-\n8 files changed, 8 insertions(+), 8 deletions(-)",
"msg_date": "Tue, 19 Dec 2023 20:29:18 +0000",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: Move src/bin/pg_verifybackup/parse_manifest.c into src/common."
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 08:29:18PM +0000, Robert Haas wrote:\n> Move src/bin/pg_verifybackup/parse_manifest.c into src/common.\n> \n> This makes it possible for the code to be easily reused by other\n> client-side tools, and/or by the server.\n> \n> Patch by me. Review of this patch in particular by at least Peter\n> Eisentraut; reviewers for the patch series in general include Dilip\n> Kumar, Andres Fruend, David Steele, Álvaro Herrera, and Jakub Wartak.\n\nWorth noting that this has forgotten to update @pgcommonallfiles in\nMkvcbuild.pm so this would have failed a build with src/build/msvc/.\nOr you counted on me here, relying on the scripts to be gone? ;p\n--\nMichael",
"msg_date": "Wed, 20 Dec 2023 10:01:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move src/bin/pg_verifybackup/parse_manifest.c into\n src/common."
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 8:01 PM Michael Paquier <[email protected]> wrote:\n> On Tue, Dec 19, 2023 at 08:29:18PM +0000, Robert Haas wrote:\n> > Move src/bin/pg_verifybackup/parse_manifest.c into src/common.\n> >\n> > This makes it possible for the code to be easily reused by other\n> > client-side tools, and/or by the server.\n> >\n> > Patch by me. Review of this patch in particular by at least Peter\n> > Eisentraut; reviewers for the patch series in general include Dilip\n> > Kumar, Andres Fruend, David Steele, Álvaro Herrera, and Jakub Wartak.\n>\n> Worth noting that this has forgotten to update @pgcommonallfiles in\n> Mkvcbuild.pm so this would have failed a build with src/build/msvc/.\n> Or you counted on me here, relying on the scripts to be gone? ;p\n\nTo be honest, I counted on CI to tell me whether it still builds everywhere.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Dec 2023 21:16:55 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move src/bin/pg_verifybackup/parse_manifest.c into\n src/common."
}
]
[
{
"msg_contents": "Here is a patch which adds support for the returns_nonnull attribute \nalongside all the other attributes we optionally support.\n\nI recently wound up in a situation where I was checking for NULL return \nvalues of a function that couldn't ever return NULL because the \ninability to allocate memory was always elog(ERROR)ed (aborted).\n\nI didn't go through and mark anything, but I feel like it could be \nuseful for people going forward, including myself.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Tue, 19 Dec 2023 14:43:49 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add support for __attribute__((returns_nonnull))"
},
{
"msg_contents": "On 19.12.23 21:43, Tristan Partin wrote:\n> Here is a patch which adds support for the returns_nonnull attribute \n> alongside all the other attributes we optionally support.\n> \n> I recently wound up in a situation where I was checking for NULL return \n> values of a function that couldn't ever return NULL because the \n> inability to allocate memory was always elog(ERROR)ed (aborted).\n> \n> I didn't go through and mark anything, but I feel like it could be \n> useful for people going forward, including myself.\n\nI think it would be useful if this patch series contained a patch that \nadded some initial uses of this. That way we can check that the \nproposed definition actually works, and we can observe what it does, \nwith respect to warnings, static analysis, etc.\n\n\n\n",
"msg_date": "Wed, 27 Dec 2023 13:42:17 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for __attribute__((returns_nonnull))"
},
{
"msg_contents": "On Wed Dec 27, 2023 at 6:42 AM CST, Peter Eisentraut wrote:\n> On 19.12.23 21:43, Tristan Partin wrote:\n> > Here is a patch which adds support for the returns_nonnull attribute\n> > alongside all the other attributes we optionally support.\n> >\n> > I recently wound up in a situation where I was checking for NULL return\n> > values of a function that couldn't ever return NULL because the\n> > inability to allocate memory was always elog(ERROR)ed (aborted).\n> >\n> > I didn't go through and mark anything, but I feel like it could be\n> > useful for people going forward, including myself.\n>\n> I think it would be useful if this patch series contained a patch that\n> added some initial uses of this. That way we can check that the\n> proposed definition actually works, and we can observe what it does,\n> with respect to warnings, static analysis, etc.\n\nGood point. Patch attached.\n\nI tried to find some ways to prove some value, but I couldn't. Take this\nexample for instance:\n\n\tstatic const char word[] = { 'h', 'e', 'l', 'l', 'o' };\n\n\tconst char * __attribute__((returns_nonnull))\n\thello()\n\t{\n\t\treturn word;\n\t}\n\n\tint\n\tmain(void)\n\t{\n\t\tconst char *r;\n\n\t\tr = hello();\n\t\tif (r == NULL)\n\t\t\treturn 1;\n\n\t\treturn 0;\n\t}\n\nI would have thought I could get gcc or clang to warn on a wasteful NULL\ncheck, but alas. I also didn't see any code generation improvements, but\nI am assuming that the example is too contrived. I couldn't find any\ngood things online that had examples of when such an annotation forced\nthe compiler to warn or create more optimized code.\n\nIf you return NULL from the hello() function, clang will warn that the\nattribute doesn't match reality.\n\n--\nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 27 Dec 2023 12:20:22 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for __attribute__((returns_nonnull))"
},
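(For readers following along, a rough sketch of the kind of wrapper macro such a patch adds. The macro name, the __has_attribute test, and the placement are assumptions made for illustration, not a description of the actual patch; PostgreSQL's c.h has its own conventions for detecting attributes, which may differ from the plain test used here.)

#if defined(__has_attribute)
#if __has_attribute(returns_nonnull)
#define pg_attribute_returns_nonnull __attribute__((returns_nonnull))
#endif
#endif

#ifndef pg_attribute_returns_nonnull
#define pg_attribute_returns_nonnull    /* expands to nothing elsewhere */
#endif

/* Usage, mirroring the hello() example above. */
extern const char *hello(void) pg_attribute_returns_nonnull;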
{
"msg_contents": "On Thu, Dec 28, 2023 at 1:20 AM Tristan Partin <[email protected]> wrote:\n> I recently wound up in a situation where I was checking for NULL return\n> values of a function that couldn't ever return NULL because the\n> inability to allocate memory was always elog(ERROR)ed (aborted).\n\nIt sounds like you have a ready example to test, so what happens with the patch?\n\nAs for whether any code generation changed, I'd start by checking if\nanything in a non-debug binary changed at all.\n\n\n",
"msg_date": "Mon, 1 Jan 2024 10:29:19 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for __attribute__((returns_nonnull))"
},
{
"msg_contents": "On Sun Dec 31, 2023 at 9:29 PM CST, John Naylor wrote:\n> On Thu, Dec 28, 2023 at 1:20 AM Tristan Partin <[email protected]> wrote:\n> > I recently wound up in a situation where I was checking for NULL return\n> > values of a function that couldn't ever return NULL because the\n> > inability to allocate memory was always elog(ERROR)ed (aborted).\n>\n> It sounds like you have a ready example to test, so what happens with the patch?\n>\n> As for whether any code generation changed, I'd start by checking if\n> anything in a non-debug binary changed at all.\n\nThe idea I had in mind initially was PGLC_localeconv(), but I couldn't\nprove that anything changed with the annotation added. The second patch\nin my previous email was attempt at deriving real-world benefit, but\nnothing I did seemed to change anything. Perhaps you can test it and see\nif anything changes for you.\n\n--\nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 08 Jan 2024 17:04:58 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for __attribute__((returns_nonnull))"
},
{
"msg_contents": "On Mon, Jan 08, 2024 at 05:04:58PM -0600, Tristan Partin wrote:\n> The idea I had in mind initially was PGLC_localeconv(), but I couldn't\n> prove that anything changed with the annotation added. The second patch\n> in my previous email was attempt at deriving real-world benefit, but\n> nothing I did seemed to change anything. Perhaps you can test it and see\n> if anything changes for you.\n\nThere are a bunch of places in the tree where we return NULL just to\nkeep the compiler quiet, and where we should never return NULL, as in:\n return NULL; /* keep compiler quiet */\n\nSee bbstreamer_gzip.c as one example for non-gzip builds. Perhaps you\ncould look at one of these places and see if the marker shows\nbenefits?\n--\nMichael",
"msg_date": "Tue, 9 Jan 2024 09:14:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for __attribute__((returns_nonnull))"
}
]
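(An invented example of the pattern Michael describes above, not the actual bbstreamer_gzip.c code; the macro name is the hypothetical one from the earlier sketch, and a backend context where elog() is available is assumed.)

pg_attribute_returns_nonnull
static const char *
compression_method_name(int method)
{
    switch (method)
    {
        case 0:
            return "none";
        case 1:
            return "gzip";
    }

    elog(ERROR, "unexpected compression method %d", method);
    return NULL;                /* keep compiler quiet */
}

Per Tristan's note upthread, clang would warn about the explicit return NULL here, which is the sort of effect worth checking when evaluating whether the marker shows benefits at these call sites.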
[
{
"msg_contents": "Hello hackers,\n\nAttached is a patch which attempts to implement a new system function\npg_get_slot_invalidation_cause('slot_name') to get invalidation cause\nof a replication slot.\n\nCurrently, users do not have a way to know the invalidation cause. One\ncan only find out if the slot is invalidated or not by querying the\n'conflicting' field of pg_replication_slots. This new function\nprovides a way to query invalidation cause as well.\n\nOne of the use case scenarios for this function is another ongoing\nthread \"Synchronizing slots from primary to standby\" at [1]. This\nfunction is needed there to replicate invalidation-cause of a logical\nreplication slot from the primary server to the hot standby. But this\nis an independent requirement in itself and thus makes sense to have\nit implemented separately.\n\nPlease review and provide your feedback.\n\n[1]: https://www.postgresql.org/message-id/514f6f2f-6833-4539-39f1-96cd1e011f23%40enterprisedb.com\n\nthanks\nShveta",
"msg_date": "Wed, 20 Dec 2023 11:31:04 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Function to get invalidation cause of a replication slot."
},
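(A minimal usage sketch of the proposed function; the slot name below is made up, and the integer codes returned are whatever the patch documents.)

SELECT pg_get_slot_invalidation_cause('logical_slot_1');

-- or, for slots already reported as conflicting:
SELECT slot_name, pg_get_slot_invalidation_cause(slot_name) AS cause
  FROM pg_replication_slots
 WHERE conflicting;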
{
"msg_contents": "Hi,\n\nOn 12/20/23 7:01 AM, shveta malik wrote:\n> Hello hackers,\n> \n> Attached is a patch which attempts to implement a new system function\n> pg_get_slot_invalidation_cause('slot_name') to get invalidation cause\n> of a replication slot.\n\nThanks! +1 for the idea to display this information through an SQL API.\n\nI wonder if we could add a new field to pg_replication_slots instead\nof creating a new function.\n\n> One of the use case scenarios for this function is another ongoing\n> thread \"Synchronizing slots from primary to standby\" at [1]. This\n> function is needed there to replicate invalidation-cause of a logical\n> replication slot from the primary server to the hot standby. But this\n> is an independent requirement in itself and thus makes sense to have\n> it implemented separately.\n\nAgree.\n\nMy thoughts about it:\n\n1 ===\n\n\"Currently, users do not have a way to know the invalidation cause\"\n\nI'm not sure about it, I think one could see the reasons in the log file.\n\n2 ===\n\n\"This function returns NULL if the replication slot is not found\"\n\nWhy not returning an error instead? (like pg_drop_replication_slot() does for example)\n\nFWIW, we'd not need to cover this case if the description would be added to pg_replication_slots.\n\n3 ===\n\n+ <para>\n+ <literal>3</literal> = wal_level insufficient for the slot\n+ </para>\n\nwal_level insufficient on the primary maybe?\n\n4 ===\n\n+ * Returns ReplicationSlotInvalidationCause enum value for valid slot_name;\n\nI think it would be more user friendly to return a description instead of an enum value.\n\n5 ===\n\n doc/src/sgml/func.sgml | 33 +++++++++++++++++++++++++++++++++\n src/backend/replication/slotfuncs.c | 27 +++++++++++++++++++++++++++\n src/include/catalog/pg_proc.dat | 4 ++++\n\nWorth to add some coverage in 019_replslot_limit.pl and 035_standby_logical_decoding.pl?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Dec 2023 08:16:09 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 12:46 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 12/20/23 7:01 AM, shveta malik wrote:\n> > Hello hackers,\n> >\n> > Attached is a patch which attempts to implement a new system function\n> > pg_get_slot_invalidation_cause('slot_name') to get invalidation cause\n> > of a replication slot.\n>\n> Thanks! +1 for the idea to display this information through an SQL API.\n>\n> I wonder if we could add a new field to pg_replication_slots instead\n> of creating a new function.\n>\n\nYeah, that is another option. We already have a boolean column\n'conflicting' in pg_replication_slots, so are you suggesting adding a\nnew column like 'conflict_cause'? I feel it is okay to have an SQL\nfunction for this as it may be primarily used for internal purposes or\nfor some specific users who want to dig deeper for the invalidation\ncause.\n\n> > One of the use case scenarios for this function is another ongoing\n> > thread \"Synchronizing slots from primary to standby\" at [1]. This\n> > function is needed there to replicate invalidation-cause of a logical\n> > replication slot from the primary server to the hot standby. But this\n> > is an independent requirement in itself and thus makes sense to have\n> > it implemented separately.\n>\n> Agree.\n>\n> My thoughts about it:\n>\n> 1 ===\n>\n> \"Currently, users do not have a way to know the invalidation cause\"\n>\n> I'm not sure about it, I think one could see the reasons in the log file.\n>\n> 2 ===\n>\n> \"This function returns NULL if the replication slot is not found\"\n>\n> Why not returning an error instead? (like pg_drop_replication_slot() does for example)\n>\n\n+1. Also, the check for NULL argument should be there.\n\n> FWIW, we'd not need to cover this case if the description would be added to pg_replication_slots.\n>\n> 3 ===\n>\n> + <para>\n> + <literal>3</literal> = wal_level insufficient for the slot\n> + </para>\n>\n> wal_level insufficient on the primary maybe?\n>\n> 4 ===\n>\n> + * Returns ReplicationSlotInvalidationCause enum value for valid slot_name;\n>\n> I think it would be more user friendly to return a description instead of an enum value.\n>\n\nBut it would be tricky to use at a place where we want to know the\nenum value as in the case of the sync slots patch where we want to\ncopy the cause. I think it would be better to display the description\nif we want to display it in the view.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Dec 2023 14:20:06 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 2:20 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Dec 20, 2023 at 12:46 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n> >\n> > On 12/20/23 7:01 AM, shveta malik wrote:\n> > > Hello hackers,\n> > >\n> > > Attached is a patch which attempts to implement a new system function\n> > > pg_get_slot_invalidation_cause('slot_name') to get invalidation cause\n> > > of a replication slot.\n> >\n> > Thanks! +1 for the idea to display this information through an SQL API.\n> >\n> > I wonder if we could add a new field to pg_replication_slots instead\n> > of creating a new function.\n> >\n>\n> Yeah, that is another option. We already have a boolean column\n> 'conflicting' in pg_replication_slots, so are you suggesting adding a\n> new column like 'conflict_cause'? I feel it is okay to have an SQL\n> function for this as it may be primarily used for internal purposes or\n> for some specific users who want to dig deeper for the invalidation\n> cause.\n>\n> > > One of the use case scenarios for this function is another ongoing\n> > > thread \"Synchronizing slots from primary to standby\" at [1]. This\n> > > function is needed there to replicate invalidation-cause of a logical\n> > > replication slot from the primary server to the hot standby. But this\n> > > is an independent requirement in itself and thus makes sense to have\n> > > it implemented separately.\n> >\n> > Agree.\n> >\n> > My thoughts about it:\n> >\n> > 1 ===\n> >\n> > \"Currently, users do not have a way to know the invalidation cause\"\n> >\n> > I'm not sure about it, I think one could see the reasons in the log file.\n> >\n> > 2 ===\n> >\n> > \"This function returns NULL if the replication slot is not found\"\n> >\n> > Why not returning an error instead? (like pg_drop_replication_slot() does for example)\n> >\n>\n> +1. Also, the check for NULL argument should be there.\n\nIf arg is NULL, it will not come to this function call. It comes to\nthis function if the signature matches. That is why other functions\nlike pg_drop_replication_slot(), pg_replication_slot_advance() etc do\nnot have NULL arg check as well.\n\n>\n> > FWIW, we'd not need to cover this case if the description would be added to pg_replication_slots.\n> >\n> > 3 ===\n> >\n> > + <para>\n> > + <literal>3</literal> = wal_level insufficient for the slot\n> > + </para>\n> >\n> > wal_level insufficient on the primary maybe?\n> >\n> > 4 ===\n> >\n> > + * Returns ReplicationSlotInvalidationCause enum value for valid slot_name;\n> >\n> > I think it would be more user friendly to return a description instead of an enum value.\n> >\n>\n> But it would be tricky to use at a place where we want to know the\n> enum value as in the case of the sync slots patch where we want to\n> copy the cause. I think it would be better to display the description\n> if we want to display it in the view.\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\nPFA v2 patch. Addressed below comments:\n\n1) Added test in 019_replslot_limit.pl\n2) 'pg_get_slot_invalidation_cause' now returns error if the given\nslot does not exist\n3) Corrected doc and commit msg.\n\nthanks\nShveta",
"msg_date": "Wed, 20 Dec 2023 15:25:26 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 12:46 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On 12/20/23 7:01 AM, shveta malik wrote:\n> > Hello hackers,\n> >\n> > Attached is a patch which attempts to implement a new system function\n> > pg_get_slot_invalidation_cause('slot_name') to get invalidation cause\n> > of a replication slot.\n>\n> Thanks! +1 for the idea to display this information through an SQL API.\n>\n> I wonder if we could add a new field to pg_replication_slots instead\n> of creating a new function.\n>\n> > One of the use case scenarios for this function is another ongoing\n> > thread \"Synchronizing slots from primary to standby\" at [1]. This\n> > function is needed there to replicate invalidation-cause of a logical\n> > replication slot from the primary server to the hot standby. But this\n> > is an independent requirement in itself and thus makes sense to have\n> > it implemented separately.\n>\n> Agree.\n>\n> My thoughts about it:\n>\n> 1 ===\n>\n> \"Currently, users do not have a way to know the invalidation cause\"\n>\n> I'm not sure about it, I think one could see the reasons in the log file.\n>\n> 2 ===\n>\n> \"This function returns NULL if the replication slot is not found\"\n>\n> Why not returning an error instead? (like pg_drop_replication_slot() does for example)\n>\n> FWIW, we'd not need to cover this case if the description would be added to pg_replication_slots.\n>\n> 3 ===\n>\n> + <para>\n> + <literal>3</literal> = wal_level insufficient for the slot\n> + </para>\n>\n> wal_level insufficient on the primary maybe?\n>\n> 4 ===\n>\n> + * Returns ReplicationSlotInvalidationCause enum value for valid slot_name;\n>\n> I think it would be more user friendly to return a description instead of an enum value.\n>\n> 5 ===\n>\n> doc/src/sgml/func.sgml | 33 +++++++++++++++++++++++++++++++++\n> src/backend/replication/slotfuncs.c | 27 +++++++++++++++++++++++++++\n> src/include/catalog/pg_proc.dat | 4 ++++\n>\n> Worth to add some coverage in 019_replslot_limit.pl and 035_standby_logical_decoding.pl?\n\nI have recently added a test in 019_replslot_limit.pl in v2. Do you\nsuggest adding in 035_standby_logical_decoding as well? Will it have\nadditional benefits?\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 20 Dec 2023 15:27:26 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
{
"msg_contents": "Hi,\n\nOn 12/20/23 9:50 AM, Amit Kapila wrote:\n> On Wed, Dec 20, 2023 at 12:46 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> On 12/20/23 7:01 AM, shveta malik wrote:\n>>> Hello hackers,\n>>>\n>>> Attached is a patch which attempts to implement a new system function\n>>> pg_get_slot_invalidation_cause('slot_name') to get invalidation cause\n>>> of a replication slot.\n>>\n>> Thanks! +1 for the idea to display this information through an SQL API.\n>>\n>> I wonder if we could add a new field to pg_replication_slots instead\n>> of creating a new function.\n>>\n> \n> Yeah, that is another option. We already have a boolean column\n> 'conflicting' in pg_replication_slots, so are you suggesting adding a\n> new column like 'conflict_cause'? \n\nYes.\n\n> I feel it is okay to have an SQL\n> function for this as it may be primarily used for internal purposes or\n> for some specific users who want to dig deeper for the invalidation\n> cause.\n> \n>> 4 ===\n>>\n>> + * Returns ReplicationSlotInvalidationCause enum value for valid slot_name;\n>>\n>> I think it would be more user friendly to return a description instead of an enum value.\n>>\n> But it would be tricky to use at a place where we want to know the\n> enum value as in the case of the sync slots patch where we want to\n> copy the cause. \n\nYeah right, what about displaying both then? (one field for the enum and one for\nthe description). I feel it's not friendly to ask users to refer to the documentation\n(or the commit message) to get the meaning of the output.\n\nAnother option could be to create a new view, say pg_slot_invalidation_causes, that would list\nthem all (cause_id, cause_description) and then keep pg_get_slot_invalidation_cause() returning\nthe cause_id only.\n\nAlso I think we should add a comment in ReplicationSlotInvalidationCause definition (slot.h)\nthat any new enum values (if any) should be added after the ones that are already defined (to\nprovide some consistency across changes in this area if any).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Dec 2023 11:40:03 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
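(A sketch of what such a lookup view could look like; the view name comes from Bertrand's suggestion, while the descriptions shown are illustrative placeholders rather than the patch's wording.)

CREATE VIEW pg_slot_invalidation_causes (cause_id, cause_description) AS
VALUES (0, 'none'),
       (1, 'wal_removed'),
       (2, 'rows_removed'),
       (3, 'wal_level_insufficient');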
{
"msg_contents": "Hi,\n\nOn 12/20/23 10:57 AM, shveta malik wrote:\n> On Wed, Dec 20, 2023 at 12:46 PM Drouvot, Bertrand\n>> Worth to add some coverage in 019_replslot_limit.pl and 035_standby_logical_decoding.pl?\n> \n> I have recently added a test in 019_replslot_limit.pl in v2. \n\nThanks!\n\n> Do you\n> suggest adding in 035_standby_logical_decoding as well? Will it have\n> additional benefits?\n\nThat would cover the remaining invalidation causes but I'm not sure\nthat's worth it as this new function is simple enough after all.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Dec 2023 11:44:49 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 4:10 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 12/20/23 9:50 AM, Amit Kapila wrote:\n> > On Wed, Dec 20, 2023 at 12:46 PM Drouvot, Bertrand\n> > <[email protected]> wrote:\n> >>\n> >\n> >> 4 ===\n> >>\n> >> + * Returns ReplicationSlotInvalidationCause enum value for valid slot_name;\n> >>\n> >> I think it would be more user friendly to return a description instead of an enum value.\n> >>\n> > But it would be tricky to use at a place where we want to know the\n> > enum value as in the case of the sync slots patch where we want to\n> > copy the cause.\n>\n> Yeah right, what about displaying both then? (one field for the enum and one for\n> the description). I feel it's not friendly to ask users to refer to the documentation\n> (or the commit message) to get the meaning of the output.\n>\n> Another option could be to create a new view, say pg_slot_invalidation_causes, that would list\n> them all (cause_id, cause_description) and then keep pg_get_slot_invalidation_cause() returning\n> the cause_id only.\n>\n\nI am not sure if it is really valuable enough to have a separate view\nbut if others also feel that would be a good idea then we can do it. I\nfeel for the current purpose having a function as proposed in the\npatch is good enough, we can always extend it or add a new view\ndependening on if some users really care about it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Dec 2023 16:18:20 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
{
"msg_contents": "Hi,\n\nOn 12/20/23 10:55 AM, shveta malik wrote:\n> On Wed, Dec 20, 2023 at 2:20 PM Amit Kapila <[email protected]> wrote:\n> \n> \n> PFA v2 patch. Addressed below comments:\n> \n> 1) Added test in 019_replslot_limit.pl\n> 2) 'pg_get_slot_invalidation_cause' now returns error if the given\n> slot does not exist\n> 3) Corrected doc and commit msg.\n\nThanks!\n\n+ <literal>3</literal> = wal_level insufficient on the primary server\n\n\".\" is missing at the end (to be consistent with 1 and 2). Same\nin the commit message.\n\n+ * Returns ReplicationSlotInvalidationCause enum value for valid slot_name;\n\nNot sure the sentence should finish with \";\".\n\nAnother Nit is to add a comment in ReplicationSlotInvalidationCause definition (slot.h)\nthat any new enum values (if any) should be added after the ones that are already defined (to\nprovide some consistency across changes in this area if any).\n\nExcept the above Nit(s) the patch LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Dec 2023 12:43:45 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 12:43:45PM +0100, Drouvot, Bertrand wrote:\n> + <literal>3</literal> = wal_level insufficient on the primary server\n> \n> \".\" is missing at the end (to be consistent with 1 and 2). Same\n> in the commit message.\n> \n> + * Returns ReplicationSlotInvalidationCause enum value for valid slot_name;\n> \n> Not sure the sentence should finish with \";\".\n> \n> Another Nit is to add a comment in ReplicationSlotInvalidationCause definition (slot.h)\n> that any new enum values (if any) should be added after the ones that are already defined (to\n> provide some consistency across changes in this area if any).\n> \n> Except the above Nit(s) the patch LGTM.\n\nThis patch has a problem: this design can easily cause the report of\ndata inconsistent with pg_replication_slots because the data returned \nby pg_get_replication_slots() and pg_get_slot_invalidation_cause()\nwould happen across *two* function call contexts, no? So, it seems to\nme that this had better be integrated as an extra column of\npg_get_replication_slots() to ensure the consistency of the data\nreported. And it's critical to get consistent data for monitoring.\n\nNot to mention that the lookup had better happen also while holding\nReplicationSlotControlLock as well, which is also something\nInvalidatePossiblyObsoleteSlot() relies on. v2 ignores that.\n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 12:17:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
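(A rough sketch of the locking Michael refers to; not the patch's code. The helper name is invented and the internal APIs are cited from memory, so treat the details as assumptions.)

#include "postgres.h"
#include "replication/slot.h"
#include "storage/lwlock.h"
#include "storage/spin.h"

/*
 * Sketch only: read a slot's invalidation cause while holding
 * ReplicationSlotControlLock, so the answer cannot change between finding
 * the slot and reading the field.
 */
static ReplicationSlotInvalidationCause
get_invalidation_cause(const char *slot_name)
{
    ReplicationSlotInvalidationCause cause = RS_INVAL_NONE;
    ReplicationSlot *slot;

    LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);

    slot = SearchNamedReplicationSlot(slot_name, false);
    if (slot != NULL)
    {
        SpinLockAcquire(&slot->mutex);
        cause = slot->data.invalidated;
        SpinLockRelease(&slot->mutex);
    }

    LWLockRelease(ReplicationSlotControlLock);

    return cause;
}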
{
"msg_contents": "On Thu, Dec 21, 2023 at 8:47 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Dec 20, 2023 at 12:43:45PM +0100, Drouvot, Bertrand wrote:\n> > + <literal>3</literal> = wal_level insufficient on the primary server\n> >\n> > \".\" is missing at the end (to be consistent with 1 and 2). Same\n> > in the commit message.\n> >\n> > + * Returns ReplicationSlotInvalidationCause enum value for valid slot_name;\n> >\n> > Not sure the sentence should finish with \";\".\n> >\n> > Another Nit is to add a comment in ReplicationSlotInvalidationCause definition (slot.h)\n> > that any new enum values (if any) should be added after the ones that are already defined (to\n> > provide some consistency across changes in this area if any).\n> >\n> > Except the above Nit(s) the patch LGTM.\n>\n> This patch has a problem: this design can easily cause the report of\n> data inconsistent with pg_replication_slots because the data returned\n> by pg_get_replication_slots() and pg_get_slot_invalidation_cause()\n> would happen across *two* function call contexts, no? So, it seems to\n> me that this had better be integrated as an extra column of\n> pg_get_replication_slots() to ensure the consistency of the data\n> reported. And it's critical to get consistent data for monitoring.\n>\n\nIt depends on how one uses the function. For example, if one finds\nthere is a conflict via pg_get_replication_slots() and wants to check\nthe reason for the same via this new function then it would give the\ncorrect answer. Now, if we think it is okay to have two columns\n'conflicting' and 'conflict_reason/conflict_cause' to be returned by\npg_get_replication_slots() then that should serve the purpose as well\nbut one can argue that 'conflicting' is deducible from\n'conflict_reason'.\n\nThe other thing we want to decide is how useful will it be to display\nthis information via pg_replication_slots view considering we already\nhave a boolean for it. Shall we add yet another column in the view and\nif so, it would probably be a string rather than an enum? Now, if we\ndo so, we still want function pg_get_replication_slots() or\npg_get_slot_invalidation_cause() to return an enum value as we want\nthat to be synced/copied to standby.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Dec 2023 09:37:39 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 09:37:39AM +0530, Amit Kapila wrote:\n> It depends on how one uses the function. For example, if one finds\n> there is a conflict via pg_get_replication_slots() and wants to check\n> the reason for the same via this new function then it would give the\n> correct answer.\n\nMy point is that we may not get the \"correct\" answer at all :/\n\nWhat guarantee do you have that between the scan of\npg_get_replication_slots() to get the value of \"conflicting\" and the\ncall of pg_get_slot_invalidation_cause() the slot state will still be\nthe same? A lot could happen between both function calls while the\nrepslot LWLock is not hold.\n\n> Now, if we think it is okay to have two columns\n> 'conflicting' and 'conflict_reason/conflict_cause' to be returned by\n> pg_get_replication_slots() then that should serve the purpose as well\n> but one can argue that 'conflicting' is deducible from\n> 'conflict_reason'.\n\nYeah, you could keep the reason text as NULL when there is no\nconflict, replacing the boolean by the text in the function, and keep\nthe view definition compatible with v16 while adding an extra column.\n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 14:47:49 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 11:18 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Dec 21, 2023 at 09:37:39AM +0530, Amit Kapila wrote:\n> > It depends on how one uses the function. For example, if one finds\n> > there is a conflict via pg_get_replication_slots() and wants to check\n> > the reason for the same via this new function then it would give the\n> > correct answer.\n>\n> My point is that we may not get the \"correct\" answer at all :/\n>\n> What guarantee do you have that between the scan of\n> pg_get_replication_slots() to get the value of \"conflicting\" and the\n> call of pg_get_slot_invalidation_cause() the slot state will still be\n> the same?\n>\n\nYeah, if one uses them independently then there is no such guarantee.\n\n> A lot could happen between both function calls while the\n> repslot LWLock is not hold.\n>\n> > Now, if we think it is okay to have two columns\n> > 'conflicting' and 'conflict_reason/conflict_cause' to be returned by\n> > pg_get_replication_slots() then that should serve the purpose as well\n> > but one can argue that 'conflicting' is deducible from\n> > 'conflict_reason'.\n>\n> Yeah, you could keep the reason text as NULL when there is no\n> conflict, replacing the boolean by the text in the function, and keep\n> the view definition compatible with v16 while adding an extra column.\n>\n\nBut as mentioned we also want the enum value to be exposed in some way\nso that it can be used by the sync slot feature [1] as well,\notherwise, we may need some mappings to convert the text back to an\nenum. I guess if we want to expose via view, then we can return an\nenum value by pg_get_replication_slots() and the view can replace it\nwith text based on the value.\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB5716DAF72265388A2AD424119495A%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Dec 2023 11:53:04 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 11:53:04AM +0530, Amit Kapila wrote:\n> On Thu, Dec 21, 2023 at 11:18 AM Michael Paquier <[email protected]> wrote:\n> Yeah, if one uses them independently then there is no such guarantee.\n\nThis could be possible in the same query as well, still less likely,\nas the contents are volatile.\n\n>> A lot could happen between both function calls while the\n>> repslot LWLock is not hold.\n>>\n>> Yeah, you could keep the reason text as NULL when there is no\n>> conflict, replacing the boolean by the text in the function, and keep\n>> the view definition compatible with v16 while adding an extra column.\n> \n> But as mentioned we also want the enum value to be exposed in some way\n> so that it can be used by the sync slot feature [1] as well,\n> otherwise, we may need some mappings to convert the text back to an\n> enum. I guess if we want to expose via view, then we can return an\n> enum value by pg_get_replication_slots() and the view can replace it\n> with text based on the value.\n\nSure. Something like is OK by me as long as the data is retrieved\nfrom a single scan of the slot data while holding the slot data's\nLWLock.\n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 15:37:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
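A minimal sketch, for orientation only, of the direction agreed above: both the conflicting flag and the invalidation cause are derived from one read of the slot state, taken inside pg_get_replication_slots() while ReplicationSlotControlLock is held. The column indexes (conflicting_col, cause_col) and the surrounding values[]/nulls[] output arrays are assumed placeholders, not taken from any actual patch.

    /* inside pg_get_replication_slots(), while building each output row */
    LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
    for (int i = 0; i < max_replication_slots; i++)
    {
        ReplicationSlot *slot = &ReplicationSlotCtl->replication_slots[i];
        ReplicationSlotInvalidationCause cause;

        if (!slot->in_use)
            continue;

        /* read the slot state we need under the slot's spinlock */
        SpinLockAcquire(&slot->mutex);
        cause = slot->data.invalidated;   /* RS_INVAL_NONE if not invalidated */
        SpinLockRelease(&slot->mutex);

        /*
         * Both datums come from the single value read above, so a row can
         * never claim "conflicting" without also carrying its cause.
         * conflicting_col and cause_col are hypothetical column indexes.
         */
        values[conflicting_col] = BoolGetDatum(cause != RS_INVAL_NONE);
        if (cause != RS_INVAL_NONE)
            values[cause_col] = Int32GetDatum((int32) cause);
        else
            nulls[cause_col] = true;

        /* ... fill the remaining columns and store the tuple ... */
    }
    LWLockRelease(ReplicationSlotControlLock);

Because both output columns derive from the same value read under the spinlock, the consistency problem raised earlier (boolean and cause coming from two separate function calls) cannot occur.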
{
"msg_contents": "On Thu, Dec 21, 2023 at 12:07 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Dec 21, 2023 at 11:53:04AM +0530, Amit Kapila wrote:\n> > On Thu, Dec 21, 2023 at 11:18 AM Michael Paquier <[email protected]> wrote:\n> > Yeah, if one uses them independently then there is no such guarantee.\n>\n> This could be possible in the same query as well, still less likely,\n> as the contents are volatile.\n>\n\nTrue, this is quite obvious but that was not a recommended way to use\nthe function. Anyway, now that we agree to expose it via an existing\nfunction, there is no point in further argument on this.\n\n> >> A lot could happen between both function calls while the\n> >> repslot LWLock is not hold.\n> >>\n> >> Yeah, you could keep the reason text as NULL when there is no\n> >> conflict, replacing the boolean by the text in the function, and keep\n> >> the view definition compatible with v16 while adding an extra column.\n> >\n> > But as mentioned we also want the enum value to be exposed in some way\n> > so that it can be used by the sync slot feature [1] as well,\n> > otherwise, we may need some mappings to convert the text back to an\n> > enum. I guess if we want to expose via view, then we can return an\n> > enum value by pg_get_replication_slots() and the view can replace it\n> > with text based on the value.\n>\n> Sure. Something like is OK by me as long as the data is retrieved\n> from a single scan of the slot data while holding the slot data's\n> LWLock.\n>\n\nOkay, so let's go this way unless someone feels otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Dec 2023 14:59:41 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 2:59 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Dec 21, 2023 at 12:07 PM Michael Paquier <[email protected]> wrote:\n> >\n> > On Thu, Dec 21, 2023 at 11:53:04AM +0530, Amit Kapila wrote:\n> > > On Thu, Dec 21, 2023 at 11:18 AM Michael Paquier <[email protected]> wrote:\n> > > Yeah, if one uses them independently then there is no such guarantee.\n> >\n> > This could be possible in the same query as well, still less likely,\n> > as the contents are volatile.\n> >\n>\n> True, this is quite obvious but that was not a recommended way to use\n> the function. Anyway, now that we agree to expose it via an existing\n> function, there is no point in further argument on this.\n>\n> > >> A lot could happen between both function calls while the\n> > >> repslot LWLock is not hold.\n> > >>\n> > >> Yeah, you could keep the reason text as NULL when there is no\n> > >> conflict, replacing the boolean by the text in the function, and keep\n> > >> the view definition compatible with v16 while adding an extra column.\n> > >\n> > > But as mentioned we also want the enum value to be exposed in some way\n> > > so that it can be used by the sync slot feature [1] as well,\n> > > otherwise, we may need some mappings to convert the text back to an\n> > > enum. I guess if we want to expose via view, then we can return an\n> > > enum value by pg_get_replication_slots() and the view can replace it\n> > > with text based on the value.\n> >\n> > Sure. Something like is OK by me as long as the data is retrieved\n> > from a single scan of the slot data while holding the slot data's\n> > LWLock.\n> >\n>\n> Okay, so let's go this way unless someone feels otherwise.\n\nPlease track the progress in another thread [1] where the patch is posted today.\n\n[1]: https://www.postgresql.org/message-id/ZYOE8IguqTbp-seF%40paquier.xyz\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 27 Dec 2023 15:14:51 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Function to get invalidation cause of a replication slot."
}
] |
[
{
"msg_contents": "I would like to be able to add backtraces to all ERROR logs. This is\nuseful to me, because during postgres or extension development any\nerror that I hit is usually unexpected. This avoids me from having to\nchange backtrace_functions every time I get an error based on the\nfunction name listed in the LOCATION output (added by \"\\set VERBOSITY\nverbose\").\n\nAttached is a trivial patch that starts supporting\nbacktrace_functions='*'. By setting that in postgresql.conf for my dev\nenvironment it starts logging backtraces always.\n\nThe main problem it currently has is that it adds backtraces to all\nLOG level logs too. So probably we want to make backtrace_functions\nonly log backtraces for ERROR and up (or maybe WARNING/NOTICE and up),\nor add a backtrace_functions_level GUC too control this behaviour. The\ndocs of backtrace_functions currently heavily suggest that it should\nonly be logging backtraces for errors, so either we actually start\ndoing that or we should clarify the docs (emphasis mine):\n\n> This parameter contains a comma-separated list of C function\n> names. If an **error** is raised and the name of the internal C function\n> where the **error** happens matches a value in the list, then a\n> backtrace is written to the server log together with the error\n> message. This can be used to debug specific areas of the\n> source code.",
"msg_date": "Wed, 20 Dec 2023 12:23:04 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Support a wildcard in backtrace_functions"
},
{
"msg_contents": "> On 20 Dec 2023, at 12:23, Jelte Fennema-Nio <[email protected]> wrote:\n\n> Attached is a trivial patch that starts supporting\n> backtrace_functions='*'. By setting that in postgresql.conf for my dev\n> environment it starts logging backtraces always.\n\nI happened to implement pretty much the same diff today during a debugging\nsession, and then stumbled across this when searching the archives, so count me\nin for +1 on the concept.\n\n> The main problem it currently has is that it adds backtraces to all\n> LOG level logs too. So probably we want to make backtrace_functions\n> only log backtraces for ERROR and up (or maybe WARNING/NOTICE and up),\n> or add a backtrace_functions_level GUC too control this behaviour.\n\nA wildcard should IMO only apply for ERROR (and higher) so I've hacked that up\nin the attached v2. I was thinking about WARNING as well but opted against it.\n\n> The docs of backtrace_functions currently heavily suggest that it should\n> only be logging backtraces for errors, so either we actually start\n> doing that or we should clarify the docs\n\nI think we should keep the current functionality and instead adjust the docs.\nThis has already been shipped like this, and restricting it now without a clear\nusecase for doing so seems invasive (and someone might very well be using\nthis). 0001 in the attached adjusts this.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 12 Feb 2024 14:14:20 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Mon, 12 Feb 2024 at 14:14, Daniel Gustafsson <[email protected]> wrote:\n> > The main problem it currently has is that it adds backtraces to all\n> > LOG level logs too. So probably we want to make backtrace_functions\n> > only log backtraces for ERROR and up (or maybe WARNING/NOTICE and up),\n> > or add a backtrace_functions_level GUC too control this behaviour.\n>\n> A wildcard should IMO only apply for ERROR (and higher) so I've hacked that up\n> in the attached v2. I was thinking about WARNING as well but opted against it.\n\nFine by me patch looks good. Although I think I'd slightly prefer\nhaving a backtrace_functions_level GUC, so that we can get this same\nbenefit for non wildcard backtrace_functions and so we keep the\nbehaviour between the two consistent.\n\n> I think we should keep the current functionality and instead adjust the docs.\n> This has already been shipped like this, and restricting it now without a clear\n> usecase for doing so seems invasive (and someone might very well be using\n> this). 0001 in the attached adjusts this.\n\nWould a backtrace_functions_level GUC that would default to ERROR be\nacceptable in this case? It's slight behaviour break, but you would be\nable to get the previous behaviour very easily. And honestly wanting\nto get backtraces for non-ERROR log entries seems quite niche to me,\nwhich to me makes it a weird default.\n\n> + If an log entry is raised and the name of the internal C function where\n\ns/an log entry/a log entry\n\n\n",
"msg_date": "Mon, 12 Feb 2024 14:27:42 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On 12.02.24 14:27, Jelte Fennema-Nio wrote:\n> And honestly wanting\n> to get backtraces for non-ERROR log entries seems quite niche to me,\n> which to me makes it a weird default.\n\nI think one reason for this is that non-ERRORs are fairly unique in \ntheir wording, so you don't have to isolate them by function name.\n\n\n",
"msg_date": "Mon, 12 Feb 2024 16:27:45 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Mon, 12 Feb 2024 at 14:27, Jelte Fennema-Nio <[email protected]> wrote:\n> Would a backtrace_functions_level GUC that would default to ERROR be\n> acceptable in this case?\n\nI implemented it this way in the attached patchset.",
"msg_date": "Tue, 27 Feb 2024 18:03:17 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "> On 27 Feb 2024, at 18:03, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> On Mon, 12 Feb 2024 at 14:27, Jelte Fennema-Nio <[email protected]> wrote:\n>> Would a backtrace_functions_level GUC that would default to ERROR be\n>> acceptable in this case?\n> \n> I implemented it this way in the attached patchset.\n\nI'm not excited about adding even more GUCs but maybe it's the least bad option\nhere.\n\n+ If a log entry is raised with a level higher than\n+ <xref linkend=\"guc-backtrace-functions-min-level\"/> and the name of the\nThis should be \"equal to or higher\" right?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 19:04:07 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Wed, 28 Feb 2024 at 19:04, Daniel Gustafsson <[email protected]> wrote:\n> This should be \"equal to or higher\" right?\n\nCorrect, nicely spotted. Fixed that. I also updated the docs for the\nnew backtrace_functions_min_level GUC itself too, as well as creating\na dedicated options array for the GUC. Because when updating its docs\nI realized that none of the existing level arrays matched what we\nwanted.",
"msg_date": "Wed, 28 Feb 2024 19:50:19 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "> On 28 Feb 2024, at 19:50, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> On Wed, 28 Feb 2024 at 19:04, Daniel Gustafsson <[email protected]> wrote:\n>> This should be \"equal to or higher\" right?\n> \n> Correct, nicely spotted. Fixed that. I also updated the docs for the\n> new backtrace_functions_min_level GUC itself too, as well as creating\n> a dedicated options array for the GUC. Because when updating its docs\n> I realized that none of the existing level arrays matched what we\n> wanted.\n\nLooks good, I'm marking this ready for committer for now. I just have a few\nsmall comments:\n\n+ A single <literal>*</literal> character is interpreted as a wildcard and\n+ will cause all errors in the log to contain backtraces.\nThis should mention that it's all error matching the level (or higher) of the\nnewly introduced GUC.\n\n\n+\tgettext_noop(\"Sets the message levels that create backtraces when backtrace_functions is configured\"),\nI think we should add the same \"Each level includes..\" long_desc, and the\nshort_desc should end with period.\n\n\n+ <para>\n+ Backtrace support is not available on all platforms, and the quality\n+ of the backtraces depends on compilation options.\n+ </para>\nI don't think we need to duplicate this para here, having it on\nbacktrace_functions suffice.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 29 Feb 2024 11:12:00 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On 2024-Feb-28, Jelte Fennema-Nio wrote:\n\n> diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c\n> index 699d9d0a241..553e4785520 100644\n> --- a/src/backend/utils/error/elog.c\n> +++ b/src/backend/utils/error/elog.c\n> @@ -843,6 +843,8 @@ matches_backtrace_functions(const char *funcname)\n> \t\tif (*p == '\\0')\t\t\t/* end of backtrace_function_list */\n> \t\t\tbreak;\n> \n> +\t\tif (strcmp(\"*\", p) == 0)\n> +\t\t\treturn true;\n> \t\tif (strcmp(funcname, p) == 0)\n> \t\t\treturn true;\n> \t\tp += strlen(p) + 1;\n\nHmm, so if I write \"foo,*\" this will work but check all function names\nfirst and on the second entry. But if I write \"foo*\" the GUC value will\nbe accepted but match nothing (as will \"*foo\" or \"foo*bar\"). I don't\nlike either of these behaviors. I think we should tighten this up: an\nasterisk should be allowed only if it appears alone in the string\n(short-circuiting check_backtrace_functions before strspn); and let's\nleave the strspn() call alone.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n",
"msg_date": "Thu, 29 Feb 2024 11:35:21 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 4:05 PM Alvaro Herrera <[email protected]> wrote:\n>\n> Hmm, so if I write \"foo,*\" this will work but check all function names\n> first and on the second entry. But if I write \"foo*\" the GUC value will\n> be accepted but match nothing (as will \"*foo\" or \"foo*bar\"). I don't\n> like either of these behaviors. I think we should tighten this up: an\n> asterisk should be allowed only if it appears alone in the string\n> (short-circuiting check_backtrace_functions before strspn); and let's\n> leave the strspn() call alone.\n\n+1 for disallowing *foo or foo* or foo*bar etc. combinations. I think\nwe need to go a bit further and convert backtrace_functions of type\nGUC_LIST_INPUT so that check_backtrace_functions can just use\nSplitIdentifierString to parse the list of identifiers. Then, the\nstrspn can just be something like below for each token:\n\n validlen = strspn(*tok,\n \"0123456789_\"\n \"abcdefghijklmnopqrstuvwxyz\"\n \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\");\n\nDoes anyone see a problem with it?\n\nFWIW, I've recently noticed for my work on\nhttps://commitfest.postgresql.org/47/2863/ that there isn't any test\ncovering all the backtrace related code - backtrace_functions GUC,\nbacktrace_on_internal_error GUC, set_backtrace(), backtrace(),\nbacktrace_symbols(). I came up with a test module covering these areas\nhttps://commitfest.postgresql.org/47/4823/. I could make the TAP tests\npass on all the CF bot animals. Interestingly, the new code that gets\nadded for this thread can also be covered with it. Any thoughts are\nwelcome.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 6 Mar 2024 00:18:14 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On 2024-Mar-06, Bharath Rupireddy wrote:\n\n> +1 for disallowing *foo or foo* or foo*bar etc. combinations.\n\nCool.\n\n> I think we need to go a bit further and convert backtrace_functions of\n> type GUC_LIST_INPUT so that check_backtrace_functions can just use\n> SplitIdentifierString to parse the list of identifiers. Then, the\n> strspn can just be something like below for each token:\n> \n> validlen = strspn(*tok,\n> \"0123456789_\"\n> \"abcdefghijklmnopqrstuvwxyz\"\n> \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\");\n> \n> Does anyone see a problem with it?\n\nIIRC the reason it's coded as it is, is so that we have a single palloc\nchunk of memory to free when the value changes; we purposefully stayed\naway from SplitIdentifierString and the like. What problem do you see\nwith the idea I proposed? That was:\n\n> On Thu, Feb 29, 2024 at 4:05 PM Alvaro Herrera <[email protected]> wrote:\n\n> > I think we should tighten this up: an asterisk should be allowed\n> > only if it appears alone in the string (short-circuiting\n> > check_backtrace_functions before strspn); and let's leave the\n> > strspn() call alone.\n\nThat means, just add something like this at the top of\ncheck_backtrace_functions and don't do anything to this function\notherwise (untested code):\n\n\tif (newval[0] == '*' && newval[1] == '\\0')\n\t{\n\t\tsomeval = guc_malloc(ERROR, 2);\n\t\tif (someval == NULL)\n\t\t\treturn false;\n\t\tsomeval[0] = '*';\n\t\tsomeval[1] = '\\0';\n\t\t*extra = someval;\n\t\treturn true;\n\t}\n\n(Not sure if a second trailing \\0 is necessary.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nVoy a acabar con todos los humanos / con los humanos yo acabaré\nvoy a acabar con todos (bis) / con todos los humanos acabaré ¡acabaré! (Bender)\n\n\n",
"msg_date": "Tue, 5 Mar 2024 20:11:23 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 12:41 AM Alvaro Herrera <[email protected]> wrote:\n>\n> > I think we need to go a bit further and convert backtrace_functions of\n> > type GUC_LIST_INPUT so that check_backtrace_functions can just use\n> > SplitIdentifierString to parse the list of identifiers. Then, the\n> > strspn can just be something like below for each token:\n> >\n> > validlen = strspn(*tok,\n> > \"0123456789_\"\n> > \"abcdefghijklmnopqrstuvwxyz\"\n> > \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\");\n> >\n> > Does anyone see a problem with it?\n>\n> IIRC the reason it's coded as it is, is so that we have a single palloc\n> chunk of memory to free when the value changes; we purposefully stayed\n> away from SplitIdentifierString and the like.\n\nWhy do we even need to prepare another list backtrace_function_list\nfrom the parsed identifiers? Why can't we just do something like\nv4-0003? Existing logic looks a bit complicated to me personally.\n\nI still don't understand why we can't just turn backtrace_functions to\nGUC_LIST_INPUT and use SplitIdentifierString? I see a couple of\nadvantages with this approach:\n1. It simplifies the backtrace_functions GUC related code a lot.\n2. We don't need assign_backtrace_functions() and a separate variable\nbacktrace_function_list, we can just rely on the GUC value\nbacktrace_functions.\n3. All we do now in check_backtrace_functions() is just parse the user\nentered backtrace_functions value, and quickly exit if we have found\n'*'.\n4. In matches_backtrace_functions(), we iterate over the list as we\nalready do right now.\n\nWith the v4-0003, all of the below test cases work:\n\nALTER SYSTEM SET backtrace_functions = 'pg_terminate_backend,\npg_create_restore_point';\nSELECT pg_reload_conf();\nSHOW backtrace_functions;\n\n-- Must see backtrace\nSELECT pg_create_restore_point(repeat('A', 1024));\n\n-- Must see backtrace\nSELECT pg_terminate_backend(1234, -1);\n\nALTER SYSTEM SET backtrace_functions = '*, pg_create_restore_point';\nSELECT pg_reload_conf();\nSHOW backtrace_functions;\n\n-- Must see backtrace as * is specified\nSELECT pg_terminate_backend(1234, -1);\n\n-- Must see an error as * is specified in between the identifier name\nALTER SYSTEM SET backtrace_functions = 'pg*_create_restore_point';\nERROR: invalid value for parameter \"backtrace_functions\":\n\"pg*_create_restore_point\"\nDETAIL: Invalid character\n\n> What problem do you see with the idea I proposed? That was:\n>\n> > On Thu, Feb 29, 2024 at 4:05 PM Alvaro Herrera <[email protected]> wrote:\n>\n> > > I think we should tighten this up: an asterisk should be allowed\n> > > only if it appears alone in the string (short-circuiting\n> > > check_backtrace_functions before strspn); and let's leave the\n> > > strspn() call alone.\n>\n> That means, just add something like this at the top of\n> check_backtrace_functions and don't do anything to this function\n> otherwise (untested code):\n>\n> if (newval[0] == '*' && newval[1] == '\\0')\n> {\n> someval = guc_malloc(ERROR, 2);\n> if (someval == NULL)\n> return false;\n> someval[0] = '*';\n> someval[1] = '\\0';\n> *extra = someval;\n> return true;\n> }\n\nThis works only if '* 'is specified as the only one character in\nbacktrace_functions = '*', right? If yes, what if someone sets\nbacktrace_functions = 'foo, bar, *, baz'?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 8 Mar 2024 11:40:00 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On 2024-Mar-08, Bharath Rupireddy wrote:\n\n> This works only if '* 'is specified as the only one character in\n> backtrace_functions = '*', right? If yes, what if someone sets\n> backtrace_functions = 'foo, bar, *, baz'?\n\nIt throws an error, as expected. This is a useless waste of resources:\nchecking for \"foo\" and \"bar\" is pointless, since the * is going to give\na positive match anyway. And the \"baz\" is a waste of memory which is\nnever going to be checked.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I love the Postgres community. It's all about doing things _properly_. :-)\"\n(David Garamond)\n\n\n",
"msg_date": "Fri, 8 Mar 2024 10:58:57 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, 8 Mar 2024 at 10:59, Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Mar-08, Bharath Rupireddy wrote:\n>\n> > This works only if '* 'is specified as the only one character in\n> > backtrace_functions = '*', right? If yes, what if someone sets\n> > backtrace_functions = 'foo, bar, *, baz'?\n>\n> It throws an error, as expected. This is a useless waste of resources:\n> checking for \"foo\" and \"bar\" is pointless, since the * is going to give\n> a positive match anyway. And the \"baz\" is a waste of memory which is\n> never going to be checked.\n\nMakes sense. Attached is a new patchset that implements it that way.\nI've not included Bharath his 0003 patch, since it's a much bigger\nchange than the others, and thus might need some more discussion.",
"msg_date": "Fri, 8 Mar 2024 12:25:28 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "> On 8 Mar 2024, at 12:25, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> On Fri, 8 Mar 2024 at 10:59, Alvaro Herrera <[email protected]> wrote:\n>> \n>> On 2024-Mar-08, Bharath Rupireddy wrote:\n>> \n>>> This works only if '* 'is specified as the only one character in\n>>> backtrace_functions = '*', right? If yes, what if someone sets\n>>> backtrace_functions = 'foo, bar, *, baz'?\n>> \n>> It throws an error, as expected. This is a useless waste of resources:\n>> checking for \"foo\" and \"bar\" is pointless, since the * is going to give\n>> a positive match anyway. And the \"baz\" is a waste of memory which is\n>> never going to be checked.\n> \n> Makes sense. Attached is a new patchset that implements it that way.\n\nThis version address the concerns raised by Alvaro, and even simplifies the\ncode over earlier revisions. My documentation comments from upthread still\nstands, but other than those this version LGTM.\n\n> I've not included Bharath his 0003 patch, since it's a much bigger\n> change than the others, and thus might need some more discussion.\n\nAgreed.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 8 Mar 2024 14:41:52 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, Mar 8, 2024 at 7:12 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 8 Mar 2024, at 12:25, Jelte Fennema-Nio <[email protected]> wrote:\n> >\n> > On Fri, 8 Mar 2024 at 10:59, Alvaro Herrera <[email protected]> wrote:\n> >>\n> >> On 2024-Mar-08, Bharath Rupireddy wrote:\n> >>\n> >>> This works only if '* 'is specified as the only one character in\n> >>> backtrace_functions = '*', right? If yes, what if someone sets\n> >>> backtrace_functions = 'foo, bar, *, baz'?\n> >>\n> >> It throws an error, as expected. This is a useless waste of resources:\n> >> checking for \"foo\" and \"bar\" is pointless, since the * is going to give\n> >> a positive match anyway. And the \"baz\" is a waste of memory which is\n> >> never going to be checked.\n> >\n> > Makes sense. Attached is a new patchset that implements it that way.\n>\n> This version address the concerns raised by Alvaro, and even simplifies the\n> code over earlier revisions. My documentation comments from upthread still\n> stands, but other than those this version LGTM.\n\nSo, to get backtraces of all functions at\nbacktrace_functions_min_level level, one has to specify\nbacktrace_functions = '*'; combining it with function names is not\nallowed. This looks cleaner.\n\npostgres=# ALTER SYSTEM SET backtrace_functions = '*, pg_create_restore_point';\nERROR: invalid value for parameter \"backtrace_functions\": \"*,\npg_create_restore_point\"\nDETAIL: Invalid character\n\nI have one comment on 0002, otherwise all looks good.\n\n+ <para>\n+ A single <literal>*</literal> character can be used instead of a list\n+ of C functions. This <literal>*</literal> is interpreted as a wildcard\n+ and will cause all errors in the log to contain backtraces.\n+ </para>\n\nIt's not always the ERRORs for which backtraces get logged, it really\ndepends on the new GUC backtrace_functions_min_level. If my\nunderstanding is right, can we specify that in the above note?\n\n> > I've not included Bharath his 0003 patch, since it's a much bigger\n> > change than the others, and thus might need some more discussion.\n\n+1. I'll see if I can start a new thread for this.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 8 Mar 2024 19:31:11 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "> On 8 Mar 2024, at 15:01, Bharath Rupireddy <[email protected]> wrote:\n\n> So, to get backtraces of all functions at\n> backtrace_functions_min_level level, one has to specify\n> backtrace_functions = '*'; combining it with function names is not\n> allowed. This looks cleaner.\n> \n> postgres=# ALTER SYSTEM SET backtrace_functions = '*, pg_create_restore_point';\n> ERROR: invalid value for parameter \"backtrace_functions\": \"*,\n> pg_create_restore_point\"\n> DETAIL: Invalid character\n\nIf we want to be extra helpful here we could add something like the below to\ngive an errhint when a wildcard was found. Also, the errdetail should read\nlike a full sentence so it should be slightly expanded anyways.\n\ndiff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c\nindex ca621ea3ff..7bc655ecd2 100644\n--- a/src/backend/utils/error/elog.c\n+++ b/src/backend/utils/error/elog.c\n@@ -2151,7 +2151,9 @@ check_backtrace_functions(char **newval, void **extra, GucSource source)\n \", \\n\\t\");\n if (validlen != newvallen)\n {\n- GUC_check_errdetail(\"Invalid character\");\n+ GUC_check_errdetail(\"Invalid character in function name.\");\n+ if ((*newval)[validlen] == '*')\n+ GUC_check_errhint(\"For wildcard matching, use a single \\\"*\\\" without any other function names.\");\n return false;\n }\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 8 Mar 2024 15:22:34 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On 08.03.24 12:25, Jelte Fennema-Nio wrote:\n> On Fri, 8 Mar 2024 at 10:59, Alvaro Herrera <[email protected]> wrote:\n>>\n>> On 2024-Mar-08, Bharath Rupireddy wrote:\n>>\n>>> This works only if '* 'is specified as the only one character in\n>>> backtrace_functions = '*', right? If yes, what if someone sets\n>>> backtrace_functions = 'foo, bar, *, baz'?\n>>\n>> It throws an error, as expected. This is a useless waste of resources:\n>> checking for \"foo\" and \"bar\" is pointless, since the * is going to give\n>> a positive match anyway. And the \"baz\" is a waste of memory which is\n>> never going to be checked.\n> \n> Makes sense. Attached is a new patchset that implements it that way.\n> I've not included Bharath his 0003 patch, since it's a much bigger\n> change than the others, and thus might need some more discussion.\n\nWhat is the relationship of these changes with the recently added \nbacktrace_on_internal_error? We had similar discussions there, I feel \nlike we are doing similar things here but slightly differently. Like, \nshouldn't backtrace_functions_min_level also affect \nbacktrace_on_internal_error? Don't you really just want \nbacktrace_on_any_error? You are sneaking that in through the backdoor \nvia backtrace_functions. Can we somehow combine all these use cases \nmore elegantly? backtrace_on_error = {all|internal|none}?\n\nBtw., your code/documentation sometimes writes \"stack trace\". Let's \nstick to backtrace for consistency.\n\n\n\n",
"msg_date": "Fri, 8 Mar 2024 15:51:39 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, 8 Mar 2024 at 14:42, Daniel Gustafsson <[email protected]> wrote:\n> My documentation comments from upthread still\n> stands, but other than those this version LGTM.\n\nAh yeah, I forgot about those. Fixed now.",
"msg_date": "Fri, 8 Mar 2024 16:42:34 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, 8 Mar 2024 at 15:51, Peter Eisentraut <[email protected]> wrote:\n> What is the relationship of these changes with the recently added\n> backtrace_on_internal_error?\n\nI think that's a reasonable question. And the follow up ones too.\n\nI think it all depends on how close we consider\nbacktrace_on_internal_error and backtrace_functions. While they\nobviously have similar functionality, I feel like\nbacktrace_on_internal_error is probably a function that we'd want to\nturn on by default in the future. While backtrace_functions seems like\nit's mostly useful for developers. (i.e. the current grouping of\nbacktrace_on_internal_error under DEVELOPER_OPTIONS seems wrong to me)\n\n> shouldn't backtrace_functions_min_level also affect\n> backtrace_on_internal_error?\n\nI guess that depends on the default behaviour that we want. Would we\nwant warnings with ERRCODE_INTERNAL_ERROR to be backtraced by default\nor not. There is at least one example of such a warning in the\ncodebase:\n\n ereport(WARNING,\n errcode(ERRCODE_INTERNAL_ERROR),\n errmsg_internal(\"could not parse XML declaration in stored value\"),\n errdetail_for_xml_code(res_code));\n\n> Btw., your code/documentation sometimes writes \"stack trace\". Let's\n> stick to backtrace for consistency.\n\nFixed that in the latest patset in the email I sent right before this one.\n\n\n",
"msg_date": "Fri, 8 Mar 2024 16:55:05 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, Mar 8, 2024 at 9:25 PM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Fri, 8 Mar 2024 at 15:51, Peter Eisentraut <[email protected]> wrote:\n> > What is the relationship of these changes with the recently added\n> > backtrace_on_internal_error?\n>\n> I think that's a reasonable question. And the follow up ones too.\n>\n> I think it all depends on how close we consider\n> backtrace_on_internal_error and backtrace_functions. While they\n> obviously have similar functionality, I feel like\n> backtrace_on_internal_error is probably a function that we'd want to\n> turn on by default in the future.\n\nHm, we may not want backtrace_on_internal_error to be on by default.\nAFAICS, all ERRCODE_INTERNAL_ERROR are identifiable with the error\nmessage, so it's sort of easy for one to track down the cause of it\nwithout actually needing to log backtrace by default.\n\nOn Fri, Mar 8, 2024 at 8:21 PM Peter Eisentraut <[email protected]> wrote:\n>\n> What is the relationship of these changes with the recently added\n> backtrace_on_internal_error? We had similar discussions there, I feel\n> like we are doing similar things here but slightly differently. Like,\n> shouldn't backtrace_functions_min_level also affect\n> backtrace_on_internal_error? Don't you really just want\n> backtrace_on_any_error? You are sneaking that in through the backdoor\n> via backtrace_functions. Can we somehow combine all these use cases\n> more elegantly? backtrace_on_error = {all|internal|none}?\n\nI see a merit in Peter's point. I like the idea of\nbacktrace_functions_min_level controlling backtrace_on_internal_error\ntoo. Less GUCs for similar functionality is always a good idea IMHO.\nHere's what I think:\n\n1. Rename the new GUC backtrace_functions_min_level to backtrace_min_level.\n2. Add 'internal' to backtrace_min_level_options enum\n+static const struct config_enum_entry backtrace_functions_level_options[] = {\n+ {\"internal\", INTERNAL, false},\n+ {\"debug5\", DEBUG5, false},\n+ {\"debug4\", DEBUG4, false},\n3. Remove backtrace_on_internal_error GUC which is now effectively\ncovered by backtrace_min_level = 'internal';\n\nDoes anyone see a problem with it?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 8 Mar 2024 21:47:26 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, 8 Mar 2024 at 17:17, Bharath Rupireddy\n<[email protected]> wrote:\n> Hm, we may not want backtrace_on_internal_error to be on by default.\n> AFAICS, all ERRCODE_INTERNAL_ERROR are identifiable with the error\n> message, so it's sort of easy for one to track down the cause of it\n> without actually needing to log backtrace by default.\n\nWhile maybe all messages uniquely identify the exact error, these\nerrors usually originate somewhere deep down the call stack in a\nfunction that is called from many other places. Having the full stack\ntrace can thus greatly help us to find what caused this specific error\nto occur. I think that would be quite useful to enable by default, if\nonly because it would make many bug reports much more actionable.\n\n> 1. Rename the new GUC backtrace_functions_min_level to backtrace_min_level.\n> 2. Add 'internal' to backtrace_min_level_options enum\n> +static const struct config_enum_entry backtrace_functions_level_options[] = {\n> + {\"internal\", INTERNAL, false},\n> + {\"debug5\", DEBUG5, false},\n> + {\"debug4\", DEBUG4, false},\n> 3. Remove backtrace_on_internal_error GUC which is now effectively\n> covered by backtrace_min_level = 'internal';\n>\n> Does anyone see a problem with it?\n\nHonestly, it seems a bit confusing to me to add INTERNAL as a level,\nsince it's an error code not log level. Also merging it in this way,\nbrings up certain questions:\n1. How do you get the current backtrace_on_internal_error=true\nbehaviour? Would you need to set both backtrace_functions='*' and\nbacktrace_min_level=INTERNAL?\n2. What is the default value for backtrace_min_level?\n3. You still wouldn't be able to limit INTERNAL errors to FATAL level\n\nI personally think having three GUCs in this patchset make sense,\nespecially since I think it would be good to turn\nbacktrace_on_internal_error on by default. The only additional change\nthat I think might be worth making is to make\nbacktrace_on_internal_error take a level argument, so that you could\nconfigure postgres to only add stack traces to FATAL internal errors.\n\n(attached is a patch that should fix the CI issue by adding\nGUC_NOT_IN_SAMPLE backtrace_functions_min_level)",
"msg_date": "Mon, 11 Mar 2024 10:52:59 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On 08.03.24 16:55, Jelte Fennema-Nio wrote:\n> On Fri, 8 Mar 2024 at 15:51, Peter Eisentraut <[email protected]> wrote:\n>> What is the relationship of these changes with the recently added\n>> backtrace_on_internal_error?\n> \n> I think that's a reasonable question. And the follow up ones too.\n> \n> I think it all depends on how close we consider\n> backtrace_on_internal_error and backtrace_functions. While they\n> obviously have similar functionality, I feel like\n> backtrace_on_internal_error is probably a function that we'd want to\n> turn on by default in the future. While backtrace_functions seems like\n> it's mostly useful for developers. (i.e. the current grouping of\n> backtrace_on_internal_error under DEVELOPER_OPTIONS seems wrong to me)\n\nHence the idea\n\n backtrace_on_error = {all|internal|none}\n\nwhich could default to 'internal'.\n\n\n\n",
"msg_date": "Wed, 13 Mar 2024 15:20:18 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 7:50 PM Peter Eisentraut <[email protected]> wrote:\n>\n> > I think it all depends on how close we consider\n> > backtrace_on_internal_error and backtrace_functions. While they\n> > obviously have similar functionality, I feel like\n> > backtrace_on_internal_error is probably a function that we'd want to\n> > turn on by default in the future. While backtrace_functions seems like\n> > it's mostly useful for developers. (i.e. the current grouping of\n> > backtrace_on_internal_error under DEVELOPER_OPTIONS seems wrong to me)\n>\n> Hence the idea\n>\n> backtrace_on_error = {all|internal|none}\n>\n> which could default to 'internal'.\n\nSo, are you suggesting to just have backtrace_on_error =\n{all|internal|none} leaving backtrace_functions_min_level aside?\n\nIn that case, I'd like to understand how backtrace_on_error and\nbacktrace_functions interact with each other? Does one need to set\nbacktrace_on_error = all to get backtrace of functions specified in\nbacktrace_functions?\n\nWhat must be the behaviour of backtrace_on_error = all?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Mar 2024 20:44:50 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Wed, 13 Mar 2024 at 15:20, Peter Eisentraut <[email protected]> wrote:\n> Hence the idea\n>\n> backtrace_on_error = {all|internal|none}\n>\n> which could default to 'internal'.\n\nI think one use-case that I'd personally at least would like to see\ncovered is being able to get backtraces on all warnings. How would\nthat be done with this setting?\n\nbacktrace_on_error = all\nbacktrace_min_level = warning\n\nIn that case backtrace_on_error seems like a weird name, since it can\ninclude backtraces for warnings if you change backtrace_min_level. How\nabout the following aproach. It still uses 3 GUCs, but they now all\nwork together. There's one entry point and two additional filters\n(level and function name)\n\n# Says what log entries to log backtraces for\nlog_backtrace = {all|internal|none} (default: internal)\n\n# Excludes log entries from log_include_backtrace by level\nbacktrace_min_level = {debug4|...|fatal} (default: error)\n\n# Excludes log entries from log_include_backtrace if function name\n# does not match list, but empty string disables this filter (thus\n# logging for all functions)\nbacktrace_functions = {...} (default: '')\n\n\nPS. Other naming option for log_backtrace could be log_include_backtrace\n\n\n",
"msg_date": "Wed, 13 Mar 2024 16:32:28 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Wed, 13 Mar 2024 at 16:32, Jelte Fennema-Nio <[email protected]> wrote:\n> How\n> about the following aproach. It still uses 3 GUCs, but they now all\n> work together. There's one entry point and two additional filters\n> (level and function name)\n>\n> # Says what log entries to log backtraces for\n> log_backtrace = {all|internal|none} (default: internal)\n>\n> # Excludes log entries from log_include_backtrace by level\n> backtrace_min_level = {debug4|...|fatal} (default: error)\n>\n> # Excludes log entries from log_include_backtrace if function name\n> # does not match list, but empty string disables this filter (thus\n> # logging for all functions)\n> backtrace_functions = {...} (default: '')\n\nAttached is a patch that implements this. Since the more I think about\nit, the more I like this approach.",
"msg_date": "Thu, 21 Mar 2024 15:41:44 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
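For orientation, the interaction of the three proposed GUCs could be expressed as a single predicate along these lines; the enum constants, variable names, and the helper's signature are assumptions based on the proposal quoted above, not necessarily what the attached patch uses.

    static bool
    should_emit_backtrace(ErrorData *edata, const char *funcname)
    {
        /* entry point: which log entries are eligible at all */
        if (log_backtrace == LOGBACKTRACE_NONE)
            return false;
        if (log_backtrace == LOGBACKTRACE_INTERNAL &&
            edata->sqlerrcode != ERRCODE_INTERNAL_ERROR)
            return false;

        /* filter 1: drop entries below the configured minimum level */
        if (edata->elevel < backtrace_min_level)
            return false;

        /* filter 2: function-name list; an empty list disables this filter */
        return matches_backtrace_functions(funcname);
    }

Under the proposal's suggested defaults (log_backtrace = internal, backtrace_min_level = error, empty backtrace_functions), this reduces to attaching a backtrace to internal errors at ERROR or above.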
{
"msg_contents": "On Thu, 21 Mar 2024 at 15:41, Jelte Fennema-Nio <[email protected]> wrote:\n> Attached is a patch that implements this. Since the more I think about\n> it, the more I like this approach.\n\nI now added a 3rd patch to this patchset which changes the\nlog_backtrace default to \"internal\", because it seems quite useful to\nme if user error reports of internal errors included a backtrace.",
"msg_date": "Fri, 22 Mar 2024 11:09:34 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Thu, Mar 21, 2024 at 8:11 PM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> Attached is a patch that implements this. Since the more I think about\n> it, the more I like this approach.\n\nThanks. Overall the design looks good. log_backtrace is the entry\npoint for one to control if a backtrace needs to be emitted at all.\nbacktrace_min_level controls at what elevel the backtraces need to be\nemitted.\n\nIf one wants to get backtraces for all internal ERRORs, then\nlog_backtrace = 'internal' and backtrace_min_level = 'error' must be\nset. If backtrace_min_level = 'panic', then backtraces won't get\nlogged for internal ERRORs. But, this is not the case right now, one\ncan just set backtrace_on_internal_error = 'on' to get backtraces for\nall internal ERROR/FATAL or whatever just the errcode has to be\nERRCODE_INTERNAL_ERROR. This is one change of behaviour and looks fine\nto me.\n\nIf one wants to get backtrace from a function for all elog/ereport\ncalls, then log_backtrace = either 'internal' or 'all',\nbacktrace_min_level = 'debug5' and backtrace_functions =\n'<function_name>' must be set. But, right now, one can just set\nbacktrace_functions = '<function_name>' to get backtrace from the\nfunctions for any of elog/ereport calls.\n\nA few comments:\n\n1.\n@@ -832,6 +849,9 @@ matches_backtrace_functions(const char *funcname)\n {\n const char *p;\n\n+ if (!backtrace_functions || backtrace_functions[0] == '\\0')\n+ return true;\n+\n\nShouldn't this be returning false to not log set backtrace when\nbacktrace_functions is not set? Am I missing something here?\n\n2.\n+ {\n+ {\"log_backtrace\", PGC_SUSET, LOGGING_WHAT,\n+ gettext_noop(\"Sets if logs should include a backtrace.\"),\n+ NULL\n\nIMV, log_backtrace, backtrace_min_level and backtrace_functions are\ninterlinked, so keeping all of them as DEVELOPER_OPTIONS looks fine to\nme than having log_backtrace at just LOGGING_WHAT kind. Also, we must\nalso mark log_backtrace as GUC_NOT_IN_SAMPLE.\n\n3. I think it's worth adding a few examples to get backtraces in docs.\nFor instance, what to set to get backtraces of all internal errors and\nwhat to set to get backtraces of all ERRORs coming from a specific\nfunction etc.\n\n4. I see the support for wildcard in backtrace_functions is missing.\nIs it intentionally left out? If not, can you make it part of 0003\npatch?\n\n5. The amount of backtraces generated is going to be too huge when\nsetting log_backtrace = 'all' and backtrace_min_level = 'debug5'. With\nthis setting installcheck generated 12MB worth of log and the test\ntook about 55 seconds (as opposed to 48 seconds without it) The point\nis if these settings are misused, it can easily slow down the entire\nsystem and fill up disk space leading to crashes eventually. This\nmakes a strong case for marking log_backtrace a developer only\nfunction.\n\n6. In continuation to comment #5, does anyone need backtrace for\nelevels like debugX and LOG etc.? What's the use of the backtrace in\nsuch cases?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 15:44:13 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, 22 Mar 2024 at 11:14, Bharath Rupireddy\n<[email protected]> wrote:\n> A few comments:\n>\n> 1.\n> @@ -832,6 +849,9 @@ matches_backtrace_functions(const char *funcname)\n> {\n> const char *p;\n>\n> + if (!backtrace_functions || backtrace_functions[0] == '\\0')\n> + return true;\n> +\n>\n> Shouldn't this be returning false to not log set backtrace when\n> backtrace_functions is not set? Am I missing something here?\n\nEmpty string is considered the new wildcard, i.e. backtrace_functions\nfiltering is not enabled if it is empty. This seemed reasonable to me\nsince you should now disable backtraces by using log_backtrace=none,\nhaving backtrace_functions='' mean the same thing seems rather\nuseless. I also documented this in the updated docs.\n\n> 2.\n> + {\n> + {\"log_backtrace\", PGC_SUSET, LOGGING_WHAT,\n> + gettext_noop(\"Sets if logs should include a backtrace.\"),\n> + NULL\n>\n> IMV, log_backtrace, backtrace_min_level and backtrace_functions are\n> interlinked, so keeping all of them as DEVELOPER_OPTIONS looks fine to\n> me than having log_backtrace at just LOGGING_WHAT kind. Also, we must\n> also mark log_backtrace as GUC_NOT_IN_SAMPLE.\n\nI agree they are linked, but I don't think it's just useful for\ndevelopers to be able to set log_backtrace to internal (even if we\nchoose not to make \"internal\" the default).\n\n> 3. I think it's worth adding a few examples to get backtraces in docs.\n> For instance, what to set to get backtraces of all internal errors and\n> what to set to get backtraces of all ERRORs coming from a specific\n> function etc.\n\nGood idea, I'll try to add those later. For now your specific cases would be:\nlog_backtrace = 'internal' (default in 0003)\nbacktrace_functions = '' (default)\nbacktrace_min_level = 'ERROR' (default)\n\nand\n\nlog_backtrace = 'all'\nbacktrace_functions = 'someFunc'\nbacktrace_min_level = 'ERROR' (default)\n\n> 4. I see the support for wildcard in backtrace_functions is missing.\n> Is it intentionally left out? If not, can you make it part of 0003\n> patch?\n\nYes it's intentional, see answer on 1.\n\n> 5. The amount of backtraces generated is going to be too huge when\n> setting log_backtrace = 'all' and backtrace_min_level = 'debug5'. With\n> this setting installcheck generated 12MB worth of log and the test\n> took about 55 seconds (as opposed to 48 seconds without it) The point\n> is if these settings are misused, it can easily slow down the entire\n> system and fill up disk space leading to crashes eventually. This\n> makes a strong case for marking log_backtrace a developer only\n> function.\n\nI think the same argument can be made for many other GUCs that are not\nmarked as developer options (e.g. log_min_messages). Normally we\n\"protect\" such options by using PGC_SUSET. DEVELOPER_OPTIONS is really\nonly meant for options that are only useful for developers\n\n> 6. In continuation to comment #5, does anyone need backtrace for\n> elevels like debugX and LOG etc.? What's the use of the backtrace in\n> such cases?\n\nI think at least WARNING and NOTICE could be useful in practice, but I\nagree LOG and DEBUGX seem kinda useless. But it seems kinda strange to\nnot have them in the list, especially given it is pretty much no\neffort to support them too.\n\n\n",
"msg_date": "Fri, 22 Mar 2024 15:39:14 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, 22 Mar 2024 at 11:09, Jelte Fennema-Nio <[email protected]> wrote:\n> On Thu, 21 Mar 2024 at 15:41, Jelte Fennema-Nio <[email protected]> wrote:\n> > Attached is a patch that implements this. Since the more I think about\n> > it, the more I like this approach.\n>\n> I now added a 3rd patch to this patchset which changes the\n> log_backtrace default to \"internal\", because it seems quite useful to\n> me if user error reports of internal errors included a backtrace.\n\nI think patch 0002 should be considered an Open Item for PG17. Since\nit's proposing changing the name of the newly introduced\nbacktrace_on_internal_error GUC. If we want to change it in this way,\nwe should do it before the release and preferably before the beta.\n\nI added it to the Open Items list[1] so we don't forget to at least\ndecide on this.\n\n[1]: https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items\n\n\n",
"msg_date": "Wed, 10 Apr 2024 11:07:00 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 11:07:00AM +0200, Jelte Fennema-Nio wrote:\n> I think patch 0002 should be considered an Open Item for PG17. Since\n> it's proposing changing the name of the newly introduced\n> backtrace_on_internal_error GUC. If we want to change it in this way,\n> we should do it before the release and preferably before the beta.\n\nIndeed, it makes little sense to redesign this at the beginning of v18\nif we're unhappy with the current outcome of v17. So tweaking that is\nworth considering at this stage.\n\nlog_backtrace speaks a bit more to me as a name for this stuff because\nit logs a backtrace. Now, there is consistency on HEAD as well\nbecause these GUCs are all prefixed with \"backtrace_\". Would\nsomething like a backtrace_mode where we have an enum rather than a\nboolean be better? One thing would be to redesign the existing GUC as\nhaving two values on HEAD as of:\n- \"none\", to log nothing.\n- \"internal\", to log backtraces for internal errors.\n\nThen this could be extended with more modes, to discuss in future\nreleases as new features.\n\nWhat you are suggesting with backtrace_min_level is an entirely new\nfeature. Perhaps using an extra GUC to control the interactions of\nthe modes that can be assigned to the primary GUC \"log_backtrace\" in\nyour 0002 is better, but all that sounds like v18 material at this\nstage. The code that resolves the interactions between the existing\nGUC and the new \"level\" GUC does not use LOGBACKTRACE_ALL. Perhaps it\nshould use a case/switch.\n\n+ gettext_noop(\"Each level includes all the levels that follow it. The later\"\n+ \" the level, the fewer backtraces are created.\"),\n\n> I added it to the Open Items list[1] so we don't forget to at least\n> decide on this.\n> \n> [1]: https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items\n\nThanks.\n--\nMichael",
"msg_date": "Fri, 12 Apr 2024 09:36:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
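To make the shape of that proposal concrete: a two-valued enum GUC in the backend is normally declared with a config_enum_entry table that maps the user-visible strings onto an internal enum. The sketch below only illustrates that pattern under the names floated in the message above (backtrace_mode, "none", "internal"); it is not taken from the attached patch.

#include "utils/guc.h"

/* Illustration only: a two-valued backtrace_mode enum GUC. */
typedef enum
{
	BACKTRACE_MODE_NONE,		/* log no backtraces */
	BACKTRACE_MODE_INTERNAL		/* backtraces for internal (XX000) errors */
} BacktraceMode;

static const struct config_enum_entry backtrace_mode_options[] = {
	{"none", BACKTRACE_MODE_NONE, false},
	{"internal", BACKTRACE_MODE_INTERNAL, false},
	{NULL, 0, false}
};

/*
 * The error-reporting code would then test something like
 *   backtrace_mode == BACKTRACE_MODE_INTERNAL &&
 *   edata->sqlerrcode == ERRCODE_INTERNAL_ERROR
 * before attaching a backtrace, instead of checking a boolean GUC,
 * which leaves room for additional modes in later releases.
 */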
{
"msg_contents": "On Fri, Apr 12, 2024 at 09:36:36AM +0900, Michael Paquier wrote:\n> log_backtrace speaks a bit more to me as a name for this stuff because\n> it logs a backtrace. Now, there is consistency on HEAD as well\n> because these GUCs are all prefixed with \"backtrace_\". Would\n> something like a backtrace_mode where we have an enum rather than a\n> boolean be better? One thing would be to redesign the existing GUC as\n> having two values on HEAD as of:\n> - \"none\", to log nothing.\n> - \"internal\", to log backtraces for internal errors.\n> \n> Then this could be extended with more modes, to discuss in future\n> releases as new features.\n\nAs this is an open item, let's move on here.\n\nAttached is a proposal of patch for this open item, switching\nbacktrace_on_internal_error to backtrace_mode with two values:\n- \"none\", to log no backtraces.\n- \"internal\", to log backtraces for internal errors.\n\nThe rest of the proposals had better happen as a v18 discussion, where\nextending this GUC is a benefit.\n--\nMichael",
"msg_date": "Thu, 18 Apr 2024 16:02:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On 18.04.24 09:02, Michael Paquier wrote:\n> On Fri, Apr 12, 2024 at 09:36:36AM +0900, Michael Paquier wrote:\n>> log_backtrace speaks a bit more to me as a name for this stuff because\n>> it logs a backtrace. Now, there is consistency on HEAD as well\n>> because these GUCs are all prefixed with \"backtrace_\". Would\n>> something like a backtrace_mode where we have an enum rather than a\n>> boolean be better? One thing would be to redesign the existing GUC as\n>> having two values on HEAD as of:\n>> - \"none\", to log nothing.\n>> - \"internal\", to log backtraces for internal errors.\n>>\n>> Then this could be extended with more modes, to discuss in future\n>> releases as new features.\n> \n> As this is an open item, let's move on here.\n> \n> Attached is a proposal of patch for this open item, switching\n> backtrace_on_internal_error to backtrace_mode with two values:\n> - \"none\", to log no backtraces.\n> - \"internal\", to log backtraces for internal errors.\n> \n> The rest of the proposals had better happen as a v18 discussion, where\n> extending this GUC is a benefit.\n\nWhy exactly is this an open item? Is there anything wrong with the \nexisting feature?\n\n\n\n",
"msg_date": "Thu, 18 Apr 2024 10:50:05 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Thu, 18 Apr 2024 at 10:50, Peter Eisentraut <[email protected]> wrote:\n> Why exactly is this an open item? Is there anything wrong with the\n> existing feature?\n\nThe name of the GUC backtrace_on_internal_error is so specific that\nit's impossible to extend our backtrace behaviour in future releases\nwithout adding yet another backtrace GUC. You started the discussion\non renaming it upthread:\n\nOn Fri, 8 Mar 2024 at 15:51, Peter Eisentraut <[email protected]> wrote:\n> What is the relationship of these changes with the recently added\n> backtrace_on_internal_error? We had similar discussions there, I feel\n> like we are doing similar things here but slightly differently. Like,\n> shouldn't backtrace_functions_min_level also affect\n> backtrace_on_internal_error? Don't you really just want\n> backtrace_on_any_error? You are sneaking that in through the backdoor\n> via backtrace_functions. Can we somehow combine all these use cases\n> more elegantly? backtrace_on_error = {all|internal|none}?\n\n\n",
"msg_date": "Thu, 18 Apr 2024 11:54:23 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Thu, 18 Apr 2024 at 09:02, Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Apr 12, 2024 at 09:36:36AM +0900, Michael Paquier wrote:\n> > log_backtrace speaks a bit more to me as a name for this stuff because\n> > it logs a backtrace. Now, there is consistency on HEAD as well\n> > because these GUCs are all prefixed with \"backtrace_\". Would\n> > something like a backtrace_mode where we have an enum rather than a\n> > boolean be better?\n\nI guess it depends what we want consistency with. If we want naming\nconsistency with all other LOGGING_WHAT GUCs or if we want naming\nconsistency with the current backtrace_functions GUC. I personally\nlike log_backtrace slightly better, but I don't have a super strong\nopinion on this either. Another option could be plain \"backtrace\".\n\n> > One thing would be to redesign the existing GUC as\n> > having two values on HEAD as of:\n> > - \"none\", to log nothing.\n> > - \"internal\", to log backtraces for internal errors.\n\nIf we go for backtrace_mode or backtrace, then I think I'd prefer\n\"disabled\"/\"off\" and \"internal_error\" for these values.\n\n\n> The rest of the proposals had better happen as a v18 discussion, where\n> extending this GUC is a benefit.\n\nagreed\n\n\n",
"msg_date": "Thu, 18 Apr 2024 12:21:56 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 12:21:56PM +0200, Jelte Fennema-Nio wrote:\n> On Thu, 18 Apr 2024 at 09:02, Michael Paquier <[email protected]> wrote:\n>> On Fri, Apr 12, 2024 at 09:36:36AM +0900, Michael Paquier wrote:\n>> log_backtrace speaks a bit more to me as a name for this stuff because\n>> it logs a backtrace. Now, there is consistency on HEAD as well\n>> because these GUCs are all prefixed with \"backtrace_\". Would\n>> something like a backtrace_mode where we have an enum rather than a\n>> boolean be better?\n> \n> I guess it depends what we want consistency with. If we want naming\n> consistency with all other LOGGING_WHAT GUCs or if we want naming\n> consistency with the current backtrace_functions GUC. I personally\n> like log_backtrace slightly better, but I don't have a super strong\n> opinion on this either. Another option could be plain \"backtrace\".\n\n\"backtrace\" is too generic IMO. I'd append a \"mode\" as an effect of\nbacktrace_functions, which is also a developer option, and has been\naround for a couple of releases now.\n\n>> One thing would be to redesign the existing GUC as\n>> having two values on HEAD as of:\n>> - \"none\", to log nothing.\n>> - \"internal\", to log backtraces for internal errors.\n>\n> If we go for backtrace_mode or backtrace, then I think I'd prefer\n> \"disabled\"/\"off\" and \"internal_error\" for these values.\n\n\"internal_error\" as a value sounds fine to me, that speaks more than\njust \"internal\". \"off\" rather that \"none\" or \"disabled\", less so,\nbecause it requires more enum entries to map with the boolean values\nthat could be expected from it. \"disabled\" would be mostly a first in\nthe GUCs, icu_validation_level being the first one using it, so I'd\nrather choose \"none\" over \"disabled\". No strong preference on this\none, TBH, but as we're bike-shedding that..\n--\nMichael",
"msg_date": "Fri, 19 Apr 2024 08:28:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On 18.04.24 11:54, Jelte Fennema-Nio wrote:\n> On Thu, 18 Apr 2024 at 10:50, Peter Eisentraut<[email protected]> wrote:\n>> Why exactly is this an open item? Is there anything wrong with the\n>> existing feature?\n> The name of the GUC backtrace_on_internal_error is so specific that\n> it's impossible to extend our backtrace behaviour in future releases\n> without adding yet another backtrace GUC. You started the discussion\n> on renaming it upthread:\n\nThis presupposes that there is consensus about how the future \nfunctionality should look like. This topic has gone through half a \ndozen designs over a few months, and I think it would be premature to \nrandomly freeze that discussion now and backport that design.\n\nIf a better, more comprehensive design arises for PG18, I think it would \nbe pretty easy to either remove backtrace_on_internal_error or just \ninternally remap it.\n\n\n\n",
"msg_date": "Fri, 19 Apr 2024 10:58:25 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, 19 Apr 2024 at 10:58, Peter Eisentraut <[email protected]> wrote:\n> This presupposes that there is consensus about how the future\n> functionality should look like. This topic has gone through half a\n> dozen designs over a few months, and I think it would be premature to\n> randomly freeze that discussion now and backport that design.\n\nWhile there maybe isn't consensus on what a new design exactly looks\nlike, I do feel like there's consensus on this thread that the\nbacktrace_on_internal_error GUC is almost certainly not the design\nthat we want. I guess a more conservative approach to this problem\nwould be to revert the backtrace_on_internal_error commit and agree on\na better design for PG18. But I didn't think that would be necessary\nif we could agree on the name for a more flexibly named GUC, which\nseemed quite possible to me (after a little bit of bikeshedding).\n\n> If a better, more comprehensive design arises for PG18, I think it would\n> be pretty easy to either remove backtrace_on_internal_error or just\n> internally remap it.\n\nI think releasing a GUC (even if it's just meant for developers) in\nPG17 and then deprecating it for a newer version in PG18 wouldn't be a\ngreat look. And even if that's not a huge problem, it still seems\nbetter not to have the problem at all. Renaming the GUC now seems to\nhave only upsides to me: worst case the new design turns out not to be\nwhat we want either, and we have to deprecate the GUC. But in the best\ncase we don't need to deprecate anything.\n\n\n",
"msg_date": "Fri, 19 Apr 2024 13:30:48 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 3:02 AM Michael Paquier <[email protected]> wrote:\n> As this is an open item, let's move on here.\n>\n> Attached is a proposal of patch for this open item, switching\n> backtrace_on_internal_error to backtrace_mode with two values:\n> - \"none\", to log no backtraces.\n> - \"internal\", to log backtraces for internal errors.\n>\n> The rest of the proposals had better happen as a v18 discussion, where\n> extending this GUC is a benefit.\n\n-1. That's just weird. There's no reason to replace a Boolean with a\nnon-Boolean without adding a third value. Either we decide this\nconcern is important enough to justify a post-feature-freeze design\nchange, and add the third value now, or we leave it alone and revisit\nit in a future release. I'm probably OK with either one, but being\nhalfway in between has no appeal for me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Apr 2024 12:18:33 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, Apr 19, 2024 at 7:31 AM Jelte Fennema-Nio <[email protected]> wrote:\n> While there maybe isn't consensus on what a new design exactly looks\n> like, I do feel like there's consensus on this thread that the\n> backtrace_on_internal_error GUC is almost certainly not the design\n> that we want. I guess a more conservative approach to this problem\n> would be to revert the backtrace_on_internal_error commit and agree on\n> a better design for PG18. But I didn't think that would be necessary\n> if we could agree on the name for a more flexibly named GUC, which\n> seemed quite possible to me (after a little bit of bikeshedding).\n\nI think Peter is correct that this presupposes we more or less agree\non the final destination. For example, I think that log_backtrace =\nerror | internal | all is a bit odd; why not backtrace_errcodes =\n<comma-separated list of error codes>? I've written a logging hook for\nEDB that can filter out error messages by error code, so I don't think\nit's at all a stretch to think that someone might want to do something\nsimilar here. I agree that it's somewhat likely that the name we want\ngoing forward isn't backtrace_on_internal_error, but I don't think we\nknow that completely for certain, or what new name we necessarily\nwant.\n\n> I think releasing a GUC (even if it's just meant for developers) in\n> PG17 and then deprecating it for a newer version in PG18 wouldn't be a\n> great look. And even if that's not a huge problem, it still seems\n> better not to have the problem at all. Renaming the GUC now seems to\n> have only upsides to me: worst case the new design turns out not to be\n> what we want either, and we have to deprecate the GUC. But in the best\n> case we don't need to deprecate anything.\n\nThere are some things that are pretty hard to change once we've\nreleased them. For example, if we had a function or operator and\nsomebody embeds it in a view definition, removing or renaming it\nprevents people from upgrading. But GUCs are not as bad. If you don't\nport your postgresql.conf forward to the new version, or if you\nhaven't uncommented this particular setting, then there's no issue at\nall, and when there is a problem, removing a GUC setting from\npostgresql.conf is pretty straightforward compared to getting some\nconstruct out of your application code. I agree it's not amazing if we\nend up changing this exactly one release after it was introduced, but\nwe don't really know that we're going to change it next release, or at\nall, and even if we do, I still don't think that's a catastrophe.\n\nI'm not completely certain that \"let's just leave this alone for right\nnow\" is the correct conclusion, but the fact that we might need to\nrename a GUC down the road is not, by itself, a critical flaw in the\nstatus quo.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Apr 2024 13:02:16 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
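As background for the "filter by error code" idea mentioned above: the server already exposes emit_log_hook, which sees every ErrorData before it is written to the log, so an extension can make per-SQLSTATE decisions. The sketch below is a generic illustration of that mechanism; it is not the EDB hook Robert refers to, and the suppressed error code is just an example.

#include "postgres.h"
#include "fmgr.h"
#include "utils/elog.h"

PG_MODULE_MAGIC;

/* Generic illustration of per-errcode filtering via emit_log_hook. */
static emit_log_hook_type prev_emit_log_hook = NULL;

static void
errcode_filter_hook(ErrorData *edata)
{
	/* Keep this particular SQLSTATE out of the server log (example only). */
	if (edata->sqlerrcode == ERRCODE_QUERY_CANCELED)
		edata->output_to_server = false;

	/* Chain to any previously installed hook. */
	if (prev_emit_log_hook)
		prev_emit_log_hook(edata);
}

void
_PG_init(void)
{
	prev_emit_log_hook = emit_log_hook;
	emit_log_hook = errcode_filter_hook;
}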
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> There are some things that are pretty hard to change once we've\n> released them. For example, if we had a function or operator and\n> somebody embeds it in a view definition, removing or renaming it\n> prevents people from upgrading. But GUCs are not as bad.\n\nReally? Are we certain that nobody will put SETs of this GUC\ninto their applications, or even just activate it via ALTER DATABASE\nSET? If they've done that, then a GUC change means dump/reload/upgrade\nfailure.\n\nI've not been following this thread, so I don't have an opinion\nabout what the design ought to be. But if we still aren't settled\non it by now, I think the prudent thing is to revert the feature\nout of v17 altogether, and try again in v18. When we're still\ndesigning something after feature freeze, that is a good indication\nthat we are trying to ship something that's not ready for prime time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Apr 2024 14:08:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, Apr 19, 2024 at 2:08 PM Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n> > There are some things that are pretty hard to change once we've\n> > released them. For example, if we had a function or operator and\n> > somebody embeds it in a view definition, removing or renaming it\n> > prevents people from upgrading. But GUCs are not as bad.\n>\n> Really? Are we certain that nobody will put SETs of this GUC\n> into their applications, or even just activate it via ALTER DATABASE\n> SET? If they've done that, then a GUC change means dump/reload/upgrade\n> failure.\n\nThat's fair. That feature isn't super-widely used, but it wouldn't be\ncrazy for someone to use it with this feature, either.\n\n> I've not been following this thread, so I don't have an opinion\n> about what the design ought to be. But if we still aren't settled\n> on it by now, I think the prudent thing is to revert the feature\n> out of v17 altogether, and try again in v18. When we're still\n> designing something after feature freeze, that is a good indication\n> that we are trying to ship something that's not ready for prime time.\n\nThere are two features at issue here. One is\nbacktrace_on_internal_error, committed as\na740b213d4b4d3360ad0cac696e47e5ec0eb8864. The other is a feature to\nproduce backtraces for all errors, which was originally proposed as\nbacktrace_functions='*', backtrace_functions_level=ERROR but which has\nsubsequently been proposed with a few other spellings that involve\nmerging that functionality into backtrace_on_internal_error. To the\nextent that there's an open question here for PG17, it's not about\nreverting this patch (which AIUI has never been committed) but about\nreverting the earlier patch for backtrace_on_internal_error. So is\nthat the right thing to do?\n\nWell, on the one hand, I confess to having had a passing thought that\nbacktrace_on_internal_error is awfully specific. Surely, user A's\ncriterion for which messages should have backtraces might be anything,\nand we cannot reasonably add backtrace_on_$WHATEVER for all $WHATEVER\nin some large set. And the debate here suggests that I wasn't the only\none to have that concern. So that argues for a revert.\n\nBut on the other hand, in my personal experience,\nbacktrace_on_internal_error would be the right thing in a really large\nnumber of cases. I was disappointed to see it added as a developer\noption with GUC_NOT_IN_SAMPLE. My vote would have been to put it in\npostgresql.conf and enable it by default. We have a somewhat debatable\nhabit of using the exact same message in many places with similar\nkinds of problems, and when a production system manages to hit one of\nthose errors, figuring out what actually went wrong is hard. In fact,\nit's often hard even when the error text only occurs in one or two\nplaces, because it's often some very low-level part of the code where\nyou can't get enough context to understand the problem without knowing\nwhere that code got called from. So I sort of hate to see one of the\nmost useful extensions of backtrace functionality that I can\npersonally imagine get pulled back out of the tree because it turns\nout that someone else has something else that they want.\n\nI wonder whether a practical solution here might be to replace\nbacktrace_on_internal_error=true|false with\nbacktrace_on_error=true|internal|false. 
(Yes, I know that more\nproposed resolutions is not necessarily what we need right now, but I\ncan't resist floating the idea.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Apr 2024 15:04:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Apr 19, 2024 at 2:08 PM Tom Lane <[email protected]> wrote:\n>> I've not been following this thread, so I don't have an opinion\n>> about what the design ought to be. But if we still aren't settled\n>> on it by now, I think the prudent thing is to revert the feature\n>> out of v17 altogether, and try again in v18. When we're still\n>> designing something after feature freeze, that is a good indication\n>> that we are trying to ship something that's not ready for prime time.\n\n> There are two features at issue here. One is\n> backtrace_on_internal_error, committed as\n> a740b213d4b4d3360ad0cac696e47e5ec0eb8864. The other is a feature to\n> produce backtraces for all errors, which was originally proposed as\n> backtrace_functions='*', backtrace_functions_level=ERROR but which has\n> subsequently been proposed with a few other spellings that involve\n> merging that functionality into backtrace_on_internal_error. To the\n> extent that there's an open question here for PG17, it's not about\n> reverting this patch (which AIUI has never been committed) but about\n> reverting the earlier patch for backtrace_on_internal_error. So is\n> that the right thing to do?\n\nI can't say that I care for \"backtrace_on_internal_error\".\nRe-reading that thread, I see I argued for having *no* GUC and\njust enabling that behavior all the time. I lost that fight,\nbut it should have been clear that a GUC of this exact shape\nis a design dead end --- and that's what we're seeing now.\n\n> Well, on the one hand, I confess to having had a passing thought that\n> backtrace_on_internal_error is awfully specific. Surely, user A's\n> criterion for which messages should have backtraces might be anything,\n> and we cannot reasonably add backtrace_on_$WHATEVER for all $WHATEVER\n> in some large set. And the debate here suggests that I wasn't the only\n> one to have that concern. So that argues for a revert.\n\nExactly.\n\n> But on the other hand, in my personal experience,\n> backtrace_on_internal_error would be the right thing in a really large\n> number of cases.\n\nThat's why I thought we could get away with doing it automatically.\nSure, more control would be better. But if we just hard-wire it for\nthe moment then we aren't locking in what the eventual design for\nthat control will be.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Apr 2024 15:24:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, Apr 19, 2024 at 3:24 PM Tom Lane <[email protected]> wrote:\n> I can't say that I care for \"backtrace_on_internal_error\".\n> Re-reading that thread, I see I argued for having *no* GUC and\n> just enabling that behavior all the time. I lost that fight,\n> but it should have been clear that a GUC of this exact shape\n> is a design dead end --- and that's what we're seeing now.\n\nYeah, I guess I have to agree with that.\n\n> > But on the other hand, in my personal experience,\n> > backtrace_on_internal_error would be the right thing in a really large\n> > number of cases.\n>\n> That's why I thought we could get away with doing it automatically.\n> Sure, more control would be better. But if we just hard-wire it for\n> the moment then we aren't locking in what the eventual design for\n> that control will be.\n\nSo the question before us right now is whether there's a palatable\nalternative to completely ripping out a feature that both you and I\nseem to agree does something useful. I don't think we necessarily need\nto leap to the conclusion that a revert is radically less risky than\nsome other alternative. Now, if there's not some obvious alternative\nupon which we can (mostly) all agree, then maybe that's where we end\nup. But I would like us to be looking to save the features we can.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Apr 2024 16:17:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, Apr 19, 2024 at 04:17:18PM -0400, Robert Haas wrote:\n> On Fri, Apr 19, 2024 at 3:24 PM Tom Lane <[email protected]> wrote:\n>> I can't say that I care for \"backtrace_on_internal_error\".\n>> Re-reading that thread, I see I argued for having *no* GUC and\n>> just enabling that behavior all the time. I lost that fight,\n>> but it should have been clear that a GUC of this exact shape\n>> is a design dead end --- and that's what we're seeing now.\n> \n> Yeah, I guess I have to agree with that.\n\nAh, I have missed this argument.\n\n> So the question before us right now is whether there's a palatable\n> alternative to completely ripping out a feature that both you and I\n> seem to agree does something useful. I don't think we necessarily need\n> to leap to the conclusion that a revert is radically less risky than\n> some other alternative. Now, if there's not some obvious alternative\n> upon which we can (mostly) all agree, then maybe that's where we end\n> up. But I would like us to be looking to save the features we can.\n\nRemoving this GUC and making the backend react by default the same way\nas when this GUC was enabled sounds like a sensible route here. This\nstuff is useful.\n--\nMichael",
"msg_date": "Sat, 20 Apr 2024 08:19:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On 19.04.24 21:24, Tom Lane wrote:\n>> But on the other hand, in my personal experience,\n>> backtrace_on_internal_error would be the right thing in a really large\n>> number of cases.\n> That's why I thought we could get away with doing it automatically.\n> Sure, more control would be better. But if we just hard-wire it for\n> the moment then we aren't locking in what the eventual design for\n> that control will be.\n\nNote that a standard test run produces a number of internal errors. I \nhaven't checked how likely these are in production, but one might want \nto consider that before starting to dump backtraces in routine situations.\n\nFor example,\n\n$ PG_TEST_INITDB_EXTRA_OPTS='-c backtrace_on_internal_error=on' meson \ntest -C build\n$ grep -r 'BACKTRACE:' build/testrun | wc -l\n85\n\n\n\n",
"msg_date": "Sun, 21 Apr 2024 09:26:38 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Sat, 20 Apr 2024 at 01:19, Michael Paquier <[email protected]> wrote:\n> Removing this GUC and making the backend react by default the same way\n> as when this GUC was enabled sounds like a sensible route here. This\n> stuff is useful.\n\nI definitely agree it's useful. But I feel like changing the default\nof the GUC and removing the ability to disable it at the same time are\npretty radical changes that we should not be doing after a feature\nfreeze. I think we should at least have a way to turn this feature off\nto be able to avoid log spam. Especially given the fact that\nextensions use elog much more freely than core. Which afaict from\nother threads[1] Tom also thinks we should normally be careful about.\n\nOf the options to resolve the open item so far, I think there are only\nthree somewhat reasonable to do after the feature freeze:\n1. Rename the GUC to something else (but keep behaviour the same)\n2. Decide to keep the GUC as is\n3. Revert a740b213d4\n\nI hoped 1 was possible, but based on the discussion so far it doesn't\nseem like we'll be able to get a quick agreement on a name. IMHO 2 is\njust a version of 1, but with a GUC name that no-one thinks is a good\none. So I think we are left with option 3.\n\n[1]: https://www.postgresql.org/message-id/flat/524751.1707240550%40sss.pgh.pa.us#59710fd4f3f186e642b8e6b886b2fdff\n\n\n",
"msg_date": "Sun, 21 Apr 2024 13:52:36 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Sun, Apr 21, 2024 at 09:26:38AM +0200, Peter Eisentraut wrote:\n> Note that a standard test run produces a number of internal errors. I\n> haven't checked how likely these are in production, but one might want to\n> consider that before starting to dump backtraces in routine situations.\n> \n> For example,\n> \n> $ PG_TEST_INITDB_EXTRA_OPTS='-c backtrace_on_internal_error=on' meson test\n> -C build\n> $ grep -r 'BACKTRACE:' build/testrun | wc -l\n> 85\n\nUgh, I would not have expected that much. Isn't the problem you are\nreporting here entirely unrelated, though? It seems to me that these\nerror paths should be using a proper errcode rather than the internal\nerrcode, as these refer to states that can be reached by the user with\na combination of queries and/or cancellations.\n\nFor example, take this one for the REFRESH matview path which is a\nvalid error, still using an elog():\n if (!foundUniqueIndex)\n elog(ERROR, \"could not find suitable unique index on materialized view\");\n\nI'd like to think about this stuff in a different way: this is useful\nif enabled by default because it can also help in finding out error\npaths that should not use the internal errcode. Normally, there\nshould be zero backtraces produced, except in unexpected\nnever-to-be-reached cases.\n--\nMichael",
"msg_date": "Mon, 22 Apr 2024 15:36:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
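To make the first category above concrete: turning such an elog() into an ereport() with a real SQLSTATE is a small change, and it is what keeps a user-reachable condition from being counted as an internal error (and therefore from triggering internal-error backtraces). The snippet below is only one plausible shape for the materialized-view example quoted above; the choice of errcode is illustrative, not the committed fix.

/* One plausible conversion of the quoted elog(); errcode choice is
 * illustrative only. */
if (!foundUniqueIndex)
	ereport(ERROR,
			(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
			 errmsg("could not find suitable unique index on materialized view")));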
{
"msg_contents": "On Mon, Apr 22, 2024 at 2:36 AM Michael Paquier <[email protected]> wrote:\n> I'd like to think about this stuff in a different way: this is useful\n> if enabled by default because it can also help in finding out error\n> paths that should not use the internal errcode. Normally, there\n> should be zero backtraces produced, except in unexpected\n> never-to-be-reached cases.\n\nThat's long been my feeling about this. So, if we revert this for now,\nwhat we ought to do is put it back right ASAP after feature freeze and\nthen clean all that up.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Apr 2024 09:25:15 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Mon, Apr 22, 2024 at 09:25:15AM -0400, Robert Haas wrote:\n> That's long been my feeling about this. So, if we revert this for now,\n> what we ought to do is put it back right ASAP after feature freeze and\n> then clean all that up.\n\nIn the 85 backtraces I can find in the tests, we have a mixed bag of:\n- Code paths that use the internal errcode, but should not.\n- Code paths that use the internal errcode, and are OK with that in\nthe scope of the tests.\n- Code paths that use the internal errcode, though the coding\nassumptions behind their use feels fuzzy to me, like amcheck for some\nSQL tests or satisfies_hash_partition() in one SQL test.\n\nAs cleaning up that is a separate topic, I have begun a new thread and\nwith a full analysis of everything I've spotted. See here:\nhttps://www.postgresql.org/message-id/[email protected]\n\nThe first class of issues refers to real problems, and should be\nassigned errcodes. Having a way to switch the backtraces off can have\nsome benefits in the second cases. However, even if we silence them,\nit would also mean to miss backtraces that could be legit. The third\ncases would require a different analysis, behind the designs of the\ncode paths able to trigger the internal codes.\n\nAt this stage, my opinion would tend in favor of a revert of the GUC.\nThe second class of cases is useful to stress many unexpected cases,\nand I don't expect this number to go down over time, but increase with\nmore advanced tests added into core (I/O failures with partial writes\nfor availability, etc).\n--\nMichael",
"msg_date": "Tue, 23 Apr 2024 14:05:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Tue, Apr 23, 2024 at 1:05 AM Michael Paquier <[email protected]> wrote:\n> At this stage, my opinion would tend in favor of a revert of the GUC.\n> The second class of cases is useful to stress many unexpected cases,\n> and I don't expect this number to go down over time, but increase with\n> more advanced tests added into core (I/O failures with partial writes\n> for availability, etc).\n\nAll right. I think there is a consensus in favor of reverting\na740b213d4b4d3360ad0cac696e47e5ec0eb8864. Maybe not an absolutely\niron-clad consensus, but there are a couple of votes explicitly in\nfavor of that course of action and the other votes seem to mostly be\nof the form \"well, reverting is ONE option.\"\n\nPeter?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Apr 2024 11:57:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-19 15:24:17 -0400, Tom Lane wrote:\n> I can't say that I care for \"backtrace_on_internal_error\".\n> Re-reading that thread, I see I argued for having *no* GUC and\n> just enabling that behavior all the time. I lost that fight,\n> but it should have been clear that a GUC of this exact shape\n> is a design dead end --- and that's what we're seeing now.\n\nI don't think enabling backtraces without a way to disable them is a good idea\n- security vulnerablilities in backtrace generation code are far from unheard\nof and can make error handling a lot slower...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 26 Apr 2024 11:30:08 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
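For context on that concern: where backtrace_symbols() is available, the backend's backtrace capture leans on the libc backtrace()/backtrace_symbols() facilities, so producing a backtrace on a hot error path means extra frame walking, symbol resolution, and allocation inside error handling. The standalone sketch below (plain libc, outside PostgreSQL) is shown only to make that cost and attack surface concrete.

#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

/* Standalone illustration of libc backtrace capture, the same facility
 * the backend uses where backtrace_symbols() is available. */
static void
print_current_backtrace(void)
{
	void	   *frames[100];
	int			nframes = backtrace(frames, 100);
	char	  **symbols = backtrace_symbols(frames, nframes);	/* allocates */

	if (symbols == NULL)
		return;
	for (int i = 0; i < nframes; i++)
		fprintf(stderr, "%s\n", symbols[i]);
	free(symbols);
}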
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> I don't think enabling backtraces without a way to disable them is a good idea\n> - security vulnerablilities in backtrace generation code are far from unheard\n> of and can make error handling a lot slower...\n\nWell, in that case we have to have some kind of control GUC, and\nI think the consensus is that the one we have now is under-designed.\nSo I also vote for a full revert and try again in v18.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Apr 2024 14:39:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On 2024-04-26 14:39:16 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I don't think enabling backtraces without a way to disable them is a good idea\n> > - security vulnerablilities in backtrace generation code are far from unheard\n> > of and can make error handling a lot slower...\n>\n> Well, in that case we have to have some kind of control GUC, and\n> I think the consensus is that the one we have now is under-designed.\n> So I also vote for a full revert and try again in v18.\n\nYea, makes sense. I just wanted to point out that some level of control is\nneeded, not say that what we have now is right.\n\n\n",
"msg_date": "Fri, 26 Apr 2024 11:43:05 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Fri, Apr 26, 2024 at 02:39:16PM -0400, Tom Lane wrote:\n> Well, in that case we have to have some kind of control GUC, and\n> I think the consensus is that the one we have now is under-designed.\n> So I also vote for a full revert and try again in v18.\n\nOkay, fine by me to move on with a revert.\n--\nMichael",
"msg_date": "Sat, 27 Apr 2024 07:16:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On 27.04.24 00:16, Michael Paquier wrote:\n> On Fri, Apr 26, 2024 at 02:39:16PM -0400, Tom Lane wrote:\n>> Well, in that case we have to have some kind of control GUC, and\n>> I think the consensus is that the one we have now is under-designed.\n>> So I also vote for a full revert and try again in v18.\n> \n> Okay, fine by me to move on with a revert.\n\nOk, it's reverted.\n\n\n\n",
"msg_date": "Mon, 29 Apr 2024 11:12:01 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Mon, Apr 29, 2024 at 5:12 AM Peter Eisentraut <[email protected]> wrote:\n> On 27.04.24 00:16, Michael Paquier wrote:\n> > On Fri, Apr 26, 2024 at 02:39:16PM -0400, Tom Lane wrote:\n> >> Well, in that case we have to have some kind of control GUC, and\n> >> I think the consensus is that the one we have now is under-designed.\n> >> So I also vote for a full revert and try again in v18.\n> >\n> > Okay, fine by me to move on with a revert.\n>\n> Ok, it's reverted.\n\nThanks, and sorry. :-(\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Apr 2024 08:08:19 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Mon, Apr 29, 2024 at 08:08:19AM -0400, Robert Haas wrote:\n> On Mon, Apr 29, 2024 at 5:12 AM Peter Eisentraut <[email protected]> wrote:\n>> Ok, it's reverted.\n\nThanks for taking care of it.\n\n> Thanks, and sorry. :-(\n\nSorry for the outcome..\n--\nMichael",
"msg_date": "Tue, 30 Apr 2024 07:54:20 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "Hi,\n\nThis patch is currently parked in the July CommitFest:\n\nhttps://commitfest.postgresql.org/48/4735/\n\nThat's fine, except that I think that what the previous discussion\nrevealed is that we don't have consensus on how backtrace behavior\nought to be controlled: backtrace_on_internal_error was one proposal,\nand this was a competing proposal, and neither one of them seemed to\nbe completely satisfactory. If we don't forge a consensus on what a\nhypothetical patch ought to do, any particular actual patch is\nprobably doomed. So I would suggest that the effort ought to be\ndeciding what kind of design would be generally acceptable -- and that\nneed not wait for July, nor should it, if the goal is to get something\ncommitted in July.\n\n...Robert\n\n\n",
"msg_date": "Wed, 15 May 2024 14:31:00 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
},
{
"msg_contents": "On Wed, 15 May 2024 at 20:31, Robert Haas <[email protected]> wrote:\n> That's fine, except that I think that what the previous discussion\n> revealed is that we don't have consensus on how backtrace behavior\n> ought to be controlled: backtrace_on_internal_error was one proposal,\n> and this was a competing proposal, and neither one of them seemed to\n> be completely satisfactory.\n\nAttached is a rebased patchset of my previous proposal, including a\nfew changes that Michael preferred:\n1. Renames log_backtrace to log_backtrace_mode\n2. Rename internal to internal_error\n\nI reread the thread since I previously posted the patch and apart from\nMichaels feedback I don't think there was any more feedback on the\ncurrent proposal.\n\nRethinking about it myself though, I think the main downside of this\nproposal is that if you want the previous behaviour of\nbacktrace_functions (add backtraces to all elog/ereports in the given\nfunctions) you now need to set three GUCs:\nlog_backtrace_mode='all'\nbacktrace_functions='some_func'\nbacktrace_min_level=DEBUG5\n\nThe third one is not needed in the common case where someone only\ncares about errors, but still needing to set log_backtrace_mode='all'\nmight seem a bit annoying. One way around that would be to make\nlog_backtrace_mode and backtrace_functions be additive instead of\nsubtractive.\n\nPersonally I think the proposed subtractive nature would be exactly\nwhat I want for backtraces I'm interested in. Because I would want to\nuse backtrace_functions in this way:\n\n1. I see an error I want a backtrace of: et log_backtrace_mode='all'\nand try to trigger again.\n2. Oops, there's many backtraces now let's filter by function: set\nbacktrace_functions=some_func\n\nSo if it's additive, I'd have to also undo log_backtrace_mode='all'\nagain at step 2. So changing two GUCs instead of one to do what I\nwant.",
"msg_date": "Thu, 27 Jun 2024 12:43:20 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support a wildcard in backtrace_functions"
}
] |
[
{
"msg_contents": "Hi,\n\nWe fairly regularly have commits breaking the generation of INSTALL. IIRC we\nrecently discussed building it locally unconditionally, but I couldn't\nimmediately find that discussion. Until then, I think we should at least\nbuild it in CI so that cfbot can warn.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 20 Dec 2023 03:49:27 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "ci: Build standalone INSTALL file"
},
{
"msg_contents": "Hi,\n\nThe attached patch had a slight bug. Also turned out that the CI environment\ndidn't have pandoc installed. Fixed that.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 20 Dec 2023 06:15:10 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "> On 20 Dec 2023, at 15:15, Andres Freund <[email protected]> wrote:\n\n> The attached patch had a slight bug. Also turned out that the CI environment\n> didn't have pandoc installed. Fixed that.\n\nLGTM.\n\n+ time make -s -j${BUILD_JOBS} -C doc/src/sgml all INSTALL\nunrelated pet peeve: \"make -C doc/src/sgml all\" doesn't build all docs targets..\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 20 Dec 2023 15:28:56 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> We fairly regularly have commits breaking the generation of INSTALL. IIRC we\n> recently discussed building it locally unconditionally, but I couldn't\n> immediately find that discussion. Until then, I think we should at least\n> build it in CI so that cfbot can warn.\n\nI thought the plan was to get rid of that file, in pursuit of making\nour distribution tarballs be more or less pure git pulls. Instead of\nexpending more effort on it, why not just push that project forward?\n(IIRC, what we intended to do instead was to modify the top-level\nREADME to point at the HTML install directions on the web.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Dec 2023 11:36:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 11:36:28AM -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n>> We fairly regularly have commits breaking the generation of INSTALL. IIRC we\n>> recently discussed building it locally unconditionally, but I couldn't\n>> immediately find that discussion. Until then, I think we should at least\n>> build it in CI so that cfbot can warn.\n> \n> I thought the plan was to get rid of that file, in pursuit of making\n> our distribution tarballs be more or less pure git pulls. Instead of\n> expending more effort on it, why not just push that project forward?\n> (IIRC, what we intended to do instead was to modify the top-level\n> README to point at the HTML install directions on the web.)\n\nHmm. It depends on if the next release should include it or not, but\nlet me add my +1 for replacing it with a simple redirect.\n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 08:39:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 03:28:56PM +0100, Daniel Gustafsson wrote:\n> + time make -s -j${BUILD_JOBS} -C doc/src/sgml all INSTALL\n> unrelated pet peeve: \"make -C doc/src/sgml all\" doesn't build all docs targets..\n\nThat seems relevant in terms of coverage. Why not just moving the\nINSTALL bit to a different line?\n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 08:44:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "On 2023-12-21 08:44:33 +0900, Michael Paquier wrote:\n> On Wed, Dec 20, 2023 at 03:28:56PM +0100, Daniel Gustafsson wrote:\n> > + time make -s -j${BUILD_JOBS} -C doc/src/sgml all INSTALL\n> > unrelated pet peeve: \"make -C doc/src/sgml all\" doesn't build all docs targets..\n> \n> That seems relevant in terms of coverage. Why not just moving the\n> INSTALL bit to a different line?\n\nI am confused - which coverage could we be loosing here?\n\n\n",
"msg_date": "Thu, 21 Dec 2023 01:14:35 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-20 15:28:56 +0100, Daniel Gustafsson wrote:\n> + time make -s -j${BUILD_JOBS} -C doc/src/sgml all INSTALL\n> unrelated pet peeve: \"make -C doc/src/sgml all\" doesn't build all docs targets..\n\nWell, building the PDF takes a *long* time and is rarely required. I think\nthere's an argument for adding INSTALL to all - however, there's a reason not\nto as well: It has pandoc as an additional dependency, which isn't small...\n\nAndres\n\n\n",
"msg_date": "Thu, 21 Dec 2023 01:16:23 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-21 08:39:26 +0900, Michael Paquier wrote:\n> On Wed, Dec 20, 2023 at 11:36:28AM -0500, Tom Lane wrote:\n> > Andres Freund <[email protected]> writes:\n> >> We fairly regularly have commits breaking the generation of INSTALL. IIRC we\n> >> recently discussed building it locally unconditionally, but I couldn't\n> >> immediately find that discussion. Until then, I think we should at least\n> >> build it in CI so that cfbot can warn.\n> > \n> > I thought the plan was to get rid of that file, in pursuit of making\n> > our distribution tarballs be more or less pure git pulls. Instead of\n> > expending more effort on it, why not just push that project forward?\n> > (IIRC, what we intended to do instead was to modify the top-level\n> > README to point at the HTML install directions on the web.)\n\nAh, right. I don't really care what solution we go for, just that as long as\nwe have INSTALL, we should make sure we don't regularly break it... Both\nMichael and I have in the last couple weeks.\n\n\n> Hmm. It depends on if the next release should include it or not, but\n> let me add my +1 for replacing it with a simple redirect.\n\nAre you going to submit a patch for that bit?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 21 Dec 2023 01:17:57 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "> On 21 Dec 2023, at 10:16, Andres Freund <[email protected]> wrote:\n> \n> Hi,\n> \n> On 2023-12-20 15:28:56 +0100, Daniel Gustafsson wrote:\n>> + time make -s -j${BUILD_JOBS} -C doc/src/sgml all INSTALL\n>> unrelated pet peeve: \"make -C doc/src/sgml all\" doesn't build all docs targets..\n> \n> Well, building the PDF takes a *long* time and is rarely required. I think\n> there's an argument for adding INSTALL to all - however, there's a reason not\n> to as well: It has pandoc as an additional dependency, which isn't small...\n\nYeah, I'm not advocating changing anything.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 21 Dec 2023 11:53:27 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-12-21 08:39:26 +0900, Michael Paquier wrote:\n>> On Wed, Dec 20, 2023 at 11:36:28AM -0500, Tom Lane wrote:\n>>> I thought the plan was to get rid of that file, in pursuit of making\n>>> our distribution tarballs be more or less pure git pulls. Instead of\n>>> expending more effort on it, why not just push that project forward?\n\n> Ah, right. I don't really care what solution we go for, just that as long as\n> we have INSTALL, we should make sure we don't regularly break it... Both\n> Michael and I have in the last couple weeks.\n\nSo let's just do it. I think the only real question is what URL\nto point at exactly. We can't simply say\n\nhttps://www.postgresql.org/docs/current/installation.html\n\nbecause that will be wrong for any version more than one major\nrelease back. We could make it version-specific,\n\nhttps://www.postgresql.org/docs/17/installation.html\n\nand task src/tools/version_stamp.pl with updating it. But that's\nproblematic for not-yet-released branches (there's no 17 today\nfor example). Perhaps we can use /devel/ in the master branch\nand try to remember to replace that with a version number as soon\nas a release branch is forked off --- but does the docs website\nget populated as soon as the branch is made?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Dec 2023 10:22:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-21 10:22:49 -0500, Tom Lane wrote:\n> I think the only real question is what URL to point at exactly. We can't\n> simply say\n> \n> https://www.postgresql.org/docs/current/installation.html\n> \n> because that will be wrong for any version more than one major\n> release back.\n\nRight.\n\n\n> We could make it version-specific,\n> \n> https://www.postgresql.org/docs/17/installation.html\n> \n> and task src/tools/version_stamp.pl with updating it. But that's\n> problematic for not-yet-released branches (there's no 17 today\n> for example).\n\nPerhaps we could make the website redirect 17 to /devel/ until 17 is branched\noff?\n\n\n> Perhaps we can use /devel/ in the master branch\n> and try to remember to replace that with a version number as soon\n> as a release branch is forked off --- but does the docs website\n> get populated as soon as the branch is made?\n\nI think it runs a few times a day - breaking the link for a few hours wouldn't\nbe optimal, but also not the end of the world. But redirecting $vnext -> to\ndevel would probably be a more reliable approach.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 21 Dec 2023 07:30:05 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-12-21 10:22:49 -0500, Tom Lane wrote:\n>> We could make it version-specific,\n>> https://www.postgresql.org/docs/17/installation.html\n>> and task src/tools/version_stamp.pl with updating it. But that's\n>> problematic for not-yet-released branches (there's no 17 today\n>> for example).\n\n> Perhaps we could make the website redirect 17 to /devel/ until 17 is branched\n> off?\n\nHmm, maybe, but then there's a moving part in version-stamping that's not\naccessible to the average committer. On the other hand, it wouldn't\nbe too awful if that redirect didn't get updated instantly after a\nbranch. This is probably a point where we need advice from the web\nteam about how they manage documentation branches on the site.\n\n>> Perhaps we can use /devel/ in the master branch\n>> and try to remember to replace that with a version number as soon\n>> as a release branch is forked off --- but does the docs website\n>> get populated as soon as the branch is made?\n\n> I think it runs a few times a day - breaking the link for a few hours wouldn't\n> be optimal, but also not the end of the world. But redirecting $vnext -> to\n> devel would probably be a more reliable approach.\n\nLet's go with \"/devel/ in master and a number in release branches\"\nfor now, and tweak that if the web team wants to take on maintaining\na redirect. I'll put together a concrete patch proposal in a little\nbit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Dec 2023 10:46:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-21 10:46:02 -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-12-21 10:22:49 -0500, Tom Lane wrote:\n> >> We could make it version-specific,\n> >> https://www.postgresql.org/docs/17/installation.html\n> >> and task src/tools/version_stamp.pl with updating it. But that's\n> >> problematic for not-yet-released branches (there's no 17 today\n> >> for example).\n> \n> > Perhaps we could make the website redirect 17 to /devel/ until 17 is branched\n> > off?\n> \n> Hmm, maybe, but then there's a moving part in version-stamping that's not\n> accessible to the average committer. On the other hand, it wouldn't\n> be too awful if that redirect didn't get updated instantly after a\n> branch. This is probably a point where we need advice from the web\n> team about how they manage documentation branches on the site.\n\nIIRC the relevant part of the website code has access to a table of\ndocumentation versions, so the redirect could be implement based on not\nknowing the version.\n\n\n> >> Perhaps we can use /devel/ in the master branch\n> >> and try to remember to replace that with a version number as soon\n> >> as a release branch is forked off --- but does the docs website\n> >> get populated as soon as the branch is made?\n> \n> > I think it runs a few times a day - breaking the link for a few hours wouldn't\n> > be optimal, but also not the end of the world. But redirecting $vnext -> to\n> > devel would probably be a more reliable approach.\n> \n> Let's go with \"/devel/ in master and a number in release branches\"\n> for now, and tweak that if the web team wants to take on maintaining\n> a redirect. I'll put together a concrete patch proposal in a little\n> bit.\n\nCool.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 21 Dec 2023 07:57:56 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-12-21 10:46:02 -0500, Tom Lane wrote:\n>> Let's go with \"/devel/ in master and a number in release branches\"\n>> for now, and tweak that if the web team wants to take on maintaining\n>> a redirect. I'll put together a concrete patch proposal in a little\n>> bit.\n\n> Cool.\n\nHere's a draft patch for this. Most of it is mechanical removal of\ninfrastructure for building the INSTALL file. If anyone wants to\nbikeshed on the new wording of README, feel free.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 21 Dec 2023 14:22:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 02:22:02PM -0500, Tom Lane wrote:\n> Here's a draft patch for this. Most of it is mechanical removal of\n> infrastructure for building the INSTALL file. If anyone wants to\n> bikeshed on the new wording of README, feel free.\n\nThanks for putting this together. That looks reasonable.\n\n> diff --git a/README b/README\n> index 56d0c951a9..e40e610ccb 100644\n> --- a/README\n> +++ b/README\n> @@ -9,14 +9,13 @@ that supports an extended subset of the SQL standard, including\n> -See the file INSTALL for instructions on how to build and install\n> -PostgreSQL. That file also lists supported operating systems and\n> -hardware platforms and contains information regarding any other\n> -software packages that are required to build or run the PostgreSQL\n> -system. Copyright and license information can be found in the\n> -file COPYRIGHT. A comprehensive documentation set is included in this\n> -distribution; it can be read as described in the installation\n> -instructions.\n> +Copyright and license information can be found in the file COPYRIGHT.\n> +\n> +General documentation about this version of PostgreSQL can be found at:\n> +https://www.postgresql.org/docs/devel/\n> +In particular, information about building PostgreSQL from the source\n> +code can be found at:\n> +https://www.postgresql.org/docs/devel/installation.html\n\nSounds fine by me, including the extra step documented in\nRELEASE_CHANGES. No information is lost.\n--\nMichael",
"msg_date": "Fri, 22 Dec 2023 11:41:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ci: Build standalone INSTALL file"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Dec 21, 2023 at 02:22:02PM -0500, Tom Lane wrote:\n>> Here's a draft patch for this. Most of it is mechanical removal of\n>> infrastructure for building the INSTALL file. If anyone wants to\n>> bikeshed on the new wording of README, feel free.\n\n> Thanks for putting this together. That looks reasonable.\n\nThanks for checking it. Pushed --- we can tweak things later\nif we decide the web-redirect idea is superior to this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Dec 2023 13:33:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ci: Build standalone INSTALL file"
}
] |
[
{
"msg_contents": "While looking at something else I noticed that pg_dump performs strdup without\nchecking the returned pointer, which will segfault in hasSuffix() in case of\nOOM. The attached, which should be backpatched to 16, changes to using\npg_strdup instead which handles it.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 20 Dec 2023 15:52:56 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unchecked strdup leading to segfault in pg_dump"
},
{
"msg_contents": "On Wed Dec 20, 2023 at 8:52 AM CST, Daniel Gustafsson wrote:\n> While looking at something else I noticed that pg_dump performs strdup without\n> checking the returned pointer, which will segfault in hasSuffix() in case of\n> OOM. The attached, which should be backpatched to 16, changes to using\n> pg_strdup instead which handles it.\n\nLooks good to me.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 20 Dec 2023 09:39:55 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unchecked strdup leading to segfault in pg_dump"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 09:39:55AM -0600, Tristan Partin wrote:\n> On Wed Dec 20, 2023 at 8:52 AM CST, Daniel Gustafsson wrote:\n>> While looking at something else I noticed that pg_dump performs strdup without\n>> checking the returned pointer, which will segfault in hasSuffix() in case of\n>> OOM. The attached, which should be backpatched to 16, changes to using\n>> pg_strdup instead which handles it.\n> \n> Looks good to me.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Dec 2023 10:20:26 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unchecked strdup leading to segfault in pg_dump"
}
] |
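A minimal sketch of the pattern being fixed in the thread above (illustrative only; the helper and variable names here are hypothetical, not the actual pg_dump code, and it assumes the frontend pg_strdup() that postgres_fe.h pulls in from common/fe_memutils.h):

    /* hypothetical frontend helper, not copied from pg_dump */
    #include "postgres_fe.h"    /* provides pg_strdup() via common/fe_memutils.h */

    static char *
    copy_name(const char *name)
    {
        /*
         * Before: a bare strdup() whose result is used unchecked, so an OOM
         * returns NULL and a later dereference (e.g. in a suffix check)
         * segfaults.
         */
        /* char *copy = strdup(name); */

        /*
         * After: pg_strdup() reports the allocation failure and exits
         * cleanly instead of handing back NULL.
         */
        char *copy = pg_strdup(name);

        return copy;
    }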
[
{
"msg_contents": "Hi.\n\nIn some cases, the planner underestimates FULL JOIN.\n\nExample:\n\npostgres=# CREATE TABLE t AS SELECT x AS a, null AS b FROM\ngenerate_series(1, 10) x;\npostgres=# ANALYZE;\npostgres=# EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM t t1 FULL JOIN t t2\nON t1.b = t2.b;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------\n Hash Full Join (cost=1.23..2.37 rows=10 width=72) (actual rows=20 loops=1)\n Hash Cond: (t1.b = t2.b)\n -> Seq Scan on t t1 (cost=0.00..1.10 rows=10 width=36) (actual rows=10\nloops=1)\n -> Hash (cost=1.10..1.10 rows=10 width=36) (actual rows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on t t2 (cost=0.00..1.10 rows=10 width=36) (actual\nrows=10 loops=1)\n Planning Time: 0.067 ms\n Execution Time: 0.052 ms\n(8 rows)\n\nAre these simple changes enough to improve this situation?\n\ndiff --git a/src/backend/optimizer/path/costsize.c\nb/src/backend/optimizer/path/costsize.c\nindex ef475d95a18..9cd43b778f3 100644\n--- a/src/backend/optimizer/path/costsize.c\n+++ b/src/backend/optimizer/path/costsize.c\n@@ -5259,6 +5259,8 @@ calc_joinrel_size_estimate(PlannerInfo *root,\n break;\n case JOIN_FULL:\n nrows = outer_rows * inner_rows * fkselec * jselec;\n+ if (2 * nrows < outer_rows + inner_rows)\n+ nrows = outer_rows + inner_rows - nrows;\n if (nrows < outer_rows)\n nrows = outer_rows;\n if (nrows < inner_rows)\n\nThere is no error in the above case:\n\npostgres=# EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM t t1 FULL JOIN t t2\nON t1.b = t2.b;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------\n Hash Full Join (cost=1.23..2.37 rows=20 width=72) (actual rows=20 loops=1)\n Hash Cond: (t1.b = t2.b)\n -> Seq Scan on t t1 (cost=0.00..1.10 rows=10 width=36) (actual rows=10\nloops=1)\n -> Hash (cost=1.10..1.10 rows=10 width=36) (actual rows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on t t2 (cost=0.00..1.10 rows=10 width=36) (actual\nrows=10 loops=1)\n Planning Time: 0.069 ms\n Execution Time: 0.065 ms\n(8 rows)\n\nHi.In some cases, the planner underestimates FULL JOIN.Example:postgres=# CREATE TABLE t AS SELECT x AS a, null AS b FROM generate_series(1, 10) x;postgres=# ANALYZE;postgres=# EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM t t1 FULL JOIN t t2 ON t1.b = t2.b; QUERY PLAN ------------------------------------------------------------------------------------------- Hash Full Join (cost=1.23..2.37 rows=10 width=72) (actual rows=20 loops=1) Hash Cond: (t1.b = t2.b) -> Seq Scan on t t1 (cost=0.00..1.10 rows=10 width=36) (actual rows=10 loops=1) -> Hash (cost=1.10..1.10 rows=10 width=36) (actual rows=10 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 9kB -> Seq Scan on t t2 (cost=0.00..1.10 rows=10 width=36) (actual rows=10 loops=1) Planning Time: 0.067 ms Execution Time: 0.052 ms(8 rows)Are these simple changes enough to improve this situation?diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.cindex ef475d95a18..9cd43b778f3 100644--- a/src/backend/optimizer/path/costsize.c+++ b/src/backend/optimizer/path/costsize.c@@ -5259,6 +5259,8 @@ calc_joinrel_size_estimate(PlannerInfo *root, break; case JOIN_FULL: nrows = outer_rows * inner_rows * fkselec * jselec;+ if (2 * nrows < outer_rows + inner_rows)+ nrows = outer_rows + inner_rows - nrows; if (nrows < outer_rows) nrows = outer_rows; if (nrows < inner_rows)There is no error in the above case:postgres=# EXPLAIN 
(ANALYZE, TIMING OFF) SELECT * FROM t t1 FULL JOIN t t2 ON t1.b = t2.b; QUERY PLAN ------------------------------------------------------------------------------------------- Hash Full Join (cost=1.23..2.37 rows=20 width=72) (actual rows=20 loops=1) Hash Cond: (t1.b = t2.b) -> Seq Scan on t t1 (cost=0.00..1.10 rows=10 width=36) (actual rows=10 loops=1) -> Hash (cost=1.10..1.10 rows=10 width=36) (actual rows=10 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 9kB -> Seq Scan on t t2 (cost=0.00..1.10 rows=10 width=36) (actual rows=10 loops=1) Planning Time: 0.069 ms Execution Time: 0.065 ms(8 rows)",
"msg_date": "Thu, 21 Dec 2023 00:31:51 +0700",
"msg_from": "Danil Anisimow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Estimation rows of FULL JOIN"
},
{
"msg_contents": "Danil Anisimow <[email protected]> writes:\n> In some cases, the planner underestimates FULL JOIN.\n\nPerhaps.\n\n> Are these simple changes enough to improve this situation?\n\nThis looks like an entirely ad-hoc change, one that could make as\nmany cases worse as it makes better. If you want to argue for\nmerging it, you at least need to provide some evidence for thinking\nit's a more accurate model of what will happen. And adjust the\nnearby comment to explain it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Dec 2023 12:39:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Estimation rows of FULL JOIN"
}
] |
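The intuition behind the proposed clamp is that a FULL JOIN emits every unmatched row from both inputs NULL-extended, so when few rows match the output size approaches outer_rows + inner_rows rather than the selectivity-based product. A small illustration of that lower bound (a sketch, not taken from the thread; the join clause is chosen so that nothing matches):

    DROP TABLE IF EXISTS t;
    CREATE TABLE t AS SELECT x AS a, null AS b FROM generate_series(1, 10) x;
    ANALYZE t;
    -- No rows satisfy the join clause, so every row of t1 and t2 comes out
    -- NULL-extended: 10 + 10 = 20 rows, regardless of what the
    -- selectivity-based estimate suggests.
    EXPLAIN (ANALYZE, TIMING OFF)
    SELECT * FROM t t1 FULL JOIN t t2 ON t1.a = t2.a + 100;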
[
{
"msg_contents": "Hi all,\n(Bertrand and Andres in CC.)\n\nWhile listening at Bertrand's talk about logical decoding on standbys\nlast week at Prague, I got surprised by the fact that we do not\nreflect in the catalogs the reason why a conflict happened for a slot.\nThere are three of them depending on ReplicationSlotInvalidationCause:\n- WAL removed.\n- Invalid horizon.\n- Insufficient WAL level.\n\nThis idea has been hinted around here on the original thread that led\nto be87200efd93:\nhttps://www.postgresql.org/message-id/[email protected]\nHowever v44 has picked up the idea of a boolean:\nhttps://www.postgresql.org/message-id/[email protected]\n\nReplicationSlotCtl holds this information, so couldn't it be useful\nfor monitoring purposes to know why a slot got invalidated and add a\ncolumn to pg_get_replication_slots()? This could just be an extra\ntext conflicting_reason, defaulting to NULL when there's nothing to\nsee.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 09:21:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Track in pg_replication_slots the reason why slots conflict?"
},
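For reference, the three causes listed above correspond to members of the existing C enum; roughly as it appears in the PostgreSQL 16 sources (src/include/replication/slot.h), with comments here paraphrasing that list rather than quoting the source:

    typedef enum ReplicationSlotInvalidationCause
    {
        RS_INVAL_NONE,          /* slot is not invalidated */
        RS_INVAL_WAL_REMOVED,   /* required WAL has been removed */
        RS_INVAL_HORIZON,       /* required rows have been removed */
        RS_INVAL_WAL_LEVEL,     /* wal_level insufficient on the primary */
    } ReplicationSlotInvalidationCause;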
{
"msg_contents": "On Thu, Dec 21, 2023 at 5:51 AM Michael Paquier <[email protected]> wrote:\n>\n> While listening at Bertrand's talk about logical decoding on standbys\n> last week at Prague, I got surprised by the fact that we do not\n> reflect in the catalogs the reason why a conflict happened for a slot.\n> There are three of them depending on ReplicationSlotInvalidationCause:\n> - WAL removed.\n> - Invalid horizon.\n> - Insufficient WAL level.\n>\n\nThe invalidation cause is also required by one of the features being\ndiscussed \"Synchronize slots from primary to standby\" [1] and there is\nalready a thread to discuss the same [2]. As that thread started\nyesterday only, you may not have noticed it. Currently, the proposal\nis to expose it via a function but we can extend it to also display\nvia view, feel free to share your opinion on that thread.\n\n[1] - https://www.postgresql.org/message-id/[email protected]\n[2] - https://www.postgresql.org/message-id/CAJpy0uBpr0ym12%2B0mXpjcRFA6N%3DanX%2BYk9aGU4EJhHNu%3DfWykQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Dec 2023 08:20:16 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 08:20:16AM +0530, Amit Kapila wrote:\n> The invalidation cause is also required by one of the features being\n> discussed \"Synchronize slots from primary to standby\" [1] and there is\n> already a thread to discuss the same [2]. As that thread started\n> yesterday only, you may not have noticed it. Currently, the proposal\n> is to expose it via a function but we can extend it to also display\n> via view, feel free to share your opinion on that thread.\n> \n> [1] - https://www.postgresql.org/message-id/[email protected]\n> [2] - https://www.postgresql.org/message-id/CAJpy0uBpr0ym12%2B0mXpjcRFA6N%3DanX%2BYk9aGU4EJhHNu%3DfWykQ%40mail.gmail.com\n\nAh thanks, missed this one. This cannot use a separate function,\nactually, and there is a good reason for that that has not been\nmentioned. I'll jump there.\n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 12:07:56 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-21 09:21:04 +0900, Michael Paquier wrote:\n> While listening at Bertrand's talk about logical decoding on standbys\n> last week at Prague, I got surprised by the fact that we do not\n> reflect in the catalogs the reason why a conflict happened for a slot.\n> There are three of them depending on ReplicationSlotInvalidationCause:\n> - WAL removed.\n> - Invalid horizon.\n> - Insufficient WAL level.\n\nIt should be extremely rare to hit any of these other than \"WAL removed\", so\nI'm not sure it's worth adding interface complexity to show them.\n\n\n> ReplicationSlotCtl holds this information, so couldn't it be useful\n> for monitoring purposes to know why a slot got invalidated and add a\n> column to pg_get_replication_slots()? This could just be an extra\n> text conflicting_reason, defaulting to NULL when there's nothing to\n> see.\n\nExtra columns aren't free from a usability perspective. IFF we do something, I\nthink it should be a single column with a cause.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 21 Dec 2023 01:40:15 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 3:10 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-12-21 09:21:04 +0900, Michael Paquier wrote:\n> > While listening at Bertrand's talk about logical decoding on standbys\n> > last week at Prague, I got surprised by the fact that we do not\n> > reflect in the catalogs the reason why a conflict happened for a slot.\n> > There are three of them depending on ReplicationSlotInvalidationCause:\n> > - WAL removed.\n> > - Invalid horizon.\n> > - Insufficient WAL level.\n>\n> It should be extremely rare to hit any of these other than \"WAL removed\", so\n> I'm not sure it's worth adding interface complexity to show them.\n>\n>\n> > ReplicationSlotCtl holds this information, so couldn't it be useful\n> > for monitoring purposes to know why a slot got invalidated and add a\n> > column to pg_get_replication_slots()? This could just be an extra\n> > text conflicting_reason, defaulting to NULL when there's nothing to\n> > see.\n>\n> Extra columns aren't free from a usability perspective. IFF we do something, I\n> think it should be a single column with a cause.\n\nThanks for the feedback. But do you mean that we replace existing\n'conflicting' column with 'cause' in both the function and view\n(pg_get_replication_slots() and pg_replication_slots)? Or do you mean\nthat we expose 'cause' from pg_get_replication_slots() and use that to\ndisplay 'conflicting' in pg_replication_slots view?\n\nAnd if we plan to return/display cause from either function or view,\nthen shall it be enum 'ReplicationSlotInvalidationCause' or\ndescription/text corresponding to enum?\n\n In the other feature being discussed \"Synchronize slots from primary\nto standby\" [1] , there is a requirement to replicate invalidation\ncause of slot from the primary to standby and thus it is needed in\nenum form there. And thus there was a suggestion earlier to have the\nfunction return enum-value and let the view display it as\ntext/description to the user. So kindly let us know your thoughts.\n\n[1] - https://www.postgresql.org/message-id/[email protected]\n\nthanks\nShveta\n\n\n",
"msg_date": "Thu, 21 Dec 2023 16:08:48 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-21 16:08:48 +0530, shveta malik wrote:\n> On Thu, Dec 21, 2023 at 3:10 PM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-12-21 09:21:04 +0900, Michael Paquier wrote:\n> > > While listening at Bertrand's talk about logical decoding on standbys\n> > > last week at Prague, I got surprised by the fact that we do not\n> > > reflect in the catalogs the reason why a conflict happened for a slot.\n> > > There are three of them depending on ReplicationSlotInvalidationCause:\n> > > - WAL removed.\n> > > - Invalid horizon.\n> > > - Insufficient WAL level.\n> >\n> > It should be extremely rare to hit any of these other than \"WAL removed\", so\n> > I'm not sure it's worth adding interface complexity to show them.\n> >\n> >\n> > > ReplicationSlotCtl holds this information, so couldn't it be useful\n> > > for monitoring purposes to know why a slot got invalidated and add a\n> > > column to pg_get_replication_slots()? This could just be an extra\n> > > text conflicting_reason, defaulting to NULL when there's nothing to\n> > > see.\n> >\n> > Extra columns aren't free from a usability perspective. IFF we do something, I\n> > think it should be a single column with a cause.\n>\n> Thanks for the feedback. But do you mean that we replace existing\n> 'conflicting' column with 'cause' in both the function and view\n> (pg_get_replication_slots() and pg_replication_slots)? Or do you mean\n> that we expose 'cause' from pg_get_replication_slots() and use that to\n> display 'conflicting' in pg_replication_slots view?\n\nI'm not entirely sure I understand the difference - just whether we add one\nnew column or replace the existing 'conflicting' column? I can see arguments\nfor either. A conflicting column where NULL indicates no conflict, and other\nvalues indicate the reason for the conflict, doesn't seem too bad.\n\n\n> And if we plan to return/display cause from either function or view,\n> then shall it be enum 'ReplicationSlotInvalidationCause' or\n> description/text corresponding to enum?\n\nWe clearly can't just expose the numerical value for a C enum. So it has to be\nconverted to something SQL representable.\n\n\n> In the other feature being discussed \"Synchronize slots from primary\n> to standby\" [1] , there is a requirement to replicate invalidation\n> cause of slot from the primary to standby and thus it is needed in\n> enum form there. And thus there was a suggestion earlier to have the\n> function return enum-value and let the view display it as\n> text/description to the user. So kindly let us know your thoughts.\n>\n> [1] - https://www.postgresql.org/message-id/[email protected]\n\nCan you point me to a more specific message for that requirement? It seems\npretty odd to me. Your link goes to the top of a 400 message thread, I don't\nhave time to find one specific design point in that...\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Thu, 21 Dec 2023 03:25:32 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 5:04 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-12-21 16:08:48 +0530, shveta malik wrote:\n> > On Thu, Dec 21, 2023 at 3:10 PM Andres Freund <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2023-12-21 09:21:04 +0900, Michael Paquier wrote:\n> > > > While listening at Bertrand's talk about logical decoding on standbys\n> > > > last week at Prague, I got surprised by the fact that we do not\n> > > > reflect in the catalogs the reason why a conflict happened for a slot.\n> > > > There are three of them depending on ReplicationSlotInvalidationCause:\n> > > > - WAL removed.\n> > > > - Invalid horizon.\n> > > > - Insufficient WAL level.\n> > >\n> > > It should be extremely rare to hit any of these other than \"WAL removed\", so\n> > > I'm not sure it's worth adding interface complexity to show them.\n> > >\n> > >\n> > > > ReplicationSlotCtl holds this information, so couldn't it be useful\n> > > > for monitoring purposes to know why a slot got invalidated and add a\n> > > > column to pg_get_replication_slots()? This could just be an extra\n> > > > text conflicting_reason, defaulting to NULL when there's nothing to\n> > > > see.\n> > >\n> > > Extra columns aren't free from a usability perspective. IFF we do something, I\n> > > think it should be a single column with a cause.\n> >\n> > Thanks for the feedback. But do you mean that we replace existing\n> > 'conflicting' column with 'cause' in both the function and view\n> > (pg_get_replication_slots() and pg_replication_slots)? Or do you mean\n> > that we expose 'cause' from pg_get_replication_slots() and use that to\n> > display 'conflicting' in pg_replication_slots view?\n>\n> I'm not entirely sure I understand the difference - just whether we add one\n> new column or replace the existing 'conflicting' column? I can see arguments\n> for either. A conflicting column where NULL indicates no conflict, and other\n> values indicate the reason for the conflict, doesn't seem too bad.\n>\n>\n> > And if we plan to return/display cause from either function or view,\n> > then shall it be enum 'ReplicationSlotInvalidationCause' or\n> > description/text corresponding to enum?\n>\n> We clearly can't just expose the numerical value for a C enum. So it has to be\n> converted to something SQL representable.\n>\n>\n> > In the other feature being discussed \"Synchronize slots from primary\n> > to standby\" [1] , there is a requirement to replicate invalidation\n> > cause of slot from the primary to standby and thus it is needed in\n> > enum form there. And thus there was a suggestion earlier to have the\n> > function return enum-value and let the view display it as\n> > text/description to the user. So kindly let us know your thoughts.\n> >\n> > [1] - https://www.postgresql.org/message-id/[email protected]\n>\n> Can you point me to a more specific message for that requirement? It seems\n> pretty odd to me. Your link goes to the top of a 400 message thread, I don't\n> have time to find one specific design point in that...\n\nIt is currently implemented there as a new function\n'pg_get_slot_invalidation_cause()' without changing existing view\npg_replication_slots. (See 2.1 in [1] where it was introduced).\n\nThen it was suggested in [2] to fork a new thread as it makes sense to\nhave it independent of this slot-synchronization feature.\n\nThe new thread forked is [3]. 
In that thread, the issues in having a\nnew function pg_get_slot_invalidation_cause() are discussed and also\nwe came to know about this very thread that started the next day.\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uAuzbzvcjpnzFTiWuDBctnH-SDZC6AZabPX65x9GWBrjQ%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CAA4eK1K0KCDNtpDyUKucMRdyK-5KdrCRWakCpHEdHT9muAiEOw%40mail.gmail.com\n[3]: https://www.postgresql.org/message-id/CAJpy0uBpr0ym12%2B0mXpjcRFA6N%3DanX%2BYk9aGU4EJhHNu%3DfWykQ%40mail.gmail.com\n\nthanks\nShveta\n\n\n",
"msg_date": "Thu, 21 Dec 2023 17:38:35 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 5:05 PM Andres Freund <[email protected]> wrote:\n>\n> On 2023-12-21 16:08:48 +0530, shveta malik wrote:\n> > On Thu, Dec 21, 2023 at 3:10 PM Andres Freund <[email protected]> wrote:\n> > >\n> > > Extra columns aren't free from a usability perspective. IFF we do something, I\n> > > think it should be a single column with a cause.\n> >\n> > Thanks for the feedback. But do you mean that we replace existing\n> > 'conflicting' column with 'cause' in both the function and view\n> > (pg_get_replication_slots() and pg_replication_slots)? Or do you mean\n> > that we expose 'cause' from pg_get_replication_slots() and use that to\n> > display 'conflicting' in pg_replication_slots view?\n>\n> I'm not entirely sure I understand the difference - just whether we add one\n> new column or replace the existing 'conflicting' column? I can see arguments\n> for either.\n>\n\nAgreed. I think the argument against replacing the existing\n'conflicting' column is that there is a chance that it is being used\nby some monitoring script which I guess shouldn't be a big deal to\nchange. So, if we don't see that as a problem, I would prefer to have\na single column with conflict reason where one of its values indicates\nthere is no conflict.\n\nA conflicting column where NULL indicates no conflict, and other\n> values indicate the reason for the conflict, doesn't seem too bad.\n>\n\nThis is fine too.\n\n>\n> > And if we plan to return/display cause from either function or view,\n> > then shall it be enum 'ReplicationSlotInvalidationCause' or\n> > description/text corresponding to enum?\n>\n> We clearly can't just expose the numerical value for a C enum. So it has to be\n> converted to something SQL representable.\n>\n\nWe can return int2 value from the function pg_get_replication_slots()\nand then use that to display a string in the view\npg_replication_slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Dec 2023 19:55:51 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "Hi,\n\nOn Thu, Dec 21, 2023 at 07:55:51PM +0530, Amit Kapila wrote:\n> On Thu, Dec 21, 2023 at 5:05 PM Andres Freund <[email protected]> wrote:\n> > I'm not entirely sure I understand the difference - just whether we add one\n> > new column or replace the existing 'conflicting' column? I can see arguments\n> > for either.\n> >\n> \n> Agreed. I think the argument against replacing the existing\n> 'conflicting' column is that there is a chance that it is being used\n> by some monitoring script which I guess shouldn't be a big deal to\n> change. So, if we don't see that as a problem, I would prefer to have\n> a single column with conflict reason where one of its values indicates\n> there is no conflict.\n\n+1\n\n> A conflicting column where NULL indicates no conflict, and other\n> > values indicate the reason for the conflict, doesn't seem too bad.\n> >\n> \n> This is fine too.\n\n+1\n\n> >\n> > > And if we plan to return/display cause from either function or view,\n> > > then shall it be enum 'ReplicationSlotInvalidationCause' or\n> > > description/text corresponding to enum?\n> >\n> > We clearly can't just expose the numerical value for a C enum. So it has to be\n> > converted to something SQL representable.\n> >\n> \n> We can return int2 value from the function pg_get_replication_slots()\n> and then use that to display a string in the view\n> pg_replication_slots.\n\nYeah, and in the sync slot related work we could use pg_get_replication_slots()\nthen to get the enum.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Dec 2023 14:51:24 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On 2023-12-21 19:55:51 +0530, Amit Kapila wrote:\n> On Thu, Dec 21, 2023 at 5:05 PM Andres Freund <[email protected]> wrote:\n> > We clearly can't just expose the numerical value for a C enum. So it has to be\n> > converted to something SQL representable.\n> >\n> \n> We can return int2 value from the function pg_get_replication_slots()\n> and then use that to display a string in the view\n> pg_replication_slots.\n\nI strongly dislike that pattern. It just leads to complicated views - and\ndoesn't provide a single benefit that I am aware of. It's much bettter to\nsimply populate the text version in pg_get_replication_slots().\n\n\n",
"msg_date": "Thu, 21 Dec 2023 07:26:56 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 07:26:56AM -0800, Andres Freund wrote:\n> On 2023-12-21 19:55:51 +0530, Amit Kapila wrote:\n>> We can return int2 value from the function pg_get_replication_slots()\n>> and then use that to display a string in the view\n>> pg_replication_slots.\n> \n> I strongly dislike that pattern. It just leads to complicated views - and\n> doesn't provide a single benefit that I am aware of. It's much bettter to\n> simply populate the text version in pg_get_replication_slots().\n\nI agree that this is a better integration in the view, and that's what\nI would do FWIW.\n\nAmit, how much of a problem would it be to do a text->enum mapping\nwhen synchronizing the slots from a primary to a standby? Sure you\ncould have a system function that does some of the mapping work, but I\nam not sure what's the best integration when it comes to the other\npatch.\n--\nMichael",
"msg_date": "Fri, 22 Dec 2023 08:29:56 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 5:00 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Dec 21, 2023 at 07:26:56AM -0800, Andres Freund wrote:\n> > On 2023-12-21 19:55:51 +0530, Amit Kapila wrote:\n> >> We can return int2 value from the function pg_get_replication_slots()\n> >> and then use that to display a string in the view\n> >> pg_replication_slots.\n> >\n> > I strongly dislike that pattern. It just leads to complicated views - and\n> > doesn't provide a single benefit that I am aware of. It's much bettter to\n> > simply populate the text version in pg_get_replication_slots().\n>\n> I agree that this is a better integration in the view, and that's what\n> I would do FWIW.\n>\n> Amit, how much of a problem would it be to do a text->enum mapping\n> when synchronizing the slots from a primary to a standby?\n>\n\nThere is no problem as such in that. We were trying to see if there is\na more convenient way but let's move by having cause as text from both\nthe function and view as that seems to be a preferred way.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 22 Dec 2023 08:20:25 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 8:21 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Thu, Dec 21, 2023 at 07:55:51PM +0530, Amit Kapila wrote:\n> > On Thu, Dec 21, 2023 at 5:05 PM Andres Freund <[email protected]> wrote:\n> > > I'm not entirely sure I understand the difference - just whether we add one\n> > > new column or replace the existing 'conflicting' column? I can see arguments\n> > > for either.\n> > >\n> >\n> > Agreed. I think the argument against replacing the existing\n> > 'conflicting' column is that there is a chance that it is being used\n> > by some monitoring script which I guess shouldn't be a big deal to\n> > change. So, if we don't see that as a problem, I would prefer to have\n> > a single column with conflict reason where one of its values indicates\n> > there is no conflict.\n>\n> +1\n>\n\nDoes anyone else have a preference on whether to change the existing\ncolumn or add a new one?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 26 Dec 2023 08:44:44 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Tue, Dec 26, 2023 at 08:44:44AM +0530, Amit Kapila wrote:\n> Does anyone else have a preference on whether to change the existing\n> column or add a new one?\n\nJust to be clear here, I'd vote for replacing the existing boolean\nwith a text.\n--\nMichael",
"msg_date": "Tue, 26 Dec 2023 17:23:56 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "Hi,\n\nOn Tue, Dec 26, 2023 at 05:23:56PM +0900, Michael Paquier wrote:\n> On Tue, Dec 26, 2023 at 08:44:44AM +0530, Amit Kapila wrote:\n> > Does anyone else have a preference on whether to change the existing\n> > column or add a new one?\n> \n> Just to be clear here, I'd vote for replacing the existing boolean\n> with a text.\n\nSame here, I'd vote to avoid 2 columns having the same \"meaning\".\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 26 Dec 2023 08:50:00 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Thu, 21 Dec 2023 at 09:26, Amit Kapila <[email protected]> wrote:\n\n\n> A conflicting column where NULL indicates no conflict, and other\n> > values indicate the reason for the conflict, doesn't seem too bad.\n> >\n>\n> This is fine too.\n>\n\nI prefer this option. There is precedent for doing it this way, for example\nin pg_stat_activity.wait_event_type.\n\nThe most common test of this field is likely to be \"is there a conflict\"\nand it's better to write this as \"[fieldname] IS NOT NULL\" than to\nintroduce a magic constant. Also, it makes clear to future maintainers that\nthis field has one purpose: saying what type of conflict there is, if any.\nIf we find ourselves wanting to record a new non-conflict status (no idea\nwhat that could be: \"almost conflict\"? \"probably conflict soon\"?) there\nwould be less temptation to break existing tests for \"is there a conflict\".\n\nOn Thu, 21 Dec 2023 at 09:26, Amit Kapila <[email protected]> wrote: \nA conflicting column where NULL indicates no conflict, and other\n> values indicate the reason for the conflict, doesn't seem too bad.\n>\n\nThis is fine too.I prefer this option. There is precedent for doing it this way, for example in pg_stat_activity.wait_event_type.The most common test of this field is likely to be \"is there a conflict\" and it's better to write this as \"[fieldname] IS NOT NULL\" than to introduce a magic constant. Also, it makes clear to future maintainers that this field has one purpose: saying what type of conflict there is, if any. If we find ourselves wanting to record a new non-conflict status (no idea what that could be: \"almost conflict\"? \"probably conflict soon\"?) there would be less temptation to break existing tests for \"is there a conflict\".",
"msg_date": "Tue, 26 Dec 2023 09:05:10 -0500",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Tue, Dec 26, 2023 at 7:35 PM Isaac Morland <[email protected]> wrote:\n>\n> On Thu, 21 Dec 2023 at 09:26, Amit Kapila <[email protected]> wrote:\n>\n>>\n>> A conflicting column where NULL indicates no conflict, and other\n>> > values indicate the reason for the conflict, doesn't seem too bad.\n>> >\n>>\n>> This is fine too.\n>\n>\n> I prefer this option. There is precedent for doing it this way, for example in pg_stat_activity.wait_event_type.\n>\n> The most common test of this field is likely to be \"is there a conflict\" and it's better to write this as \"[fieldname] IS NOT NULL\" than to introduce a magic constant. Also, it makes clear to future maintainers that this field has one purpose: saying what type of conflict there is, if any. If we find ourselves wanting to record a new non-conflict status (no idea what that could be: \"almost conflict\"? \"probably conflict soon\"?) there would be less temptation to break existing tests for \"is there a conflict\".\n\n+1 on using \"[fieldname] IS NOT NULL\" to find \"is there a conflict\"\n\nPFA the patch which attempts to implement this.\n\nThis patch changes the existing 'conflicting' field to\n'conflicting_cause' in pg_replication_slots. This new field is always\nNULL for physical slots (like the previous field conflicting), as well\nas for those logical slots which are not invalidated.\n\nthanks\nShveta",
"msg_date": "Wed, 27 Dec 2023 15:08:44 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 3:08 PM shveta malik <[email protected]> wrote:\n>\n> PFA the patch which attempts to implement this.\n>\n> This patch changes the existing 'conflicting' field to\n> 'conflicting_cause' in pg_replication_slots.\n>\n\nThis name sounds a bit odd to me, would it be better to name it as\nconflict_cause?\n\nA few other minor comments:\n=========================\n*\n+ <structfield>conflicting_cause</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ Cause if this logical slot conflicted with recovery (and so is now\n+ invalidated). It is always NULL for physical slots, as well as for\n+ those logical slots which are not invalidated. Possible values are:\n\nWould it better to use description as follows:\" Cause of logical\nslot's conflict with recovery. It is always NULL for physical slots,\nas well as for logical slots which are not invalidated. The non-NULL\nvalues indicate that the slot is marked as invalidated. Possible\nvalues are:\n..\"\n\n*\n $res = $node_standby->safe_psql(\n 'postgres', qq(\n- select bool_and(conflicting) from pg_replication_slots;));\n+ select bool_and(conflicting) from\n+ (select conflicting_cause is not NULL as conflicting from\npg_replication_slots);));\n\nWon't the query \"select conflicting_cause is not NULL as conflicting\nfrom pg_replication_slots\" can return false even for physical slots\nand then impact the result of the main query whereas the original\nquery would seem to be simply ignoring physical slots? If this\nobservation is correct then you might want to add a 'slot_type'\ncondition in the new query.\n\n* After introducing check_slots_conflicting_cause(), do we need to\nhave check_slots_conflicting_status()? Aren't both checking the same\nthing?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Dec 2023 16:16:23 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
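A sketch of what the adjusted test query could look like with the suggested slot_type condition (column name as in the v1 patch under review, conflicting_cause; this is an illustration, not the committed test):

    -- Only logical slots can conflict with recovery, so restrict to them
    -- before collapsing the cause column into a boolean.
    select bool_and(conflicting)
      from (select conflicting_cause is not null as conflicting
              from pg_replication_slots
             where slot_type = 'logical') s;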
{
"msg_contents": "On Wed, Dec 27, 2023 at 4:16 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Dec 27, 2023 at 3:08 PM shveta malik <[email protected]> wrote:\n> >\n> > PFA the patch which attempts to implement this.\n> >\n> > This patch changes the existing 'conflicting' field to\n> > 'conflicting_cause' in pg_replication_slots.\n> >\n>\n> This name sounds a bit odd to me, would it be better to name it as\n> conflict_cause?\n>\n> A few other minor comments:\n> =========================\n> *\n> + <structfield>conflicting_cause</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + Cause if this logical slot conflicted with recovery (and so is now\n> + invalidated). It is always NULL for physical slots, as well as for\n> + those logical slots which are not invalidated. Possible values are:\n>\n> Would it better to use description as follows:\" Cause of logical\n> slot's conflict with recovery. It is always NULL for physical slots,\n> as well as for logical slots which are not invalidated. The non-NULL\n> values indicate that the slot is marked as invalidated. Possible\n> values are:\n> ..\"\n>\n> *\n> $res = $node_standby->safe_psql(\n> 'postgres', qq(\n> - select bool_and(conflicting) from pg_replication_slots;));\n> + select bool_and(conflicting) from\n> + (select conflicting_cause is not NULL as conflicting from\n> pg_replication_slots);));\n>\n> Won't the query \"select conflicting_cause is not NULL as conflicting\n> from pg_replication_slots\" can return false even for physical slots\n> and then impact the result of the main query whereas the original\n> query would seem to be simply ignoring physical slots? If this\n> observation is correct then you might want to add a 'slot_type'\n> condition in the new query.\n>\n> * After introducing check_slots_conflicting_cause(), do we need to\n> have check_slots_conflicting_status()? Aren't both checking the same\n> thing?\n\nI think it is needed for the case where we want to check that there is\nno conflict.\n\n# Verify slots are reported as non conflicting in pg_replication_slots\ncheck_slots_conflicting_status(0);\n\nFor the cases where there is conflict, I think\ncheck_slots_conflicting_cause() can replace\ncheck_slots_conflicting_status().\n\nthanks\nShveta\n\n\n",
"msg_date": "Thu, 28 Dec 2023 10:16:26 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 10:16 AM shveta malik <[email protected]> wrote:\n>\n> On Wed, Dec 27, 2023 at 4:16 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Dec 27, 2023 at 3:08 PM shveta malik <[email protected]> wrote:\n> > >\n> > > PFA the patch which attempts to implement this.\n> > >\n> > > This patch changes the existing 'conflicting' field to\n> > > 'conflicting_cause' in pg_replication_slots.\n> > >\n> >\n> > This name sounds a bit odd to me, would it be better to name it as\n> > conflict_cause?\n> >\n> > A few other minor comments:\n> > =========================\n\nThanks for the feedback Amit.\n\n> > *\n> > + <structfield>conflicting_cause</structfield> <type>text</type>\n> > + </para>\n> > + <para>\n> > + Cause if this logical slot conflicted with recovery (and so is now\n> > + invalidated). It is always NULL for physical slots, as well as for\n> > + those logical slots which are not invalidated. Possible values are:\n> >\n> > Would it better to use description as follows:\" Cause of logical\n> > slot's conflict with recovery. It is always NULL for physical slots,\n> > as well as for logical slots which are not invalidated. The non-NULL\n> > values indicate that the slot is marked as invalidated. Possible\n> > values are:\n> > ..\"\n> >\n> > *\n> > $res = $node_standby->safe_psql(\n> > 'postgres', qq(\n> > - select bool_and(conflicting) from pg_replication_slots;));\n> > + select bool_and(conflicting) from\n> > + (select conflicting_cause is not NULL as conflicting from\n> > pg_replication_slots);));\n> >\n> > Won't the query \"select conflicting_cause is not NULL as conflicting\n> > from pg_replication_slots\" can return false even for physical slots\n> > and then impact the result of the main query whereas the original\n> > query would seem to be simply ignoring physical slots? If this\n> > observation is correct then you might want to add a 'slot_type'\n> > condition in the new query.\n> >\n> > * After introducing check_slots_conflicting_cause(), do we need to\n> > have check_slots_conflicting_status()? Aren't both checking the same\n> > thing?\n>\n> I think it is needed for the case where we want to check that there is\n> no conflict.\n>\n> # Verify slots are reported as non conflicting in pg_replication_slots\n> check_slots_conflicting_status(0);\n>\n> For the cases where there is conflict, I think\n> check_slots_conflicting_cause() can replace\n> check_slots_conflicting_status().\n\nI have removed check_slots_conflicting_status() and where it was\nneeded to check non-conflicting, I have added a simple query.\n\nPFA the v2-patch with all your comments addressed.\n\nthanks\nShveta",
"msg_date": "Thu, 28 Dec 2023 14:58:25 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 2:58 PM shveta malik <[email protected]> wrote:\n>\n> PFA the v2-patch with all your comments addressed.\n>\n\nDoes anyone have a preference for a column name? The options on the\ntable are conflict_cause, conflicting_cause, conflict_reason. Any\nothers? I was checking docs for similar usage and found\n\"pg_event_trigger_table_rewrite_reason\" function, so based on that we\ncan even go with conflict_reason.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 29 Dec 2023 09:20:52 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 09:20:52AM +0530, Amit Kapila wrote:\n> Does anyone have a preference for a column name? The options on the\n> table are conflict_cause, conflicting_cause, conflict_reason. Any\n> others? I was checking docs for similar usage and found\n> \"pg_event_trigger_table_rewrite_reason\" function, so based on that we\n> can even go with conflict_reason.\n\n\"conflict_reason\" sounds like the natural choice here.\n--\nMichael",
"msg_date": "Fri, 29 Dec 2023 19:05:07 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 3:35 PM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Dec 29, 2023 at 09:20:52AM +0530, Amit Kapila wrote:\n> > Does anyone have a preference for a column name? The options on the\n> > table are conflict_cause, conflicting_cause, conflict_reason. Any\n> > others? I was checking docs for similar usage and found\n> > \"pg_event_trigger_table_rewrite_reason\" function, so based on that we\n> > can even go with conflict_reason.\n>\n> \"conflict_reason\" sounds like the natural choice here.\n\nDo we have more comments on the patch apart from column_name?\n\nthanks\nShveta\n\n\n",
"msg_date": "Mon, 1 Jan 2024 09:14:43 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Mon, Jan 1, 2024 at 9:14 AM shveta malik <[email protected]> wrote:\n>\n> On Fri, Dec 29, 2023 at 3:35 PM Michael Paquier <[email protected]> wrote:\n> >\n> > On Fri, Dec 29, 2023 at 09:20:52AM +0530, Amit Kapila wrote:\n> > > Does anyone have a preference for a column name? The options on the\n> > > table are conflict_cause, conflicting_cause, conflict_reason. Any\n> > > others? I was checking docs for similar usage and found\n> > > \"pg_event_trigger_table_rewrite_reason\" function, so based on that we\n> > > can even go with conflict_reason.\n> >\n> > \"conflict_reason\" sounds like the natural choice here.\n>\n> Do we have more comments on the patch apart from column_name?\n>\n> thanks\n> Shveta\n\nPFA v3 after changing column name to 'conflict_reason'\n\nthanks\nShveta",
"msg_date": "Mon, 1 Jan 2024 12:31:54 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Mon, Jan 1, 2024 at 12:32 PM shveta malik <[email protected]> wrote:\n>\n> PFA v3 after changing column name to 'conflict_reason'\n>\n\nFew minor comments:\n===================\n1.\n+ <para>\n+ <literal>wal_removed</literal> = required WAL has been removed.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <literal>rows_removed</literal> = required rows have been removed.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <literal>wal_level_insufficient</literal> = wal_level\ninsufficient on the primary server.\n+ </para>\n\nShould we use the same style to write the description as we are using\nfor the wal_status column? For example, <literal>wal_removed</literal>\nmeans that the required WAL has been removed.\n\n2.\n+ <para>\n+ The reason of logical slot's conflict with recovery.\n\nMy grammar tool says it should be: \"The reason for the logical slot's\nconflict with recovery.\"\n\nOther than these minor comments, the patch looks good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 1 Jan 2024 16:30:41 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Mon, Jan 1, 2024 at 4:30 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Jan 1, 2024 at 12:32 PM shveta malik <[email protected]> wrote:\n> >\n> > PFA v3 after changing column name to 'conflict_reason'\n> >\n>\n> Few minor comments:\n> ===================\n> 1.\n> + <para>\n> + <literal>wal_removed</literal> = required WAL has been removed.\n> + </para>\n> + </listitem>\n> + <listitem>\n> + <para>\n> + <literal>rows_removed</literal> = required rows have been removed.\n> + </para>\n> + </listitem>\n> + <listitem>\n> + <para>\n> + <literal>wal_level_insufficient</literal> = wal_level\n> insufficient on the primary server.\n> + </para>\n>\n> Should we use the same style to write the description as we are using\n> for the wal_status column? For example, <literal>wal_removed</literal>\n> means that the required WAL has been removed.\n>\n> 2.\n> + <para>\n> + The reason of logical slot's conflict with recovery.\n>\n> My grammar tool says it should be: \"The reason for the logical slot's\n> conflict with recovery.\"\n>\n> Other than these minor comments, the patch looks good to me.\n\nPFA v4 which addresses the doc comments.\n\nthanks\nShveta",
"msg_date": "Mon, 1 Jan 2024 17:17:19 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Mon, Jan 1, 2024 at 5:17 PM shveta malik <[email protected]> wrote:\n>\n> On Mon, Jan 1, 2024 at 4:30 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Jan 1, 2024 at 12:32 PM shveta malik <[email protected]> wrote:\n> > >\n> > > PFA v3 after changing column name to 'conflict_reason'\n> > >\n> >\n> > Few minor comments:\n> > ===================\n> > 1.\n> > + <para>\n> > + <literal>wal_removed</literal> = required WAL has been removed.\n> > + </para>\n> > + </listitem>\n> > + <listitem>\n> > + <para>\n> > + <literal>rows_removed</literal> = required rows have been removed.\n> > + </para>\n> > + </listitem>\n> > + <listitem>\n> > + <para>\n> > + <literal>wal_level_insufficient</literal> = wal_level\n> > insufficient on the primary server.\n> > + </para>\n> >\n> > Should we use the same style to write the description as we are using\n> > for the wal_status column? For example, <literal>wal_removed</literal>\n> > means that the required WAL has been removed.\n> >\n> > 2.\n> > + <para>\n> > + The reason of logical slot's conflict with recovery.\n> >\n> > My grammar tool says it should be: \"The reason for the logical slot's\n> > conflict with recovery.\"\n> >\n> > Other than these minor comments, the patch looks good to me.\n>\n> PFA v4 which addresses the doc comments.\n\nPlease ignore the previous patch and PFA new v4 (v4_2). The only\nchange from the earlier v4 is the subject correction in commit msg.\n\nthanks\nShveta",
"msg_date": "Mon, 1 Jan 2024 17:23:58 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Mon, Jan 1, 2024 at 5:24 PM shveta malik <[email protected]> wrote:\n>\n> Please ignore the previous patch and PFA new v4 (v4_2). The only\n> change from the earlier v4 is the subject correction in commit msg.\n>\n\nThe patch looks good to me. I have slightly changed one of the\ndescriptions in the docs and also modified the commit message a bit. I\nwill push this after two days unless there are any more\ncomments/suggestions.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 2 Jan 2024 10:35:59 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jan 02, 2024 at 10:35:59AM +0530, Amit Kapila wrote:\n> On Mon, Jan 1, 2024 at 5:24 PM shveta malik <[email protected]> wrote:\n> >\n> > Please ignore the previous patch and PFA new v4 (v4_2). The only\n> > change from the earlier v4 is the subject correction in commit msg.\n> >\n\nThanks for the patch!\n\n> The patch looks good to me. I have slightly changed one of the\n> descriptions in the docs and also modified the commit message a bit. I\n> will push this after two days unless there are any more\n> comments/suggestions.\n>\n\nThe patch LGTM, I just have a Nit comment:\n\n+ <literal>wal_level_insufficient</literal> means that the\n+ <xref linkend=\"guc-wal-level\"/> is insufficient on the primary\n+ server.\n\nI'd prefer \"primary_wal_level\" instead of \"wal_level_insufficient\". I think it's\nbetter to directly mention it is linked to the primary (without the need to refer\nto the documentation) and that the fact that it is \"insufficient\" is more or less\nimplicit.\n\nBasically I think that with \"primary_wal_level\" one would need to refer to the doc\nless frequently than with \"wal_level_insufficient\".\n\nBut again, that's a Nit so feel free to ignore.\n\nRegards,\n \n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 14:07:58 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Tue, Jan 02, 2024 at 02:07:58PM +0000, Bertrand Drouvot wrote:\n> + <literal>wal_level_insufficient</literal> means that the\n> + <xref linkend=\"guc-wal-level\"/> is insufficient on the primary\n> + server.\n> \n> I'd prefer \"primary_wal_level\" instead of \"wal_level_insufficient\". I think it's\n> better to directly mention it is linked to the primary (without the need to refer\n> to the documentation) and that the fact that it is \"insufficient\" is more or less\n> implicit.\n> \n> Basically I think that with \"primary_wal_level\" one would need to refer to the doc\n> less frequently than with \"wal_level_insufficient\".\n\nI can see your point, but wal_level_insufficient speaks a bit more to\nme because of its relationship with the GUC setting. Something like\nwal_level_insufficient_on_primary may speak better, but that's also\nquite long. I'm OK with what the patch does.\n\n+ as invalidated. Possible values are:\n+ <itemizedlist spacing=\"compact\">\nHigher-level nit: indentation seems to be one space off here.\n--\nMichael",
"msg_date": "Wed, 3 Jan 2024 10:39:46 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Wed, Jan 3, 2024 at 7:10 AM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Jan 02, 2024 at 02:07:58PM +0000, Bertrand Drouvot wrote:\n> > + <literal>wal_level_insufficient</literal> means that the\n> > + <xref linkend=\"guc-wal-level\"/> is insufficient on the primary\n> > + server.\n> >\n> > I'd prefer \"primary_wal_level\" instead of \"wal_level_insufficient\". I think it's\n> > better to directly mention it is linked to the primary (without the need to refer\n> > to the documentation) and that the fact that it is \"insufficient\" is more or less\n> > implicit.\n> >\n> > Basically I think that with \"primary_wal_level\" one would need to refer to the doc\n> > less frequently than with \"wal_level_insufficient\".\n>\n> I can see your point, but wal_level_insufficient speaks a bit more to\n> me because of its relationship with the GUC setting. Something like\n> wal_level_insufficient_on_primary may speak better, but that's also\n> quite long. I'm OK with what the patch does.\n>\n\nThanks, I also prefer \"wal_level_insufficient\". To me\n\"primary_wal_level\" sounds more along the lines of a GUC name than the\nconflict_reason. The other names that come to mind are\n\"wal_level_lower_than_required\", \"wal_level_lower\",\n\"wal_level_lesser_than_required\", \"wal_level_lesser\" but I feel\n\"wal_level_insufficient\" sounds better than these. Having said that, I\nam open to any of these or better options for this conflict_reason.\n\n> + as invalidated. Possible values are:\n> + <itemizedlist spacing=\"compact\">\n> Higher-level nit: indentation seems to be one space off here.\n>\n\nThanks, fixed in the attached patch.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 3 Jan 2024 08:53:44 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jan 03, 2024 at 08:53:44AM +0530, Amit Kapila wrote:\n> On Wed, Jan 3, 2024 at 7:10 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Tue, Jan 02, 2024 at 02:07:58PM +0000, Bertrand Drouvot wrote:\n> > > + <literal>wal_level_insufficient</literal> means that the\n> > > + <xref linkend=\"guc-wal-level\"/> is insufficient on the primary\n> > > + server.\n> > >\n> > > I'd prefer \"primary_wal_level\" instead of \"wal_level_insufficient\". I think it's\n> > > better to directly mention it is linked to the primary (without the need to refer\n> > > to the documentation) and that the fact that it is \"insufficient\" is more or less\n> > > implicit.\n> > >\n> > > Basically I think that with \"primary_wal_level\" one would need to refer to the doc\n> > > less frequently than with \"wal_level_insufficient\".\n> >\n> > I can see your point, but wal_level_insufficient speaks a bit more to\n> > me because of its relationship with the GUC setting. Something like\n> > wal_level_insufficient_on_primary may speak better, but that's also\n> > quite long. I'm OK with what the patch does.\n> >\n> \n> Thanks, I also prefer \"wal_level_insufficient\". To me\n> \"primary_wal_level\" sounds more along the lines of a GUC name than the\n> conflict_reason. The other names that come to mind are\n> \"wal_level_lower_than_required\", \"wal_level_lower\",\n> \"wal_level_lesser_than_required\", \"wal_level_lesser\" but I feel\n> \"wal_level_insufficient\" sounds better than these. Having said that, I\n> am open to any of these or better options for this conflict_reason.\n> \n\nThank you both for giving your thoughts on it, I got your points and I'm OK with\n\"wal_level_insufficient\".\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 3 Jan 2024 07:33:49 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
},
{
"msg_contents": "On Wed, 3 Jan 2024 at 08:54, Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jan 3, 2024 at 7:10 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Tue, Jan 02, 2024 at 02:07:58PM +0000, Bertrand Drouvot wrote:\n> > > + <literal>wal_level_insufficient</literal> means that the\n> > > + <xref linkend=\"guc-wal-level\"/> is insufficient on the primary\n> > > + server.\n> > >\n> > > I'd prefer \"primary_wal_level\" instead of \"wal_level_insufficient\". I think it's\n> > > better to directly mention it is linked to the primary (without the need to refer\n> > > to the documentation) and that the fact that it is \"insufficient\" is more or less\n> > > implicit.\n> > >\n> > > Basically I think that with \"primary_wal_level\" one would need to refer to the doc\n> > > less frequently than with \"wal_level_insufficient\".\n> >\n> > I can see your point, but wal_level_insufficient speaks a bit more to\n> > me because of its relationship with the GUC setting. Something like\n> > wal_level_insufficient_on_primary may speak better, but that's also\n> > quite long. I'm OK with what the patch does.\n> >\n>\n> Thanks, I also prefer \"wal_level_insufficient\". To me\n> \"primary_wal_level\" sounds more along the lines of a GUC name than the\n> conflict_reason. The other names that come to mind are\n> \"wal_level_lower_than_required\", \"wal_level_lower\",\n> \"wal_level_lesser_than_required\", \"wal_level_lesser\" but I feel\n> \"wal_level_insufficient\" sounds better than these. Having said that, I\n> am open to any of these or better options for this conflict_reason.\n>\n> > + as invalidated. Possible values are:\n> > + <itemizedlist spacing=\"compact\">\n> > Higher-level nit: indentation seems to be one space off here.\n> >\n>\n> Thanks, fixed in the attached patch.\n\nI have marked the commitfest entry to the committed state as the patch\nis committed.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 10 Jan 2024 14:45:28 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Track in pg_replication_slots the reason why slots conflict?"
}
] |
[
{
"msg_contents": "I was surprised to learn that 2 is a valid boolean (thanks Berge):\n\n# select 2::boolean;\n bool\n──────\n t\n\n... while '2' is not:\n\n# select '2'::boolean;\nERROR: 22P02: invalid input syntax for type boolean: \"2\"\nLINE 1: select '2'::boolean;\n ^\nLOCATION: boolin, bool.c:151\n\n\nThe first cast is the int4_bool function, but it isn't covered by the\nregression tests at all. The attached patch adds tests.\n\nChristoph",
"msg_date": "Thu, 21 Dec 2023 11:56:22 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "int4->bool test coverage"
},
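A few statements exercising the paths described above: the integer-to-boolean cast (int4_bool) maps zero to false and any nonzero value to true, while text input goes through boolin and rejects "2":

    SELECT 0::boolean AS zero, 1::boolean AS one, 2::boolean AS two;   -- f, t, t
    SELECT '2'::boolean;   -- ERROR: invalid input syntax for type boolean: "2"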
{
"msg_contents": "On Thu, Dec 21, 2023 at 11:56:22AM +0100, Christoph Berg wrote:\n> The first cast is the int4_bool function, but it isn't covered by the\n> regression tests at all. The attached patch adds tests.\n\nI don't see why not.\n\nInteresting that there are a few more of these in int.c, like int2up,\nint4inc, int2smaller, int{2,4}shr, int{2,4}not, etc.\n--\nMichael",
"msg_date": "Fri, 22 Dec 2023 11:48:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4->bool test coverage"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 11:48:11AM +0900, Michael Paquier wrote:\n> On Thu, Dec 21, 2023 at 11:56:22AM +0100, Christoph Berg wrote:\n>> The first cast is the int4_bool function, but it isn't covered by the\n>> regression tests at all. The attached patch adds tests.\n> \n> I don't see why not.\n\nAnd one month later, done.\n\n> Interesting that there are a few more of these in int.c, like int2up,\n> int4inc, int2smaller, int{2,4}shr, int{2,4}not, etc.\n\nI've left these out for now.\n--\nMichael",
"msg_date": "Wed, 31 Jan 2024 15:08:58 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: int4->bool test coverage"
}
] |
[
{
"msg_contents": "Here is a patch to implement this.\nBeing stuck behind a lock for more than a second is almost\nalways a problem, so it is reasonable to turn this on by default.\n\nYours,\nLaurenz Albe",
"msg_date": "Thu, 21 Dec 2023 14:29:05 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Set log_lock_waits=on by default"
},
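For reference, a short sketch of how the proposed default can already be adopted on an existing installation (requires superuser; the reporting threshold is whatever deadlock_timeout is set to, 1s by default):

    ALTER SYSTEM SET log_lock_waits = on;
    SELECT pg_reload_conf();
    SHOW deadlock_timeout;   -- waits longer than this get logged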
{
"msg_contents": "On Thu, Dec 21, 2023 at 8:29 AM Laurenz Albe <[email protected]> wrote:\n> Here is a patch to implement this.\n> Being stuck behind a lock for more than a second is almost\n> always a problem, so it is reasonable to turn this on by default.\n\nI think it depends somewhat on the lock type, and also on your\nthreshold for what constitutes a problem. For example, you can wait\nfor 1 second for a relation extension lock pretty easily, I think,\njust because the I/O system is busy. Or I think also a VXID lock held\nby some transaction that has a tuple locked could be not particularly\nexciting. A conflict on a relation lock seems more likely to represent\na real issue, but I guess it's all kind of a judgement call. A second\nisn't really all that long on an overloaded system, and I see an awful\nlot of overloaded systems (because those are the people who call me).\n\nJust a random idea but what if we separated log_lock_waits from\ndeadlock_timeout? Say, it becomes time-valued rather than\nBoolean-valued, but it has to be >= deadlock_timeout? Because I'd\nprobably be more interested in hearing about a lock wait that was more\nthan say 10 seconds, but I don't necessarily want to wait 10 seconds\nfor the deadlock detector to trigger.\n\nIn general, I do kind of like the idea of trying to log more problem\nsituations by default, so that when someone has a major issue, you\ndon't have to start by having them change all the logging settings and\nthen wait until they get hosed a second time before you can\ntroubleshoot anything. I'm just concerned that 1s might be too\nsensitive for a lot of users who aren't as, let's say, diligent about\nkeeping the system healthy as you probably are.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Dec 2023 09:14:16 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
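To illustrate the kind of wait under discussion, a sketch using a hypothetical table t (the table and column names are made up for the example): with log_lock_waits = on, the second session's wait is logged once it exceeds deadlock_timeout.

    -- session 1: takes a row lock and leaves the transaction open
    BEGIN;
    UPDATE t SET val = val + 1 WHERE id = 1;

    -- session 2: blocks on the same row; once the wait passes deadlock_timeout,
    -- a "still waiting" LOG line is emitted for this backend
    UPDATE t SET val = val + 1 WHERE id = 1;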
{
"msg_contents": "On Thu, Dec 21, 2023 at 05:29 Laurenz Albe <[email protected]> wrote:\n\n> Here is a patch to implement this.\n> Being stuck behind a lock for more than a second is almost\n> always a problem, so it is reasonable to turn this on by default.\n\n\nI think it's a very good idea. On all heavily loaded systems I have\nobserved so far, we always have turned it on. 1s (default deadlock_timeout)\nis quite large value for web/mobile apps, meaning that default frequency of\nlogging is quite low, so any potential suffering from observer effect\ndoesn't happen -- saturation related active session number happens much,\nmuch earlier, even if you have very slow disk IO for logging.\n\nAt the same time, I like the idea by Robert to separate logging of log\nwaits and deadlock_timeout logic -- the current implementation is a quite\nconfusing for new users. I also had cases when people wanted to log lock\nwaits earlier than deadlock detection. And also, most always lock wait\nlogging lacks the information another the blocking session (its state, and\nlast query, first of all), but is maybe an off topic worthing another\neffort of improvements.\n\nNik\n\nOn Thu, Dec 21, 2023 at 05:29 Laurenz Albe <[email protected]> wrote:Here is a patch to implement this.\nBeing stuck behind a lock for more than a second is almost\nalways a problem, so it is reasonable to turn this on by default.I think it's a very good idea. On all heavily loaded systems I have observed so far, we always have turned it on. 1s (default deadlock_timeout) is quite large value for web/mobile apps, meaning that default frequency of logging is quite low, so any potential suffering from observer effect doesn't happen -- saturation related active session number happens much, much earlier, even if you have very slow disk IO for logging.At the same time, I like the idea by Robert to separate logging of log waits and deadlock_timeout logic -- the current implementation is a quite confusing for new users. I also had cases when people wanted to log lock waits earlier than deadlock detection. And also, most always lock wait logging lacks the information another the blocking session (its state, and last query, first of all), but is maybe an off topic worthing another effort of improvements.Nik",
"msg_date": "Thu, 21 Dec 2023 06:58:05 -0800",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "\n\nLe 21/12/2023 à 14:29, Laurenz Albe a écrit :\n> Here is a patch to implement this.\n> Being stuck behind a lock for more than a second is almost\n> always a problem, so it is reasonable to turn this on by default.\n\nI think it's a really good idea. At Dalibo, we advise our customers to \nswitch it on. AFAICT, it's never been a problem.\n\nBest regards,\nFrédéric\n\n\n\n",
"msg_date": "Thu, 21 Dec 2023 16:21:26 +0100",
"msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "Re: Robert Haas\n> On Thu, Dec 21, 2023 at 8:29 AM Laurenz Albe <[email protected]> wrote:\n> > Here is a patch to implement this.\n> > Being stuck behind a lock for more than a second is almost\n> > always a problem, so it is reasonable to turn this on by default.\n> \n> I think it depends somewhat on the lock type, and also on your\n> threshold for what constitutes a problem. For example, you can wait\n> for 1 second for a relation extension lock pretty easily, I think,\n> just because the I/O system is busy. Or I think also a VXID lock held\n> by some transaction that has a tuple locked could be not particularly\n> exciting. A conflict on a relation lock seems more likely to represent\n> a real issue, but I guess it's all kind of a judgement call. A second\n> isn't really all that long on an overloaded system, and I see an awful\n> lot of overloaded systems (because those are the people who call me).\n\nIf a system is so busy that it's waiting so long for the disk, I would\nlike PG to tell me about it. Likewise, if my transactions are slow\nbecause they are waiting for each other, I'd also like PG to tell me.\nEspecially as the 2nd condition can't be seen by \"it's slow because\nCPU or IO is at 100%\".\n\nIn any case, setting log_lock_waits=on by default helps.\n\nIn fact, everyone I talked to was wondering why log_checkpoints was\nturned on by default, and not this parameter. The info provided by\nlog_lock_waits is much more actionable than the stream of\nlog_checkpoint messages.\n\n> Just a random idea but what if we separated log_lock_waits from\n> deadlock_timeout? Say, it becomes time-valued rather than\n> Boolean-valued, but it has to be >= deadlock_timeout? Because I'd\n> probably be more interested in hearing about a lock wait that was more\n> than say 10 seconds, but I don't necessarily want to wait 10 seconds\n> for the deadlock detector to trigger.\n\nThat's also a good point, but I'd like to see log_lock_waits default\nto 'on' independently from having this extra change.\n\n> In general, I do kind of like the idea of trying to log more problem\n> situations by default, so that when someone has a major issue, you\n> don't have to start by having them change all the logging settings and\n> then wait until they get hosed a second time before you can\n> troubleshoot anything. I'm just concerned that 1s might be too\n> sensitive for a lot of users who aren't as, let's say, diligent about\n> keeping the system healthy as you probably are.\n\nI don't think 1s would be too sensitive by default.\n\nChristoph\n\n\n",
"msg_date": "Fri, 22 Dec 2023 11:33:52 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "On Thu, 2023-12-21 at 09:14 -0500, Robert Haas wrote:\n> \n> I think it depends somewhat on the lock type, and also on your\n> threshold for what constitutes a problem. For example, you can wait\n> for 1 second for a relation extension lock pretty easily, I think,\n> just because the I/O system is busy. Or I think also a VXID lock held\n> by some transaction that has a tuple locked could be not particularly\n> exciting. A conflict on a relation lock seems more likely to represent\n> a real issue, but I guess it's all kind of a judgement call. A second\n> isn't really all that long on an overloaded system, and I see an awful\n> lot of overloaded systems (because those are the people who call me).\n\nSure, you don't want \"log_lock_waits = on\" in all conceivable databases.\nI have seen applications that use database locks to synchronize\napplication threads (*shudder*). If it is normal for your database\nto experience long lock waits, disable the parameter.\n\nMy point is that in the vast majority of cases, long lock waits\nindicate a problem that you would like to know about, so the parameter\nshould default to \"on\".\n\n(Out of curiosity: what would ever wait for a VXID lock?)\n\n> Just a random idea but what if we separated log_lock_waits from\n> deadlock_timeout? Say, it becomes time-valued rather than\n> Boolean-valued, but it has to be >= deadlock_timeout? Because I'd\n> probably be more interested in hearing about a lock wait that was more\n> than say 10 seconds, but I don't necessarily want to wait 10 seconds\n> for the deadlock detector to trigger.\n\nThat is an appealing thought, but as far as I know, \"log_lock_waits\"\nis implemented by the deadlock detector, which is why it is tied to\n\"deadlock_timeout\". So if we want that, we'd need a separate \"live\nlock detector\". I don't know if we want to go there.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 22 Dec 2023 12:00:33 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "On 2023-12-22 20:00, Laurenz Albe wrote:\n\n> My point is that in the vast majority of cases, long lock waits\n> indicate a problem that you would like to know about, so the parameter\n> should default to \"on\".\n\n+1.\nI always set log_lock_waits=on, so I agree with this.\n\n\n>> Just a random idea but what if we separated log_lock_waits from\n>> deadlock_timeout? Say, it becomes time-valued rather than\n>> Boolean-valued, but it has to be >= deadlock_timeout? Because I'd\n>> probably be more interested in hearing about a lock wait that was more\n>> than say 10 seconds, but I don't necessarily want to wait 10 seconds\n>> for the deadlock detector to trigger.\n> \n> That is an appealing thought, but as far as I know, \"log_lock_waits\"\n> is implemented by the deadlock detector, which is why it is tied to\n> \"deadlock_timeout\". So if we want that, we'd need a separate \"live\n> lock detector\". I don't know if we want to go there.\n\nPersonally, I thought it was a good idea to separate log_lock_waits and \ndeadlock_timeout, but I have not checked how that is implemented.\n\n-- \nRegards,\nShinya Kato\nNTT DATA GROUP CORPORATION\n\n\n",
"msg_date": "Thu, 28 Dec 2023 20:28:50 +0900",
"msg_from": "Shinya Kato <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "On 12/21/23 6:58 AM, Nikolay Samokhvalov wrote:\n> On Thu, Dec 21, 2023 at 05:29 Laurenz Albe <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Here is a patch to implement this.\n> Being stuck behind a lock for more than a second is almost\n> always a problem, so it is reasonable to turn this on by default.\n> \n> \n> I think it's a very good idea. On all heavily loaded systems I have\n> observed so far, we always have turned it on. 1s (default\n> deadlock_timeout) is quite large value for web/mobile apps, meaning that\n> default frequency of logging is quite low, so any potential suffering\n> from observer effect doesn't happen -- saturation related active session\n> number happens much, much earlier, even if you have very slow disk IO\n> for logging.\n\nFWIW, enabling this setting has also been a long-time \"happiness hint\"\nthat I've passed along to people.\n\nWhat would be the worst case amount of logging that we're going to\ngenerate at scale? I think the worst case would largely scale according\nto connection count? So if someone had a couple thousand backends on a\nbusy top-end system, then I guess they might generate up to a couple\nthousand log messages every second or two under load after this\nparameter became enabled with a 1 second threshold?\n\nI'm not aware of any cases where enabling this parameter with a 1 second\nthreshold overwhelmed the logging collector (unlike, for example,\nlog_statement=all) but I wanted to pose the question in the interest of\nbeing careful.\n\n\n> At the same time, I like the idea by Robert to separate logging of log\n> waits and deadlock_timeout logic -- the current implementation is a\n> quite confusing for new users. I also had cases when people wanted to\n> log lock waits earlier than deadlock detection. And also, most always\n> lock wait logging lacks the information another the blocking session\n> (its state, and last query, first of all), but is maybe an off topic\n> worthing another effort of improvements.\n\nI agree with this, though it's equally true that proliferation of new\nGUCs is confusing for new users. I hope the project avoids too low of a\nbar for adding new GUCs. But using the deadlock_timeout GUC for this\ncompletely unrelated log threshold really doesn't make sense.\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n",
"msg_date": "Wed, 3 Jan 2024 10:31:22 -0800",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
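A back-of-the-envelope bound on that worst case, using assumed figures (2000 waiting backends, the default 1s threshold, roughly 200 bytes per log line -- none of these numbers come from the thread): each backend can emit at most about one "still waiting" message per deadlock_timeout of wall-clock time, so even with every backend stuck the volume stays in the hundreds of kilobytes per second, and only for as long as the pile-up lasts.

    SELECT 2000 * 200 AS approx_worst_case_bytes_per_second;   -- ~400 kB/s under the assumptions above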
{
"msg_contents": "Hi,\n\nOn Thu, Dec 21, 2023 at 02:29:05PM +0100, Laurenz Albe wrote:\n> Here is a patch to implement this.\n> Being stuck behind a lock for more than a second is almost\n> always a problem, so it is reasonable to turn this on by default.\n\nI also think that this should be set to on.\n\nI had a look at the patch and it works fine. Regarding the\ndocumentation, maybe the back-reference at deadlock_timeout could be\nmade a bit more explicit then as well, as in the attached patch, but\nthis is mostly bikeshedding.\n\n\nMichael",
"msg_date": "Thu, 11 Jan 2024 15:24:55 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jan 11, 2024 at 03:24:55PM +0100, Michael Banck wrote:\n> On Thu, Dec 21, 2023 at 02:29:05PM +0100, Laurenz Albe wrote:\n> > Here is a patch to implement this.\n> > Being stuck behind a lock for more than a second is almost\n> > always a problem, so it is reasonable to turn this on by default.\n> \n> I also think that this should be set to on.\n> \n> I had a look at the patch and it works fine. \n>\n> Regarding the documentation, maybe the back-reference at\n> deadlock_timeout could be made a bit more explicit then as well, as in\n> the attached patch, but this is mostly bikeshedding.\n\nI've marked it ready for committer now, as the above really is\nbikeshedding.\n\n\nMichael\n\n\n",
"msg_date": "Wed, 24 Jan 2024 18:19:53 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "I saw this was marked ready-for-committer, so I took a look at the thread.\nIt looks like there are two ideas:\n\n* Separate log_lock_waits from deadlock_timeout. It looks like this idea\n has a decent amount of support, but I didn't see any patch for it so far.\n IMHO this is arguably a prerequisite for setting log_lock_waits on by\n default, as we could then easily set it higher by default to help address\n concerns about introducing too much noise in the logs.\n\n* Set log_lock_waits on by default. The folks on this thread seem to\n support this idea, but given the lively discussion for enabling\n log_checkpoints by default [0], I'm hesitant to commit something like\n this without further community discussion.\n\n[0] https://postgr.es/m/CALj2ACX-rW_OeDcp4gqrFUAkf1f50Fnh138dmkd0JkvCNQRKGA%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 6 Feb 2024 10:53:43 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> It looks like there are two ideas:\n\n> * Separate log_lock_waits from deadlock_timeout. It looks like this idea\n> has a decent amount of support, but I didn't see any patch for it so far.\n\nI think the support comes from people who have not actually looked at\nthe code. The reason they are not separate is that the same timeout\nservice routine does both things. To pull them apart, you would have\nto (1) detangle that code and (2) incur the overhead of two timeout\nevents queued for every lock wait. It's not clear to me that it's\nworth it. In some sense, deadlock_timeout is exactly the length of\ntime after which you want to get concerned.\n\n> IMHO this is arguably a prerequisite for setting log_lock_waits on by\n> default, as we could then easily set it higher by default to help address\n> concerns about introducing too much noise in the logs.\n\nWell, that's the question --- would it be sane to enable\nlog_lock_waits by default if we don't separate them?\n\n> * Set log_lock_waits on by default. The folks on this thread seem to\n> support this idea, but given the lively discussion for enabling\n> log_checkpoints by default [0], I'm hesitant to commit something like\n> this without further community discussion.\n\nI was, and remain, of the opinion that that was a bad idea that\nwe'll eventually revert, just like we previously got rid of most\ninessential log chatter in the default configuration. So I doubt\nyou'll have much trouble guessing my opinion of this one. I think\nthe default logging configuration should be chosen with the\nunderstanding that nobody ever looks at the logs of most\ninstallations, and we should be more worried about their disk space\nconsumption than anything else. Admittedly, log_lock_waits is less\nbad than log_checkpoints, because no such events will occur in a\nwell-tuned configuration ... but still, it's going to be useless\nchatter in the average installation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Feb 2024 12:29:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "On Tue, 2024-02-06 at 12:29 -0500, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n> > It looks like there are two ideas:\n> > [...]\n> > * Set log_lock_waits on by default. The folks on this thread seem to\n> > support this idea, but given the lively discussion for enabling\n> > log_checkpoints by default [0], I'm hesitant to commit something like\n> > this without further community discussion.\n> \n> I was, and remain, of the opinion that that was a bad idea that\n> we'll eventually revert, just like we previously got rid of most\n> inessential log chatter in the default configuration. So I doubt\n> you'll have much trouble guessing my opinion of this one. I think\n> the default logging configuration should be chosen with the\n> understanding that nobody ever looks at the logs of most\n> installations, and we should be more worried about their disk space\n> consumption than anything else. Admittedly, log_lock_waits is less\n> bad than log_checkpoints, because no such events will occur in a\n> well-tuned configuration ... but still, it's going to be useless\n> chatter in the average installation.\n\nUnsurprisingly, I want to argue against that.\n\nYou say that it is less bad than \"log_checkpoints = on\", and I agree.\nI can't remember seeing any complaints by users about\n\"log_checkpoints\", and I predict that the complaints about\n\"log_lock_waits = on\" will be about as loud.\n\nI am all for avoiding useless chatter in the log. In my personal\nexperience, that is usually \"database typo does not exist\" and\nconstraint violation errors. I always recommend people to enable\n\"log_lock_waits\", and so far I have not seen it spam the logs.\n\nI agree that usually nobody ever looks into the log file. The\ntime when people *do* look into the log file is when they encounter\ntrouble, and my stance is that the default configuration should be\nsuch that the log contains clues as to what may be the problem.\nIf a statement sometimes takes unreasonably long, it is very\nvaluable corroborative information that the statement occasionally\nwaits more than a second for a lock.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 06 Feb 2024 20:01:45 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "Greetings,\n\n* Laurenz Albe ([email protected]) wrote:\n> On Tue, 2024-02-06 at 12:29 -0500, Tom Lane wrote:\n> > Nathan Bossart <[email protected]> writes:\n> > > It looks like there are two ideas:\n> > > [...]\n> > > * Set log_lock_waits on by default. The folks on this thread seem to\n> > > support this idea, but given the lively discussion for enabling\n> > > log_checkpoints by default [0], I'm hesitant to commit something like\n> > > this without further community discussion.\n> > \n> > I was, and remain, of the opinion that that was a bad idea that\n> > we'll eventually revert, just like we previously got rid of most\n> > inessential log chatter in the default configuration. So I doubt\n> > you'll have much trouble guessing my opinion of this one. I think\n> > the default logging configuration should be chosen with the\n> > understanding that nobody ever looks at the logs of most\n> > installations, and we should be more worried about their disk space\n> > consumption than anything else. Admittedly, log_lock_waits is less\n> > bad than log_checkpoints, because no such events will occur in a\n> > well-tuned configuration ... but still, it's going to be useless\n> > chatter in the average installation.\n> \n> Unsurprisingly, I want to argue against that.\n\nI tend to agree with this position- log_checkpoints being on has been a\nrecommended configuration for a very long time and is valuable\ninformation to have about what's been happening when someone does go and\nlook at the log.\n\nHaving log_lock_waits on by default is likely to be less noisy and even\nmore useful for going back in time to figure out what happened.\n\n> You say that it is less bad than \"log_checkpoints = on\", and I agree.\n> I can't remember seeing any complaints by users about\n> \"log_checkpoints\", and I predict that the complaints about\n> \"log_lock_waits = on\" will be about as loud.\n\nYeah, agreed.\n\n> I am all for avoiding useless chatter in the log. In my personal\n> experience, that is usually \"database typo does not exist\" and\n> constraint violation errors. I always recommend people to enable\n> \"log_lock_waits\", and so far I have not seen it spam the logs.\n\nI really wish we could separate out the messages about typos and\nconstraint violations from these logs about processes waiting a long\ntime for locks or about checkpoints or even PANIC's or other really\nimportant messages. That said, that's a different problem and not\nsomething this change needs to concern itself with.\n\nAs for if we want to separate out log_lock_waits from deadlock_timeout-\nno, I don't think we do, for the reasons that Tom mentioned. I don't\nsee it as necessary either for enabling log_lock_waits by default.\nWaiting deadlock_timeout amount of time for a lock certainly is a\nproblem already and once we've waited that amount of time, I can't see\nthe time spent logging about it as being a serious issue.\n\n+1 for simply enabling log_lock_waits by default.\n\nAll that said ... if we did come up with a nice way to separate out the\ntiming for deadlock_timeout and log_lock_waits, I wouldn't necessarily\nbe against it. Perhaps one approach to that would be to set just one\ntimer but have it be the lower of the two, and then set another when\nthat fires (if there's more waiting to do) and then catch it when it\nhappens... Again, I'd view this as some independent improvement though\nand not a requirement for just enabling log_lock_waits by default.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 7 Feb 2024 09:43:23 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n>> On Tue, 2024-02-06 at 12:29 -0500, Tom Lane wrote:\n>>> I was, and remain, of the opinion that that was a bad idea that\n>>> we'll eventually revert, just like we previously got rid of most\n>>> inessential log chatter in the default configuration.\n\n>> Unsurprisingly, I want to argue against that.\n\n> I tend to agree with this position- log_checkpoints being on has been a\n> recommended configuration for a very long time and is valuable\n> information to have about what's been happening when someone does go and\n> look at the log.\n\nWe turned on default log_checkpoints in v15, which means that behavior\nhas been in the field for about sixteen months. I don't think that\nthat gives it the status of a settled issue; my bet is that most\nusers still have not seen it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Feb 2024 10:05:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> >> On Tue, 2024-02-06 at 12:29 -0500, Tom Lane wrote:\n> >>> I was, and remain, of the opinion that that was a bad idea that\n> >>> we'll eventually revert, just like we previously got rid of most\n> >>> inessential log chatter in the default configuration.\n> \n> >> Unsurprisingly, I want to argue against that.\n> \n> > I tend to agree with this position- log_checkpoints being on has been a\n> > recommended configuration for a very long time and is valuable\n> > information to have about what's been happening when someone does go and\n> > look at the log.\n> \n> We turned on default log_checkpoints in v15, which means that behavior\n> has been in the field for about sixteen months. I don't think that\n> that gives it the status of a settled issue; my bet is that most\n> users still have not seen it.\n\nApologies for not being clear- log_checkpoints being on has been a\nconfiguration setting that I (and many others I've run into) have been\nrecommending since as far back as I can remember.\n\nI was not referring to the change in the default.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 7 Feb 2024 10:08:14 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "Hi,\n\nthis patch is still on the table, though for v18 now.\n\nNathan mentioned up-thread that he was hesitant to commit this without\nfurther discussion. Laurenz, Stephen and I are +1 on this, but when it \ncomes to committers having chimed in only Tom did so far and was -1.\n\nAre there any others who have an opinion on this?\n\nOn Tue, Feb 06, 2024 at 12:29:10PM -0500, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n> > It looks like there are two ideas:\n> \n> > * Separate log_lock_waits from deadlock_timeout. It looks like this idea\n> > has a decent amount of support, but I didn't see any patch for it so far.\n> \n> I think the support comes from people who have not actually looked at\n> the code. The reason they are not separate is that the same timeout\n> service routine does both things. To pull them apart, you would have\n> to (1) detangle that code and (2) incur the overhead of two timeout\n> events queued for every lock wait. It's not clear to me that it's\n> worth it. In some sense, deadlock_timeout is exactly the length of\n> time after which you want to get concerned.\n> \n> > IMHO this is arguably a prerequisite for setting log_lock_waits on by\n> > default, as we could then easily set it higher by default to help address\n> > concerns about introducing too much noise in the logs.\n> \n> Well, that's the question --- would it be sane to enable\n> log_lock_waits by default if we don't separate them?\n\nI think it would be, I have not seen people change the value of\ndeadlock_timeout so far, and I think 1s is a reasonable long time for a\ndefault lock wait to be reported.\n \n> > * Set log_lock_waits on by default. The folks on this thread seem to\n> > support this idea, but given the lively discussion for enabling\n> > log_checkpoints by default [0], I'm hesitant to commit something like\n> > this without further community discussion.\n> \n> I was, and remain, of the opinion that that was a bad idea that\n> we'll eventually revert, just like we previously got rid of most\n> inessential log chatter in the default configuration.\n\nI somewhat agree here in the sense that log_checkpoints is really only\nuseful for heavily-used servers, but this is a digression and due to the\nfact that log_checkpoints emits log lines periodically while\nlog_lock_waits only emits them for application conflicts (and arguably\napplication bugs), I do not think those would be in the \"issential log\nchatter\" group similar to how all SQL errors are not in that group\neither.\n\n\nMichael\n\n\n",
"msg_date": "Thu, 18 Jul 2024 12:25:55 +0200",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "On Thu, 2024-07-18 at 12:25 +0200, Michael Banck wrote:\n> this patch is still on the table, though for v18 now.\n> \n> Nathan mentioned up-thread that he was hesitant to commit this without\n> further discussion. Laurenz, Stephen and I are +1 on this, but when it \n> comes to committers having chimed in only Tom did so far and was -1.\n> \n> Are there any others who have an opinion on this?\n\nIf we want to tally up, there were also +1 votes from Christoph Berg,\nShinya Kato, Nikolay Samokhvalov, Jeremy Schneider and Frédéric Yhuel.\n\nThe major criticism is Tom's that it might unduly increase the log volume.\n\nI'm not trying to rush things, but I see little point in kicking a trivial\npatch like this through many commitfests. If no committer wants to step\nup and commit this, it should be rejected.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 19 Jul 2024 11:54:42 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "Re: Laurenz Albe\n> The major criticism is Tom's that it might unduly increase the log volume.\n> \n> I'm not trying to rush things, but I see little point in kicking a trivial\n> patch like this through many commitfests. If no committer wants to step\n> up and commit this, it should be rejected.\n\nThat would be a pity, I still think log_lock_waits=on by default would\nbe a good thing.\n\nI have not seen any server yet where normal, legitimate operation\nwould routinely trigger the message. (If it did, people should likely\nhave used SKIP LOCKED or NOWAIT instead.) It would only increase the\nlog volume on systems that have a problem.\n\nChristoph\n\n\n",
"msg_date": "Fri, 19 Jul 2024 15:24:24 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 9:24 AM Christoph Berg <[email protected]> wrote:\n> I have not seen any server yet where normal, legitimate operation\n> would routinely trigger the message. (If it did, people should likely\n> have used SKIP LOCKED or NOWAIT instead.) It would only increase the\n> log volume on systems that have a problem.\n\nI've definitely seen systems where this setting would have generated\nregular output, because I see a lot of systems that are chronically\noverloaded. I think waits of more than 1 second for tuple locks could\nbe pretty routine on some servers -- or XID or VXID locks. So I'm more\ncautious about this than most people on this thread: log_checkpoints\nwon't generate more than a few lines of output per checkpoint\ninterval, and a checkpoint cycle will be on the order of minutes, so\nit's really never that much volume. On the other hand, in theory, this\nsetting can generate arbitrarily many messages.\n\nThat's why I originally proposed separating deadlock_timeout from\nlog_lock_waits, because I think 1s might actually end up being kind of\nnoisy for some people. On the other hand, I take Tom's point that\nseparating those things would be hard to code and, probably more\nimportantly, require a second timer.\n\nI'm not strongly opposed to just turning this on by default. It's not\nlike we can't change our minds, and it's also not like individual\ncustomers can't change the default. I think Tom has entirely the wrong\nidea about what a typical log file on a production server looks like.\nIn every case I've seen, it's full of application generated errors\nthat will never be fixed, connection logging, statement logging, and\nother stuff that is there just in case but will normally be ignored.\nFinding the messages that indicate real database problems is typically\nquite difficult, even if they're all enabled. If they're disabled by\ndefault, well then the useless crap is the ONLY thing you find in the\nlog file, and when the customer has a problem, the first thing you\nhave to do is tell them to turn on all the GUCs that log the actually\nimportant stuff and wait until the problem recurs.\n\nI have yet to run into a customer who was thrilled about receiving that message.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Jul 2024 10:14:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "Re: Robert Haas\n> I've definitely seen systems where this setting would have generated\n> regular output, because I see a lot of systems that are chronically\n> overloaded.\n\nI'd argue this is exactly what I mean by \"this system has a problem\".\nTelling the user about that makes sense.\n\n> cautious about this than most people on this thread: log_checkpoints\n> won't generate more than a few lines of output per checkpoint\n> interval, and a checkpoint cycle will be on the order of minutes, so\n> it's really never that much volume. On the other hand, in theory, this\n> setting can generate arbitrarily many messages.\n\nWell, it's still limited by 1 message per second (times\nmax_connections). It won't suddenly fill up the server with 1000\nmessages per second.\n\nThe log volume is the lesser of the problems. Not printing the message\njust because the system does have a problem isn't the right fix.\n\n> Finding the messages that indicate real database problems is typically\n> quite difficult, even if they're all enabled. If they're disabled by\n> default, well then the useless crap is the ONLY thing you find in the\n> log file, and when the customer has a problem, the first thing you\n> have to do is tell them to turn on all the GUCs that log the actually\n> important stuff and wait until the problem recurs.\n> \n> I have yet to run into a customer who was thrilled about receiving that message.\n\nLet's fix the default. People who have a problem can still disable it,\nbut then everyone else gets the useful messages in the first iteration.\n\nChristoph\n\n\n",
"msg_date": "Fri, 19 Jul 2024 16:22:29 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": ">\n> Are there any others who have an opinion on this?\n\n\nBig +1 to having it on by default. It's already one of the first things I\nturn on by default on any system I come across. The log spam is minimal,\ncompared to all the other stuff that ends up in there. And unlike most of\nthat stuff, this is output you generally want and need, when problems start\noccurring.\n\nCheers,\nGreg\n\nAre there any others who have an opinion on this?Big +1 to having it on by default. It's already one of the first things I turn on by default on any system I come across. The log spam is minimal, compared to all the other stuff that ends up in there. And unlike most of that stuff, this is output you generally want and need, when problems start occurring.Cheers,Greg",
"msg_date": "Fri, 19 Jul 2024 10:28:24 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "Late to the party, but +1 from me on this default change also\n\nOn Fri, 19 Jul 2024 at 11:54, Laurenz Albe <[email protected]> wrote:\n\n> On Thu, 2024-07-18 at 12:25 +0200, Michael Banck wrote:\n> > this patch is still on the table, though for v18 now.\n> >\n> > Nathan mentioned up-thread that he was hesitant to commit this without\n> > further discussion. Laurenz, Stephen and I are +1 on this, but when it\n> > comes to committers having chimed in only Tom did so far and was -1.\n> >\n> > Are there any others who have an opinion on this?\n>\n> If we want to tally up, there were also +1 votes from Christoph Berg,\n> Shinya Kato, Nikolay Samokhvalov, Jeremy Schneider and Frédéric Yhuel.\n>\n> The major criticism is Tom's that it might unduly increase the log volume.\n>\n> I'm not trying to rush things, but I see little point in kicking a trivial\n> patch like this through many commitfests. If no committer wants to step\n> up and commit this, it should be rejected.\n>\n> Yours,\n> Laurenz Albe\n>\n>\n>\n\nLate to the party, but +1 from me on this default change alsoOn Fri, 19 Jul 2024 at 11:54, Laurenz Albe <[email protected]> wrote:On Thu, 2024-07-18 at 12:25 +0200, Michael Banck wrote:\n> this patch is still on the table, though for v18 now.\n> \n> Nathan mentioned up-thread that he was hesitant to commit this without\n> further discussion. Laurenz, Stephen and I are +1 on this, but when it \n> comes to committers having chimed in only Tom did so far and was -1.\n> \n> Are there any others who have an opinion on this?\n\nIf we want to tally up, there were also +1 votes from Christoph Berg,\nShinya Kato, Nikolay Samokhvalov, Jeremy Schneider and Frédéric Yhuel.\n\nThe major criticism is Tom's that it might unduly increase the log volume.\n\nI'm not trying to rush things, but I see little point in kicking a trivial\npatch like this through many commitfests. If no committer wants to step\nup and commit this, it should be rejected.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 19 Jul 2024 16:29:30 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 10:22 AM Christoph Berg <[email protected]> wrote:\n> I'd argue this is exactly what I mean by \"this system has a problem\".\n> Telling the user about that makes sense.\n\nThat's a fair position.\n\n> > cautious about this than most people on this thread: log_checkpoints\n> > won't generate more than a few lines of output per checkpoint\n> > interval, and a checkpoint cycle will be on the order of minutes, so\n> > it's really never that much volume. On the other hand, in theory, this\n> > setting can generate arbitrarily many messages.\n>\n> Well, it's still limited by 1 message per second (times\n> max_connections). It won't suddenly fill up the server with 1000\n> messages per second.\n\nYou make it sound like running with max_connections=5000 is a bad idea.\n\n(That was a joke, but yes, people actually do this, and no, it doesn't go well.)\n\n> The log volume is the lesser of the problems. Not printing the message\n> just because the system does have a problem isn't the right fix.\n\nYeah.\n\n> Let's fix the default. People who have a problem can still disable it,\n> but then everyone else gets the useful messages in the first iteration.\n\nReasonable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Jul 2024 11:12:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 5:13 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Jul 19, 2024 at 10:22 AM Christoph Berg <[email protected]> wrote:\n[..]\n> > Let's fix the default. People who have a problem can still disable it,\n> > but then everyone else gets the useful messages in the first iteration.\n>\n> Reasonable.\n>\n\nI have feeling that we have three disconnected class of uses:\n\n1. dev/testing DBs: where frankly speaking nobody cares about such DBs\nuntil they stop/crash; this also includes DBs from new users on dev\nlaptops too\n2. production systems: where it matters to have log_lock_waits=on (and\npeople may start getting nervous if they don't have it when the issue\nstrikes)\n3. PG on embedded hardware, where it would be worth to be actually\ndisabled and not consume scare resources\n\nI would like to +1 too to the default value of log_lock_waits=on due\nto mostly working nearby use case #2, and because due to my surprise,\nwe had __ 74.7% __ of individual installations having it already as\n'on' already within last year support reports here at EDB (that may be\nbiased just to class of systems #2).\n\nBut I could be easily convinced too, that it is the embedded space\n(#3) that has the biggest amount of default installations, so we\nshould stick log_lock_waits=off by default. However, I believe that\nsuch specialized use of PG already might require some \"customizations\"\nfirst to even further reduce e.g shared_buffers, right?\n\nI would also like to believe that if people try to use PostgreSQL for\nthe first time (use case #1), they would be much better served when\nthe log would contain info about stuck sessions.\n\nAlso, if there's ever any change to the default, it should be put into\nRelease Notes at front to simply warn people (especially those from\nembedded space?).\n\n-J.\n\n\n",
"msg_date": "Tue, 3 Sep 2024 15:37:24 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
},
{
"msg_contents": "Re: Jakub Wartak\n> 1. dev/testing DBs: where frankly speaking nobody cares about such DBs\n> until they stop/crash; this also includes DBs from new users on dev\n> laptops too\n> 2. production systems: where it matters to have log_lock_waits=on (and\n> people may start getting nervous if they don't have it when the issue\n> strikes)\n> 3. PG on embedded hardware, where it would be worth to be actually\n> disabled and not consume scare resources\n> \n> I would like to +1 too to the default value of log_lock_waits=on due\n> to mostly working nearby use case #2, and because due to my surprise,\n> we had __ 74.7% __ of individual installations having it already as\n> 'on' already within last year support reports here at EDB (that may be\n> biased just to class of systems #2).\n\nAck. Thanks for that field data.\n\n> But I could be easily convinced too, that it is the embedded space\n> (#3) that has the biggest amount of default installations, so we\n> should stick log_lock_waits=off by default. However, I believe that\n> such specialized use of PG already might require some \"customizations\"\n> first to even further reduce e.g shared_buffers, right?\n\nThe ship \"no log spam by default\" has definitely sailed since\nlog_checkpoints defaults to 'on'.\n\n> I would also like to believe that if people try to use PostgreSQL for\n> the first time (use case #1), they would be much better served when\n> the log would contain info about stuck sessions.\n\nDefinitely.\n\nChristoph\n\n\n",
"msg_date": "Tue, 3 Sep 2024 21:39:01 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set log_lock_waits=on by default"
}
] |
[
{
"msg_contents": "Attached is a patch that adds 3 SQL-callable functions to return\nrandom integer/numeric values chosen uniformly from a given range:\n\n random(min int, max int) returns int\n random(min bigint, max bigint) returns bigint\n random(min numeric, max numeric) returns numeric\n\nThe return value is in the range [min, max], and in the numeric case,\nthe result scale equals Max(scale(min), scale(max)), so it can be used\nto generate large random integers, as well as decimals.\n\nThe goal is to provide simple, easy-to-use functions that operate\ncorrectly over arbitrary ranges, which is trickier than it might seem\nusing the existing random() function. The main advantages are:\n\n1. Support for arbitrary bounds (provided that max >= min). A SQL or\nPL/pgSQL implementation based on the existing random() function can\nsuffer from integer overflow if the difference max-min is too large.\n\n2. Uniform results over the full range. It's easy to overlook the fact\nthat in a naive implementation doing something like\n\"((max-min)*random()+min)::int\", the endpoint values will be half as\nlikely as any other value, since casting to integer rounds to nearest.\n\n3. Makes better use of the underlying PRNG, not limited to the 52-bits\nof double precision values.\n\n4. Simpler and more efficient generation of random numeric values.\nThis is something I have commonly wanted in the past, and have usually\nresorted to hacks involving multiple calls to random() to build\nstrings of digits, which is horribly slow, and messy.\n\nThe implementation moves the existing random functions to a new source\nfile, so the new functions all share a common PRNG state with the\nexisting random functions, and that state is kept private to that\nfile.\n\nRegards,\nDean",
"msg_date": "Thu, 21 Dec 2023 17:06:25 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Functions to return random numbers in a given range"
},
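Some usage sketches of the proposed functions, using the signatures given above (these functions are part of the patch, not an existing release, and the results are nondeterministic):

    SELECT random(1, 6) AS die_roll;          -- integer variant, both endpoints inclusive
    SELECT random(1, 10000000000) AS big;     -- resolves to the bigint variant
    SELECT random(0.000, 1.000) AS dec3;      -- numeric variant: result scale = Max(scale(min), scale(max))

    -- the naive double-precision approach from point 2 above: the endpoints 1 and 6
    -- come up only about half as often, because the cast rounds to nearest
    SELECT (1 + (6 - 1) * random())::int;

A common workaround, (min + floor(random() * (max - min + 1)))::int, fixes the endpoint bias but is still limited to double precision and can overflow for very wide ranges, which is part of the motivation above.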
{
"msg_contents": "Hi\n\nčt 21. 12. 2023 v 18:06 odesílatel Dean Rasheed <[email protected]>\nnapsal:\n\n> Attached is a patch that adds 3 SQL-callable functions to return\n> random integer/numeric values chosen uniformly from a given range:\n>\n> random(min int, max int) returns int\n> random(min bigint, max bigint) returns bigint\n> random(min numeric, max numeric) returns numeric\n>\nThe return value is in the range [min, max], and in the numeric case,\n> the result scale equals Max(scale(min), scale(max)), so it can be used\n> to generate large random integers, as well as decimals.\n>\n> The goal is to provide simple, easy-to-use functions that operate\n> correctly over arbitrary ranges, which is trickier than it might seem\n> using the existing random() function. The main advantages are:\n>\n> 1. Support for arbitrary bounds (provided that max >= min). A SQL or\n> PL/pgSQL implementation based on the existing random() function can\n> suffer from integer overflow if the difference max-min is too large.\n>\n> 2. Uniform results over the full range. It's easy to overlook the fact\n> that in a naive implementation doing something like\n> \"((max-min)*random()+min)::int\", the endpoint values will be half as\n> likely as any other value, since casting to integer rounds to nearest.\n>\n> 3. Makes better use of the underlying PRNG, not limited to the 52-bits\n> of double precision values.\n>\n> 4. Simpler and more efficient generation of random numeric values.\n> This is something I have commonly wanted in the past, and have usually\n> resorted to hacks involving multiple calls to random() to build\n> strings of digits, which is horribly slow, and messy.\n>\n> The implementation moves the existing random functions to a new source\n> file, so the new functions all share a common PRNG state with the\n> existing random functions, and that state is kept private to that\n> file.\n>\n\n+1\n\nRegards\n\nPavel\n\n\n> Regards,\n> Dean\n>\n\nHičt 21. 12. 2023 v 18:06 odesílatel Dean Rasheed <[email protected]> napsal:Attached is a patch that adds 3 SQL-callable functions to return\nrandom integer/numeric values chosen uniformly from a given range:\n\n random(min int, max int) returns int\n random(min bigint, max bigint) returns bigint\n random(min numeric, max numeric) returns numeric\nThe return value is in the range [min, max], and in the numeric case,\nthe result scale equals Max(scale(min), scale(max)), so it can be used\nto generate large random integers, as well as decimals.\n\nThe goal is to provide simple, easy-to-use functions that operate\ncorrectly over arbitrary ranges, which is trickier than it might seem\nusing the existing random() function. The main advantages are:\n\n1. Support for arbitrary bounds (provided that max >= min). A SQL or\nPL/pgSQL implementation based on the existing random() function can\nsuffer from integer overflow if the difference max-min is too large.\n\n2. Uniform results over the full range. It's easy to overlook the fact\nthat in a naive implementation doing something like\n\"((max-min)*random()+min)::int\", the endpoint values will be half as\nlikely as any other value, since casting to integer rounds to nearest.\n\n3. Makes better use of the underlying PRNG, not limited to the 52-bits\nof double precision values.\n\n4. 
Simpler and more efficient generation of random numeric values.\nThis is something I have commonly wanted in the past, and have usually\nresorted to hacks involving multiple calls to random() to build\nstrings of digits, which is horribly slow, and messy.\n\nThe implementation moves the existing random functions to a new source\nfile, so the new functions all share a common PRNG state with the\nexisting random functions, and that state is kept private to that\nfile.+1RegardsPavel\n\nRegards,\nDean",
"msg_date": "Thu, 21 Dec 2023 18:43:22 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functions to return random numbers in a given range"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 1:07 AM Dean Rasheed <[email protected]> wrote:\n>\n> Attached is a patch that adds 3 SQL-callable functions to return\n> random integer/numeric values chosen uniformly from a given range:\n>\n> random(min int, max int) returns int\n> random(min bigint, max bigint) returns bigint\n> random(min numeric, max numeric) returns numeric\n>\n> The return value is in the range [min, max], and in the numeric case,\n> the result scale equals Max(scale(min), scale(max)), so it can be used\n> to generate large random integers, as well as decimals.\n>\n> The goal is to provide simple, easy-to-use functions that operate\n> correctly over arbitrary ranges, which is trickier than it might seem\n> using the existing random() function. The main advantages are:\n>\n> 1. Support for arbitrary bounds (provided that max >= min). A SQL or\n> PL/pgSQL implementation based on the existing random() function can\n> suffer from integer overflow if the difference max-min is too large.\n>\n\nYour patch works.\nperformance is the best amount for other options in [0].\nI don't have deep knowledge about which one is more random.\n\nCurrently we have to explicitly mention the lower and upper bound.\nbut can we do this:\njust give me an int, int means the int data type can be represented.\nor just give me a random bigint.\nbut for numeric, the full numeric values that can be represented are very big.\n\nMaybe we can use the special value null to achieve this\nlike use\nselect random(NULL::int,null)\nto represent a random int in the full range of integers values can be\nrepresented.\n\nDo you think it makes sense?\n\n[0] https://www.postgresql.org/message-id/CAEG8a3LcYXjNU1f2bxMm9c6ThQsPoTcvYO_kOnifx3aGXkbgPw%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 28 Dec 2023 15:34:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functions to return random numbers in a given range"
},
{
"msg_contents": "Thank you for the patch.\n\nI applied this patch manually to the master branch, resolving a conflict \nin `numeric.h`. It successfully passed both `make check` and `make \ncheck-world`.\n\n\nBest regards,\n\nDavid\n\n\n\n",
"msg_date": "Fri, 26 Jan 2024 12:44:30 -0800",
"msg_from": "David Zhang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functions to return random numbers in a given range"
},
{
"msg_contents": "On Thu, 28 Dec 2023 at 07:34, jian he <[email protected]> wrote:\n>\n> Your patch works.\n> performance is the best amount for other options in [0].\n> I don't have deep knowledge about which one is more random.\n>\n\nThanks for testing.\n\n> Currently we have to explicitly mention the lower and upper bound.\n> but can we do this:\n> just give me an int, int means the int data type can be represented.\n> or just give me a random bigint.\n> but for numeric, the full numeric values that can be represented are very big.\n>\n> Maybe we can use the special value null to achieve this\n> like use\n> select random(NULL::int,null)\n> to represent a random int in the full range of integers values can be\n> represented.\n>\n\nHmm, I don't particularly like that idea. It seems pretty ugly. Now\nthat we support literal integers in hex, with underscores, it's\nrelatively easy to pass INT_MIN/MAX as arguments to these functions,\nif that's what you need. I think if we were going to have a shorthand\nfor getting full-range random integers, it would probably be better to\nintroduce separate no-arg functions for that. I'm not really sure if\nthat's a sufficiently common use case to justify the effort though.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 29 Jan 2024 12:38:11 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Functions to return random numbers in a given range"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 20:44, David Zhang <[email protected]> wrote:\n>\n> Thank you for the patch.\n>\n> I applied this patch manually to the master branch, resolving a conflict\n> in `numeric.h`. It successfully passed both `make check` and `make\n> check-world`.\n>\n\nThanks for testing.\n\nInterestingly, the cfbot didn't pick up on the fact that it needed\nrebasing. Anyway, the copyright years in the new file's header comment\nneeded updating, so here is a rebase doing that.\n\nRegards,\nDean",
"msg_date": "Mon, 29 Jan 2024 12:42:32 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Functions to return random numbers in a given range"
},
{
"msg_contents": "Hi,\n\n> Interestingly, the cfbot didn't pick up on the fact that it needed\n> rebasing. Anyway, the copyright years in the new file's header comment\n> needed updating, so here is a rebase doing that.\n\nMaybe I'm missing something but I'm not sure if I understand what this\ntest tests particularly:\n\n```\n-- There should be no triple duplicates in 1000 full-range 32-bit random()\n-- values. (Each of the C(1000, 3) choices of triplets from the 1000 values\n-- has a probability of 1/(2^32)^2 of being a triple duplicate, so the\n-- average number of triple duplicates is 1000 * 999 * 998 / 6 / 2^64, which\n-- is roughly 9e-12.)\nSELECT r, count(*)\nFROM (SELECT random(-2147483648, 2147483647) r\n FROM generate_series(1, 1000)) ss\nGROUP BY r HAVING count(*) > 2;\n```\n\nThe intent seems to be to check the fact that random numbers are\ndistributed evenly. If this is the case I think the test is wrong. The\nsequence of numbers 100, 100, 100, 100, 100 is as random as 99, 8, 4,\n12, 45 and every particular sequence has low probability. All in all\npersonally I would argue that this is a meaningless test that just\nfails with a low probability. Same for the tests that follow below.\n\nThe proper way of testing PRNG would be to call setseed() and compare\nreturn values with expected ones. I don't mind testing the proposed\ninvariants but they should do this after calling setseed(). Currently\nthe patch places the tests right before the call.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 30 Jan 2024 15:47:36 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functions to return random numbers in a given range"
},
{
"msg_contents": "On Tue, 30 Jan 2024 at 12:47, Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Maybe I'm missing something but I'm not sure if I understand what this\n> test tests particularly:\n>\n> ```\n> -- There should be no triple duplicates in 1000 full-range 32-bit random()\n> -- values. (Each of the C(1000, 3) choices of triplets from the 1000 values\n> -- has a probability of 1/(2^32)^2 of being a triple duplicate, so the\n> -- average number of triple duplicates is 1000 * 999 * 998 / 6 / 2^64, which\n> -- is roughly 9e-12.)\n> SELECT r, count(*)\n> FROM (SELECT random(-2147483648, 2147483647) r\n> FROM generate_series(1, 1000)) ss\n> GROUP BY r HAVING count(*) > 2;\n> ```\n>\n> The intent seems to be to check the fact that random numbers are\n> distributed evenly. If this is the case I think the test is wrong. The\n> sequence of numbers 100, 100, 100, 100, 100 is as random as 99, 8, 4,\n> 12, 45 and every particular sequence has low probability. All in all\n> personally I would argue that this is a meaningless test that just\n> fails with a low probability. Same for the tests that follow below.\n>\n\nI'm following the same approach used to test the existing random\nfunctions, and the idea is the same. For example, this existing test:\n\n-- There should be no duplicates in 1000 random() values.\n-- (Assuming 52 random bits in the float8 results, we could\n-- take as many as 3000 values and still have less than 1e-9 chance\n-- of failure, per https://en.wikipedia.org/wiki/Birthday_problem)\nSELECT r, count(*)\nFROM (SELECT random() r FROM generate_series(1, 1000)) ss\nGROUP BY r HAVING count(*) > 1;\n\nIf the underlying PRNG were non-uniform, or the method of reduction to\nthe required range was flawed in some way that reduced the number of\nactual possible return values, then the probability of duplicates\nwould be increased. A non-uniform distribution would probably be\ncaught by the KS tests, but uniform gaps in the possible outputs might\nnot be, so I think this test still has value.\n\n> The proper way of testing PRNG would be to call setseed() and compare\n> return values with expected ones. I don't mind testing the proposed\n> invariants but they should do this after calling setseed(). Currently\n> the patch places the tests right before the call.\n>\n\nThere are also new tests of that nature, following the call to\nsetseed(0.5). They're useful for a quick visual check of the results,\nand confirming the expected number of digits after the decimal point\nin the numeric case. However, I think those tests are insufficient on\ntheir own.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 30 Jan 2024 13:44:25 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Functions to return random numbers in a given range"
},
{
"msg_contents": "Hi Dean,\n\nI did a quick review and a little bit of testing on the patch today. I\nthink it's a good/useful idea, and I think the code is ready to go (the\ncode is certainly much cleaner than anything I'd written ...).\n\nI do have one minor comments regarding the docs - it refers to \"random\nfunctions\" in a couple places, which sounds to me as if it was talking\nabout some functions arbitrarily taken from some list, although it\nclearly means \"functions generating random numbers\". (I realize this\nmight be just due to me not being native speaker.)\n\n\nDid you think about adding more functions generating either other types\nof data distributions (now we have uniform and normal), or random data\nfor other data types (I often need random strings, for example)?\n\nOf course, I'm not saying this patch needs to do that. But perhaps it\nmight affect how we name stuff to make it \"extensible\".\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 24 Feb 2024 18:10:18 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Functions to return random numbers in a given range"
},
{
"msg_contents": "On Sat, 24 Feb 2024 at 17:10, Tomas Vondra\n<[email protected]> wrote:\n>\n> Hi Dean,\n>\n> I did a quick review and a little bit of testing on the patch today. I\n> think it's a good/useful idea, and I think the code is ready to go (the\n> code is certainly much cleaner than anything I'd written ...).\n>\n\nThanks for reviewing!\n\n> I do have one minor comments regarding the docs - it refers to \"random\n> functions\" in a couple places, which sounds to me as if it was talking\n> about some functions arbitrarily taken from some list, although it\n> clearly means \"functions generating random numbers\". (I realize this\n> might be just due to me not being native speaker.)\n>\n\nYes, I think you're right, that wording was a bit clumsy. Attached is\nan update that's hopefully a bit better.\n\n> Did you think about adding more functions generating either other types\n> of data distributions (now we have uniform and normal), or random data\n> for other data types (I often need random strings, for example)?\n>\n> Of course, I'm not saying this patch needs to do that. But perhaps it\n> might affect how we name stuff to make it \"extensible\".\n>\n\nI don't have any plans to add more random functions, but I did think\nabout it from that perspective. Currently we have \"random\" and\n\"random_normal\", so the natural extension would be\n\"random_${distribution}\" for other data distributions, with \"uniform\"\nas the default distribution, if omitted.\n\nFor different result datatypes, it ought to be mostly possible to\ndetermine the result type from the arguments. There might be some\nexceptions, like maybe \"random_bytes(length)\" to generate a byte\narray, but I think that would be OK.\n\nRegards,\nDean",
"msg_date": "Tue, 27 Feb 2024 17:33:05 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Functions to return random numbers in a given range"
},
{
"msg_contents": "On Tue, 27 Feb 2024 at 17:33, Dean Rasheed <[email protected]> wrote:\n>\n> On Sat, 24 Feb 2024 at 17:10, Tomas Vondra\n> >\n> > I did a quick review and a little bit of testing on the patch today. I\n> > think it's a good/useful idea, and I think the code is ready to go (the\n> > code is certainly much cleaner than anything I'd written ...).\n>\n\nBased on the reviews so far, I think this is ready for commit, so\nunless anyone objects, I will do so in a day or so.\n\nAs a quick summary, this adds a new file:\n\nsrc/backend/utils/adt/pseudorandomfuncs.c\n\nwhich contains SQL-callable functions that access a single shared\npseudorandom number generator, whose state is private to that file.\nCurrently the functions are:\n\n random() returns double precision [moved from float.c]\n random(min integer, max integer) returns integer [new]\n random(min bigint, max bigint) returns bigint [new]\n random(min numeric, max numeric) returns numeric [new]\n random_normal() returns double precision [moved from float.c]\n setseed(seed double precision) returns void [moved from float.c]\n\nIt's possible that functions to return other random distributions or\nother datatypes might get added in the future, but I have no plans to\ndo so at the moment.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 26 Mar 2024 06:57:25 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Functions to return random numbers in a given range"
},
{
"msg_contents": "On Tue, 26 Mar 2024 at 06:57, Dean Rasheed <[email protected]> wrote:\n>\n> Based on the reviews so far, I think this is ready for commit, so\n> unless anyone objects, I will do so in a day or so.\n>\n\nCommitted. Thanks for the reviews.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 27 Mar 2024 10:32:43 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Functions to return random numbers in a given range"
}
] |
[
{
"msg_contents": "I happened to notice this stuff getting added to my .psql_history:\n\n\\echo background_psql: ready\nSET password_encryption='scram-sha-256';\n;\n\\echo background_psql: QUERY_SEPARATOR\nSET scram_iterations=42;\n;\n\\echo background_psql: QUERY_SEPARATOR\n\\password scram_role_iter\n\\q\n\nAfter grepping for these strings, this is evidently the fault of\nsrc/test/authentication/t/001_password.pl by way of BackgroundPsql.pm,\nwhich fires up an interactive psql run that is not given the -n switch.\n\nCurrently the only other user of interactive_psql() seems to be\npsql/t/010_tab_completion.pl, which avoids this problem by\nexplicitly redirecting the history file. We could have 001_password.pl\ndo likewise, or we could have it pass the -n switch, but I think we're\ngoing to have this problem resurface repeatedly if we leave it to the\nouter test script to remember to do it.\n\nMy first idea was that BackgroundPsql.pm should take responsibility for\npreventing this, by explicitly setting $ENV{PSQL_HISTORY} to \"/dev/null\"\nif the calling script hasn't set some other value. However, that could\nfail if the user who runs the test habitually sets PSQL_HISTORY.\n\nA messier but safer alternative would be to supply the -n switch by\ndefault, with some way for 010_tab_completion.pl to override that.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Dec 2023 13:21:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "authentication/t/001_password.pl trashes ~/.psql_history"
},
{
"msg_contents": "I wrote:\n> I happened to notice this stuff getting added to my .psql_history:\n> \\echo background_psql: ready\n> SET password_encryption='scram-sha-256';\n> ;\n> \\echo background_psql: QUERY_SEPARATOR\n> SET scram_iterations=42;\n> ;\n> \\echo background_psql: QUERY_SEPARATOR\n> \\password scram_role_iter\n> \\q\n\n> After grepping for these strings, this is evidently the fault of\n> src/test/authentication/t/001_password.pl by way of BackgroundPsql.pm,\n> which fires up an interactive psql run that is not given the -n switch.\n\n> Currently the only other user of interactive_psql() seems to be\n> psql/t/010_tab_completion.pl, which avoids this problem by\n> explicitly redirecting the history file. We could have 001_password.pl\n> do likewise, or we could have it pass the -n switch, but I think we're\n> going to have this problem resurface repeatedly if we leave it to the\n> outer test script to remember to do it.\n\n\nAfter studying this some more, my conclusion is that BackgroundPsql.pm\nfailed to borrow as much as it should have from 010_tab_completion.pl.\nSpecifically, we want all the environment-variable changes that that\nscript performed to be applied in any test using an interactive psql.\nMaybe ~/.inputrc and so forth would never affect any other test scripts,\nbut that doesn't seem like a great bet.\n\nSo that leads me to the attached proposed patch.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 22 Dec 2023 17:11:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: authentication/t/001_password.pl trashes ~/.psql_history"
},
{
"msg_contents": "\nOn 2023-12-22 Fr 17:11, Tom Lane wrote:\n> I wrote:\n>> I happened to notice this stuff getting added to my .psql_history:\n>> \\echo background_psql: ready\n>> SET password_encryption='scram-sha-256';\n>> ;\n>> \\echo background_psql: QUERY_SEPARATOR\n>> SET scram_iterations=42;\n>> ;\n>> \\echo background_psql: QUERY_SEPARATOR\n>> \\password scram_role_iter\n>> \\q\n>> After grepping for these strings, this is evidently the fault of\n>> src/test/authentication/t/001_password.pl by way of BackgroundPsql.pm,\n>> which fires up an interactive psql run that is not given the -n switch.\n>> Currently the only other user of interactive_psql() seems to be\n>> psql/t/010_tab_completion.pl, which avoids this problem by\n>> explicitly redirecting the history file. We could have 001_password.pl\n>> do likewise, or we could have it pass the -n switch, but I think we're\n>> going to have this problem resurface repeatedly if we leave it to the\n>> outer test script to remember to do it.\n>\n> After studying this some more, my conclusion is that BackgroundPsql.pm\n> failed to borrow as much as it should have from 010_tab_completion.pl.\n> Specifically, we want all the environment-variable changes that that\n> script performed to be applied in any test using an interactive psql.\n> Maybe ~/.inputrc and so forth would never affect any other test scripts,\n> but that doesn't seem like a great bet.\n>\n> So that leads me to the attached proposed patch.\n\n\nLooks fine, after reading your original post I was thinking along the \nsame lines.\n\nYou could shorten this\n\n+ my $history_file = $params{history_file};\n+ $history_file ||= '/dev/null';\n+ $ENV{PSQL_HISTORY} = $history_file;\n\nto just\n\n $ENV{PSQL_HISTORY} = $params{history_file} || '/dev/null';\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 23 Dec 2023 08:20:07 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: authentication/t/001_password.pl trashes ~/.psql_history"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2023-12-22 Fr 17:11, Tom Lane wrote:\n>> After studying this some more, my conclusion is that BackgroundPsql.pm\n>> failed to borrow as much as it should have from 010_tab_completion.pl.\n>> Specifically, we want all the environment-variable changes that that\n>> script performed to be applied in any test using an interactive psql.\n>> Maybe ~/.inputrc and so forth would never affect any other test scripts,\n>> but that doesn't seem like a great bet.\n\n> Looks fine, after reading your original post I was thinking along the \n> same lines.\n\nThanks for reviewing.\n\n> You could shorten this\n> + my $history_file = $params{history_file};\n> + $history_file ||= '/dev/null';\n> + $ENV{PSQL_HISTORY} = $history_file;\n> to just\n> $ENV{PSQL_HISTORY} = $params{history_file} || '/dev/null';\n\nOK. I was unsure which way would be considered more readable,\nbut based on your advice I shortened it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 23 Dec 2023 11:52:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: authentication/t/001_password.pl trashes ~/.psql_history"
},
{
"msg_contents": "\n\n> On 23 Dec 2023, at 17:52, Tom Lane <[email protected]> wrote:\n> \n> Andrew Dunstan <[email protected]> writes:\n>> On 2023-12-22 Fr 17:11, Tom Lane wrote:\n>>> After studying this some more, my conclusion is that BackgroundPsql.pm\n>>> failed to borrow as much as it should have from 010_tab_completion.pl.\n>>> Specifically, we want all the environment-variable changes that that\n>>> script performed to be applied in any test using an interactive psql.\n>>> Maybe ~/.inputrc and so forth would never affect any other test scripts,\n>>> but that doesn't seem like a great bet.\n> \n>> Looks fine, after reading your original post I was thinking along the \n>> same lines.\n> \n> Thanks for reviewing.\n> \n>> You could shorten this\n>> + my $history_file = $params{history_file};\n>> + $history_file ||= '/dev/null';\n>> + $ENV{PSQL_HISTORY} = $history_file;\n>> to just\n>> $ENV{PSQL_HISTORY} = $params{history_file} || '/dev/null';\n> \n> OK. I was unsure which way would be considered more readable,\n> but based on your advice I shortened it.\n\nSorry for jumping in late, only saw this now (ETOOMUCHCHRISTMASPREP) but the\ncommitted patch looks good to me too. Thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Sat, 23 Dec 2023 18:31:16 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: authentication/t/001_password.pl trashes ~/.psql_history"
}
] |
[
{
"msg_contents": "Problem\n-------\n\nWe are using junk columns for (at least) two slightly different purposes:\n\n1. For passing row IDs and other such data from lower plan nodes to \nLockRows / ModifyTable.\n\n2. To represent ORDER BY and GROUP BY columns that don't appear in the \nSELECT list. For example, in a query like:\n\n SELECT foo FROM mytable ORDER BY bar;\n\nThe parser adds 'bar' to the target list as a junk column. You can see \nthat with EXPLAIN VERBOSE:\n\nexplain (verbose, costs off)\n select foo from mytable order by bar;\n\n QUERY PLAN\n----------------------------------\n Sort\n Output: foo, bar\n Sort Key: mytable.bar\n -> Seq Scan on public.mytable\n Output: foo, bar\n(5 rows)\n\nThe 'bar' column get filtered away in the executor, by the so-called \njunk filter. That's fine for simple cases like the above, but in some \ncases, that causes the ORDER BY value to be computed unnecessarily. For \nexample:\n\ncreate table mytable (foo text, bar text);\ninsert into mytable select g, g from generate_series(1, 10000) g;\ncreate index on mytable (sha256(bar::bytea));\nexplain verbose\n select foo from mytable order by sha256(bar::bytea);\n\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------\n Index Scan using mytable_sha256_idx on public.mytable \n(cost=0.29..737.28 rows=10000 width=64)\n Output: foo, sha256((bar)::bytea)\n(2 rows)\n\nThe index is used to satisfy the ORDER BY, but the expensive ORDER BY \nexpression is still computed for every row, just to be thrown away by \nthe junk filter.\n\nThis came up with pgvector, as the vector distance functions are pretty \nexpensive. All vector operations are expensive, so one extra distance \nfunction call per row doesn't necessarily make that much difference, but \nit sure looks silly. See \nhttps://github.com/pgvector/pgvector/issues/359#issuecomment-1840786021 \n(thanks Matthias for the investigation!).\n\nSolution\n--------\n\nThe obvious solution is that the planner should not include those junk \ncolumns in the plan. But how exactly to implement that is a different \nmatter.\n\nI came up with the attached patch set, which adds a projection to all \nthe paths at the end of planning in grouping_planner(). The projection \nfilters out the unnecessary junk columns. With that, the plan for the \nabove example:\n\npostgres=# explain verbose select foo from mytable order by \nsha256(bar::bytea);\n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------\n Index Scan using mytable_sha256_idx on public.mytable \n(cost=0.29..662.24 rows=10000 width=4)\n Output: foo\n(2 rows)\n\n\nProblems with the solution\n--------------------------\n\nSo this seems to work, but I have a few doubts:\n\n1. Because Sort cannot project, this adds an extra Result node on top of \nSort nodes when the the ORDER BY is implemented by sorting:\n\npostgres=# explain verbose select foo from mytable order by bar;\n QUERY PLAN \n\n--------------------------------------------------------------------------------\n Result (cost=818.39..843.39 rows=10000 width=4)\n Output: foo\n -> Sort (cost=818.39..843.39 rows=10000 width=8)\n Output: foo, bar\n Sort Key: mytable.bar\n -> Seq Scan on public.mytable (cost=0.00..154.00 rows=10000 \nwidth=8)\n Output: foo, bar\n(7 rows)\n\n From a performance point of view, I think that's not as bad as it \nsounds. 
Remember that without this patch, the executor needs to execute \nthe junk filter to filter out the extra column instead. It's not clear \nthat an extra Result is worse than that, although I haven't tried \nbenchmarking it though.\n\nThis makes plans for queries like above more verbose though. Perhaps we \nshould teach Sort (and MergeAppend) to do projection, just to avoid that?\n\nAnother solution would be to continue relying on the junk filter, if \nadding the projection in the planner leads to an extra Result node. \nThat's a bit ugly, because then the final target list of a (sub)query \ndepends on the Path that's chosen.\n\n2. Instead of tacking on the projection to the paths at the end, I first \ntried modifying the code earlier in grouping_planner() that computes the \ntarget lists for the different plan stages. That still feels like a \ncleaner approach to me, although I don't think there's any difference in \nthe generated plans in practice. However I ran into some problems with \nthat approach and gave up.\n\nI basically tried to remove the junk columns from 'final_target', and \nhave create_ordered_paths() create paths with the filtered target list \ndirectly. And if there is no ORDER BY, leave out the junk columns from \n'grouping_target' too, and have create_grouping_paths() generate the \nfinal target list directly. However, create_grouping_paths() assumes \nthat the grouping columns are included in 'grouping_target'. And \nsimilarly in create_ordered_paths(), some partial planning stages assume \nthat the ordering columns are included in 'final_target'. Those \nassumptions could probably be fixed, but I ran out of steam trying to do \nthat.\n\n3. I also considered if we should stop using junk columns to represent \nORDER BY / GROUP BY columns like this in the first place. Perhaps we \nshould have a separate list for those and not stash them in the target \nlist. But that seems like a much bigger change.\n\n4. It would be nice to get rid of the junk filter in the executor \naltogether. With this patch, the junk columns used for RowLocks and \nModifyTable are still included in the final target list, and are still \nfiltered out by the junk filter. But we could add similar projections \nbetween the RowLocks and ModifyTable stages, to eliminate all the junk \ncolumns at the top of the plan. ExecFilterJunk() isn't a lot of code, \nbut it would feel cleaner to me. I didn't try to implement that.\n\n5. I'm not sure the categorization of junk columns that I implemented \nhere is the best one. It might make sense to have more categories, and \ndistinguish row-id columns from others for example. And ORDER BY columns \nfrom GROUP BY columns.\n\nPatches\n-------\n\nSo the attached patches implement that idea, with the above-mentioned \nproblems. I think this approach is fine as it is, despite those \nproblems, but my planner-fu is a rusty so I really need review and a \nsecond opinion on this.\n\nv1-0001-Add-test-for-Min-Max-optimization-with-kNN-index-.patch\nv1-0002-Show-how-ORDER-BY-expression-is-computed-unnecess.patch\n\nThese patches just add to existing tests to demonstrate the problem.\n\nv1-0003-Turn-resjunk-into-an-enum.patch\n\nReplace 'resjunk' boolean with an enum, so that we can distinguish \nbetween different junk columns. The next patch uses that information to \nidentify junk columns that can be filtered out. 
It's is a separate patch \nfor easier review.\n\nv1-0004-Omit-columns-from-final-tlist-that-were-only-need.patch\n\nThe main patch in this series.\n\nv1-0005-Fix-regression-tests-caused-by-additional-Result-.patch\n\nRegression test output changes, for all the plans with Sort that now \nhave Sort + Result. See \"Problem with the solution\" #1.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 21 Dec 2023 20:38:27 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoid computing ORDER BY junk columns unnecessarily"
},
{
"msg_contents": "Hi Heikii,\nI haven't dug into your patch yet, but for this problem, I have another\nidea.\n-------\nexplain verbose\n select foo from mytable order by sha256(bar::bytea);\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------\n Index Scan using mytable_sha256_idx on public.mytable\n(cost=0.29..737.28 rows=10000 width=64)\n Output: foo, sha256((bar)::bytea)\n(2 rows)\n\nThe index is used to satisfy the ORDER BY, but the expensive ORDER BY\nexpression is still computed for every row, just to be thrown away by\nthe junk filter.\n------\nHow about adding the orderby column value in 'xs_heaptid' with the\n'xs_heaptid'\ntogether? So that we can use that value directly instead of computing it\nwhen using\nan index scan to fetch the ordered data.\nAnother problem I am concerned about is that if we exclude junk\ncolumns in the sort plan, we may change its behavior. I'm not sure if it\ncan lead to some\nother issues.\n\nHeikki Linnakangas <[email protected]> 于2023年12月22日周五 02:39写道:\n\n> Problem\n> -------\n>\n> We are using junk columns for (at least) two slightly different purposes:\n>\n> 1. For passing row IDs and other such data from lower plan nodes to\n> LockRows / ModifyTable.\n>\n> 2. To represent ORDER BY and GROUP BY columns that don't appear in the\n> SELECT list. For example, in a query like:\n>\n> SELECT foo FROM mytable ORDER BY bar;\n>\n> The parser adds 'bar' to the target list as a junk column. You can see\n> that with EXPLAIN VERBOSE:\n>\n> explain (verbose, costs off)\n> select foo from mytable order by bar;\n>\n> QUERY PLAN\n> ----------------------------------\n> Sort\n> Output: foo, bar\n> Sort Key: mytable.bar\n> -> Seq Scan on public.mytable\n> Output: foo, bar\n> (5 rows)\n>\n> The 'bar' column get filtered away in the executor, by the so-called\n> junk filter. That's fine for simple cases like the above, but in some\n> cases, that causes the ORDER BY value to be computed unnecessarily. For\n> example:\n>\n> create table mytable (foo text, bar text);\n> insert into mytable select g, g from generate_series(1, 10000) g;\n> create index on mytable (sha256(bar::bytea));\n> explain verbose\n> select foo from mytable order by sha256(bar::bytea);\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------\n> Index Scan using mytable_sha256_idx on public.mytable\n> (cost=0.29..737.28 rows=10000 width=64)\n> Output: foo, sha256((bar)::bytea)\n> (2 rows)\n>\n> The index is used to satisfy the ORDER BY, but the expensive ORDER BY\n> expression is still computed for every row, just to be thrown away by\n> the junk filter.\n>\n> This came up with pgvector, as the vector distance functions are pretty\n> expensive. All vector operations are expensive, so one extra distance\n> function call per row doesn't necessarily make that much difference, but\n> it sure looks silly. See\n> https://github.com/pgvector/pgvector/issues/359#issuecomment-1840786021\n> (thanks Matthias for the investigation!).\n>\n> Solution\n> --------\n>\n> The obvious solution is that the planner should not include those junk\n> columns in the plan. But how exactly to implement that is a different\n> matter.\n>\n> I came up with the attached patch set, which adds a projection to all\n> the paths at the end of planning in grouping_planner(). The projection\n> filters out the unnecessary junk columns. 
With that, the plan for the\n> above example:\n>\n> postgres=# explain verbose select foo from mytable order by\n> sha256(bar::bytea);\n> QUERY PLAN\n>\n>\n> -----------------------------------------------------------------------------------------------\n> Index Scan using mytable_sha256_idx on public.mytable\n> (cost=0.29..662.24 rows=10000 width=4)\n> Output: foo\n> (2 rows)\n>\n>\n> Problems with the solution\n> --------------------------\n>\n> So this seems to work, but I have a few doubts:\n>\n> 1. Because Sort cannot project, this adds an extra Result node on top of\n> Sort nodes when the the ORDER BY is implemented by sorting:\n>\n> postgres=# explain verbose select foo from mytable order by bar;\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------\n> Result (cost=818.39..843.39 rows=10000 width=4)\n> Output: foo\n> -> Sort (cost=818.39..843.39 rows=10000 width=8)\n> Output: foo, bar\n> Sort Key: mytable.bar\n> -> Seq Scan on public.mytable (cost=0.00..154.00 rows=10000\n> width=8)\n> Output: foo, bar\n> (7 rows)\n>\n> From a performance point of view, I think that's not as bad as it\n> sounds. Remember that without this patch, the executor needs to execute\n> the junk filter to filter out the extra column instead. It's not clear\n> that an extra Result is worse than that, although I haven't tried\n> benchmarking it though.\n>\n> This makes plans for queries like above more verbose though. Perhaps we\n> should teach Sort (and MergeAppend) to do projection, just to avoid that?\n>\n> Another solution would be to continue relying on the junk filter, if\n> adding the projection in the planner leads to an extra Result node.\n> That's a bit ugly, because then the final target list of a (sub)query\n> depends on the Path that's chosen.\n>\n> 2. Instead of tacking on the projection to the paths at the end, I first\n> tried modifying the code earlier in grouping_planner() that computes the\n> target lists for the different plan stages. That still feels like a\n> cleaner approach to me, although I don't think there's any difference in\n> the generated plans in practice. However I ran into some problems with\n> that approach and gave up.\n>\n> I basically tried to remove the junk columns from 'final_target', and\n> have create_ordered_paths() create paths with the filtered target list\n> directly. And if there is no ORDER BY, leave out the junk columns from\n> 'grouping_target' too, and have create_grouping_paths() generate the\n> final target list directly. However, create_grouping_paths() assumes\n> that the grouping columns are included in 'grouping_target'. And\n> similarly in create_ordered_paths(), some partial planning stages assume\n> that the ordering columns are included in 'final_target'. Those\n> assumptions could probably be fixed, but I ran out of steam trying to do\n> that.\n>\n> 3. I also considered if we should stop using junk columns to represent\n> ORDER BY / GROUP BY columns like this in the first place. Perhaps we\n> should have a separate list for those and not stash them in the target\n> list. But that seems like a much bigger change.\n>\n> 4. It would be nice to get rid of the junk filter in the executor\n> altogether. With this patch, the junk columns used for RowLocks and\n> ModifyTable are still included in the final target list, and are still\n> filtered out by the junk filter. 
But we could add similar projections\n> between the RowLocks and ModifyTable stages, to eliminate all the junk\n> columns at the top of the plan. ExecFilterJunk() isn't a lot of code,\n> but it would feel cleaner to me. I didn't try to implement that.\n>\n> 5. I'm not sure the categorization of junk columns that I implemented\n> here is the best one. It might make sense to have more categories, and\n> distinguish row-id columns from others for example. And ORDER BY columns\n> from GROUP BY columns.\n>\n> Patches\n> -------\n>\n> So the attached patches implement that idea, with the above-mentioned\n> problems. I think this approach is fine as it is, despite those\n> problems, but my planner-fu is a rusty so I really need review and a\n> second opinion on this.\n>\n> v1-0001-Add-test-for-Min-Max-optimization-with-kNN-index-.patch\n> v1-0002-Show-how-ORDER-BY-expression-is-computed-unnecess.patch\n>\n> These patches just add to existing tests to demonstrate the problem.\n>\n> v1-0003-Turn-resjunk-into-an-enum.patch\n>\n> Replace 'resjunk' boolean with an enum, so that we can distinguish\n> between different junk columns. The next patch uses that information to\n> identify junk columns that can be filtered out. It's is a separate patch\n> for easier review.\n>\n> v1-0004-Omit-columns-from-final-tlist-that-were-only-need.patch\n>\n> The main patch in this series.\n>\n> v1-0005-Fix-regression-tests-caused-by-additional-Result-.patch\n>\n> Regression test output changes, for all the plans with Sort that now\n> have Sort + Result. See \"Problem with the solution\" #1.\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)",
"msg_date": "Fri, 22 Dec 2023 17:05:35 +0800",
"msg_from": "Xiaoran Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid computing ORDER BY junk columns unnecessarily"
},
{
"msg_contents": "On 22/12/2023 11:05, Xiaoran Wang wrote:\n> I haven't dug into your patch yet, but for this problem, I have another \n> idea.\n> -------\n> explain verbose\n> select foo from mytable order by sha256(bar::bytea);\n> \n> QUERY PLAN\n> \n> ------------------------------------------------------------------------------------------------\n> Index Scan using mytable_sha256_idx on public.mytable\n> (cost=0.29..737.28 rows=10000 width=64)\n> Output: foo, sha256((bar)::bytea)\n> (2 rows)\n> \n> The index is used to satisfy the ORDER BY, but the expensive ORDER BY\n> expression is still computed for every row, just to be thrown away by\n> the junk filter.\n> ------\n> How about adding the orderby column value in 'xs_heaptid' with the \n> 'xs_heaptid'\n> together? So that we can use that value directly instead of computing it \n> when using\n> an index scan to fetch the ordered data.\n\nHmm, so return the computed column from the index instead of recomputing \nit? Yeah, that makes sense too and would help in this example. It won't \nhelp in all cases though, the index might not store the original value \nin the first place.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 22 Dec 2023 12:48:53 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid computing ORDER BY junk columns unnecessarily"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Hmm, so return the computed column from the index instead of recomputing \n> it? Yeah, that makes sense too and would help in this example.\n\nYeah, that's been on the to-do list for ages. The main problems are\n(1) we need the planner to not spend too much effort on looking for\nsubexpression matches, and (2) amcanreturn ability isn't implemented\nby the executor in plain indexscans. There's another thread right now\ndiscussing fixing (2), after which we could perhaps work on this.\n\n> It won't \n> help in all cases though, the index might not store the original value \n> in the first place.\n\nI'm a little skeptical that an index could produce an accurate ORDER BY\nresult if it doesn't store the values-to-be-sorted exactly. Any loss\nof information would compromise its ability to sort nearly-identical\nvalues correctly. A more credible argument is that the index might\nexpose amcanorder ability but not amcanreturn; but what I'm saying is\nthat that's probably an AM implementation gap that ought to be fixed.\n\nHow much of your patchset still makes sense if we assume that we\ncan always extract the ORDER BY column values from the index?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Dec 2023 10:24:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid computing ORDER BY junk columns unnecessarily"
},
{
"msg_contents": "On 22/12/2023 17:24, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> It won't\n>> help in all cases though, the index might not store the original value\n>> in the first place.\n> \n> I'm a little skeptical that an index could produce an accurate ORDER BY\n> result if it doesn't store the values-to-be-sorted exactly. Any loss\n> of information would compromise its ability to sort nearly-identical\n> values correctly.\n\nIn the context of pgvector, its ordering is approximate anyway. Aside \nfrom that, there's one trick that it implements: it compares squares of \ndistances, avoiding a sqrt() calculation. (I wonder if we could do the \nsame in GiST opclasses)\n\n> A more credible argument is that the index might\n> expose amcanorder ability but not amcanreturn; but what I'm saying is\n> that that's probably an AM implementation gap that ought to be fixed.\n> \n> How much of your patchset still makes sense if we assume that we\n> can always extract the ORDER BY column values from the index?\n\nThat would make it much less interesting. But I don't think that's a \ngood assumption. Especially in the kNN case, the ORDER BY value would \nnot be stored in the index. Most likely the index needs to calculate it \nin some form, but it might take shortcuts like avoiding the sqrt().\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 22 Dec 2023 18:55:00 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid computing ORDER BY junk columns unnecessarily"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 22/12/2023 17:24, Tom Lane wrote:\n>> How much of your patchset still makes sense if we assume that we\n>> can always extract the ORDER BY column values from the index?\n\n> That would make it much less interesting. But I don't think that's a \n> good assumption. Especially in the kNN case, the ORDER BY value would \n> not be stored in the index. Most likely the index needs to calculate it \n> in some form, but it might take shortcuts like avoiding the sqrt().\n\nYeah, fair point. I'll try to take a look at your patchset after\nthe holidays.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Dec 2023 12:31:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid computing ORDER BY junk columns unnecessarily"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 1:32 AM Tom Lane <[email protected]> wrote:\n\n> Heikki Linnakangas <[email protected]> writes:\n> > On 22/12/2023 17:24, Tom Lane wrote:\n> >> How much of your patchset still makes sense if we assume that we\n> >> can always extract the ORDER BY column values from the index?\n>\n> > That would make it much less interesting. But I don't think that's a\n> > good assumption. Especially in the kNN case, the ORDER BY value would\n> > not be stored in the index. Most likely the index needs to calculate it\n> > in some form, but it might take shortcuts like avoiding the sqrt().\n>\n> Yeah, fair point. I'll try to take a look at your patchset after\n> the holidays.\n\n\nAgreed.\n\nI haven't looked into these patches, but it seems that there is an issue\nwith how the targetlist is handled for foreign rels. The following test\ncase for postgres_fdw hits the Assert in apply_tlist_labeling().\n\ncontrib_regression=# SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1;\nserver closed the connection unexpectedly\n\nWhen we create foreign final path in add_foreign_final_paths(), we use\nroot->upper_targets[UPPERREL_FINAL] as the PathTarget. This PathTarget\nis set in grouping_planner(), without removing the junk columns. I\nthink this is why the above query hits the Assert.\n\nThanks\nRichard\n\nOn Sat, Dec 23, 2023 at 1:32 AM Tom Lane <[email protected]> wrote:Heikki Linnakangas <[email protected]> writes:\n> On 22/12/2023 17:24, Tom Lane wrote:\n>> How much of your patchset still makes sense if we assume that we\n>> can always extract the ORDER BY column values from the index?\n\n> That would make it much less interesting. But I don't think that's a \n> good assumption. Especially in the kNN case, the ORDER BY value would \n> not be stored in the index. Most likely the index needs to calculate it \n> in some form, but it might take shortcuts like avoiding the sqrt().\n\nYeah, fair point. I'll try to take a look at your patchset after\nthe holidays.Agreed.I haven't looked into these patches, but it seems that there is an issuewith how the targetlist is handled for foreign rels. The following testcase for postgres_fdw hits the Assert in apply_tlist_labeling().contrib_regression=# SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1;server closed the connection unexpectedlyWhen we create foreign final path in add_foreign_final_paths(), we useroot->upper_targets[UPPERREL_FINAL] as the PathTarget. This PathTargetis set in grouping_planner(), without removing the junk columns. Ithink this is why the above query hits the Assert.ThanksRichard",
"msg_date": "Mon, 25 Dec 2023 19:21:42 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid computing ORDER BY junk columns unnecessarily"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 2:38 AM Heikki Linnakangas <[email protected]> wrote:\n\n> v1-0004-Omit-columns-from-final-tlist-that-were-only-need.patch\n>\n> The main patch in this series.\n\n\nThis patch filters out the junk columns created for ORDER BY/GROUP BY,\nand retains the junk columns created for RowLocks. I'm afraid this may\nhave a problem about out-of-order resnos. For instance,\n\ncreate table mytable (foo text, bar text);\n\n# explain select foo from mytable order by bar for update of mytable;\nserver closed the connection unexpectedly\n\nThis query triggers another Assert in apply_tlist_labeling:\n\n Assert(dest_tle->resno == src_tle->resno);\n\nAt first there are three TargetEntry items: foo, bar and ctid, with\nresnos being 1, 2 and 3. And then the second item 'bar' is removed,\nleaving only two items: foo and ctid, with resnos being 1 and 3. So now\nwe have a missing resno, and finally hit the Assert.\n\nThanks\nRichard\n\nOn Fri, Dec 22, 2023 at 2:38 AM Heikki Linnakangas <[email protected]> wrote:\nv1-0004-Omit-columns-from-final-tlist-that-were-only-need.patch\n\nThe main patch in this series.This patch filters out the junk columns created for ORDER BY/GROUP BY,and retains the junk columns created for RowLocks. I'm afraid this mayhave a problem about out-of-order resnos. For instance,create table mytable (foo text, bar text);# explain select foo from mytable order by bar for update of mytable;server closed the connection unexpectedlyThis query triggers another Assert in apply_tlist_labeling: Assert(dest_tle->resno == src_tle->resno);At first there are three TargetEntry items: foo, bar and ctid, withresnos being 1, 2 and 3. And then the second item 'bar' is removed,leaving only two items: foo and ctid, with resnos being 1 and 3. So nowwe have a missing resno, and finally hit the Assert.ThanksRichard",
"msg_date": "Tue, 26 Dec 2023 11:18:16 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid computing ORDER BY junk columns unnecessarily"
},
{
"msg_contents": "I wrote:\n> Yeah, fair point. I'll try to take a look at your patchset after\n> the holidays.\n\nI spent some time looking at this patch, and I'm not very pleased\nwith it. My basic complaint is that this is a band-aid that only\ntouches things at a surface level, whereas I think we need a much\ndeeper set of changes in order to have a plausible solution.\nSpecifically, you're only munging the final top-level targetlist\nnot anything for lower plan levels. That looks like it works in\nsimple cases, but it leaves everything on the table as soon as\nthere's more than one level of plan involved. I didn't stop to\ntrace the details but I'm pretty sure this is why you're getting the\nbogus extra projections shown in the 0005 patch. Moreover, this\nisn't doing anything for cost estimates, which means the planner\ndoesn't really understand that more-desirable plans are more\ndesirable, and it may not pick an available plan that would exploit\nwhat we want to have happen here.\n\nAs an example, say we have an index on f(x) and the query requires\nsorting by f(x) but not final output of f(x). If we devise a plan\nthat uses the index to sort and doesn't emit f(x), we need to not\ncharge the evaluation cost of f() for that plan --- this might\nmake the difference between picking that plan and not. Right now\nI believe we'll charge all plans for f(), so that some other plan\nmight look cheaper even when f() is very expensive.\n\nAnother example: we might be using an indexscan but not relying on\nits sort order, for example because its output is going into a hash\njoin and then we'll sort afterwards. For expensive f() it would\nstill be useful if the index could emit f(x) so that we don't have\nto calculate f() at the sort step. Right now I don't think we can\neven emit that plan shape, never mind recognize why it's cheaper.\n\nI have only vague ideas about how to do this better. It might work\nto set up multiple PathTargets for a relation that has such an\nindex, one for the base case where the scan only emits x and f() is\ncomputed above, one for the case where we don't need either x or\nf(x) because we're relying on the index order, and one that emits\nf(x) with the expectation that a sort will happen above. Then we'd\npotentially generate multiple Paths representing the very same\nindexscan but with different PathTargets, and differing targets\nwould have to become something that add_path treats as a reason to\nkeep multiple Paths for the same relation. I'm a little frightened\nabout the possible growth in number of paths considered, but how\nelse would we keep track of the differing costs of these cases?\n\nSo your proposed patch is far from a final solution in this area,\nand I'm not even sure that it represents a plausible incremental\nstep. I've got mixed feelings about the \"make resjunk an enum\"\nidea --- it seems quite invasive and I'm not sure it's buying\nenough to risk breaking non-planner code for. It might be\nbetter to leave the targetlist representation alone and deal\nwith this strictly inside the planner. We could, for example,\nadd an array to PlannerInfo that's indexed by query-tlist resno\nand indicates the reason for a particular tlist item to exist.\n\nAlso, as you mentioned, this categorization of junk columns\nseems a little unprincipled. 
It might help to think about that\nin terms similar to the \"attr_needed\" data we keep for Vars,\nthat is: how far up in the plan tree is this tlist item needed?\nI think the possibilities are:\n\n* needed for final output (ie, not resjunk)\n\n* needed for final grouping/sorting steps (I think all\n resjunk items produced by the parser are this case;\n but do we need to distinguish GROUP BY from ORDER BY?)\n \n* needed at the ModifyTable node (for rowmark columns)\n\n* needed at the SetOp node (for flag columns added in\n prepunion.c)\n\nIt looks to me like these cases are mutually exclusive, so\nthat we just need an enum value not a bitmask. Only\n\"needed for final grouping/sorting\" is something we can\npotentially omit from the plan.\n\nBTW, the hack you invented JUNK_PLANNER_ONLY for, namely\nthat create_indexscan_plan feeds canreturn flags forward to\nset_indexonlyscan_references, could be done another way:\nwe don't really have to overload resjunk for that. At worst,\nset_indexonlyscan_references could re-look-up the IndexOptInfo\nusing the index OID from the plan node, and get the canreturn\nflags directly from that. If we keep resjunk as a bool I'd be\ninclined to leave this alone; but if we change resjunk, we\ndon't need the replacement design to account for this bit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Dec 2023 18:42:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid computing ORDER BY junk columns unnecessarily"
},
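The four cases Tom enumerates map naturally onto a single enum rather than a bitmask. A minimal illustrative sketch in C follows; the type and function names are invented for this summary and are not taken from any posted patch:

/* Hypothetical classification of query-targetlist entries. */
typedef enum JunkColumnKind
{
	JUNK_NONE,			/* needed for final output (not resjunk) */
	JUNK_SORT_GROUP,	/* needed only by final grouping/sorting steps */
	JUNK_ROWMARK,		/* needed at the ModifyTable node (rowmark columns) */
	JUNK_SETOP_FLAG		/* needed at the SetOp node (prepunion.c flag column) */
} JunkColumnKind;

/* Only the grouping/sorting case is a candidate for being omitted from the plan. */
static inline bool
junk_is_omittable(JunkColumnKind kind)
{
	return kind == JUNK_SORT_GROUP;
}

This matches the observation that the cases are mutually exclusive, so an array of such values indexed by query-tlist resno (as suggested for PlannerInfo) would be enough.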
{
"msg_contents": "On 29/12/2023 01:42, Tom Lane wrote:\n> I wrote:\n>> Yeah, fair point. I'll try to take a look at your patchset after\n>> the holidays.\n> \n> I spent some time looking at this patch, and I'm not very pleased\n> with it. My basic complaint is that this is a band-aid that only\n> touches things at a surface level, whereas I think we need a much\n> deeper set of changes in order to have a plausible solution.\n> Specifically, you're only munging the final top-level targetlist\n> not anything for lower plan levels. That looks like it works in\n> simple cases, but it leaves everything on the table as soon as\n> there's more than one level of plan involved.\n\nYeah, that's fair.\n\n> I didn't stop to trace the details but I'm pretty sure this is why\n> you're getting the bogus extra projections shown in the 0005 patch.\nThey're not bogus. With the patches, projecting away the junk columns is \nvisible in the plan as an extra Result node, while currently it's done \nas an implicit step in the executor. That seems fine and arguably an \neven more honest representation of what's happening, although I don't \nlike the extra verbosity in EXPLAIN output.\n\n> Moreover, this isn't doing anything for cost estimates, which means\n> the planner doesn't really understand that more-desirable plans are\n> more desirable, and it may not pick an available plan that would\n> exploit what we want to have happen here.\n> \n> As an example, say we have an index on f(x) and the query requires\n> sorting by f(x) but not final output of f(x). If we devise a plan\n> that uses the index to sort and doesn't emit f(x), we need to not\n> charge the evaluation cost of f() for that plan --- this might\n> make the difference between picking that plan and not. Right now\n> I believe we'll charge all plans for f(), so that some other plan\n> might look cheaper even when f() is very expensive.\n> > Another example: we might be using an indexscan but not relying on\n> its sort order, for example because its output is going into a hash\n> join and then we'll sort afterwards. For expensive f() it would\n> still be useful if the index could emit f(x) so that we don't have\n> to calculate f() at the sort step. Right now I don't think we can\n> even emit that plan shape, never mind recognize why it's cheaper.\n\nRelated to this, we are not currently costing the target list evaluation \ncorrectly for index-only scans. 
Here's an example:\n\ncreate or replace function expensive(i int) returns int\nlanguage plpgsql as\n$$\n begin return i; end;\n$$\nimmutable cost 1000000;\n\ncreate table atab (i int);\ninsert into atab select g from generate_series(1, 10000) g;\ncreate index on atab (i, expensive(i));\n\npostgres=# explain verbose select expensive(i) from atab order by \nexpensive(i);\n QUERY PLAN \n\n----------------------------------------------------------------------------\n Sort (cost=25000809.39..25000834.39 rows=10000 width=4)\n Output: (expensive(i))\n Sort Key: (expensive(atab.i))\n -> Seq Scan on public.atab (cost=0.00..25000145.00 rows=10000 width=4)\n Output: expensive(i)\n(5 rows)\n\npostgres=# set enable_seqscan=off; set enable_bitmapscan=off;\nSET\nSET\npostgres=# explain verbose select expensive(i) from atab order by \nexpensive(i);\n QUERY PLAN \n\n--------------------------------------------------------------------------------------------------------------\n Sort (cost=25001114.67..25001139.67 rows=10000 width=4)\n Output: (expensive(i))\n Sort Key: (expensive(atab.i))\n -> Index Only Scan using atab_i_expensive_idx on public.atab \n(cost=0.29..25000450.29 rows=10000 width=4)\n Output: (expensive(i))\n(5 rows)\n\nThe cost of the index only scan ought to be much lower than the seqscan, \nas it can return the pre-computed expensive(i) from the index.\n\nThat could probably be fixed without any of the other changes we've been \ndiscussing here, though.\n\n> I have only vague ideas about how to do this better. It might work\n> to set up multiple PathTargets for a relation that has such an\n> index, one for the base case where the scan only emits x and f() is\n> computed above, one for the case where we don't need either x or\n> f(x) because we're relying on the index order, and one that emits\n> f(x) with the expectation that a sort will happen above. Then we'd\n> potentially generate multiple Paths representing the very same\n> indexscan but with different PathTargets, and differing targets\n> would have to become something that add_path treats as a reason to\n> keep multiple Paths for the same relation. I'm a little frightened\n> about the possible growth in number of paths considered, but how\n> else would we keep track of the differing costs of these cases?\n\nHmm, if there are multiple functions like that in the target list, would \nyou need to create different paths for each combination of expressions? \nThat could really explode the number of paths.\n\nPerhaps each Path could include \"optional\" target entries that it can \ncalculate more cheaply, with a separate cost for each such expression. \nadd_path() would treat the presence of optional target entries as a \nreason to retain the path, but you wouldn't need to keep a separate path \nfor each PathTarget.\n\nAnother idea is to include f(x) in the PathTarget only if it's \"cheap to \nemit\". For example, if it's an expression index on f(x). If it turns out \nthat f(x) is not needed higher up in the plan, it's not a big error in \nthe estimate because it was cheap to emit. That wouldn't work well if \nthe index AM could calculate f(x) more cheaply than the executor, but \nthe cost was still not trivial. I'm not sure if that situation can arise \ncurrently.\n\n> * needed for final grouping/sorting steps (I think all\n> resjunk items produced by the parser are this case;\n> but do we need to distinguish GROUP BY from ORDER BY?)\n\nIt would seem useful to distinguish GROUP BY and ORDER BY. 
For example:\n\nSELECT COUNT(*) FROM table GROUP BY a, b ORDER BY a;\n\nIf this is implemented as HashAgg + Sort for example, only 'a' would be \nneeded by the sort. Including less data in a Sort is good.\n\nI wanted to implement that in my patch already, by removing the junk \ncolumns needed for grouping but not sorting from sort_input_target. But \nthe grouping path generation has some assumptions that the grouping \noutput target list includes all the grouping columns. I don't remember \nthe exact problem that made me give up on that, but it probably could be \nfixed.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 29 Dec 2023 19:38:47 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid computing ORDER BY junk columns unnecessarily"
},
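A small script to reproduce the GROUP BY/ORDER BY case described above. Whether the Sort above the aggregate ends up carrying only column a depends on costing and on the targetlist changes being discussed, so this is offered as a way to inspect the current behaviour rather than a claim about the resulting plan shape:

create table grp_t (a int, b int, c text);
insert into grp_t select g % 10, g % 100, repeat('x', 100)
  from generate_series(1, 100000) g;
analyze grp_t;

-- The sort key is a strict subset of the grouping keys; ideally the Sort
-- input would not need to carry b at all.
explain (verbose, costs off)
select count(*) from grp_t group by a, b order by a;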
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 29/12/2023 01:42, Tom Lane wrote:\n>> I didn't stop to trace the details but I'm pretty sure this is why\n>> you're getting the bogus extra projections shown in the 0005 patch.\n\n> They're not bogus. With the patches, projecting away the junk columns is \n> visible in the plan as an extra Result node, while currently it's done \n> as an implicit step in the executor. That seems fine and arguably an \n> even more honest representation of what's happening, although I don't \n> like the extra verbosity in EXPLAIN output.\n\nI didn't bring it up in my previous message, but I'm not really on\nboard with trying to get rid of the executor junkfilter. It seems\nto me that that code is probably faster than a general-purpose\nprojection, and surely faster than an extra Result node, so I think\nthat is not a goal that would improve matters.\n\nHowever, what I *was* trying to say is that I think those projections\noccur because the lower-level plan node is still outputting the\ncolumns you want to suppress, and then the planner realizes that it\nneeds a projection to create the shortened tlist. But that's not\nsaving any work, because we still computed the expensive function :-(.\nWe need a more thorough treatment of the problem to ensure that the\nlower-level plan nodes don't emit these columns either.\n\n> Related to this, we are not currently costing the target list evaluation \n> correctly for index-only scans.\n\nRight, and we're dumb in other ways too: if the index can return\nf(x) but not x, we fail to realize that we can use it for an IOS\nin the first place, because there's an assumption that we'd better\nbe able to fetch x. Currently I think the best way around that\nmight be via the other discussion that's going on about unifying\nregular indexscans and index-only scans [1]. If we do that then\nwe could postpone the decision about whether we actually need x\nitself, and perhaps that would simplify getting this right.\n\nI'm kind of inclined to think that it'd be best to push the\nother discussion forward first, and come back to this area\nwhen it's done, because that will be touching a lot of the\nsame code as well as (maybe) simplifying the planner's problem.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20230609000600.syqy447e6metnvyj%40awork3.anarazel.de\n\n\n",
"msg_date": "Fri, 29 Dec 2023 13:23:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid computing ORDER BY junk columns unnecessarily"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\", but it seems like\nthere were some CFbot test failures last time it was run [1]. Please\nhave a look and post an updated version if necessary.\n\n======\n[1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4717\n\n\n",
"msg_date": "Mon, 22 Jan 2024 15:11:43 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid computing ORDER BY junk columns unnecessarily"
}
] |
[
{
"msg_contents": "I found the following message introduced by a recent commit.\n\n> errdetail(\"The first unsummarized LSN is this range is %X/%X.\",\n\t \nShouldn't the \"is\" following \"LSN\" be \"in\"?\n\n\ndiff --git a/src/backend/backup/basebackup_incremental.c b/src/backend/backup/basebackup_incremental.c\nindex 42bbe564e2..22b861ce52 100644\n--- a/src/backend/backup/basebackup_incremental.c\n+++ b/src/backend/backup/basebackup_incremental.c\n@@ -575,7 +575,7 @@ PrepareForIncrementalBackup(IncrementalBackupInfo *ib,\n \t\t\t\t\t\t\t\ttle->tli,\n \t\t\t\t\t\t\t\tLSN_FORMAT_ARGS(tli_start_lsn),\n \t\t\t\t\t\t\t\tLSN_FORMAT_ARGS(tli_end_lsn)),\n-\t\t\t\t\t\t errdetail(\"The first unsummarized LSN is this range is %X/%X.\",\n+\t\t\t\t\t\t errdetail(\"The first unsummarized LSN in this range is %X/%X.\",\n \t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(tli_missing_lsn))));\n \t\t}\n \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Dec 2023 15:49:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "A typo in a messsage?"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 1:50 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> I found the following message introduced by a recent commit.\n>\n> > errdetail(\"The first unsummarized LSN is this range is %X/%X.\",\n>\n> Shouldn't the \"is\" following \"LSN\" be \"in\"?\n\nI think you're right, will push.\n\n\n",
"msg_date": "Sat, 23 Dec 2023 07:35:22 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A typo in a messsage?"
},
{
"msg_contents": "A new function check_control_file() in pg_combinebackup.c has the\nfollowing message.\n\n>\t\t\tpg_fatal(\"%s: crc is incorrect\", controlpath);\n\nI think \"crc\" should be in all uppercase in general and a brief\ngrep'ing told me that it is almost always or consistently used in\nuppercase in our tree.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 25 Dec 2023 14:51:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should \"CRC\" be in uppercase?"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 1:50 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> I found the following message introduced by a recent commit.\n>\n> > errdetail(\"The first unsummarized LSN is this range is %X/%X.\",\n>\n> Shouldn't the \"is\" following \"LSN\" be \"in\"?\n\nPushed.\n\n\n",
"msg_date": "Wed, 27 Dec 2023 13:34:50 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A typo in a messsage?"
},
{
"msg_contents": "On Mon, Dec 25, 2023 at 12:51 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> A new function check_control_file() in pg_combinebackup.c has the\n> following message.\n>\n> > pg_fatal(\"%s: crc is incorrect\", controlpath);\n>\n> I think \"crc\" should be in all uppercase in general and a brief\n> grep'ing told me that it is almost always or consistently used in\n> uppercase in our tree.\n\nI pushed this also.\n\n\n",
"msg_date": "Wed, 27 Dec 2023 13:35:24 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should \"CRC\" be in uppercase?"
},
{
"msg_contents": "At Wed, 27 Dec 2023 13:34:50 +0700, John Naylor <[email protected]> wrote in \n> > Shouldn't the \"is\" following \"LSN\" be \"in\"?\n> \n> Pushed.\n\nAt Wed, 27 Dec 2023 13:35:24 +0700, John Naylor <[email protected]> wrote in \n>> I think \"crc\" should be in all uppercase in general and a brief\n>> grep'ing told me that it is almost always or consistently used in\n>> uppercase in our tree.\n> I pushed this also.\n\nThanks for pushing them!\n\nregads.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Sun, 31 Dec 2023 18:36:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A typo in a messsage?"
}
] |
[
{
"msg_contents": "On 21/12/2023 12:10, Alexander Korotkov wrote:\n > I took a closer look at the patch in [9]. I should drop my argument\n > about breaking the model, because add_path() already considers other\n > aspects than just costs. But I have two more note about that patch:\n >\n > 1) It seems that you're determining the fact that the index path\n > should return strictly one row by checking path->rows <= 1.0 and\n > indexinfo->unique. Is it really guaranteed that in this case quals\n > are matching unique constraint? path->rows <= 1.0 could be just an\n > estimation error. Or one row could be correctly estimated, but it's\n > going to be selected by some quals matching unique constraint and\n > other quals in recheck. So, it seems there is a risk to select\n > suboptimal index due to this condition.\n\nOperating inside the optimizer, we consider all estimations to be the \nsooth. This patch modifies only one place: having two equal assumptions, \nwe just choose one that generally looks more stable.\nFiltered tuples should be calculated and included in the cost of the \npath. The decision on the equality of paths has been made in view of the \nestimation of these filtered tuples.\n\n > 2) Even for non-unique indexes this patch is putting new logic on top\n > of the subsequent code. How we can prove it's going to be a win?\n > That could lead, for instance, to dropping parallel-safe paths in\n > cases we didn't do so before.\nBecause we must trust all predictions made by the planner, we just \nchoose the most trustworthy path. According to the planner logic, it is \na path with a smaller selectivity. We can make mistakes anyway just \nbecause of the nature of estimation.\n\n > Anyway, please start a separate thread if you're willing to put more\n > work into this.\n\nDone\n\n > 9. https://www.postgresql.org/message-id/154f786a-06a0-4fb1-\n > b8a4-16c66149731b%40postgrespro.ru\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional",
"msg_date": "Fri, 22 Dec 2023 08:53:01 +0200",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimization outcome depends on the index order"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 8:53 AM Andrei Lepikhov <[email protected]>\nwrote:\n> On 21/12/2023 12:10, Alexander Korotkov wrote:\n> > I took a closer look at the patch in [9]. I should drop my argument\n> > about breaking the model, because add_path() already considers other\n> > aspects than just costs. But I have two more note about that patch:\n> >\n> > 1) It seems that you're determining the fact that the index path\n> > should return strictly one row by checking path->rows <= 1.0 and\n> > indexinfo->unique. Is it really guaranteed that in this case quals\n> > are matching unique constraint? path->rows <= 1.0 could be just an\n> > estimation error. Or one row could be correctly estimated, but it's\n> > going to be selected by some quals matching unique constraint and\n> > other quals in recheck. So, it seems there is a risk to select\n> > suboptimal index due to this condition.\n>\n> Operating inside the optimizer, we consider all estimations to be the\n> sooth. This patch modifies only one place: having two equal assumptions,\n> we just choose one that generally looks more stable.\n> Filtered tuples should be calculated and included in the cost of the\n> path. The decision on the equality of paths has been made in view of the\n> estimation of these filtered tuples.\n\nEven if estimates are accurate the conditions in the patch doesn't\nguarantee there is actually a unique condition.\n\n# create table t as select i/1000 a, i % 1000 b, i % 1000 c from\ngenerate_series(1,1000000) i;\n# create unique index t_unique_idx on t(a,b);\n# create index t_another_idx on t(a,c);\n# \\d t\n Table \"public.t\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | integer | | |\n c | integer | | |\nIndexes:\n \"t_another_idx\" btree (a, c)\n \"t_unique_idx\" UNIQUE, btree (a, b)\n# set enable_bitmapscan = off; explain select * from t where a = 1 and c =\n1;\nSET\nTime: 0.459 ms\n QUERY PLAN\n--------------------------------------------------------------------------\n Index Scan using t_unique_idx on t (cost=0.42..1635.16 rows=1 width=12)\n Index Cond: (a = 1)\n Filter: (c = 1)\n(3 rows)\n\n\n> > 2) Even for non-unique indexes this patch is putting new logic on top\n> > of the subsequent code. How we can prove it's going to be a win?\n> > That could lead, for instance, to dropping parallel-safe paths in\n> > cases we didn't do so before.\n> Because we must trust all predictions made by the planner, we just\n> choose the most trustworthy path. According to the planner logic, it is\n> a path with a smaller selectivity. We can make mistakes anyway just\n> because of the nature of estimation.\n\nEven if we need to take selectivity into account here, it's still not clear\nwhy this should be on top of other logic later in add_path().\n\n> > Anyway, please start a separate thread if you're willing to put more\n> > work into this.\n>\n> Done\n\nThanks.\n\n------\nRegards,\nAlexander Korotkov\n\nOn Fri, Dec 22, 2023 at 8:53 AM Andrei Lepikhov <[email protected]> wrote:> On 21/12/2023 12:10, Alexander Korotkov wrote:> > I took a closer look at the patch in [9]. I should drop my argument> > about breaking the model, because add_path() already considers other> > aspects than just costs. But I have two more note about that patch:> >> > 1) It seems that you're determining the fact that the index path> > should return strictly one row by checking path->rows <= 1.0 and> > indexinfo->unique. 
Is it really guaranteed that in this case quals> > are matching unique constraint? path->rows <= 1.0 could be just an> > estimation error. Or one row could be correctly estimated, but it's> > going to be selected by some quals matching unique constraint and> > other quals in recheck. So, it seems there is a risk to select> > suboptimal index due to this condition.>> Operating inside the optimizer, we consider all estimations to be the> sooth. This patch modifies only one place: having two equal assumptions,> we just choose one that generally looks more stable.> Filtered tuples should be calculated and included in the cost of the> path. The decision on the equality of paths has been made in view of the> estimation of these filtered tuples.Even if estimates are accurate the conditions in the patch doesn't guarantee there is actually a unique condition.# create table t as select i/1000 a, i % 1000 b, i % 1000 c from generate_series(1,1000000) i;# create unique index t_unique_idx on t(a,b);# create index t_another_idx on t(a,c);# \\d t Table \"public.t\" Column | Type | Collation | Nullable | Default--------+---------+-----------+----------+--------- a | integer | | | b | integer | | | c | integer | | |Indexes: \"t_another_idx\" btree (a, c) \"t_unique_idx\" UNIQUE, btree (a, b)# set enable_bitmapscan = off; explain select * from t where a = 1 and c = 1;SETTime: 0.459 ms QUERY PLAN-------------------------------------------------------------------------- Index Scan using t_unique_idx on t (cost=0.42..1635.16 rows=1 width=12) Index Cond: (a = 1) Filter: (c = 1)(3 rows) > > 2) Even for non-unique indexes this patch is putting new logic on top> > of the subsequent code. How we can prove it's going to be a win?> > That could lead, for instance, to dropping parallel-safe paths in> > cases we didn't do so before.> Because we must trust all predictions made by the planner, we just> choose the most trustworthy path. According to the planner logic, it is> a path with a smaller selectivity. We can make mistakes anyway just> because of the nature of estimation.Even if we need to take selectivity into account here, it's still not clear why this should be on top of other logic later in add_path().> > Anyway, please start a separate thread if you're willing to put more> > work into this.>> DoneThanks.------Regards,Alexander Korotkov",
"msg_date": "Fri, 22 Dec 2023 11:48:06 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization outcome depends on the index order"
},
{
"msg_contents": "On 22/12/2023 11:48, Alexander Korotkov wrote:\n> > Because we must trust all predictions made by the planner, we just\n> > choose the most trustworthy path. According to the planner logic, it is\n> > a path with a smaller selectivity. We can make mistakes anyway just\n> > because of the nature of estimation.\n> \n> Even if we need to take selectivity into account here, it's still not \n> clear why this should be on top of other logic later in add_path().\nI got your point now, thanks for pointing it out. In the next version of \nthe patch selectivity is used as a criteria only in the case of COSTS_EQUAL.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional",
"msg_date": "Fri, 22 Dec 2023 19:24:37 +0200",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimization outcome depends on the index order"
},
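To make the tie-break concrete, here is a simplified, self-contained C sketch of the rule being described: selectivity is consulted only when the cost comparison reports equality. All names are invented for illustration and deliberately do not mirror the actual add_path() code or the patch:

#include <stdbool.h>

typedef struct CandidatePath
{
	double	total_cost;
	double	selectivity;	/* estimated fraction of rows passing the quals */
} CandidatePath;

#define COST_FUZZ 1.01		/* treat costs within 1% as equal, add_path-style */

/* Return true if 'a' should be preferred over 'b'. */
static bool
prefer_path(const CandidatePath *a, const CandidatePath *b)
{
	if (a->total_cost > b->total_cost * COST_FUZZ)
		return false;		/* a is clearly more expensive */
	if (b->total_cost > a->total_cost * COST_FUZZ)
		return true;		/* b is clearly more expensive */

	/*
	 * Costs are considered equal: prefer the path with the smaller
	 * selectivity, i.e. the estimate treated as more trustworthy in this
	 * discussion.
	 */
	return a->selectivity <= b->selectivity;
}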
{
"msg_contents": "On Fri, Dec 22, 2023 at 7:24 PM Andrei Lepikhov\n<[email protected]> wrote:\n> On 22/12/2023 11:48, Alexander Korotkov wrote:\n> > > Because we must trust all predictions made by the planner, we just\n> > > choose the most trustworthy path. According to the planner logic, it is\n> > > a path with a smaller selectivity. We can make mistakes anyway just\n> > > because of the nature of estimation.\n> >\n> > Even if we need to take selectivity into account here, it's still not\n> > clear why this should be on top of other logic later in add_path().\n> I got your point now, thanks for pointing it out. In the next version of\n> the patch selectivity is used as a criteria only in the case of COSTS_EQUAL.\n\nIt looks better now. But it's hard for me to judge these heuristics\nin add_path(). Tom, what do you think about this?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 25 Dec 2023 13:36:37 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization outcome depends on the index order"
},
{
"msg_contents": "On 25/12/2023 18:36, Alexander Korotkov wrote:\n> On Fri, Dec 22, 2023 at 7:24 PM Andrei Lepikhov\n> <[email protected]> wrote:\n>> On 22/12/2023 11:48, Alexander Korotkov wrote:\n>>> > Because we must trust all predictions made by the planner, we just\n>>> > choose the most trustworthy path. According to the planner logic, it is\n>>> > a path with a smaller selectivity. We can make mistakes anyway just\n>>> > because of the nature of estimation.\n>>>\n>>> Even if we need to take selectivity into account here, it's still not\n>>> clear why this should be on top of other logic later in add_path().\n>> I got your point now, thanks for pointing it out. In the next version of\n>> the patch selectivity is used as a criteria only in the case of COSTS_EQUAL.\n> \n> It looks better now. But it's hard for me to judge these heuristics\n> in add_path(). Tom, what do you think about this?\nJust food for thought:\nAnother approach I have considered was to initialize the relation index \nlist according to some consistent rule: place unique indexes at the head \nof the list, arrange indexes according to the number of columns involved \nand sort in some lexical order.\nBut IMO, the implemented approach looks better.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 26 Dec 2023 11:53:34 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimization outcome depends on the index order"
}
] |
[
{
"msg_contents": "Hello.\nThere is date_trunc(interval, timestamptz, timezone) function.\nFirst parameter can be '5 year', '2 month', '6 hour', '3 hour', '15 \nminute', '10 second' etc.\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Fri, 22 Dec 2023 20:26:15 +0100",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "date_trunc function in interval version"
},
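For readers skimming the thread, this is the call shape the patch proposes. The interval-taking variant is not part of core PostgreSQL, so these statements only work with the patch applied:

-- proposed: date_trunc(interval, timestamptz [, timezone])
SELECT date_trunc('15 minutes'::interval, now());
SELECT date_trunc('6 hours'::interval,
                  timestamptz '2023-12-22 20:26:15+01', 'Europe/Warsaw');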
{
"msg_contents": "Hi\n\npá 22. 12. 2023 v 20:26 odesílatel Przemysław Sztoch <[email protected]>\nnapsal:\n\n> Hello.\n> There is date_trunc(interval, timestamptz, timezone) function.\n> First parameter can be '5 year', '2 month', '6 hour', '3 hour', '15\n> minute', '10 second' etc.\n>\n\nshould not be named interval_trunc instead? In this case the good name can\nbe hard to choose, but with the name date_trunc it can be hard to find it.\n\nRegards\n\nPavel\n\n\n> --\n> Przemysław Sztoch | Mobile +48 509 99 00 66\n>\n\nHipá 22. 12. 2023 v 20:26 odesílatel Przemysław Sztoch <[email protected]> napsal:\n\nHello.\nThere is date_trunc(interval, timestamptz, timezone) function.\nFirst parameter can be '5 year', '2 month', '6 hour', '3 hour', '15 \nminute', '10 second' etc.should not be named interval_trunc instead? In this case the good name can be hard to choose, but with the name date_trunc it can be hard to find it.RegardsPavel \n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Fri, 22 Dec 2023 20:43:45 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date_trunc function in interval version"
},
{
"msg_contents": "In my opinion date_trunc is very good name.\nTruncated data is timestamp type, not interval.\nFirst parameter has same meaning in original date_trunc and in my new \nversion.\nNew version provides only more granularity.\n\nPavel Stehule wrote on 12/22/2023 8:43 PM:\n> Hi\n>\n> pá 22. 12. 2023 v 20:26 odesílatel Przemysław Sztoch \n> <[email protected] <mailto:[email protected]>> napsal:\n>\n> Hello.\n> There is date_trunc(interval, timestamptz, timezone) function.\n> First parameter can be '5 year', '2 month', '6 hour', '3 hour',\n> '15 minute', '10 second' etc.\n>\n>\n> should not be named interval_trunc instead? In this case the good name \n> can be hard to choose, but with the name date_trunc it can be hard to \n> find it.\n>\n> Regards\n>\n> Pavel\n>\n> -- \n> Przemysław Sztoch | Mobile +48 509 99 00 66\n>\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66\n\n\n\nIn my opinion date_trunc is very good name.\nTruncated data is timestamp type, not interval.\nFirst parameter has same meaning in original date_trunc and in my new \nversion.\nNew version provides only more granularity.\n\nPavel Stehule wrote on 12/22/2023 8:43 PM:\n\n\nHipá 22. 12. 2023 v 20:26 odesílatel \nPrzemysław Sztoch <[email protected]> napsal:\nHello.\nThere is date_trunc(interval, timestamptz, timezone) function.\nFirst parameter can be '5 year', '2 month', '6 hour', '3 hour', '15 \nminute', '10 second' etc.should\n not be named interval_trunc instead? In this case the good name can be \nhard to choose, but with the name date_trunc it can be hard to find it.RegardsPavel \n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66\n\n\n\n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Fri, 22 Dec 2023 23:25:51 +0100",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: date_trunc function in interval version"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 5:26 AM Przemysław Sztoch <[email protected]> wrote:\n>\n> In my opinion date_trunc is very good name.\n> Truncated data is timestamp type, not interval.\n> First parameter has same meaning in original date_trunc and in my new version.\n> New version provides only more granularity.\n\nI haven't looked at the patch, but your description sounds awfully\nclose to date_bin(), which already takes an arbitrary interval.\n\n\n",
"msg_date": "Sat, 23 Dec 2023 07:32:31 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date_trunc function in interval version"
},
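For comparison, the existing date_bin() that John refers to takes an arbitrary stride but also requires an explicit origin:

-- date_bin(stride, source, origin), available since PostgreSQL 14
SELECT date_bin('15 minutes',
                timestamp '2023-12-22 20:26:15',
                timestamp '2001-01-01');
-- result: 2023-12-22 20:15:00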
{
"msg_contents": "Hi\n\npá 22. 12. 2023 v 23:25 odesílatel Przemysław Sztoch <[email protected]>\nnapsal:\n\n> In my opinion date_trunc is very good name.\n> Truncated data is timestamp type, not interval.\n> First parameter has same meaning in original date_trunc and in my new\n> version.\n> New version provides only more granularity.\n>\n\nok, I miss it.\n\nRegards\n\nPavel\n\n\n>\n> Pavel Stehule wrote on 12/22/2023 8:43 PM:\n>\n> Hi\n>\n> pá 22. 12. 2023 v 20:26 odesílatel Przemysław Sztoch <[email protected]>\n> napsal:\n>\n>> Hello.\n>> There is date_trunc(interval, timestamptz, timezone) function.\n>> First parameter can be '5 year', '2 month', '6 hour', '3 hour', '15\n>> minute', '10 second' etc.\n>>\n>\n> should not be named interval_trunc instead? In this case the good name can\n> be hard to choose, but with the name date_trunc it can be hard to find it.\n>\n> Regards\n>\n> Pavel\n>\n>\n>> --\n>> Przemysław Sztoch | Mobile +48 509 99 00 66\n>>\n>\n> --\n> Przemysław Sztoch | Mobile +48 509 99 00 66\n>\n\nHipá 22. 12. 2023 v 23:25 odesílatel Przemysław Sztoch <[email protected]> napsal:\nIn my opinion date_trunc is very good name.\nTruncated data is timestamp type, not interval.\nFirst parameter has same meaning in original date_trunc and in my new \nversion.\nNew version provides only more granularity.ok, I miss it.RegardsPavel \n\nPavel Stehule wrote on 12/22/2023 8:43 PM:\n\nHipá 22. 12. 2023 v 20:26 odesílatel \nPrzemysław Sztoch <[email protected]> napsal:\nHello.\nThere is date_trunc(interval, timestamptz, timezone) function.\nFirst parameter can be '5 year', '2 month', '6 hour', '3 hour', '15 \nminute', '10 second' etc.should\n not be named interval_trunc instead? In this case the good name can be \nhard to choose, but with the name date_trunc it can be hard to find it.RegardsPavel \n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66\n\n\n\n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Sat, 23 Dec 2023 13:33:50 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date_trunc function in interval version"
},
{
"msg_contents": "so 23. 12. 2023 v 13:33 odesílatel Pavel Stehule <[email protected]>\nnapsal:\n\n> Hi\n>\n> pá 22. 12. 2023 v 23:25 odesílatel Przemysław Sztoch <[email protected]>\n> napsal:\n>\n>> In my opinion date_trunc is very good name.\n>> Truncated data is timestamp type, not interval.\n>> First parameter has same meaning in original date_trunc and in my new\n>> version.\n>> New version provides only more granularity.\n>>\n>\n> ok, I miss it.\n>\n\nI was confused - I am sorry, I imagined something different. Then the name\nis correct.\n\n\n\nRegards\n\nPavel\n\n\n>\n> Pavel Stehule wrote on 12/22/2023 8:43 PM:\n>\n> Hi\n>\n> pá 22. 12. 2023 v 20:26 odesílatel Przemysław Sztoch <[email protected]>\n> napsal:\n>\n>> Hello.\n>> There is date_trunc(interval, timestamptz, timezone) function.\n>> First parameter can be '5 year', '2 month', '6 hour', '3 hour', '15\n>> minute', '10 second' etc.\n>>\n>\n> should not be named interval_trunc instead? In this case the good name can\n> be hard to choose, but with the name date_trunc it can be hard to find it.\n>\n> Regards\n>\n> Pavel\n>\n>\n>> --\n>> Przemysław Sztoch | Mobile +48 509 99 00 66\n>>\n>\n> --\n> Przemysław Sztoch | Mobile +48 509 99 00 66\n>\n\nso 23. 12. 2023 v 13:33 odesílatel Pavel Stehule <[email protected]> napsal:Hipá 22. 12. 2023 v 23:25 odesílatel Przemysław Sztoch <[email protected]> napsal:\nIn my opinion date_trunc is very good name.\nTruncated data is timestamp type, not interval.\nFirst parameter has same meaning in original date_trunc and in my new \nversion.\nNew version provides only more granularity.ok, I miss it.I was confused - I am sorry, I imagined something different. Then the name is correct.RegardsPavel \n\nPavel Stehule wrote on 12/22/2023 8:43 PM:\n\nHipá 22. 12. 2023 v 20:26 odesílatel \nPrzemysław Sztoch <[email protected]> napsal:\nHello.\nThere is date_trunc(interval, timestamptz, timezone) function.\nFirst parameter can be '5 year', '2 month', '6 hour', '3 hour', '15 \nminute', '10 second' etc.should\n not be named interval_trunc instead? In this case the good name can be \nhard to choose, but with the name date_trunc it can be hard to find it.RegardsPavel \n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66\n\n\n\n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Sat, 23 Dec 2023 13:36:10 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date_trunc function in interval version"
},
{
"msg_contents": "date_bin has big problem with DST.\nIn example, if you put origin in winter zone, then generated bin will be \nincorrect for summer input date.\n\ndate_trunc is resistant for this problem.\nMy version of date_trunc is additionally more flexible, you can select \nmore granular interval, 12h, 8h, 6h, 15min, 10 min etc...\n\nJohn Naylor wrote on 23.12.2023 01:32:\n> On Sat, Dec 23, 2023 at 5:26 AM Przemysław Sztoch <[email protected]> wrote:\n>> In my opinion date_trunc is very good name.\n>> Truncated data is timestamp type, not interval.\n>> First parameter has same meaning in original date_trunc and in my new version.\n>> New version provides only more granularity.\n> I haven't looked at the patch, but your description sounds awfully\n> close to date_bin(), which already takes an arbitrary interval.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66\n\n\n\ndate_bin has big problem with DST.\nIn example, if you put origin in winter zone, then generated bin will be\n incorrect for summer input date.\n\ndate_trunc is resistant for this problem.\nMy version of date_trunc is additionally more flexible, you can select \nmore granular interval, 12h, 8h, 6h, 15min, 10 min etc... \n\nJohn Naylor wrote on 23.12.2023 01:32:\n\nOn Sat, Dec 23, 2023 at 5:26 AM Przemysław Sztoch <[email protected]> wrote:\n\n\nIn my opinion date_trunc is very good name.\nTruncated data is timestamp type, not interval.\nFirst parameter has same meaning in original date_trunc and in my new version.\nNew version provides only more granularity.\n\n\nI haven't looked at the patch, but your description sounds awfully\nclose to date_bin(), which already takes an arbitrary interval.\n\n\n\n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Sat, 23 Dec 2023 23:45:23 +0100",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: date_trunc function in interval version"
},
{
"msg_contents": "Hi,\n\nPlease don't too-post on this list. The custom is to bottom-post or\nreply inline, and it's much easier to follow such replies.\n\nOn 12/23/23 23:45, Przemysław Sztoch wrote:\n> date_bin has big problem with DST.\n> In example, if you put origin in winter zone, then generated bin will be\n> incorrect for summer input date.\n> \n> date_trunc is resistant for this problem.\n> My version of date_trunc is additionally more flexible, you can select\n> more granular interval, 12h, 8h, 6h, 15min, 10 min etc...\n> \n\nI'm not very familiar with date_bin(), but is this issue inherent or\ncould we maybe fix date_bin() to handle DST better?\n\nIn particular, isn't part of the problem that date_bin() is defined only\nfor timestamp and not for timestamptz? Also, date_trunc() allows to\nspecify a timezone, but date_bin() does not.\n\n\nIn any case, the patch needs to add the new stuff to the SGML docs (to\ndoc/src/sgml/func.sgml), which now documents the date_trunc(text,...)\nvariant only.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 18 Feb 2024 01:29:08 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date_trunc function in interval version"
},
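The three-argument form Tomas mentions does exist for date_trunc() (since PostgreSQL 14), which is exactly the asymmetry with date_bin():

-- date_trunc(text, timestamptz, timezone) truncates in the named zone
SELECT date_trunc('day', timestamptz '2024-05-05 11:22:33+00', 'Europe/Warsaw');
-- local midnight in Warsaw, i.e. 2024-05-05 00:00:00+02 (2024-05-04 22:00:00 UTC)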
{
"msg_contents": "\n\n> On 18 Feb 2024, at 05:29, Tomas Vondra <[email protected]> wrote:\n> \n> I'm not very familiar with date_bin(), but is this issue inherent or\n> could we maybe fix date_bin() to handle DST better?\n> \n> In particular, isn't part of the problem that date_bin() is defined only\n> for timestamp and not for timestamptz? Also, date_trunc() allows to\n> specify a timezone, but date_bin() does not.\n> \n> \n> In any case, the patch needs to add the new stuff to the SGML docs (to\n> doc/src/sgml/func.sgml), which now documents the date_trunc(text,...)\n> variant only.\n\nHi Przemysław,\n\nPlease address above notes.\nI’ve flipped CF entry [0] to “Waiting on author”, feel free to switch it back.\n\nThank you!\n\n\nBest regards, Andrey Borodin.\n[0] https://commitfest.postgresql.org/47/4761/\n\n",
"msg_date": "Mon, 4 Mar 2024 11:38:01 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date_trunc function in interval version"
},
{
"msg_contents": "Tomas Vondra wrote on 18.02.2024 01:29:\n> Hi,\n>\n> Please don't too-post on this list. The custom is to bottom-post or\n> reply inline, and it's much easier to follow such replies.\n>\n> On 12/23/23 23:45, Przemysław Sztoch wrote:\n>> date_bin has big problem with DST.\n>> In example, if you put origin in winter zone, then generated bin will be\n>> incorrect for summer input date.\n>>\n>> date_trunc is resistant for this problem.\n>> My version of date_trunc is additionally more flexible, you can select\n>> more granular interval, 12h, 8h, 6h, 15min, 10 min etc...\n>>\n> I'm not very familiar with date_bin(), but is this issue inherent or\n> could we maybe fix date_bin() to handle DST better?\nApparently the functionality is identical to date_bin.\nWhen I saw date_bin in the documentation, I thought it solved all my \nproblems.\nUnfortunately, DST problems have many corner cases.\nI tried to change date_bin several times, but unfortunately in some \ncases it would start working differently than before.\n\n> In any case, the patch needs to add the new stuff to the SGML docs (to\n> doc/src/sgml/func.sgml), which now documents the date_trunc(text,...)\n> variant only.\nUpdated.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Mon, 4 Mar 2024 11:03:14 +0100",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: date_trunc function in interval version"
},
{
"msg_contents": "On Mon, Mar 4, 2024 at 5:03 AM Przemysław Sztoch <[email protected]> wrote:\n> Apparently the functionality is identical to date_bin.\n> When I saw date_bin in the documentation, I thought it solved all my problems.\n> Unfortunately, DST problems have many corner cases.\n> I tried to change date_bin several times, but unfortunately in some cases it would start working differently than before.\n\nSo, first of all, thanks for taking an interest and sending a patch.\n\nIn order for the patch to have a chance of being accepted, we would\nneed to have a clear understanding of exactly how this patch is\ndifferent from the existing date_bin(). If we knew that, we could\ndecide either that (a) date_bin does the right thing and your patch\ndoes the wrong thing and therefore we should reject your patch, or we\ncould decide that (b) date_bin does the wrong thing and therefore we\nshould fix it, or we could decide that (c) both date_bin and what this\npatch does are correct, in the sense of being sensible things to do,\nand there is a reason to have both. But if we don't really understand\nhow they are different, which seems to be the case right now, then we\ncan't make any decisions. And what that means in practice is that\nnobody is going to be willing to commit anything, and we're just going\nto go around in circles.\n\nTypically, this kind of research is the responsibility of the patch\nauthor: you're the one who wants something changed, so that means you\nneed to provide convincing evidence that it should be. If someone else\nvolunteers to do it, that's also cool, but it absolutely has to be\ndone in order for there to be a chance of progress here. No committer\nis going to say \"well, we already have date_bin, but Przemysław says\nhis date_trunc is different somehow, so let's have both without\nunderstanding how exactly they're different.\" That's just not a\nrealistic scenario. Just to name one problem, how would we document\neach of them? Users would expect the documentation to explain how two\nclosely-related functions differ, but we will be unable to explain\nthat if we don't know the answer ourselves.\n\nIf you can't figure out exactly what the differences are by code\ninspection, then maybe one thing you could do to help unblock things\nhere is provide some very clear examples of when they deliver the same\nresults and when they deliver different results. Although there are no\nguarantees, that might lead somebody else to jump in and suggest an\nexplanation, or further avenues of analysis, or some other helpful\ncomment.\n\nPersonally, what I suspect is that there's already a way to do what\nyou want using date_bin(), maybe in conjunction with some casting or\nsome calls to other functions that we already have. But it's hard to\nbe sure because we just don't have the details. \"DST problems have\nmany corner cases\" and \"in some cases [date_bin] would start working\ndifferently than before\" may be true statements as far as they go, but\nthey're not very specific complaints. If you can describe *exactly*\nhow date_bin fails to meet your expectations, there is a much better\nchance that something useful will happen here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 15:29:46 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date_trunc function in interval version"
},
{
"msg_contents": "Robert Haas wrote on 5/15/2024 9:29 PM:\n> On Mon, Mar 4, 2024 at 5:03 AM Przemysław Sztoch <[email protected]> wrote:\n>> Apparently the functionality is identical to date_bin.\n>> When I saw date_bin in the documentation, I thought it solved all my problems.\n>> Unfortunately, DST problems have many corner cases.\n>> I tried to change date_bin several times, but unfortunately in some cases it would start working differently than before.\n> So, first of all, thanks for taking an interest and sending a patch.\n>\n> In order for the patch to have a chance of being accepted, we would\n> need to have a clear understanding of exactly how this patch is\n> different from the existing date_bin(). If we knew that, we could\n> decide either that (a) date_bin does the right thing and your patch\n> does the wrong thing and therefore we should reject your patch, or we\n> could decide that (b) date_bin does the wrong thing and therefore we\n> should fix it, or we could decide that (c) both date_bin and what this\n> patch does are correct, in the sense of being sensible things to do,\n> and there is a reason to have both. But if we don't really understand\n> how they are different, which seems to be the case right now, then we\n> can't make any decisions. And what that means in practice is that\n> nobody is going to be willing to commit anything, and we're just going\n> to go around in circles.\n>\n> Typically, this kind of research is the responsibility of the patch\n> author: you're the one who wants something changed, so that means you\n> need to provide convincing evidence that it should be. If someone else\n> volunteers to do it, that's also cool, but it absolutely has to be\n> done in order for there to be a chance of progress here. No committer\n> is going to say \"well, we already have date_bin, but Przemysław says\n> his date_trunc is different somehow, so let's have both without\n> understanding how exactly they're different.\" That's just not a\n> realistic scenario. Just to name one problem, how would we document\n> each of them? Users would expect the documentation to explain how two\n> closely-related functions differ, but we will be unable to explain\n> that if we don't know the answer ourselves.\n>\n> If you can't figure out exactly what the differences are by code\n> inspection, then maybe one thing you could do to help unblock things\n> here is provide some very clear examples of when they deliver the same\n> results and when they deliver different results. Although there are no\n> guarantees, that might lead somebody else to jump in and suggest an\n> explanation, or further avenues of analysis, or some other helpful\n> comment.\n>\n> Personally, what I suspect is that there's already a way to do what\n> you want using date_bin(), maybe in conjunction with some casting or\n> some calls to other functions that we already have. But it's hard to\n> be sure because we just don't have the details. \"DST problems have\n> many corner cases\" and \"in some cases [date_bin] would start working\n> differently than before\" may be true statements as far as they go, but\n> they're not very specific complaints. If you can describe *exactly*\n> how date_bin fails to meet your expectations, there is a much better\n> chance that something useful will happen here.\n>\nI would also like to thank Robert for presenting the matter in detail.\n\nMy function date_trunc ( interval, timestamp, ...) is similar to \noriginal function date_trunc ( text, timestamp ...) 
.\n\nMy extension only gives more granularity.\nWe don't have a jump from hour to day. We can use 6h and 12h. It's the \nsame with minutes.\nWe can round to 30 minutes, 20 minutes, 15 minutes, etc.\n\nUsing date_bin has a similar effect, but requires specifying the origin. \nAccording to this origin,\nsubsequent buckets are then calculated. The need to provide this origin \nis sometimes a very big problem.\nEspecially since you cannot use one origin when changing from summer to \nwinter time.\n\nIf we use one origin for example begin of year: 2024-01-01 00:00:00 then:\n# SET timezone='Europe/Warsaw';\n# SELECT date_bin('1 day', '2024-03-05 11:22:33', '2024-01-01 \n00:00:00'), date_trunc('day', '2024-03-05 11:22:33'::timestamptz);\n2024-03-05 00:00:00+01   2024-03-05 00:00:00+01   date_bin works ok, \nbecause we are before DST\n# SELECT date_bin('1 day', '2024-05-05 11:22:33', '2024-01-01 \n00:00:00'), date_trunc('day', '2024-05-05 11:22:33'::timestamptz);\n2024-05-05 01:00:00+02   2024-05-05 00:00:00+02   date_bin has problem, \nbecause we are in May after DST\n\nIf anyone has an idea how to make date_bin work like date_trunc, please \nprovide an example.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Sat, 18 May 2024 23:19:56 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: date_trunc function in interval version"
},
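One way to get date_trunc-like, DST-aware behaviour out of the existing date_bin() is to bin on local wall-clock time and convert back afterwards. This is a sketch of a possible workaround, not something taken from the patch or the thread; the zone and origin are illustrative, and the ambiguous hour at the autumn fall-back still needs care:

SET timezone = 'Europe/Warsaw';
-- Convert to local wall-clock time, bin there, then interpret the result
-- as local time again; the bin boundary then follows the DST shift the
-- same way date_trunc('day', ...) does.
SELECT date_bin('1 day',
                timestamptz '2024-05-05 11:22:33' AT TIME ZONE 'Europe/Warsaw',
                timestamp '2001-01-01') AT TIME ZONE 'Europe/Warsaw';
-- 2024-05-05 00:00:00+02, matching date_trunc('day', timestamptz '2024-05-05 11:22:33')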
{
"msg_contents": "On Sun, May 19, 2024 at 2:20 AM Przemysław Sztoch <[email protected]>\nwrote:\n\n> Robert Haas wrote on 5/15/2024 9:29 PM:\n>\n> On Mon, Mar 4, 2024 at 5:03 AM Przemysław Sztoch <[email protected]> <[email protected]> wrote:\n>\n> Apparently the functionality is identical to date_bin.\n> When I saw date_bin in the documentation, I thought it solved all my problems.\n> Unfortunately, DST problems have many corner cases.\n> I tried to change date_bin several times, but unfortunately in some cases it would start working differently than before.\n>\n> So, first of all, thanks for taking an interest and sending a patch.\n>\n> In order for the patch to have a chance of being accepted, we would\n> need to have a clear understanding of exactly how this patch is\n> different from the existing date_bin(). If we knew that, we could\n> decide either that (a) date_bin does the right thing and your patch\n> does the wrong thing and therefore we should reject your patch, or we\n> could decide that (b) date_bin does the wrong thing and therefore we\n> should fix it, or we could decide that (c) both date_bin and what this\n> patch does are correct, in the sense of being sensible things to do,\n> and there is a reason to have both. But if we don't really understand\n> how they are different, which seems to be the case right now, then we\n> can't make any decisions. And what that means in practice is that\n> nobody is going to be willing to commit anything, and we're just going\n> to go around in circles.\n>\n> Typically, this kind of research is the responsibility of the patch\n> author: you're the one who wants something changed, so that means you\n> need to provide convincing evidence that it should be. If someone else\n> volunteers to do it, that's also cool, but it absolutely has to be\n> done in order for there to be a chance of progress here. No committer\n> is going to say \"well, we already have date_bin, but Przemysław says\n> his date_trunc is different somehow, so let's have both without\n> understanding how exactly they're different.\" That's just not a\n> realistic scenario. Just to name one problem, how would we document\n> each of them? Users would expect the documentation to explain how two\n> closely-related functions differ, but we will be unable to explain\n> that if we don't know the answer ourselves.\n>\n> If you can't figure out exactly what the differences are by code\n> inspection, then maybe one thing you could do to help unblock things\n> here is provide some very clear examples of when they deliver the same\n> results and when they deliver different results. Although there are no\n> guarantees, that might lead somebody else to jump in and suggest an\n> explanation, or further avenues of analysis, or some other helpful\n> comment.\n>\n> Personally, what I suspect is that there's already a way to do what\n> you want using date_bin(), maybe in conjunction with some casting or\n> some calls to other functions that we already have. But it's hard to\n> be sure because we just don't have the details. \"DST problems have\n> many corner cases\" and \"in some cases [date_bin] would start working\n> differently than before\" may be true statements as far as they go, but\n> they're not very specific complaints. 
If you can describe *exactly*\n> how date_bin fails to meet your expectations, there is a much better\n> chance that something useful will happen here.\n>\n>\n> I would also like to thank Robert for presenting the matter in detail.\n>\n> My function date_trunc ( interval, timestamp, ...) is similar to original\n> function date_trunc ( text, timestamp ...) .\n>\n> My extension only gives more granularity.\n> We don't have a jump from hour to day. We can use 6h and 12h. It's the\n> same with minutes.\n> We can round to 30 minutes, 20 minutes, 15 minutes, etc.\n>\n> Using date_bin has a similar effect, but requires specifying the origin.\n> According to this origin,\n> subsequent buckets are then calculated. The need to provide this origin is\n> sometimes a very big problem.\n> Especially since you cannot use one origin when changing from summer to\n> winter time.\n>\n> If we use one origin for example begin of year: 2024-01-01 00:00:00 then:\n> # SET timezone='Europe/Warsaw';\n> # SELECT date_bin('1 day', '2024-03-05 11:22:33', '2024-01-01 00:00:00'),\n> date_trunc('day', '2024-03-05 11:22:33'::timestamptz);\n> 2024-03-05 00:00:00+01    2024-03-05 00:00:00+01    date_bin works ok,\n> because we are before DST\n> # SELECT date_bin('1 day', '2024-05-05 11:22:33', '2024-01-01 00:00:00'),\n> date_trunc('day', '2024-05-05 11:22:33'::timestamptz);\n> 2024-05-05 01:00:00+02    2024-05-05 00:00:00+02    date_bin has\n> problem, because we are in May after DST\n>\n> If anyone has an idea how to make date_bin work like date_trunc, please\n> provide an example.\n>\n>\nHere is an example which will make date_bin() to behave like date_trunc():\n# SELECT date_bin('1 day', '2024-05-05 11:22:33', '0001-01-01'::timestamp),\ndate_trunc('day', '2024-05-05 11:22:33'::timestamptz);\n      date_bin       |       date_trunc\n---------------------+------------------------\n 2024-05-05 00:00:00 | 2024-05-05 00:00:00+02\n(1 row)\n\nIn general, to make date_bin work similarly to date_trunc in PostgreSQL,\nyou need to set the interval length appropriately and use an origin\ntimestamp that aligns with the start of the interval you want to bin.\n\nHere's how you can use date_bin to mimic the behavior of date_trunc:\n\nTruncate to the Start of the Year:\n# SELECT date_bin('1 year', timestamp_column, '0001-01-01'::timestamp) FROM\nyour_table;\nTruncate to the Start of the Month:\n# SELECT date_bin('1 month', timestamp_column, '0001-01-01'::timestamp)\nFROM your_table;\nTruncate to the Start of the Day:\n# SELECT date_bin('1 day', timestamp_column, '0001-01-01'::timestamp) FROM\nyour_table;\nTruncate to the Start of the Hour:\n# SELECT date_bin('1 hour', timestamp_column, '0001-01-01'::timestamp) FROM\nyour_table;\nTruncate to the Start of the Minute:\n# SELECT date_bin('1 minute', timestamp_column, '0001-01-01'::timestamp)\nFROM your_table;\n\n\n-- \n> Przemysław Sztoch | Mobile +48 509 99 00 66\n>\n",
"msg_date": "Sun, 19 May 2024 03:03:51 +0500",
"msg_from": "Yasir <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: date_trunc function in interval version"
},
{
"msg_contents": "Yasir wrote on 19.05.2024 00:03:\n>\n> I would also like to thank Robert for presenting the matter in detail.\n>\n> My function date_trunc ( interval, timestamp, ...) is similar to\n> original function date_trunc ( text, timestamp ...) .\n>\n> My extension only gives more granularity.\n> We don't have a jump from hour to day. We can use 6h and 12h. It's\n> the same with minutes.\n> We can round to 30 minutes, 20minutes, 15 minutes, etc.\n>\n> Using date_bin has a similar effect, but requires specifying the\n> origin. According to this origin,\n> subsequent buckets are then calculated. The need to provide this\n> origin is sometimes a very big problem.\n> Especially since you cannot use one origin when changing from\n> summer to winter time.\n>\n> If we use one origin for example begin of year: 2024-01-01\n> 00:00:00 then:\n> # SET timezone='Europe/Warsaw';\n> # SELECT date_bin('1 day', '2024-03-05 11:22:33', '2024-01-01\n> 00:00:00'), date_trunc('day', '2024-03-05 11:22:33'::timestamptz);\n> 2024-03-05 00:00:00+01 2024-03-05 00:00:00+01 date_bin works\n> ok, because we are before DST\n> # SELECT date_bin('1 day', '2024-05-05 11:22:33', '2024-01-01\n> 00:00:00'), date_trunc('day', '2024-05-05 11:22:33'::timestamptz);\n> 2024-05-05 01:00:00+02 2024-05-05 00:00:00+02 date_bin has\n> problem, because we are in May after DST\n>\n> If anyone has an idea how to make date_bin work like date_trunc,\n> please provide an example.\n>\n>\n> Here is an example which will make date_bin() to behave like \n> date_trunc():\n> # SELECT date_bin('1 day', '2024-05-05 11:22:33', \n> '0001-01-01'::timestamp), date_trunc('day', '2024-05-05 \n> 11:22:33'::timestamptz);\n> date_bin | date_trunc\n> ---------------------+------------------------\n> 2024-05-05 00:00:00 | 2024-05-05 00:00:00+02\n> (1 row)\n>\n> In general, to make date_bin work similarly to date_trunc in \n> PostgreSQL, you need to set the interval length appropriately and use \n> an origin timestamp that aligns with the start of the interval you \n> want to bin.\n>\n> Here's how you can use date_bin to mimic the behavior of date_trunc:\n>\n> Truncate to the Start of the Year:\n> # SELECT date_bin('1 year', timestamp_column, '0001-01-01'::timestamp) \n> FROM your_table;\n> Truncate to the Start of the Month:\n> # SELECT date_bin('1 month', timestamp_column, \n> '0001-01-01'::timestamp) FROM your_table;\n> Truncate to the Start of the Day:\n> # SELECT date_bin('1 day', timestamp_column, '0001-01-01'::timestamp) \n> FROM your_table;\n> Truncate to the Start of the Hour:\n> # SELECT date_bin('1 hour', timestamp_column, '0001-01-01'::timestamp) \n> FROM your_table;\n> Truncate to the Start of the Minute:\n> # SELECT date_bin('1 minute', timestamp_column, \n> '0001-01-01'::timestamp) FROM your_table;\n>\n>\n> -- \n> Przemysław Sztoch | Mobile +48 509 99 00 66\n>\nPlease, use it with timestamptz for '2 hours' or '3 hours' interval.\n\nSET timezone TO 'Europe/Warsaw';\nSELECT ts,\n date_bin('1 hour'::interval, ts, '0001-01-01 00:00:00') AS \none_hour_bin,\n date_bin('2 hour'::interval, ts, '0001-01-01 00:00:00') AS \ntwo_hours_bin,\n date_bin('3 hour'::interval, ts, '0001-01-01 00:00:00') AS \nthree_hours_bin\n FROM generate_series('2022-03-26 21:00:00+00'::timestamptz,\n '2022-03-27 07:00:00+00'::timestamptz,\n '30 min'::interval,\n 'Europe/Warsaw') AS ts;\n\n ts | one_hour_bin | two_hours_bin \n| three_hours_bin\n------------------------+------------------------+------------------------+------------------------\n 2022-03-26 22:00:00+01 | 
2022-03-26 21:36:00+01 | 2022-03-26 \n21:36:00+01 | 2022-03-26 20:36:00+01\n 2022-03-26 22:30:00+01 | 2022-03-26 21:36:00+01 | 2022-03-26 \n21:36:00+01 | 2022-03-26 20:36:00+01\n 2022-03-26 23:00:00+01 | 2022-03-26 22:36:00+01 | 2022-03-26 \n21:36:00+01 | 2022-03-26 20:36:00+01\n 2022-03-26 23:30:00+01 | 2022-03-26 22:36:00+01 | 2022-03-26 \n21:36:00+01 | 2022-03-26 20:36:00+01\n 2022-03-27 00:00:00+01 | 2022-03-26 23:36:00+01 | 2022-03-26 \n23:36:00+01 | 2022-03-26 23:36:00+01\n 2022-03-27 00:30:00+01 | 2022-03-26 23:36:00+01 | 2022-03-26 \n23:36:00+01 | 2022-03-26 23:36:00+01\n 2022-03-27 01:00:00+01 | 2022-03-27 00:36:00+01 | 2022-03-26 \n23:36:00+01 | 2022-03-26 23:36:00+01\n 2022-03-27 01:30:00+01 | 2022-03-27 00:36:00+01 | 2022-03-26 \n23:36:00+01 | 2022-03-26 23:36:00+01\n 2022-03-27 03:00:00+02 | 2022-03-27 01:36:00+01 | 2022-03-27 \n01:36:00+01 | 2022-03-26 23:36:00+01\n 2022-03-27 03:30:00+02 | 2022-03-27 01:36:00+01 | 2022-03-27 \n01:36:00+01 | 2022-03-26 23:36:00+01\n 2022-03-27 04:00:00+02 | 2022-03-27 03:36:00+02 | 2022-03-27 \n01:36:00+01 | 2022-03-27 03:36:00+02\n 2022-03-27 04:30:00+02 | 2022-03-27 03:36:00+02 | 2022-03-27 \n01:36:00+01 | 2022-03-27 03:36:00+02\n 2022-03-27 05:00:00+02 | 2022-03-27 04:36:00+02 | 2022-03-27 \n04:36:00+02 | 2022-03-27 03:36:00+02\n 2022-03-27 05:30:00+02 | 2022-03-27 04:36:00+02 | 2022-03-27 \n04:36:00+02 | 2022-03-27 03:36:00+02\n 2022-03-27 06:00:00+02 | 2022-03-27 05:36:00+02 | 2022-03-27 \n04:36:00+02 | 2022-03-27 03:36:00+02\n 2022-03-27 06:30:00+02 | 2022-03-27 05:36:00+02 | 2022-03-27 \n04:36:00+02 | 2022-03-27 03:36:00+02\n 2022-03-27 07:00:00+02 | 2022-03-27 06:36:00+02 | 2022-03-27 \n06:36:00+02 | 2022-03-27 06:36:00+02\n 2022-03-27 07:30:00+02 | 2022-03-27 06:36:00+02 | 2022-03-27 \n06:36:00+02 | 2022-03-27 06:36:00+02\n 2022-03-27 08:00:00+02 | 2022-03-27 07:36:00+02 | 2022-03-27 \n06:36:00+02 | 2022-03-27 06:36:00+02\n 2022-03-27 08:30:00+02 | 2022-03-27 07:36:00+02 | 2022-03-27 \n06:36:00+02 | 2022-03-27 06:36:00+02\n 2022-03-27 09:00:00+02 | 2022-03-27 08:36:00+02 | 2022-03-27 \n08:36:00+02 | 2022-03-27 06:36:00+02\n(21 rows)\n\nWe have 36 minutes offset (historical time change).\n\nIf we use origin from current year, we have wrong value after DST too:\nSET timezone TO 'Europe/Warsaw';\nSELECT ts,\n date_bin('1 hour'::interval, ts, '0001-01-01 00:00:00') AS \none_hour_bin,\n date_bin('2 hour'::interval, ts, '0001-01-01 00:00:00') AS \ntwo_hours_bin,\n date_bin('3 hour'::interval, ts, '0001-01-01 00:00:00') AS \nthree_hours_bin\n FROM generate_series('2022-03-26 21:00:00+00'::timestamptz,\n '2022-03-27 07:00:00+00'::timestamptz,\n '30 min'::interval,\n 'Europe/Warsaw') AS ts;^C\npostgres=# \\e\n ts | one_hour_bin | two_hours_bin \n| three_hours_bin\n------------------------+------------------------+------------------------+------------------------\n 2022-03-26 22:00:00+01 | 2022-03-26 22:00:00+01 | 2022-03-26 \n22:00:00+01 | 2022-03-26 21:00:00+01\n 2022-03-26 22:30:00+01 | 2022-03-26 22:00:00+01 | 2022-03-26 \n22:00:00+01 | 2022-03-26 21:00:00+01\n 2022-03-26 23:00:00+01 | 2022-03-26 23:00:00+01 | 2022-03-26 \n22:00:00+01 | 2022-03-26 21:00:00+01\n 2022-03-26 23:30:00+01 | 2022-03-26 23:00:00+01 | 2022-03-26 \n22:00:00+01 | 2022-03-26 21:00:00+01\n 2022-03-27 00:00:00+01 | 2022-03-27 00:00:00+01 | 2022-03-27 \n00:00:00+01 | 2022-03-27 00:00:00+01\n 2022-03-27 00:30:00+01 | 2022-03-27 00:00:00+01 | 2022-03-27 \n00:00:00+01 | 2022-03-27 00:00:00+01\n 2022-03-27 01:00:00+01 | 2022-03-27 01:00:00+01 | 2022-03-27 \n00:00:00+01 | 
2022-03-27 00:00:00+01\n 2022-03-27 01:30:00+01 | 2022-03-27 01:00:00+01 | 2022-03-27 \n00:00:00+01 | 2022-03-27 00:00:00+01\n 2022-03-27 03:00:00+02 | 2022-03-27 03:00:00+02 | 2022-03-27 \n03:00:00+02 | 2022-03-27 00:00:00+01\n 2022-03-27 03:30:00+02 | 2022-03-27 03:00:00+02 | 2022-03-27 \n03:00:00+02 | 2022-03-27 00:00:00+01\n 2022-03-27 04:00:00+02 | 2022-03-27 04:00:00+02 | 2022-03-27 \n03:00:00+02 | 2022-03-27 04:00:00+02\n 2022-03-27 04:30:00+02 | 2022-03-27 04:00:00+02 | 2022-03-27 \n03:00:00+02 | 2022-03-27 04:00:00+02\n 2022-03-27 05:00:00+02 | 2022-03-27 05:00:00+02 | 2022-03-27 \n05:00:00+02 | 2022-03-27 04:00:00+02\n 2022-03-27 05:30:00+02 | 2022-03-27 05:00:00+02 | 2022-03-27 \n05:00:00+02 | 2022-03-27 04:00:00+02\n 2022-03-27 06:00:00+02 | 2022-03-27 06:00:00+02 | 2022-03-27 \n05:00:00+02 | 2022-03-27 04:00:00+02\n 2022-03-27 06:30:00+02 | 2022-03-27 06:00:00+02 | 2022-03-27 \n05:00:00+02 | 2022-03-27 04:00:00+02\n 2022-03-27 07:00:00+02 | 2022-03-27 07:00:00+02 | 2022-03-27 \n07:00:00+02 | 2022-03-27 07:00:00+02\n 2022-03-27 07:30:00+02 | 2022-03-27 07:00:00+02 | 2022-03-27 \n07:00:00+02 | 2022-03-27 07:00:00+02\n 2022-03-27 08:00:00+02 | 2022-03-27 08:00:00+02 | 2022-03-27 \n07:00:00+02 | 2022-03-27 07:00:00+02\n 2022-03-27 08:30:00+02 | 2022-03-27 08:00:00+02 | 2022-03-27 \n07:00:00+02 | 2022-03-27 07:00:00+02\n 2022-03-27 09:00:00+02 | 2022-03-27 09:00:00+02 | 2022-03-27 \n09:00:00+02 | 2022-03-27 07:00:00+02\n(21 rows)\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Mon, 20 May 2024 17:58:13 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: date_trunc function in interval version"
},
{
"msg_contents": "Yasir wrote on 19.05.2024 00:03:\n> On Sun, May 19, 2024 at 2:20 AM Przemysław Sztoch \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> I would also like to thank Robert for presenting the matter in detail.\n>\n> My function date_trunc ( interval, timestamp, ...) is similar to\n> original function date_trunc ( text, timestamp ...) .\n>\n> My extension only gives more granularity.\n> We don't have a jump from hour to day. We can use 6h and 12h. It's\n> the same with minutes.\n> We can round to 30 minutes, 20minutes, 15 minutes, etc.\n>\n> Using date_bin has a similar effect, but requires specifying the\n> origin. According to this origin,\n> subsequent buckets are then calculated. The need to provide this\n> origin is sometimes a very big problem.\n> Especially since you cannot use one origin when changing from\n> summer to winter time.\n>\n> If we use one origin for example begin of year: 2024-01-01\n> 00:00:00 then:\n> # SET timezone='Europe/Warsaw';\n> # SELECT date_bin('1 day', '2024-03-05 11:22:33', '2024-01-01\n> 00:00:00'), date_trunc('day', '2024-03-05 11:22:33'::timestamptz);\n> 2024-03-05 00:00:00+01 2024-03-05 00:00:00+01 date_bin works\n> ok, because we are before DST\n> # SELECT date_bin('1 day', '2024-05-05 11:22:33', '2024-01-01\n> 00:00:00'), date_trunc('day', '2024-05-05 11:22:33'::timestamptz);\n> 2024-05-05 01:00:00+02 2024-05-05 00:00:00+02 date_bin has\n> problem, because we are in May after DST\n>\n> If anyone has an idea how to make date_bin work like date_trunc,\n> please provide an example.\n>\n>\n> Here is an example which will make date_bin() to behave like \n> date_trunc():\n> # SELECT date_bin('1 day', '2024-05-05 11:22:33', \n> '0001-01-01'::timestamp), date_trunc('day', '2024-05-05 \n> 11:22:33'::timestamptz);\n> date_bin | date_trunc\n> ---------------------+------------------------\n> 2024-05-05 00:00:00 | 2024-05-05 00:00:00+02\n> (1 row)\n>\n> In general, to make date_bin work similarly to date_trunc in \n> PostgreSQL, you need to set the interval length appropriately and use \n> an origin timestamp that aligns with the start of the interval you \n> want to bin.\n>\n> Here's how you can use date_bin to mimic the behavior of date_trunc:\n>\n> Truncate to the Start of the Year:\n> # SELECT date_bin('1 year', timestamp_column, '0001-01-01'::timestamp) \n> FROM your_table;\n> Truncate to the Start of the Month:\n> # SELECT date_bin('1 month', timestamp_column, \n> '0001-01-01'::timestamp) FROM your_table;\n> Truncate to the Start of the Day:\n> # SELECT date_bin('1 day', timestamp_column, '0001-01-01'::timestamp) \n> FROM your_table;\n> Truncate to the Start of the Hour:\n> # SELECT date_bin('1 hour', timestamp_column, '0001-01-01'::timestamp) \n> FROM your_table;\n> Truncate to the Start of the Minute:\n> # SELECT date_bin('1 minute', timestamp_column, \n> '0001-01-01'::timestamp) FROM your_table;\nPlease, use it with timestamptz for '2 hours' or '3 hours' interval.\n\nSET timezone TO 'Europe/Warsaw';\nSELECT ts,\n date_bin('1 hour'::interval, ts, '0001-01-01 00:00:00') AS \none_hour_bin,\n date_bin('2 hour'::interval, ts, '0001-01-01 00:00:00') AS \ntwo_hours_bin,\n date_bin('3 hour'::interval, ts, '0001-01-01 00:00:00') AS \nthree_hours_bin\n FROM generate_series('2022-03-26 21:00:00+00'::timestamptz,\n '2022-03-27 07:00:00+00'::timestamptz,\n '30 min'::interval,\n 'Europe/Warsaw') AS ts;\n\n ts | one_hour_bin | two_hours_bin \n| 
three_hours_bin\n------------------------+------------------------+------------------------+------------------------\n2022-03-26 22:00:00+01 | 2022-03-26 21:36:00+01 | 2022-03-26 21:36:00+01 \n| 2022-03-26 20:36:00+01\n2022-03-26 22:30:00+01 | 2022-03-26 21:36:00+01 | 2022-03-26 21:36:00+01 \n| 2022-03-26 20:36:00+01\n2022-03-26 23:00:00+01 | 2022-03-26 22:36:00+01 | 2022-03-26 21:36:00+01 \n| 2022-03-26 20:36:00+01\n2022-03-26 23:30:00+01 | 2022-03-26 22:36:00+01 | 2022-03-26 21:36:00+01 \n| 2022-03-26 20:36:00+01\n2022-03-27 00:00:00+01 | 2022-03-26 23:36:00+01 | 2022-03-26 23:36:00+01 \n| 2022-03-26 23:36:00+01\n2022-03-27 00:30:00+01 | 2022-03-26 23:36:00+01 | 2022-03-26 23:36:00+01 \n| 2022-03-26 23:36:00+01\n2022-03-27 01:00:00+01 | 2022-03-27 00:36:00+01 | 2022-03-26 23:36:00+01 \n| 2022-03-26 23:36:00+01\n2022-03-27 01:30:00+01 | 2022-03-27 00:36:00+01 | 2022-03-26 23:36:00+01 \n| 2022-03-26 23:36:00+01\n2022-03-27 03:00:00+02 | 2022-03-27 01:36:00+01 | 2022-03-27 01:36:00+01 \n| 2022-03-26 23:36:00+01\n2022-03-27 03:30:00+02 | 2022-03-27 01:36:00+01 | 2022-03-27 01:36:00+01 \n| 2022-03-26 23:36:00+01\n2022-03-27 04:00:00+02 | 2022-03-27 03:36:00+02 | 2022-03-27 01:36:00+01 \n| 2022-03-27 03:36:00+02\n2022-03-27 04:30:00+02 | 2022-03-27 03:36:00+02 | 2022-03-27 01:36:00+01 \n| 2022-03-27 03:36:00+02\n2022-03-27 05:00:00+02 | 2022-03-27 04:36:00+02 | 2022-03-27 04:36:00+02 \n| 2022-03-27 03:36:00+02\n2022-03-27 05:30:00+02 | 2022-03-27 04:36:00+02 | 2022-03-27 04:36:00+02 \n| 2022-03-27 03:36:00+02\n2022-03-27 06:00:00+02 | 2022-03-27 05:36:00+02 | 2022-03-27 04:36:00+02 \n| 2022-03-27 03:36:00+02\n2022-03-27 06:30:00+02 | 2022-03-27 05:36:00+02 | 2022-03-27 04:36:00+02 \n| 2022-03-27 03:36:00+02\n2022-03-27 07:00:00+02 | 2022-03-27 06:36:00+02 | 2022-03-27 06:36:00+02 \n| 2022-03-27 06:36:00+02\n2022-03-27 07:30:00+02 | 2022-03-27 06:36:00+02 | 2022-03-27 06:36:00+02 \n| 2022-03-27 06:36:00+02\n2022-03-27 08:00:00+02 | 2022-03-27 07:36:00+02 | 2022-03-27 06:36:00+02 \n| 2022-03-27 06:36:00+02\n2022-03-27 08:30:00+02 | 2022-03-27 07:36:00+02 | 2022-03-27 06:36:00+02 \n| 2022-03-27 06:36:00+02\n2022-03-27 09:00:00+02 | 2022-03-27 08:36:00+02 | 2022-03-27 08:36:00+02 \n| 2022-03-27 06:36:00+02\n(21 rows)\n\nWe have 36 minutes offset (historical time change).\n\nIf we use origin from current year, we have wrong value after DST too:\nSET timezone TO 'Europe/Warsaw';\nSELECT ts,\n date_bin('1 hour'::interval, ts, '0001-01-01 00:00:00') AS \none_hour_bin,\n date_bin('2 hour'::interval, ts, '0001-01-01 00:00:00') AS \ntwo_hours_bin,\n date_bin('3 hour'::interval, ts, '0001-01-01 00:00:00') AS \nthree_hours_bin\n FROM generate_series('2022-03-26 21:00:00+00'::timestamptz,\n '2022-03-27 07:00:00+00'::timestamptz,\n '30 min'::interval,\n 'Europe/Warsaw') AS ts;^C\npostgres=# \\e\n ts | one_hour_bin | two_hours_bin \n| three_hours_bin\n------------------------+------------------------+------------------------+------------------------\n2022-03-26 22:00:00+01 | 2022-03-26 22:00:00+01 | 2022-03-26 22:00:00+01 \n| 2022-03-26 21:00:00+01\n2022-03-26 22:30:00+01 | 2022-03-26 22:00:00+01 | 2022-03-26 22:00:00+01 \n| 2022-03-26 21:00:00+01\n2022-03-26 23:00:00+01 | 2022-03-26 23:00:00+01 | 2022-03-26 22:00:00+01 \n| 2022-03-26 21:00:00+01\n2022-03-26 23:30:00+01 | 2022-03-26 23:00:00+01 | 2022-03-26 22:00:00+01 \n| 2022-03-26 21:00:00+01\n2022-03-27 00:00:00+01 | 2022-03-27 00:00:00+01 | 2022-03-27 00:00:00+01 \n| 2022-03-27 00:00:00+01\n2022-03-27 00:30:00+01 | 2022-03-27 00:00:00+01 | 2022-03-27 
00:00:00+01 \n| 2022-03-27 00:00:00+01\n2022-03-27 01:00:00+01 | 2022-03-27 01:00:00+01 | 2022-03-27 00:00:00+01 \n| 2022-03-27 00:00:00+01\n2022-03-27 01:30:00+01 | 2022-03-27 01:00:00+01 | 2022-03-27 00:00:00+01 \n| 2022-03-27 00:00:00+01\n2022-03-27 03:00:00+02 | 2022-03-27 03:00:00+02 | 2022-03-27 03:00:00+02 \n| 2022-03-27 00:00:00+01\n2022-03-27 03:30:00+02 | 2022-03-27 03:00:00+02 | 2022-03-27 03:00:00+02 \n| 2022-03-27 00:00:00+01\n2022-03-27 04:00:00+02 | 2022-03-27 04:00:00+02 | 2022-03-27 03:00:00+02 \n| 2022-03-27 04:00:00+02\n2022-03-27 04:30:00+02 | 2022-03-27 04:00:00+02 | 2022-03-27 03:00:00+02 \n| 2022-03-27 04:00:00+02\n2022-03-27 05:00:00+02 | 2022-03-27 05:00:00+02 | 2022-03-27 05:00:00+02 \n| 2022-03-27 04:00:00+02\n2022-03-27 05:30:00+02 | 2022-03-27 05:00:00+02 | 2022-03-27 05:00:00+02 \n| 2022-03-27 04:00:00+02\n2022-03-27 06:00:00+02 | 2022-03-27 06:00:00+02 | 2022-03-27 05:00:00+02 \n| 2022-03-27 04:00:00+02\n2022-03-27 06:30:00+02 | 2022-03-27 06:00:00+02 | 2022-03-27 05:00:00+02 \n| 2022-03-27 04:00:00+02\n2022-03-27 07:00:00+02 | 2022-03-27 07:00:00+02 | 2022-03-27 07:00:00+02 \n| 2022-03-27 07:00:00+02\n2022-03-27 07:30:00+02 | 2022-03-27 07:00:00+02 | 2022-03-27 07:00:00+02 \n| 2022-03-27 07:00:00+02\n2022-03-27 08:00:00+02 | 2022-03-27 08:00:00+02 | 2022-03-27 07:00:00+02 \n| 2022-03-27 07:00:00+02\n2022-03-27 08:30:00+02 | 2022-03-27 08:00:00+02 | 2022-03-27 07:00:00+02 \n| 2022-03-27 07:00:00+02\n2022-03-27 09:00:00+02 | 2022-03-27 09:00:00+02 | 2022-03-27 09:00:00+02 \n| 2022-03-27 07:00:00+02\n(21 rows)\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Mon, 20 May 2024 18:08:01 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: date_trunc function in interval version"
}
] |
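For readers looking for a stock-function answer to the DST problem shown in the thread above, one possible workaround sketch (not something proposed in the thread) is to shift the timestamptz to local wall-clock time before binning and convert the result back afterwards, so that date_bin's origin is interpreted in local time rather than in absolute time. The zone name 'Europe/Warsaw' simply matches the thread's examples, the '2000-01-01' origin is an arbitrary midnight, and local times that are skipped or repeated at the DST transition itself still need special handling:

SET timezone TO 'Europe/Warsaw';
-- bin on local wall-clock time, then turn the bucket back into a timestamptz
SELECT ts,
       date_bin('2 hours'::interval,
                ts AT TIME ZONE 'Europe/Warsaw',
                '2000-01-01'::timestamp) AT TIME ZONE 'Europe/Warsaw' AS two_hours_local
  FROM generate_series('2022-03-26 21:00:00+00'::timestamptz,
                       '2022-03-27 07:00:00+00'::timestamptz,
                       '30 min'::interval) AS ts;

Because the binning happens on plain (local) timestamps, the historical 36-minute offset disappears and, away from the transition window itself, the buckets stay aligned to even local hours on both sides of the 2022-03-27 change, which appears to be the behaviour the proposed date_trunc(interval, ...) is after.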
[
{
"msg_contents": "Hello,\n\npgbench mixes int and int64 to initialize the tables.\nWhen a large enough scale factor is passed, initPopulateTable\noverflows leading to it never completing, ie.\n\n2147400000 of 2200000000 tuples (97%) of\npgbench_accounts done (elapsed 4038.83 s, remaining 98.93 s)\n-2147400000 of 2200000000 tuples (-97%) of\npgbench_accounts done (elapsed 4038.97 s, remaining -8176.86 s)\n\n\nAttached is a patch that fixes this, pgbench -i -s 22000 works now.\n\n-- \nJohn Hsu - Amazon Web Services",
"msg_date": "Fri, 22 Dec 2023 15:18:16 -0800",
"msg_from": "Chen Hao Hsu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fixing pgbench init overflow"
},
{
"msg_contents": "\nOn Sat, 23 Dec 2023 at 07:18, Chen Hao Hsu <[email protected]> wrote:\n> Hello,\n>\n> pgbench mixes int and int64 to initialize the tables.\n> When a large enough scale factor is passed, initPopulateTable\n> overflows leading to it never completing, ie.\n>\n> 2147400000 of 2200000000 tuples (97%) of\n> pgbench_accounts done (elapsed 4038.83 s, remaining 98.93 s)\n> -2147400000 of 2200000000 tuples (-97%) of\n> pgbench_accounts done (elapsed 4038.97 s, remaining -8176.86 s)\n>\n>\n> Attached is a patch that fixes this, pgbench -i -s 22000 works now.\n\nI think only the following line can fix this.\n\n+\tint64\t\t\tk;\n\nDo not need to modify the type of `n`, right?\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Sat, 23 Dec 2023 13:23:44 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing pgbench init overflow"
},
{
"msg_contents": " <ME3P282MB316684190982F54BDBD4FE90B69BA@ME3P282MB3166.AUSP282.PROD.OUTLOOK.COM>\n\n> \n> On Sat, 23 Dec 2023 at 07:18, Chen Hao Hsu <[email protected]> wrote:\n>> Hello,\n>>\n>> pgbench mixes int and int64 to initialize the tables.\n>> When a large enough scale factor is passed, initPopulateTable\n>> overflows leading to it never completing, ie.\n>>\n>> 2147400000 of 2200000000 tuples (97%) of\n>> pgbench_accounts done (elapsed 4038.83 s, remaining 98.93 s)\n>> -2147400000 of 2200000000 tuples (-97%) of\n>> pgbench_accounts done (elapsed 4038.97 s, remaining -8176.86 s)\n>>\n>>\n>> Attached is a patch that fixes this, pgbench -i -s 22000 works now.\n> \n> I think only the following line can fix this.\n> \n> +\tint64\t\t\tk;\n> \n> Do not need to modify the type of `n`, right?\n\nYou are right. n represents the return value of pg_snprintf, which is\nthe byte length of the formatted data, which is int, not int64.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sat, 23 Dec 2023 16:22:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing pgbench init overflow"
},
{
"msg_contents": "On Sat, 23 Dec 2023 at 15:22, Tatsuo Ishii <[email protected]> wrote:\n> <ME3P282MB316684190982F54BDBD4FE90B69BA@ME3P282MB3166.AUSP282.PROD.OUTLOOK.COM>\n>\n>>\n>> On Sat, 23 Dec 2023 at 07:18, Chen Hao Hsu <[email protected]> wrote:\n>>> Hello,\n>>>\n>>> pgbench mixes int and int64 to initialize the tables.\n>>> When a large enough scale factor is passed, initPopulateTable\n>>> overflows leading to it never completing, ie.\n>>>\n>>> 2147400000 of 2200000000 tuples (97%) of\n>>> pgbench_accounts done (elapsed 4038.83 s, remaining 98.93 s)\n>>> -2147400000 of 2200000000 tuples (-97%) of\n>>> pgbench_accounts done (elapsed 4038.97 s, remaining -8176.86 s)\n>>>\n>>>\n>>> Attached is a patch that fixes this, pgbench -i -s 22000 works now.\n>>\n>> I think only the following line can fix this.\n>>\n>> +\tint64\t\t\tk;\n>>\n>> Do not need to modify the type of `n`, right?\n>\n> You are right. n represents the return value of pg_snprintf, which is\n> the byte length of the formatted data, which is int, not int64.\n>\n\nThanks for you confirmation! Please consider the v2 patch to review.\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.",
"msg_date": "Sat, 23 Dec 2023 15:37:33 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing pgbench init overflow"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 03:37:33PM +0800, Japin Li wrote:\n> Thanks for you confirmation! Please consider the v2 patch to review.\n\nThis oversight is from me via commit e35cc3b3f2d0. I'll take care of\nit. Thanks for the report!\n--\nMichael",
"msg_date": "Sun, 24 Dec 2023 10:41:34 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing pgbench init overflow"
}
] |
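A quick sanity check on the arithmetic behind the fix above (an illustrative sketch, not part of the committed patch): pgbench_accounts is populated with 100000 rows per scale unit, so -s 22000 asks for 2,200,000,000 rows, which exceeds the 2,147,483,647 maximum of a 32-bit int; hence the wrapped, negative progress numbers from the C int counter and the need for an int64 loop variable.

-- 32-bit arithmetic cannot hold the row count for scale factor 22000 ...
SELECT 100000::int4 * 22000::int4;   -- ERROR:  integer out of range
-- ... while a 64-bit counter holds it easily
SELECT 100000::int8 * 22000;         -- 2200000000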
[
{
"msg_contents": "Hi,\n\nI didn't see anyone volunteering for the January Commitfest, so I'll\nvolunteer to be CF manager for January 2024 Commitfest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 23 Dec 2023 08:52:38 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest manager January 2024"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 08:52:38AM +0530, vignesh C wrote:\n> I didn't see anyone volunteering for the January Commitfest, so I'll\n> volunteer to be CF manager for January 2024 Commitfest.\n\n(Adding Magnus in CC.)\n\nThat would be really helpful. Thanks for helping! Do you have the\nadmin rights on the CF app? You are going to require them in order to\nmark the CF as in-process, and you would also need to switch the CF\nafter that from \"Future\" to \"Open\" so as people can still post\npatches once January one begins. \n--\nMichael",
"msg_date": "Sun, 24 Dec 2023 10:46:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager January 2024"
},
{
"msg_contents": "On Sun, 24 Dec 2023 at 07:16, Michael Paquier <[email protected]> wrote:\n>\n> On Sat, Dec 23, 2023 at 08:52:38AM +0530, vignesh C wrote:\n> > I didn't see anyone volunteering for the January Commitfest, so I'll\n> > volunteer to be CF manager for January 2024 Commitfest.\n>\n> (Adding Magnus in CC.)\n>\n> That would be really helpful. Thanks for helping! Do you have the\n> admin rights on the CF app? You are going to require them in order to\n> mark the CF as in-process, and you would also need to switch the CF\n> after that from \"Future\" to \"Open\" so as people can still post\n> patches once January one begins.\n\nI don't have admin rights for the CF app. Please provide admin rights.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 24 Dec 2023 18:40:28 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest manager January 2024"
},
{
"msg_contents": "On Sun, 24 Dec 2023 at 18:40, vignesh C <[email protected]> wrote:\n>\n> On Sun, 24 Dec 2023 at 07:16, Michael Paquier <[email protected]> wrote:\n> >\n> > On Sat, Dec 23, 2023 at 08:52:38AM +0530, vignesh C wrote:\n> > > I didn't see anyone volunteering for the January Commitfest, so I'll\n> > > volunteer to be CF manager for January 2024 Commitfest.\n> >\n> > (Adding Magnus in CC.)\n> >\n> > That would be really helpful. Thanks for helping! Do you have the\n> > admin rights on the CF app? You are going to require them in order to\n> > mark the CF as in-process, and you would also need to switch the CF\n> > after that from \"Future\" to \"Open\" so as people can still post\n> > patches once January one begins.\n>\n> I don't have admin rights for the CF app. Please provide admin rights.\n\nI have not yet got the admin rights, Kindly provide admin rights for the CF app.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 1 Jan 2024 09:05:27 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest manager January 2024"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 8:53 AM vignesh C <[email protected]> wrote:\n>\n> Hi,\n>\n> I didn't see anyone volunteering for the January Commitfest, so I'll\n> volunteer to be CF manager for January 2024 Commitfest.\n\nI can assist with the January 2024 Commitfest.\n\nThanks and Regards,\nShubham Khanna.\n\n\n",
"msg_date": "Mon, 1 Jan 2024 10:06:45 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager January 2024"
},
{
"msg_contents": "On Mon, Jan 1, 2024 at 4:35 AM vignesh C <[email protected]> wrote:\n>\n> On Sun, 24 Dec 2023 at 18:40, vignesh C <[email protected]> wrote:\n> >\n> > On Sun, 24 Dec 2023 at 07:16, Michael Paquier <[email protected]> wrote:\n> > >\n> > > On Sat, Dec 23, 2023 at 08:52:38AM +0530, vignesh C wrote:\n> > > > I didn't see anyone volunteering for the January Commitfest, so I'll\n> > > > volunteer to be CF manager for January 2024 Commitfest.\n> > >\n> > > (Adding Magnus in CC.)\n> > >\n> > > That would be really helpful. Thanks for helping! Do you have the\n> > > admin rights on the CF app? You are going to require them in order to\n> > > mark the CF as in-process, and you would also need to switch the CF\n> > > after that from \"Future\" to \"Open\" so as people can still post\n> > > patches once January one begins.\n> >\n> > I don't have admin rights for the CF app. Please provide admin rights.\n>\n> I have not yet got the admin rights, Kindly provide admin rights for the CF app.\n\nIt's been christmas holidays...\n\n\nWhat's your community username?\n\n//Magnus\n\n\n",
"msg_date": "Mon, 1 Jan 2024 16:30:50 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager January 2024"
},
{
"msg_contents": "On Mon, 1 Jan 2024 at 21:01, Magnus Hagander <[email protected]> wrote:\n>\n> On Mon, Jan 1, 2024 at 4:35 AM vignesh C <[email protected]> wrote:\n> >\n> > On Sun, 24 Dec 2023 at 18:40, vignesh C <[email protected]> wrote:\n> > >\n> > > On Sun, 24 Dec 2023 at 07:16, Michael Paquier <[email protected]> wrote:\n> > > >\n> > > > On Sat, Dec 23, 2023 at 08:52:38AM +0530, vignesh C wrote:\n> > > > > I didn't see anyone volunteering for the January Commitfest, so I'll\n> > > > > volunteer to be CF manager for January 2024 Commitfest.\n> > > >\n> > > > (Adding Magnus in CC.)\n> > > >\n> > > > That would be really helpful. Thanks for helping! Do you have the\n> > > > admin rights on the CF app? You are going to require them in order to\n> > > > mark the CF as in-process, and you would also need to switch the CF\n> > > > after that from \"Future\" to \"Open\" so as people can still post\n> > > > patches once January one begins.\n> > >\n> > > I don't have admin rights for the CF app. Please provide admin rights.\n> >\n> > I have not yet got the admin rights, Kindly provide admin rights for the CF app.\n>\n> It's been christmas holidays...\n>\n>\n> What's your community username?\n\nMy username is vignesh.postgres\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 2 Jan 2024 08:15:40 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest manager January 2024"
},
{
"msg_contents": "On Tue, Jan 2, 2024 at 3:45 AM vignesh C <[email protected]> wrote:\n>\n> On Mon, 1 Jan 2024 at 21:01, Magnus Hagander <[email protected]> wrote:\n> >\n> > On Mon, Jan 1, 2024 at 4:35 AM vignesh C <[email protected]> wrote:\n> > >\n> > > On Sun, 24 Dec 2023 at 18:40, vignesh C <[email protected]> wrote:\n> > > >\n> > > > On Sun, 24 Dec 2023 at 07:16, Michael Paquier <[email protected]> wrote:\n> > > > >\n> > > > > On Sat, Dec 23, 2023 at 08:52:38AM +0530, vignesh C wrote:\n> > > > > > I didn't see anyone volunteering for the January Commitfest, so I'll\n> > > > > > volunteer to be CF manager for January 2024 Commitfest.\n> > > > >\n> > > > > (Adding Magnus in CC.)\n> > > > >\n> > > > > That would be really helpful. Thanks for helping! Do you have the\n> > > > > admin rights on the CF app? You are going to require them in order to\n> > > > > mark the CF as in-process, and you would also need to switch the CF\n> > > > > after that from \"Future\" to \"Open\" so as people can still post\n> > > > > patches once January one begins.\n> > > >\n> > > > I don't have admin rights for the CF app. Please provide admin rights.\n> > >\n> > > I have not yet got the admin rights, Kindly provide admin rights for the CF app.\n> >\n> > It's been christmas holidays...\n> >\n> >\n> > What's your community username?\n>\n> My username is vignesh.postgres\n\nCF admin permissions granted!\n\n//Magnus\n\n\n",
"msg_date": "Tue, 2 Jan 2024 11:12:50 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager January 2024"
},
{
"msg_contents": "On Tue, 2 Jan 2024 at 15:43, Magnus Hagander <[email protected]> wrote:\n>\n> On Tue, Jan 2, 2024 at 3:45 AM vignesh C <[email protected]> wrote:\n> >\n> > On Mon, 1 Jan 2024 at 21:01, Magnus Hagander <[email protected]> wrote:\n> > >\n> > > On Mon, Jan 1, 2024 at 4:35 AM vignesh C <[email protected]> wrote:\n> > > >\n> > > > On Sun, 24 Dec 2023 at 18:40, vignesh C <[email protected]> wrote:\n> > > > >\n> > > > > On Sun, 24 Dec 2023 at 07:16, Michael Paquier <[email protected]> wrote:\n> > > > > >\n> > > > > > On Sat, Dec 23, 2023 at 08:52:38AM +0530, vignesh C wrote:\n> > > > > > > I didn't see anyone volunteering for the January Commitfest, so I'll\n> > > > > > > volunteer to be CF manager for January 2024 Commitfest.\n> > > > > >\n> > > > > > (Adding Magnus in CC.)\n> > > > > >\n> > > > > > That would be really helpful. Thanks for helping! Do you have the\n> > > > > > admin rights on the CF app? You are going to require them in order to\n> > > > > > mark the CF as in-process, and you would also need to switch the CF\n> > > > > > after that from \"Future\" to \"Open\" so as people can still post\n> > > > > > patches once January one begins.\n> > > > >\n> > > > > I don't have admin rights for the CF app. Please provide admin rights.\n> > > >\n> > > > I have not yet got the admin rights, Kindly provide admin rights for the CF app.\n> > >\n> > > It's been christmas holidays...\n> > >\n> > >\n> > > What's your community username?\n> >\n> > My username is vignesh.postgres\n>\n> CF admin permissions granted!\n\nThanks, I have updated the commitfest 2024-01 to In Progress and\ncommitfest 2024-03 to Open.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 2 Jan 2024 15:55:11 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest manager January 2024"
},
{
"msg_contents": "Hi Vignesh,\n\nIf you would like any assistance processing the 100s of CF entries I\nam happy to help in some way.\n\n======\nKind Regards,,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 19 Jan 2024 14:49:21 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager January 2024"
},
{
"msg_contents": "On Fri, 19 Jan 2024 at 09:19, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh,\n>\n> If you would like any assistance processing the 100s of CF entries I\n> am happy to help in some way.\n\nThanks Peter, that will be great.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 19 Jan 2024 10:10:58 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest manager January 2024"
}
] |
[
{
"msg_contents": "pg_stat_statements has some significant gaps in test coverage, \nespecially around the serialization of data around server restarts, so I \nwrote a test for that and also made some other smaller tweaks to \nincrease the coverage a bit. These patches are all independent of each \nother.\n\nAfter that, the only major function that isn't tested is gc_qtexts(). \nMaybe a future project.",
"msg_date": "Sat, 23 Dec 2023 15:18:01 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stat_statements: more test coverage"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 03:18:01PM +0100, Peter Eisentraut wrote:\n> +/* LCOV_EXCL_START */\n> +PG_FUNCTION_INFO_V1(pg_stat_statements);\n> PG_FUNCTION_INFO_V1(pg_stat_statements_1_2);\n> +/* LCOV_EXCL_STOP */\n\nThe only reason why I've seen this used at the C level was to bump up\nthe coverage requirements because of some internal company projects.\nI'm pretty sure to have proposed in the past at least one patch that\nwould make use of that, but it got rejected. It is not the only code\narea that has a similar pattern. So why introducing that now?\n\n> Subject: [PATCH v1 2/5] pg_stat_statements: Add coverage for\n> pg_stat_statements_1_8()\n>\n> [...]\n>\n> Subject: [PATCH v1 3/5] pg_stat_statements: Add coverage for\n> pg_stat_statements_reset_1_7\n\nYep, why not.\n\n> +SELECT format('create table t%s (a int)', lpad(i::text, 3, '0')) FROM generate_series(1, 101) AS x(i) \\gexec\n> +create table t001 (a int)\n> [...]\n> +create table t101 (a int)\n\nThat's a lot of bloat. This relies on pg_stat_statements.max's\ndefault to be at 100. Based on the regression tests, the maximum\nnumber of rows we have reported from the view pg_stat_statements is\n39 in utility.c. I think that you should just:\n- Use a DO block of a PL function, say with something like that to\nensure an amount of N queries? Say with something like that after\ntweaking pg_stat_statements.track:\nCREATE OR REPLACE FUNCTION create_tables(num_tables int)\n RETURNS VOID AS\n $func$\n BEGIN\n FOR i IN 1..num_tables LOOP\n EXECUTE format('\n CREATE TABLE IF NOT EXISTS %I (id int)', 't_' || i);\n END LOOP;\nEND\n$func$ LANGUAGE plpgsql;\n- Switch the minimum to be around 40~50 in the local\npg_stat_statements.conf used for the regression tests.\n\n> +SELECT count(*) <= 100 AND count(*) > 0 FROM pg_stat_statements;\n\nYou could fetch the max value in a \\get and reuse it here, as well.\n\n> +is( $node->safe_psql(\n> +\t\t'postgres',\n> +\t\t\"SELECT count(*) FROM pg_stat_statements WHERE query LIKE '%t1%'\"),\n> +\t'2',\n> +\t'pg_stat_statements data kept across restart');\n\nChecking that the contents match would be a bit more verbose than just\na count. One trick I've used in the patch is in\n027_stream_regress.pl, where there is a query grouping the stats\ndepending on the beginning of the queries. Not exact, a bit more\nverbose.\n--\nMichael",
"msg_date": "Sun, 24 Dec 2023 11:03:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements: more test coverage"
},
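A minimal sketch of the "\get" idea suggested above, assuming psql's \gset is what is meant; the variable name max_entries is only illustrative:

-- capture the configured maximum, then compare against it (sketch only)
SELECT setting AS max_entries
  FROM pg_catalog.pg_settings
 WHERE name = 'pg_stat_statements.max' \gset
SELECT count(*) <= :max_entries AND count(*) > 0 FROM pg_stat_statements;

This keeps the check independent of whatever value the local pg_stat_statements configuration happens to choose.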
{
"msg_contents": "On 24.12.23 03:03, Michael Paquier wrote:\n> On Sat, Dec 23, 2023 at 03:18:01PM +0100, Peter Eisentraut wrote:\n>> +/* LCOV_EXCL_START */\n>> +PG_FUNCTION_INFO_V1(pg_stat_statements);\n>> PG_FUNCTION_INFO_V1(pg_stat_statements_1_2);\n>> +/* LCOV_EXCL_STOP */\n> \n> The only reason why I've seen this used at the C level was to bump up\n> the coverage requirements because of some internal company projects.\n> I'm pretty sure to have proposed in the past at least one patch that\n> would make use of that, but it got rejected. It is not the only code\n> area that has a similar pattern. So why introducing that now?\n\nWhat other code areas have similar patterns (meaning, extension entry \npoints for upgrade support that are not covered by currently available \nextension installation files)?\n\n> That's a lot of bloat. This relies on pg_stat_statements.max's\n> default to be at 100.\n\nThe default is 5000. I set 100 explicitly in the configuration file for \nthe test.\n\n> - Use a DO block of a PL function, say with something like that to\n> ensure an amount of N queries? Say with something like that after\n> tweaking pg_stat_statements.track:\n> CREATE OR REPLACE FUNCTION create_tables(num_tables int)\n> RETURNS VOID AS\n> $func$\n> BEGIN\n> FOR i IN 1..num_tables LOOP\n> EXECUTE format('\n> CREATE TABLE IF NOT EXISTS %I (id int)', 't_' || i);\n> END LOOP;\n> END\n> $func$ LANGUAGE plpgsql;\n\nI tried it like this first, but this doesn't register as separately \nexecuted commands for pg_stat_statements.\n\n> - Switch the minimum to be around 40~50 in the local\n> pg_stat_statements.conf used for the regression tests.\n\n100 is the hardcoded minimum for the setting.\n\n>> +is( $node->safe_psql(\n>> +\t\t'postgres',\n>> +\t\t\"SELECT count(*) FROM pg_stat_statements WHERE query LIKE '%t1%'\"),\n>> +\t'2',\n>> +\t'pg_stat_statements data kept across restart');\n> \n> Checking that the contents match would be a bit more verbose than just\n> a count. One trick I've used in the patch is in\n> 027_stream_regress.pl, where there is a query grouping the stats\n> depending on the beginning of the queries. Not exact, a bit more\n> verbose.\n\nYeah, this could be expanded a bit.\n\n\n\n",
"msg_date": "Tue, 26 Dec 2023 22:03:20 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements: more test coverage"
},
{
"msg_contents": "Hi,\n\nOn Tue, Dec 26, 2023 at 10:03 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 24.12.23 03:03, Michael Paquier wrote:\n> > - Use a DO block of a PL function, say with something like that to\n> > ensure an amount of N queries? Say with something like that after\n> > tweaking pg_stat_statements.track:\n> > CREATE OR REPLACE FUNCTION create_tables(num_tables int)\n> > RETURNS VOID AS\n> > $func$\n> > BEGIN\n> > FOR i IN 1..num_tables LOOP\n> > EXECUTE format('\n> > CREATE TABLE IF NOT EXISTS %I (id int)', 't_' || i);\n> > END LOOP;\n> > END\n> > $func$ LANGUAGE plpgsql;\n>\n> I tried it like this first, but this doesn't register as separately\n> executed commands for pg_stat_statements.\n\nI was a bit surprised by that so I checked locally. It does work as\nexpected provided that you set pg_stat_statements.track to all:\n=# select create_tables(5);\n=# select queryid, query from pg_stat_statements where query like 'CREATE%';\n queryid | query\n----------------------+-----------------------------------------\n -4985234599080337259 | CREATE TABLE IF NOT EXISTS t_5 (id int)\n -790506371630237058 | CREATE TABLE IF NOT EXISTS t_2 (id int)\n -1104545884488896333 | CREATE TABLE IF NOT EXISTS t_3 (id int)\n -2961032912789520428 | CREATE TABLE IF NOT EXISTS t_4 (id int)\n 7273321309563119428 | CREATE TABLE IF NOT EXISTS t_1 (id int)\n\n\n",
"msg_date": "Wed, 27 Dec 2023 09:08:37 +0100",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements: more test coverage"
},
{
"msg_contents": "On 27.12.23 09:08, Julien Rouhaud wrote:\n> Hi,\n> \n> On Tue, Dec 26, 2023 at 10:03 PM Peter Eisentraut <[email protected]> wrote:\n>>\n>> On 24.12.23 03:03, Michael Paquier wrote:\n>>> - Use a DO block of a PL function, say with something like that to\n>>> ensure an amount of N queries? Say with something like that after\n>>> tweaking pg_stat_statements.track:\n>>> CREATE OR REPLACE FUNCTION create_tables(num_tables int)\n>>> RETURNS VOID AS\n>>> $func$\n>>> BEGIN\n>>> FOR i IN 1..num_tables LOOP\n>>> EXECUTE format('\n>>> CREATE TABLE IF NOT EXISTS %I (id int)', 't_' || i);\n>>> END LOOP;\n>>> END\n>>> $func$ LANGUAGE plpgsql;\n>>\n>> I tried it like this first, but this doesn't register as separately\n>> executed commands for pg_stat_statements.\n> \n> I was a bit surprised by that so I checked locally. It does work as\n> expected provided that you set pg_stat_statements.track to all:\n\nOk, here is an updated patch set that does it that way.\n\nI have committed the patches 0002 and 0003 from the previous patch set \nalready.\n\nI have also enhanced the TAP test a bit to check the actual content of \nthe output across restarts.\n\nI'm not too hung up on the 0001 patch if others don't like that approach.",
"msg_date": "Wed, 27 Dec 2023 13:53:06 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements: more test coverage"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 8:53 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 27.12.23 09:08, Julien Rouhaud wrote:\n> >\n> > I was a bit surprised by that so I checked locally. It does work as\n> > expected provided that you set pg_stat_statements.track to all:\n>\n> Ok, here is an updated patch set that does it that way.\n\nIt looks good to me. One minor complaint, I'm a bit dubious about\nthose queries:\n\nSELECT count(*) <= 100 AND count(*) > 0 FROM pg_stat_statements;\n\nIs it to actually test that pg_stat_statements won't store more than\npg_stat_statements.max records? Note also that this query can't\nreturn 0 rows, as the currently executed query will have an entry\nadded during post_parse_analyze. Maybe a comment saying what this is\nactually testing would help.\n\nIt would also be good to test that pg_stat_statements_info.dealloc is\nmore than 0 once enough statements have been issued.\n\n> I have committed the patches 0002 and 0003 from the previous patch set\n> already.\n>\n> I have also enhanced the TAP test a bit to check the actual content of\n> the output across restarts.\n\nNothing much to say about this one, it all looks good.\n\n> I'm not too hung up on the 0001 patch if others don't like that approach.\n\nI agree with Michael on this one, the only times I saw this pattern\nwas to comply with some company internal policy for minimal coverage\nnumbers.\n\n\n",
"msg_date": "Fri, 29 Dec 2023 13:14:00 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements: more test coverage"
},
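A minimal sketch of the pg_stat_statements_info.dealloc check suggested above (the view exists in PostgreSQL 14 and later; the output column alias is only illustrative):

-- after issuing more distinct statements than pg_stat_statements.max allows,
-- at least one deallocation round should have happened
SELECT dealloc > 0 AS has_dealloc FROM pg_stat_statements_info;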
{
"msg_contents": "On 29.12.23 06:14, Julien Rouhaud wrote:\n\n> It looks good to me. One minor complaint, I'm a bit dubious about\n> those queries:\n> \n> SELECT count(*) <= 100 AND count(*) > 0 FROM pg_stat_statements;\n> \n> Is it to actually test that pg_stat_statements won't store more than\n> pg_stat_statements.max records? Note also that this query can't\n> return 0 rows, as the currently executed query will have an entry\n> added during post_parse_analyze. Maybe a comment saying what this is\n> actually testing would help.\n\nYeah, I think I added that query before I added the queries to check the \ncontents of pg_stat_statements.query itself, so it's a bit obsolete. I \nreworked that part.\n\n> It would also be good to test that pg_stat_statements_info.dealloc is\n> more than 0 once enough statements have been issued.\n\nI added that.\n\n> \n>> I have committed the patches 0002 and 0003 from the previous patch set\n>> already.\n>>\n>> I have also enhanced the TAP test a bit to check the actual content of\n>> the output across restarts.\n> \n> Nothing much to say about this one, it all looks good.\n\nOk, I have committed these two patches.\n\n>> I'm not too hung up on the 0001 patch if others don't like that approach.\n> \n> I agree with Michael on this one, the only times I saw this pattern\n> was to comply with some company internal policy for minimal coverage\n> numbers.\n\nOk, skipped that.\n\n\n",
"msg_date": "Sat, 30 Dec 2023 20:39:47 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements: more test coverage"
},
{
"msg_contents": "On Sat, Dec 30, 2023 at 08:39:47PM +0100, Peter Eisentraut wrote:\n> On 29.12.23 06:14, Julien Rouhaud wrote:\n>> I agree with Michael on this one, the only times I saw this pattern\n>> was to comply with some company internal policy for minimal coverage\n>> numbers.\n> \n> Ok, skipped that.\n\nJust to close the loop here. I thought that I had sent a patch on the\nlists that made use of these markers, but it looks like that's not the\ncase. The only thread I've found is this one:\nhttps://www.postgresql.org/message-id/d8f6bdd536df403b9b33816e9f7e0b9d@G08CNEXMBPEKD05.g08.fujitsu.local\n\n(FWIW, I'm still skeptic about the idea of painting more backend code\nwith these outside the parsing areas, but I'm OK to be outnumbered.)\n--\nMichael",
"msg_date": "Sun, 31 Dec 2023 09:31:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements: more test coverage"
},
{
"msg_contents": "On Sat, Dec 30, 2023 at 08:39:47PM +0100, Peter Eisentraut wrote:\n> Ok, I have committed these two patches.\n\nPlease note that the buildfarm has turned red, as in:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stagxe_log.pl?nm=pipit&dt=2023-12-31%2001%3A12%3A22&stg=misc-check\n\npg_stat_statements's regression.diffs holds more details:\nSELECT query FROM pg_stat_statements WHERE query LIKE '%t001%' OR query LIKE '%t098%' ORDER BY query;\n query\n --------------------\n- select * from t001\n select * from t098\n-(2 rows)\n+(1 row) \n--\nMichael",
"msg_date": "Sun, 31 Dec 2023 15:28:46 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements: more test coverage"
},
{
"msg_contents": "On Sun, Dec 31, 2023 at 2:28 PM Michael Paquier <[email protected]> wrote:\n>\n> On Sat, Dec 30, 2023 at 08:39:47PM +0100, Peter Eisentraut wrote:\n> > Ok, I have committed these two patches.\n>\n> Please note that the buildfarm has turned red, as in:\n> https://buildfarm.postgresql.org/cgi-bin/show_stagxe_log.pl?nm=pipit&dt=2023-12-31%2001%3A12%3A22&stg=misc-check\n>\n> pg_stat_statements's regression.diffs holds more details:\n> SELECT query FROM pg_stat_statements WHERE query LIKE '%t001%' OR query LIKE '%t098%' ORDER BY query;\n> query\n> --------------------\n> - select * from t001\n> select * from t098\n> -(2 rows)\n> +(1 row)\n\nThat's surprising. I wanted to see if there was any specific\nconfiguration but I get a 403. I'm wondering if this is only due to\nother tests being run concurrently evicting an entry earlier than\nplanned.\n\n\n",
"msg_date": "Sun, 31 Dec 2023 17:26:50 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements: more test coverage"
},
{
"msg_contents": "On 31.12.23 10:26, Julien Rouhaud wrote:\n> On Sun, Dec 31, 2023 at 2:28 PM Michael Paquier <[email protected]> wrote:\n>>\n>> On Sat, Dec 30, 2023 at 08:39:47PM +0100, Peter Eisentraut wrote:\n>>> Ok, I have committed these two patches.\n>>\n>> Please note that the buildfarm has turned red, as in:\n>> https://buildfarm.postgresql.org/cgi-bin/show_stagxe_log.pl?nm=pipit&dt=2023-12-31%2001%3A12%3A22&stg=misc-check\n>>\n>> pg_stat_statements's regression.diffs holds more details:\n>> SELECT query FROM pg_stat_statements WHERE query LIKE '%t001%' OR query LIKE '%t098%' ORDER BY query;\n>> query\n>> --------------------\n>> - select * from t001\n>> select * from t098\n>> -(2 rows)\n>> +(1 row)\n> \n> That's surprising. I wanted to see if there was any specific\n> configuration but I get a 403. I'm wondering if this is only due to\n> other tests being run concurrently evicting an entry earlier than\n> planned.\n\nThese tests are run in a separate instance and serially, so I don't \nthink concurrency is an issue.\n\nIt looks like the failing configurations are exactly all the big-endian \nones: s390x, sparc, powerpc. So it's possible that this is actually a \nbug? But unless someone can reproduce this locally and debug it, we \nshould probably revert this for now.\n\n\n\n",
"msg_date": "Sun, 31 Dec 2023 12:00:04 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements: more test coverage"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> It looks like the failing configurations are exactly all the big-endian \n> ones: s390x, sparc, powerpc. So it's possible that this is actually a \n> bug? But unless someone can reproduce this locally and debug it, we \n> should probably revert this for now.\n\nI see it failed on my animal mamba, so I should be able to reproduce\nit there. Will look.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Dec 2023 10:58:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements: more test coverage"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> It looks like the failing configurations are exactly all the big-endian \n> ones: s390x, sparc, powerpc. So it's possible that this is actually a \n> bug? But unless someone can reproduce this locally and debug it, we \n> should probably revert this for now.\n\nThe reason for the platform-dependent behavior is that we're dealing\nwith a bunch of entries with identical \"usage\", so it's just about\nrandom which ones actually get deleted. I do not think our qsort()\nhas platform-dependent behavior --- but the initial contents of\nentry_dealloc's array are filled in hash_seq_search order, and that\n*is* platform-dependent.\n\nNow, the test script realizes that hazard. The bug seems to be that\nit's wrong about how many competing usage-count-1 entries there are.\nInstrumenting the calls to entry_alloc (which'll call entry_dealloc\nwhen we hit 100 entries), I see\n\n2023-12-31 13:21:39.160 EST client backend[3764867] pg_regress/max LOG: calling entry_alloc for 'SELECT pg_stat_statements_reset() IS NOT', cur hash size 0\n2023-12-31 13:21:39.160 EST client backend[3764867] pg_regress/max LOG: calling entry_alloc for '$1', cur hash size 1\n2023-12-31 13:21:39.160 EST client backend[3764867] pg_regress/max CONTEXT: SQL expression \"1\"\n\tPL/pgSQL function inline_code_block line 3 at FOR with integer loop variable\n2023-12-31 13:21:39.160 EST client backend[3764867] pg_regress/max LOG: calling entry_alloc for 'format($3, lpad(i::text, $4, $5))', cur hash size 2\n2023-12-31 13:21:39.160 EST client backend[3764867] pg_regress/max CONTEXT: SQL expression \"format('select * from t%s', lpad(i::text, 3, '0'))\"\n\tPL/pgSQL function inline_code_block line 4 at EXECUTE\n2023-12-31 13:21:39.160 EST client backend[3764867] pg_regress/max LOG: calling entry_alloc for 'select * from t001', cur hash size 3\n2023-12-31 13:21:39.160 EST client backend[3764867] pg_regress/max CONTEXT: SQL statement \"select * from t001\"\n\tPL/pgSQL function inline_code_block line 4 at EXECUTE\n...\n2023-12-31 13:21:39.165 EST client backend[3764867] pg_regress/max LOG: calling entry_alloc for 'select * from t098', cur hash size 100\n2023-12-31 13:21:39.165 EST client backend[3764867] pg_regress/max CONTEXT: SQL statement \"select * from t098\"\n\tPL/pgSQL function inline_code_block line 4 at EXECUTE\n2023-12-31 13:21:39.165 EST client backend[3764867] pg_regress/max LOG: entry_dealloc: zapping 10 of 100 victims\n\nSo the dealloc happens sooner than the script expects, and it's pure\nchance that the test appeared to work anyway.\n\nThe test case needs to be rewritten to allow for more competing\nusage-count-1 entries than it currently does. Backing \"98\" down to\n\"95\" might be enough, but I've not experimented (and I'd recommend\nleaving more than the minimum amount of headroom, in case plpgsql\nchanges behavior about how many subexpressions get put into the\ntable).\n\nStrongly recommend that while fixing the test, you stick in some\ndebugging elogs to verify when the dealloc calls actually happen\nrather than assuming you predicted it correctly. I did it as\nattached.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 31 Dec 2023 13:42:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements: more test coverage"
}
] |
[
{
"msg_contents": "I have recently, once again for the umpteenth time, been involved in \ndiscussions around (paraphrasing) \"why does Postgres leak the passwords \ninto the logs when they are changed\". I know well that the canonical \nadvice is something like \"use psql with \\password if you care about that\".\n\nAnd while that works, it is a deeply unsatisfying answer for me to give \nand for the OP to receive.\n\nThe alternative is something like \"...well if you don't like that, use \nPQencryptPasswordConn() to roll your own solution that meets your \nsecurity needs\".\n\nAgain, not a spectacular answer IMHO. It amounts to \"here is a \ndo-it-yourself kit, go put it together\". It occurred to me that we can, \nand really should, do better.\n\nThe attached patch set moves the guts of \\password from psql into the \nlibpq client side -- PQchangePassword() (patch 0001).\n\nThe usage in psql serves as a ready built-in test for the libpq function \n(patch 0002). Docs included too (patch 0003).\n\nOne thing I have not done but, considered, is adding an additional \noptional parameter to allow \"VALID UNTIL\" to be set. Seems like it would \nbe useful to be able to set an expiration when setting a new password.\n\nI will register this in the upcoming commitfest, but meantime \nthought/comments/etc. would be gratefully received.\n\nThanks,\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 23 Dec 2023 10:13:58 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Password leakage avoidance"
},
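The leak described above comes from the plaintext ALTER ROLE/USER ... PASSWORD statement being written to the server log; psql's \password (and the proposed PQchangePassword) avoid it by hashing on the client and shipping only the verifier. A rough before/after sketch -- the role name and the SCRAM verifier below are made up for illustration, not real values:

-- what log_statement would capture with the naive approach:
ALTER USER alice PASSWORD 'new-plaintext-secret';
-- what the client-side-hashing approach sends instead:
ALTER USER alice PASSWORD 'SCRAM-SHA-256$4096:c2FsdHNhbHQ=$c3RvcmVkLWtleQ==:c2VydmVyLWtleQ==';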
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> The attached patch set moves the guts of \\password from psql into the \n> libpq client side -- PQchangePassword() (patch 0001).\n\nHaven't really read the patch, just looked at the docs, but here's\na bit of bikeshedding:\n\n* This seems way too eager to promote the use of md5. Surely the\ndefault ought to be SCRAM, full stop. I question whether we even\nneed an algorithm parameter. Perhaps it's a good idea for\nfuture-proofing, but we could also plan that the function would\nmake its own decisions based on noting the server's version.\n(libpq is far more likely to be au courant about what to do than\nthe calling application, IMO.)\n\n* Parameter order seems a bit odd: to me it'd be natural to write\nuser before password.\n\n* Docs claim return type is char *, but then say bool (which is\nalso what the code says). We do not use bool in libpq's API;\nthe admittedly-hoary convention is to use int with 1/0 values.\nRather than quibble about that, though, I wonder if we should\nmake the result be the PGresult from the command, which the\ncaller would be responsible to free. That would document error\nconditions much more flexibly. On the downside, client-side\nerrors would be harder to report, particularly OOM, but I think\nwe have solutions for that in existing functions.\n\n* The whole business of nonblock mode seems fairly messy here,\nand I wonder whether it's worth the trouble to support. If we\ndo want to claim it works then it ought to be tested.\n\n> One thing I have not done but, considered, is adding an additional \n> optional parameter to allow \"VALID UNTIL\" to be set. Seems like it would \n> be useful to be able to set an expiration when setting a new password.\n\nNo strong opinion about that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 23 Dec 2023 11:00:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "Dave Cramer\nwww.postgres.rocks\n\n\nOn Sat, 23 Dec 2023 at 11:00, Tom Lane <[email protected]> wrote:\n\n> Joe Conway <[email protected]> writes:\n> > The attached patch set moves the guts of \\password from psql into the\n> > libpq client side -- PQchangePassword() (patch 0001).\n>\n> Haven't really read the patch, just looked at the docs, but here's\n> a bit of bikeshedding:\n>\n> * This seems way too eager to promote the use of md5. Surely the\n> default ought to be SCRAM, full stop. I question whether we even\n> need an algorithm parameter. Perhaps it's a good idea for\n> future-proofing, but we could also plan that the function would\n> make its own decisions based on noting the server's version.\n> (libpq is far more likely to be au courant about what to do than\n> the calling application, IMO.)\n>\n\nUsing the server version has some issues. It's quite possible to encrypt a\nuser password with md5 when the server version is scram. So if you change\nthe encryption then pg_hba.conf would have to be updated to allow the user\nto log back in.\n\nDave\n\nDave Cramerwww.postgres.rocksOn Sat, 23 Dec 2023 at 11:00, Tom Lane <[email protected]> wrote:Joe Conway <[email protected]> writes:\n> The attached patch set moves the guts of \\password from psql into the \n> libpq client side -- PQchangePassword() (patch 0001).\n\nHaven't really read the patch, just looked at the docs, but here's\na bit of bikeshedding:\n\n* This seems way too eager to promote the use of md5. Surely the\ndefault ought to be SCRAM, full stop. I question whether we even\nneed an algorithm parameter. Perhaps it's a good idea for\nfuture-proofing, but we could also plan that the function would\nmake its own decisions based on noting the server's version.\n(libpq is far more likely to be au courant about what to do than\nthe calling application, IMO.)Using the server version has some issues. It's quite possible to encrypt a user password with md5 when the server version is scram. So if you change the encryption then pg_hba.conf would have to be updated to allow the user to log back in. Dave",
"msg_date": "Sun, 24 Dec 2023 07:13:13 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 12/23/23 11:00, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> The attached patch set moves the guts of \\password from psql into the \n>> libpq client side -- PQchangePassword() (patch 0001).\n> \n> Haven't really read the patch, just looked at the docs, but here's\n> a bit of bikeshedding:\n\nThanks!\n\n> * This seems way too eager to promote the use of md5. Surely the\n> default ought to be SCRAM, full stop. I question whether we even\n> need an algorithm parameter. Perhaps it's a good idea for\n> future-proofing, but we could also plan that the function would\n> make its own decisions based on noting the server's version.\n> (libpq is far more likely to be au courant about what to do than\n> the calling application, IMO.)\n\n> * Parameter order seems a bit odd: to me it'd be natural to write\n> user before password.\n\n> * Docs claim return type is char *, but then say bool (which is\n> also what the code says). We do not use bool in libpq's API;\n> the admittedly-hoary convention is to use int with 1/0 values.\n> Rather than quibble about that, though, I wonder if we should\n> make the result be the PGresult from the command, which the\n> caller would be responsible to free. That would document error\n> conditions much more flexibly. On the downside, client-side\n> errors would be harder to report, particularly OOM, but I think\n> we have solutions for that in existing functions.\n\n> * The whole business of nonblock mode seems fairly messy here,\n> and I wonder whether it's worth the trouble to support. If we\n> do want to claim it works then it ought to be tested.\n\nAll of these (except for the doc \"char *\" cut-n-pasteo) were due to \ntrying to stay close to the same interface as PQencryptPasswordConn().\n\nBut I agree with your assessment and the attached patch series addresses \nall of them.\n\nThe original use of PQencryptPasswordConn() in psql passed a NULL for \nthe algorithm, so I dropped that argument entirely. I also swapped user \nand password arguments because as you pointed out that is more natural.\n\nThis version returns PGresult. As far as special handling for \nclient-side errors like OOM, I don't see anything beyond returning a \nNULL to signify fatal error, e,g,:\n\n8<--------------\nPGresult *\nPQmakeEmptyPGresult(PGconn *conn, ExecStatusType status)\n{\n\tPGresult *result;\n\n\tresult = (PGresult *) malloc(sizeof(PGresult));\n\tif (!result)\n\t\treturn NULL;\n8<--------------\n\nThat is the approach I took.\n\n>> One thing I have not done but, considered, is adding an additional \n>> optional parameter to allow \"VALID UNTIL\" to be set. Seems like it would \n>> be useful to be able to set an expiration when setting a new password.\n> \n> No strong opinion about that.\n\nThanks -- hopefully others will weigh in on that.\n\nCompletely unrelated process bikeshedding:\nI changed the naming scheme I used for the split patch-set this time. 
I \ndon't know if we have a settled/documented pattern for such naming, but \nthe original pattern which I borrowed from someone else's patches was \n\"vX-NNNN-description.patch\".\n\nThe problems I have with that are 1/ there may well be more that 10 \nversions of a patch-set, 2/ there are probably never going to be more \nthan 2 digits worth of files in a patch-set, and 3/ the description \ncoming after the version and file identifiers causes the patches in my \nlocal directory to sort poorly, intermingling several unrelated patch-sets.\n\nThe new pattern I picked is \"description-vXXX-NN.patch\" which fixes all \nof those issues. Does that bother anyone? *Should* we try to agree on a \ndesired pattern (assuming there is not one already)?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 24 Dec 2023 10:14:27 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 12/24/23 10:14 AM, Joe Conway wrote:\r\n> On 12/23/23 11:00, Tom Lane wrote:\r\n>> Joe Conway <[email protected]> writes:\r\n>>> The attached patch set moves the guts of \\password from psql into the \r\n>>> libpq client side -- PQchangePassword() (patch 0001).\r\n>>\r\n>> Haven't really read the patch, just looked at the docs, but here's\r\n>> a bit of bikeshedding:\r\n> \r\n> Thanks!\r\n\r\nPrior to bikeshedding -- thanks for putting this together. This should \r\ngenerally helpful, as it allows libpq-based drivers to adopt this method \r\nand provide a safer mechanism for setting/changing passwords! (which we \r\nshould also promote once availbale).\r\n\r\n>> * This seems way too eager to promote the use of md5. Surely the\r\n>> default ought to be SCRAM, full stop. I question whether we even\r\n>> need an algorithm parameter. Perhaps it's a good idea for\r\n>> future-proofing, but we could also plan that the function would\r\n>> make its own decisions based on noting the server's version.\r\n>> (libpq is far more likely to be au courant about what to do than\r\n>> the calling application, IMO.)\r\n\r\nWe're likely to have new algorithms in the future, as there is a draft \r\nRFC for updating the SCRAM hashes, and already some regulatory bodies \r\nare looking to deprecate SHA256. My concern with relying on the \r\n\"encrypted_password\" GUC (which is why PQencryptPasswordConn takes \r\n\"conn\") makes it any easier for users to choose the algorithm, or if \r\nthey need to rely on the server/session setting.\r\n\r\nI guess in its current state, it does, and drivers could mask some of \r\nthe complexity.\r\n\r\n>>> One thing I have not done but, considered, is adding an additional \r\n>>> optional parameter to allow \"VALID UNTIL\" to be set. Seems like it \r\n>>> would be useful to be able to set an expiration when setting a new \r\n>>> password.\r\n>>\r\n>> No strong opinion about that.\r\n> \r\n> Thanks -- hopefully others will weigh in on that.\r\n\r\nI think this is reasonable to add.\r\n\r\nI think this is a good start and adds something that's better than what \r\nwe have today. However, it seems like we also need something for \"CREATE \r\nROLE\", otherwise we're either asking users to set passwords in two \r\nsteps, or allowing for the unencrypted password to leak to the logs via \r\nCREATE ROLE.\r\n\r\nMaybe we need a PQcreaterole that provide the mechanism to set passwords \r\nsafely. It'd likely need to take all the options need for creating a \r\nrole, but that would at least give the underlying mechanism to ensure \r\nwe're always sending a hashed password to the server.\r\n\r\nJonathan",
"msg_date": "Sun, 24 Dec 2023 12:06:03 -0500",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "\"Jonathan S. Katz\" <[email protected]> writes:\n> I think this is a good start and adds something that's better than what \n> we have today. However, it seems like we also need something for \"CREATE \n> ROLE\", otherwise we're either asking users to set passwords in two \n> steps, or allowing for the unencrypted password to leak to the logs via \n> CREATE ROLE.\n\n> Maybe we need a PQcreaterole that provide the mechanism to set passwords \n> safely. It'd likely need to take all the options need for creating a \n> role, but that would at least give the underlying mechanism to ensure \n> we're always sending a hashed password to the server.\n\nI'm kind of down on that, because it seems like it'd be quite hard to\ndesign an easy-to-use C API that doesn't break the next time somebody\nadds another option to CREATE USER. What's so wrong with suggesting\nthat the password be set in a separate step? (For comparison, typical\nUnix utilities like useradd(8) also tell you to set the password\nseparately.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Dec 2023 12:15:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
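For reference, the separate-step flow described just above is already workable from psql today; the role name is only an example:

-- create the role first, with no password
CREATE ROLE alice LOGIN;
-- then set it interactively; psql hashes the password client-side and
-- sends only the resulting verifier in the ALTER USER command
\password alice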
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> Completely unrelated process bikeshedding:\n> I changed the naming scheme I used for the split patch-set this time. I \n> don't know if we have a settled/documented pattern for such naming, but \n> the original pattern which I borrowed from someone else's patches was \n> \"vX-NNNN-description.patch\".\n\nAs far as that goes, that filename pattern is what is generated by\n\"git format-patch\". I agree that the digit-count choices are a tad\nodd, but they're not so awful as to be worth trying to override.\n\n> The new pattern I picked is \"description-vXXX-NN.patch\" which fixes all \n> of those issues.\n\nOnly if you use the same \"description\" for all patches of a series,\nwhich seems kind of not the point. In any case, \"git format-patch\"\nis considered best practice for a multi-patch series AFAIK, so we\nhave to cope with its ideas about how to name the files.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Dec 2023 12:22:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 12/24/23 12:22, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> Completely unrelated process bikeshedding:\n>> I changed the naming scheme I used for the split patch-set this time. I \n>> don't know if we have a settled/documented pattern for such naming, but \n>> the original pattern which I borrowed from someone else's patches was \n>> \"vX-NNNN-description.patch\".\n> \n> As far as that goes, that filename pattern is what is generated by\n> \"git format-patch\". I agree that the digit-count choices are a tad\n> odd, but they're not so awful as to be worth trying to override.\n\n\nAh, knew it was something like that. I am still a curmudgeon doing \nthings the old way ¯\\_(ツ)_/¯\n\n\n>> The new pattern I picked is \"description-vXXX-NN.patch\" which fixes all \n>> of those issues.\n> \n> Only if you use the same \"description\" for all patches of a series,\n> which seems kind of not the point. In any case, \"git format-patch\"\n> is considered best practice for a multi-patch series AFAIK, so we\n> have to cope with its ideas about how to name the files.\nEven if I wanted some differentiating name for the individual patches in \na set, I still like them to be grouped because it is one unit of work \nfrom my perspective.\n\nOh well, I guess I will get with the program and put every patch-set \ninto its own directory.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sun, 24 Dec 2023 13:02:01 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> Oh well, I guess I will get with the program and put every patch-set \n> into its own directory.\n\nYeah, that's what I've started doing too. It does have some\nadvantages, in that you can squirrel away other related files\nin the same subdirectory.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Dec 2023 13:11:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 23.12.23 16:13, Joe Conway wrote:\n> I have recently, once again for the umpteenth time, been involved in \n> discussions around (paraphrasing) \"why does Postgres leak the passwords \n> into the logs when they are changed\". I know well that the canonical \n> advice is something like \"use psql with \\password if you care about that\".\n> \n> And while that works, it is a deeply unsatisfying answer for me to give \n> and for the OP to receive.\n> \n> The alternative is something like \"...well if you don't like that, use \n> PQencryptPasswordConn() to roll your own solution that meets your \n> security needs\".\n> \n> Again, not a spectacular answer IMHO. It amounts to \"here is a \n> do-it-yourself kit, go put it together\". It occurred to me that we can, \n> and really should, do better.\n> \n> The attached patch set moves the guts of \\password from psql into the \n> libpq client side -- PQchangePassword() (patch 0001).\n> \n> The usage in psql serves as a ready built-in test for the libpq function \n> (patch 0002). Docs included too (patch 0003).\n\nI don't follow how you get from the problem statement to this solution. \nThis proposal doesn't avoid password leakage, does it? It just provides \na different way to phrase the existing solution. Who is a potential \nuser of this solution? Right now it just saves a dozen lines in psql, \nbut it's not clear how it improves anything else.\n\n\n\n",
"msg_date": "Wed, 27 Dec 2023 21:39:48 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 12/27/23 15:39, Peter Eisentraut wrote:\n> On 23.12.23 16:13, Joe Conway wrote:\n>> I have recently, once again for the umpteenth time, been involved in \n>> discussions around (paraphrasing) \"why does Postgres leak the passwords \n>> into the logs when they are changed\". I know well that the canonical \n>> advice is something like \"use psql with \\password if you care about that\".\n>> \n>> And while that works, it is a deeply unsatisfying answer for me to give \n>> and for the OP to receive.\n>> \n>> The alternative is something like \"...well if you don't like that, use \n>> PQencryptPasswordConn() to roll your own solution that meets your \n>> security needs\".\n>> \n>> Again, not a spectacular answer IMHO. It amounts to \"here is a \n>> do-it-yourself kit, go put it together\". It occurred to me that we can, \n>> and really should, do better.\n>> \n>> The attached patch set moves the guts of \\password from psql into the \n>> libpq client side -- PQchangePassword() (patch 0001).\n>> \n>> The usage in psql serves as a ready built-in test for the libpq function \n>> (patch 0002). Docs included too (patch 0003).\n> \n> I don't follow how you get from the problem statement to this solution.\n> This proposal doesn't avoid password leakage, does it?\n\nYes, it most certainly does. The plaintext password would never be seen \nby the server and therefore never logged. This is exactly why the \nfeature already existed in psql.\n\n> It just provides a different way to phrase the existing solution.\n\nYes, a fully built one that is convenient to use, and does not ask \neveryone to roll their own.\n\n> Who is a potential user of this solution? \n\nLiterally every company that has complained that Postgres pollutes their \nlogs with plaintext passwords. I have heard the request to provide a \nbetter solution many times, over many years, while working for three \ndifferent companies.\n\n> Right now it just saves a dozen lines in psql, but it's not clear how\n> it improves anything else.\n\nIt is to me, and so far no one else has complained about that. More \nopinions would be welcomed of course.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 27 Dec 2023 15:53:35 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> On 12/27/23 15:39, Peter Eisentraut wrote:\n>> On 23.12.23 16:13, Joe Conway wrote:\n>>> The attached patch set moves the guts of \\password from psql into the \n>>> libpq client side -- PQchangePassword() (patch 0001).\n\n>> I don't follow how you get from the problem statement to this solution.\n>> This proposal doesn't avoid password leakage, does it?\n>> It just provides a different way to phrase the existing solution.\n\n> Yes, a fully built one that is convenient to use, and does not ask \n> everyone to roll their own.\n\nIt's convenient for users of libpq, I guess, but it doesn't help\nanyone not writing C code directly atop libpq. If this is the\nway forward then we need to also press JDBC and other client\nlibraries to implement comparable functionality. That's within\nthe realm of sanity surely, and having a well-thought-through\nreference implementation in libpq would help those authors.\nSo I don't think this is a strike against the patch; but the answer\nto Peter's question has to be that this is just part of the solution.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Dec 2023 16:09:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On Wed, 27 Dec 2023 at 16:10, Tom Lane <[email protected]> wrote:\n\n> Joe Conway <[email protected]> writes:\n> > On 12/27/23 15:39, Peter Eisentraut wrote:\n> >> On 23.12.23 16:13, Joe Conway wrote:\n> >>> The attached patch set moves the guts of \\password from psql into the\n> >>> libpq client side -- PQchangePassword() (patch 0001).\n>\n> >> I don't follow how you get from the problem statement to this solution.\n> >> This proposal doesn't avoid password leakage, does it?\n> >> It just provides a different way to phrase the existing solution.\n>\n> > Yes, a fully built one that is convenient to use, and does not ask\n> > everyone to roll their own.\n>\n> It's convenient for users of libpq, I guess, but it doesn't help\n> anyone not writing C code directly atop libpq. If this is the\n> way forward then we need to also press JDBC and other client\n> libraries to implement comparable functionality. That's within\n> the realm of sanity surely, and having a well-thought-through\n> reference implementation in libpq would help those authors.\n> So I don't think this is a strike against the patch; but the answer\n> to Peter's question has to be that this is just part of the solution.\n>\n\nAlready have one in the works for JDBC, actually predates this.\nhttps://github.com/pgjdbc/pgjdbc/pull/3067\n\nDave\n\nOn Wed, 27 Dec 2023 at 16:10, Tom Lane <[email protected]> wrote:Joe Conway <[email protected]> writes:\n> On 12/27/23 15:39, Peter Eisentraut wrote:\n>> On 23.12.23 16:13, Joe Conway wrote:\n>>> The attached patch set moves the guts of \\password from psql into the \n>>> libpq client side -- PQchangePassword() (patch 0001).\n\n>> I don't follow how you get from the problem statement to this solution.\n>> This proposal doesn't avoid password leakage, does it?\n>> It just provides a different way to phrase the existing solution.\n\n> Yes, a fully built one that is convenient to use, and does not ask \n> everyone to roll their own.\n\nIt's convenient for users of libpq, I guess, but it doesn't help\nanyone not writing C code directly atop libpq. If this is the\nway forward then we need to also press JDBC and other client\nlibraries to implement comparable functionality. That's within\nthe realm of sanity surely, and having a well-thought-through\nreference implementation in libpq would help those authors.\nSo I don't think this is a strike against the patch; but the answer\nto Peter's question has to be that this is just part of the solution.Already have one in the works for JDBC, actually predates this. https://github.com/pgjdbc/pgjdbc/pull/3067Dave",
"msg_date": "Wed, 27 Dec 2023 16:11:17 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 12/27/23 16:09, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> On 12/27/23 15:39, Peter Eisentraut wrote:\n>>> On 23.12.23 16:13, Joe Conway wrote:\n>>>> The attached patch set moves the guts of \\password from psql into the \n>>>> libpq client side -- PQchangePassword() (patch 0001).\n> \n>>> I don't follow how you get from the problem statement to this solution.\n>>> This proposal doesn't avoid password leakage, does it?\n>>> It just provides a different way to phrase the existing solution.\n> \n>> Yes, a fully built one that is convenient to use, and does not ask \n>> everyone to roll their own.\n> \n> It's convenient for users of libpq, I guess, but it doesn't help\n> anyone not writing C code directly atop libpq. If this is the\n> way forward then we need to also press JDBC and other client\n> libraries to implement comparable functionality. That's within\n> the realm of sanity surely, and having a well-thought-through\n> reference implementation in libpq would help those authors.\n> So I don't think this is a strike against the patch; but the answer\n> to Peter's question has to be that this is just part of the solution.\n\nAs mentioned downthread by Dave Cramer, JDBC is already onboard.\n\nAnd as Jonathan said in an adjacent part of the thread:\n> This should generally helpful, as it allows libpq-based drivers to\n> adopt this method and provide a safer mechanism for setting/changing\n> passwords! (which we should also promote once availbale).\n\nWhich is definitely something I have had in mind all along.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 27 Dec 2023 16:26:22 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 12/24/23 12:15 PM, Tom Lane wrote:\r\n\r\n>> Maybe we need a PQcreaterole that provide the mechanism to set passwords\r\n>> safely. It'd likely need to take all the options need for creating a\r\n>> role, but that would at least give the underlying mechanism to ensure\r\n>> we're always sending a hashed password to the server.\r\n> \r\n> I'm kind of down on that, because it seems like it'd be quite hard to\r\n> design an easy-to-use C API that doesn't break the next time somebody\r\n> adds another option to CREATE USER. What's so wrong with suggesting\r\n> that the password be set in a separate step? (For comparison, typical\r\n> Unix utilities like useradd(8) also tell you to set the password\r\n> separately.)\r\n\r\nModern development frameworks tend to reduce things down to one-step, \r\neven fairly complex operations. Additionally, a lot of these frameworks \r\ndon't even require a developer to build backend applications that \r\ninvolve doing actually work on the backend (e.g. UNIX), so the approach \r\nof useradd(8) et al. are not familiar. Adding the additional step would \r\nlead to errors, e.g. the developer not calling the \"change password\" \r\nfunction to create the obfuscated password. Granted, we can push the \r\nproblem down to driver authors to \"be better\" and simplify the process \r\nfor their end users, but that still can be error prone, having seen this \r\nwith driver authors implementing PostgreSQL SCRAM and having made \r\n(correctable) mistakes that could have lead to security issues.\r\n\r\nThat said, I see why trying to keep track of all of the \"CREATE ROLE\" \r\nattributes from a C API can be cumbersome, so perhaps we could end up \r\nadding an API that just does \"create-user-with-password\" and applies a \r\nsimilar method to Joe's patch. That would also align with the developer \r\nexperience above, as in those cases users tend to just be created with a \r\npassword w/o any of the additional role options.\r\n\r\nAlso open to punting this to a different thread as we can at least make \r\nthings better with the \"change password\" approach.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Wed, 27 Dec 2023 15:31:21 -0600",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 10:31 PM Jonathan S. Katz <[email protected]> wrote:\n>\n> On 12/24/23 12:15 PM, Tom Lane wrote:\n>\n> >> Maybe we need a PQcreaterole that provide the mechanism to set passwords\n> >> safely. It'd likely need to take all the options need for creating a\n> >> role, but that would at least give the underlying mechanism to ensure\n> >> we're always sending a hashed password to the server.\n> >\n> > I'm kind of down on that, because it seems like it'd be quite hard to\n> > design an easy-to-use C API that doesn't break the next time somebody\n> > adds another option to CREATE USER. What's so wrong with suggesting\n> > that the password be set in a separate step? (For comparison, typical\n> > Unix utilities like useradd(8) also tell you to set the password\n> > separately.)\n>\n> Modern development frameworks tend to reduce things down to one-step,\n> even fairly complex operations. Additionally, a lot of these frameworks\n> don't even require a developer to build backend applications that\n> involve doing actually work on the backend (e.g. UNIX), so the approach\n> of useradd(8) et al. are not familiar. Adding the additional step would\n> lead to errors, e.g. the developer not calling the \"change password\"\n> function to create the obfuscated password. Granted, we can push the\n> problem down to driver authors to \"be better\" and simplify the process\n> for their end users, but that still can be error prone, having seen this\n> with driver authors implementing PostgreSQL SCRAM and having made\n> (correctable) mistakes that could have lead to security issues.\n\nThis seems to confuse \"driver\" with \"framework\".\n\nI would say the \"two step\" approach is perfectly valid for a driver\nwhereas as you say most people building say webapps or similar on top\nof a framework will expect it to handle things for them. But that's\nmore of a framework thing than a driver thing, depending on\nterminology. E.g. it would be up to the \"Postgres support driver\" in\ndjango/rails/whatnot to reduce it down to one step, not to a low level\ndriver like libpq (or other low level drivers).\n\nNone of those frameworks are likely to want to require direct driver\naccess anyway, they *want* to take control of that process in my\nexperience.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 31 Dec 2023 15:50:05 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 12/31/23 9:50 AM, Magnus Hagander wrote:\r\n> On Wed, Dec 27, 2023 at 10:31 PM Jonathan S. Katz <[email protected]> wrote:\r\n>>\r\n>> On 12/24/23 12:15 PM, Tom Lane wrote:\r\n>>\r\n>>>> Maybe we need a PQcreaterole that provide the mechanism to set passwords\r\n>>>> safely. It'd likely need to take all the options need for creating a\r\n>>>> role, but that would at least give the underlying mechanism to ensure\r\n>>>> we're always sending a hashed password to the server.\r\n>>>\r\n>>> I'm kind of down on that, because it seems like it'd be quite hard to\r\n>>> design an easy-to-use C API that doesn't break the next time somebody\r\n>>> adds another option to CREATE USER. What's so wrong with suggesting\r\n>>> that the password be set in a separate step? (For comparison, typical\r\n>>> Unix utilities like useradd(8) also tell you to set the password\r\n>>> separately.)\r\n>>\r\n>> Modern development frameworks tend to reduce things down to one-step,\r\n>> even fairly complex operations. Additionally, a lot of these frameworks\r\n>> don't even require a developer to build backend applications that\r\n>> involve doing actually work on the backend (e.g. UNIX), so the approach\r\n>> of useradd(8) et al. are not familiar. Adding the additional step would\r\n>> lead to errors, e.g. the developer not calling the \"change password\"\r\n>> function to create the obfuscated password. Granted, we can push the\r\n>> problem down to driver authors to \"be better\" and simplify the process\r\n>> for their end users, but that still can be error prone, having seen this\r\n>> with driver authors implementing PostgreSQL SCRAM and having made\r\n>> (correctable) mistakes that could have lead to security issues.\r\n> \r\n> This seems to confuse \"driver\" with \"framework\".\r\n> \r\n> I would say the \"two step\" approach is perfectly valid for a driver\r\n> whereas as you say most people building say webapps or similar on top\r\n> of a framework will expect it to handle things for them. But that's\r\n> more of a framework thing than a driver thing, depending on\r\n> terminology. E.g. it would be up to the \"Postgres support driver\" in\r\n> django/rails/whatnot to reduce it down to one step, not to a low level\r\n> driver like libpq (or other low level drivers).\r\n> \r\n> None of those frameworks are likely to want to require direct driver\r\n> access anyway, they *want* to take control of that process in my\r\n> experience.\r\n\r\nFair point on the framework/driver comparison, but the above still \r\napplies to drivers. As mentioned above, non-libpq drivers did have \r\nmistakes that could have lead to security issues while implementing \r\nPostgreSQL SCRAM. Additionally, CVE-2021-23222[1] presented itself in \r\nboth libpq/non-libpq drivers, either through the issue itself, or \r\nthrough implementing the protocol step in a way similar to libpq.\r\n\r\nKeeping the implementation surface area simpler for driver maintainers \r\ndoes generally help mitigate further issues, though I'd defer to the \r\ndriver maintainers if they agree with that statement.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/support/security/CVE-2021-23222/",
"msg_date": "Sun, 31 Dec 2023 10:20:53 -0500",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "Having worked on and just about wrapped up the JDBC driver patch for this,\ncouple thoughts:\n\n1. There's two sets of defaults, the client program's default and the\nserver's default. Need to pick one for each implemented function. They\ndon't need to be the same across the board.\n2. Password encoding should be split out and made available as its own\nfunctions. Not just as part of a wider \"create _or_ alter a user's\npassword\" function attached to a connection. We went a step further and\nadded an intermediate function that generates the \"ALTER USER ... PASSWORD\"\nSQL.\n3. We only added an \"ALTER USER ... PASSWORD\" function, not anything to\ncreate a user. There's way too many options for that and keeping this\ntargeted at just assigning passwords makes it much easier to test.\n4. RE:defaults, the PGJDBC approach is that the encoding-only function uses\nthe driver's default (SCRAM). The \"alterUserPassword(...)\" uses the\nserver's default (again usually SCRAM for modern installs but could be\nsomething else). It's kind of odd that they're different but the use cases\nare different as well.\n5. Our SCRAM specific function allows for customizing the algo iteration\nand salt parameters. That topic came up on hackers previously[1]. Our high\nlevel \"alterUserPassword(...)\" function does not have those options but it\nis available as part of our PasswordUtil SCRAM API for advanced users who\nwant to leverage it. The higher level functions have defaults for iteration\ncounts (4096) and salt size (16-bytes).\n6. Getting the verbiage right for the PGJDBC version was kind of annoying\nas we wanted to match the server's wording, e.g. \"password_encryption\", but\nit's clearly hashing, not encryption. We settled on \"password encoding\" for\ndescribing the overall task with the comments referencing the server's\nusage of the term \"password_encryption\". Found a similar topic[2] on\nchanging that recently as well but looks like that's not going anywhere.\n\n[1]:\nhttps://www.postgresql.org/message-id/1d669d97-86b3-a5dc-9f02-c368bca911f6%40iki.fi\n[2]:\nhttps://www.postgresql.org/message-id/flat/ZV149Fd2JG_OF7CM%40momjian.us#cc97d20ff357a9e9264d4ae14e96e566\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n\nHaving worked on and just about wrapped up the JDBC driver patch for this, couple thoughts:1. There's two sets of defaults, the client program's default and the server's default. Need to pick one for each implemented function. They don't need to be the same across the board.2. Password encoding should be split out and made available as its own functions. Not just as part of a wider \"create _or_ alter a user's password\" function attached to a connection. We went a step further and added an intermediate function that generates the \"ALTER USER ... PASSWORD\" SQL.3. We only added an \"ALTER USER ... PASSWORD\" function, not anything to create a user. There's way too many options for that and keeping this targeted at just assigning passwords makes it much easier to test.4. RE:defaults, the PGJDBC approach is that the encoding-only function uses the driver's default (SCRAM). The \"alterUserPassword(...)\" uses the server's default (again usually SCRAM for modern installs but could be something else). It's kind of odd that they're different but the use cases are different as well.5. Our SCRAM specific function allows for customizing the algo iteration and salt parameters. That topic came up on hackers previously[1]. 
Our high level \"alterUserPassword(...)\" function does not have those options but it is available as part of our PasswordUtil SCRAM API for advanced users who want to leverage it. The higher level functions have defaults for iteration counts (4096) and salt size (16-bytes).6. Getting the verbiage right for the PGJDBC version was kind of annoying as we wanted to match the server's wording, e.g. \"password_encryption\", but it's clearly hashing, not encryption. We settled on \"password encoding\" for describing the overall task with the comments referencing the server's usage of the term \"password_encryption\". Found a similar topic[2] on changing that recently as well but looks like that's not going anywhere.[1]: https://www.postgresql.org/message-id/1d669d97-86b3-a5dc-9f02-c368bca911f6%40iki.fi[2]: https://www.postgresql.org/message-id/flat/ZV149Fd2JG_OF7CM%40momjian.us#cc97d20ff357a9e9264d4ae14e96e566Regards,-- Sehrope SarkuniFounder & CEO | JackDB, Inc. | https://www.jackdb.com/",
"msg_date": "Tue, 2 Jan 2024 07:23:31 -0500",
"msg_from": "Sehrope Sarkuni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On Sun, Dec 24, 2023 at 12:06 PM Jonathan S. Katz <[email protected]> wrote:\n> We're likely to have new algorithms in the future, as there is a draft\n> RFC for updating the SCRAM hashes, and already some regulatory bodies\n> are looking to deprecate SHA256. My concern with relying on the\n> \"encrypted_password\" GUC (which is why PQencryptPasswordConn takes\n> \"conn\") makes it any easier for users to choose the algorithm, or if\n> they need to rely on the server/session setting.\n\nYeah, I agree. It doesn't make much sense to me to propose that a GUC,\nwhich is a server-side setting, should control client-side behavior.\n\nAlso, +1 for the general idea. I don't think this is a whole answer to\nthe problem of passwords appearing in log files because (1) you have\nto be using libpq in order to make use of this and (2) you have to\nactually use it instead of just doing something else and complaining\nabout the problem. But neither of those things is a reason not to have\nit. There's no reason why a sophisticated user who goes through libpq\nshouldn't have an easy way to do this instead of being asked to\nreimplement it if they want the functionality.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Jan 2024 08:53:17 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On Wed, 3 Jan 2024 at 08:53, Robert Haas <[email protected]> wrote:\n\n> On Sun, Dec 24, 2023 at 12:06 PM Jonathan S. Katz <[email protected]>\n> wrote:\n> > We're likely to have new algorithms in the future, as there is a draft\n> > RFC for updating the SCRAM hashes, and already some regulatory bodies\n> > are looking to deprecate SHA256. My concern with relying on the\n> > \"encrypted_password\" GUC (which is why PQencryptPasswordConn takes\n> > \"conn\") makes it any easier for users to choose the algorithm, or if\n> > they need to rely on the server/session setting.\n>\n> Yeah, I agree. It doesn't make much sense to me to propose that a GUC,\n> which is a server-side setting, should control client-side behavior.\n>\n> Also, +1 for the general idea. I don't think this is a whole answer to\n> the problem of passwords appearing in log files because (1) you have\n> to be using libpq in order to make use of this\n\n\nJDBC has it as of yesterday. I would imagine other clients will implement\nit.\nDave Cramer\n\n>\n>\n\nOn Wed, 3 Jan 2024 at 08:53, Robert Haas <[email protected]> wrote:On Sun, Dec 24, 2023 at 12:06 PM Jonathan S. Katz <[email protected]> wrote:\n> We're likely to have new algorithms in the future, as there is a draft\n> RFC for updating the SCRAM hashes, and already some regulatory bodies\n> are looking to deprecate SHA256. My concern with relying on the\n> \"encrypted_password\" GUC (which is why PQencryptPasswordConn takes\n> \"conn\") makes it any easier for users to choose the algorithm, or if\n> they need to rely on the server/session setting.\n\nYeah, I agree. It doesn't make much sense to me to propose that a GUC,\nwhich is a server-side setting, should control client-side behavior.\n\nAlso, +1 for the general idea. I don't think this is a whole answer to\nthe problem of passwords appearing in log files because (1) you have\nto be using libpq in order to make use of thisJDBC has it as of yesterday. I would imagine other clients will implement it.Dave Cramer",
"msg_date": "Wed, 3 Jan 2024 08:59:50 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 1/2/24 7:23 AM, Sehrope Sarkuni wrote:\r\n> Having worked on and just about wrapped up the JDBC driver patch for \r\n> this, couple thoughts:\r\n\r\n> 2. Password encoding should be split out and made available as its own \r\n> functions. Not just as part of a wider \"create _or_ alter a user's \r\n> password\" function attached to a connection. We went a step further and \r\n> added an intermediate function that generates the \"ALTER USER ... \r\n> PASSWORD\" SQL.\r\n\r\nI agree with this. It's been a minute, but I had done some refactoring \r\non the backend-side to support the \"don't need a connection\" case for \r\nSCRAM secret generation functions on the server-side[1]. But I think in \r\ngeneral we should split out the password generation functions, which \r\nleads to:\r\n\r\n> 5. Our SCRAM specific function allows for customizing the algo iteration \r\n> and salt parameters. That topic came up on hackers previously[1]. Our \r\n> high level \"alterUserPassword(...)\" function does not have those options \r\n> but it is available as part of our PasswordUtil SCRAM API for advanced \r\n> users who want to leverage it. The higher level functions have defaults \r\n> for iteration counts (4096) and salt size (16-bytes).\r\n\r\nThis seems like a good approach -- the regular function just has the \r\ndefaults (which can be aligned to the PostgreSQL defaults) (or inherit \r\nfrom the server configuration, which then requires the connection to be \r\npresent) and then have a more advanced API available.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/3a9b7126-01a0-7e1a-1b2a-a76df6176725%40postgresql.org",
"msg_date": "Wed, 3 Jan 2024 10:33:38 -0500",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "\n\n\n\n\nOn 1/3/24 7:53 AM, Robert Haas wrote:\n\n\nAlso, +1 for the general idea. I don't think this is a whole answer to\nthe problem of passwords appearing in log files because (1) you have\nto be using libpq in order to make use of this and (2) you have to\nactually use it instead of just doing something else and complaining\nabout the problem. But neither of those things is a reason not to have\nit. There's no reason why a sophisticated user who goes through libpq\nshouldn't have an easy way to do this instead of being asked to\nreimplement it if they want the functionality.\n\nISTM the only way to really move the needle (short of removing\n all SQL support for changing passwords) would be a GUC that allows\n disabling the use of SQL for setting passwords. While that doesn't\n prevent leakage, it does at least force users to use a secure\n method of setting passwords.\n\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n",
"msg_date": "Wed, 3 Jan 2024 16:43:51 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 1/2/24 07:23, Sehrope Sarkuni wrote:\n> 1. There's two sets of defaults, the client program's default and the \n> server's default. Need to pick one for each implemented function. They \n> don't need to be the same across the board.\n\nIs there a concrete recommendation here?\n\n> 2. Password encoding should be split out and made available as its own \n> functions. Not just as part of a wider \"create _or_ alter a user's \n> password\" function attached to a connection.\n\nIt already is split out in libpq[1], unless I don't understand you \ncorrectly.\n\n[1] \nhttps://www.postgresql.org/docs/current/libpq-misc.html#LIBPQ-PQENCRYPTPASSWORDCONN\n\n> We went a step further and added an intermediate function that\n> generates the \"ALTER USER ... PASSWORD\" SQL.\n\nI don't think we want that in libpq, but in any case separate \npatch/discussion IMHO.\n\n> 3. We only added an \"ALTER USER ... PASSWORD\" function, not anything to \n> create a user. There's way too many options for that and keeping this \n> targeted at just assigning passwords makes it much easier to test.\n\n+1\n\nAlso separate patch/discussion, but I don't think the CREATE version is \nneeded.\n\n> 4. RE:defaults, the PGJDBC approach is that the encoding-only function \n> uses the driver's default (SCRAM). The \"alterUserPassword(...)\" uses the \n> server's default (again usually SCRAM for modern installs but could be \n> something else). It's kind of odd that they're different but the use \n> cases are different as well.\n\nSince PQencryptPasswordConn() already exists, and psql's \"\\password\" \nused it with its defaults, I don't think we want to change the behavior. \nThe patch as written behaves in a backward compatible way.\n\n> 5. Our SCRAM specific function allows for customizing the algo iteration \n> and salt parameters. That topic came up on hackers previously[1]. Our \n> high level \"alterUserPassword(...)\" function does not have those options \n> but it is available as part of our PasswordUtil SCRAM API for advanced \n> users who want to leverage it. The higher level functions have defaults \n> for iteration counts (4096) and salt size (16-bytes).\n\nAgain separate patch/discussion, IMHO.\n\n> 6. Getting the verbiage right for the PGJDBC version was kind of \n> annoying as we wanted to match the server's wording, e.g. \n> \"password_encryption\", but it's clearly hashing, not encryption. We \n> settled on \"password encoding\" for describing the overall task with the \n> comments referencing the server's usage of the term \n> \"password_encryption\". Found a similar topic[2] on changing that \n> recently as well but looks like that's not going anywhere.\n\nReally this is irrelevant to this discussion, because the new function \nis called PQchangePassword().\n\nThe function PQencryptPasswordConn() has been around for a while and the \nhorse is out of the gate on that one.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sat, 6 Jan 2024 11:53:24 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 12/24/23 10:14, Joe Conway wrote:\n> On 12/23/23 11:00, Tom Lane wrote:\n>> Joe Conway <[email protected]> writes:\n>>> The attached patch set moves the guts of \\password from psql into the \n>>> libpq client side -- PQchangePassword() (patch 0001).\n>> \n>> Haven't really read the patch, just looked at the docs, but here's\n>> a bit of bikeshedding:\n> \n> Thanks!\n> \n>> * This seems way too eager to promote the use of md5. Surely the\n>> default ought to be SCRAM, full stop. I question whether we even\n>> need an algorithm parameter. Perhaps it's a good idea for\n>> future-proofing, but we could also plan that the function would\n>> make its own decisions based on noting the server's version.\n>> (libpq is far more likely to be au courant about what to do than\n>> the calling application, IMO.)\n> \n>> * Parameter order seems a bit odd: to me it'd be natural to write\n>> user before password.\n> \n>> * Docs claim return type is char *, but then say bool (which is\n>> also what the code says). We do not use bool in libpq's API;\n>> the admittedly-hoary convention is to use int with 1/0 values.\n>> Rather than quibble about that, though, I wonder if we should\n>> make the result be the PGresult from the command, which the\n>> caller would be responsible to free. That would document error\n>> conditions much more flexibly. On the downside, client-side\n>> errors would be harder to report, particularly OOM, but I think\n>> we have solutions for that in existing functions.\n> \n>> * The whole business of nonblock mode seems fairly messy here,\n>> and I wonder whether it's worth the trouble to support. If we\n>> do want to claim it works then it ought to be tested.\n> \n> All of these (except for the doc \"char *\" cut-n-pasteo) were due to\n> trying to stay close to the same interface as PQencryptPasswordConn().\n> \n> But I agree with your assessment and the attached patch series addresses\n> all of them.\n> \n> The original use of PQencryptPasswordConn() in psql passed a NULL for\n> the algorithm, so I dropped that argument entirely. I also swapped user\n> and password arguments because as you pointed out that is more natural.\n> \n> This version returns PGresult. As far as special handling for\n> client-side errors like OOM, I don't see anything beyond returning a\n> NULL to signify fatal error, e,g,:\n> \n> 8<--------------\n> PGresult *\n> PQmakeEmptyPGresult(PGconn *conn, ExecStatusType status)\n> {\n> \tPGresult *result;\n> \n> \tresult = (PGresult *) malloc(sizeof(PGresult));\n> \tif (!result)\n> \t\treturn NULL;\n> 8<--------------\n> \n> That is the approach I took.\n> \n>>> One thing I have not done but, considered, is adding an additional \n>>> optional parameter to allow \"VALID UNTIL\" to be set. Seems like it would \n>>> be useful to be able to set an expiration when setting a new password.\n>> \n>> No strong opinion about that.\n> \n> Thanks -- hopefully others will weigh in on that.\n\n\nI just read through the thread and my conclusion is that, specifically \nrelated to this patch set (i.e. notwithstanding suggestions for related \nfeatures), there is consensus in favor of adding this feature.\n\nThe only code specific comments were Tom's above, which have been \naddressed. If there are no serious objections I plan to commit this \nrelatively soon.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 6 Jan 2024 12:39:17 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On Sat, Jan 6, 2024 at 11:53 AM Joe Conway <[email protected]> wrote:\n\n> On 1/2/24 07:23, Sehrope Sarkuni wrote:\n> > 1. There's two sets of defaults, the client program's default and the\n> > server's default. Need to pick one for each implemented function. They\n> > don't need to be the same across the board.\n>\n> Is there a concrete recommendation here?\n>\n\nBetween writing that and now, we scrapped the \"driver picks\" variant of\nthis. Now we only have explicit functions for each of the encoding types\n(i.e. one for md5 and one for SCRAM-SHA-256) and an alterUserPassword(...)\nmethod that uses the default for the database via reading the\npassword_encrypotion GUC. We also have some javadoc comments on the\nencoding functions to strongly suggest using the SCRAM functions and only\nuse the md5 directly for legacy servers.\n\nThe \"driver picks\" one was removed to prevent a situation where an end user\npicks the driver default and it's not compatible with their server. The\nrationale was if the driver's SCRAM-SHA-256 default is ever replaced with\nsomething else (e.g. SCRAM-SHA-512) we'd end up with an interim state where\nan upgraded driver application would try to use that newer encryption\nmethod on an old DB. If a user is going to do that, they would have to be\nexplicit with their choice of encoding functions (hence removing the\n\"driver picks\" variant).\n\nSo the recommendation is to have explicit functions for each variant and\nhave the end-to-end change password code read from the DB.\n\nMy understanding of this patch is that it does exactly that.\n\n\n> > 2. Password encoding should be split out and made available as its own\n> > functions. Not just as part of a wider \"create _or_ alter a user's\n> > password\" function attached to a connection.\n>\n> It already is split out in libpq[1], unless I don't understand you\n> correctly.\n>\n\nSorry for the confusion. My original list wasn't any specific contrasts\nwith what libpq is doing. Was more of a summary of thoughts having just\nconcluded implementing the same type of password change stuff in PGJDBC.\n\n From what I've seen in this patch, it either aligns with how we did things\nin PGJDBC or it's something that isn't as relevant in this context (e.g.\ngenerating the SQL text as a public function).\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n\nOn Sat, Jan 6, 2024 at 11:53 AM Joe Conway <[email protected]> wrote:On 1/2/24 07:23, Sehrope Sarkuni wrote:\n> 1. There's two sets of defaults, the client program's default and the \n> server's default. Need to pick one for each implemented function. They \n> don't need to be the same across the board.\n\nIs there a concrete recommendation here?Between writing that and now, we scrapped the \"driver picks\" variant of this. Now we only have explicit functions for each of the encoding types (i.e. one for md5 and one for SCRAM-SHA-256) and an alterUserPassword(...) method that uses the default for the database via reading the password_encrypotion GUC. We also have some javadoc comments on the encoding functions to strongly suggest using the SCRAM functions and only use the md5 directly for legacy servers.The \"driver picks\" one was removed to prevent a situation where an end user picks the driver default and it's not compatible with their server. The rationale was if the driver's SCRAM-SHA-256 default is ever replaced with something else (e.g. 
SCRAM-SHA-512) we'd end up with an interim state where an upgraded driver application would try to use that newer encryption method on an old DB. If a user is going to do that, they would have to be explicit with their choice of encoding functions (hence removing the \"driver picks\" variant).So the recommendation is to have explicit functions for each variant and have the end-to-end change password code read from the DB.My understanding of this patch is that it does exactly that. \n> 2. Password encoding should be split out and made available as its own \n> functions. Not just as part of a wider \"create _or_ alter a user's \n> password\" function attached to a connection.\n\nIt already is split out in libpq[1], unless I don't understand you \ncorrectly.Sorry for the confusion. My original list wasn't any specific contrasts with what libpq is doing. Was more of a summary of thoughts having just concluded implementing the same type of password change stuff in PGJDBC.From what I've seen in this patch, it either aligns with how we did things in PGJDBC or it's something that isn't as relevant in this context (e.g. generating the SQL text as a public function).Regards,-- Sehrope SarkuniFounder & CEO | JackDB, Inc. | https://www.jackdb.com/",
"msg_date": "Sat, 6 Jan 2024 13:00:32 -0500",
"msg_from": "Sehrope Sarkuni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On Sat, Jan 6, 2024 at 12:39 PM Joe Conway <[email protected]> wrote:\n\n> The only code specific comments were Tom's above, which have been\n> addressed. If there are no serious objections I plan to commit this\n> relatively soon.\n>\n\nOne more thing that we do in pgjdbc is to zero out the input password args\nso that they don't remain in memory even after being freed. It's kind of\nodd in Java as it makes the input interface a char[] and we have to convert\nthem to garbage collected Strings internally (which kind of defeats the\npurpose of the exercise).\n\nBut in libpq could be done via something like:\n\nmemset(pw1, 0, strlen(pw1));\nmemset(pw2, 0, strlen(pw2));\n\nThere was some debate on our end of where to do that and we settled on\ndoing it inside the encoding functions to ensure it always happens. So the\ninput password char[] always gets wiped regardless of how the encoding\nfunctions are invoked.\n\nEven if it's not added to the password encoding functions (as that kind of\nchanges the after effects if anything was relying on the password still\nhaving the password), I think it'd be good to add it to the command.c stuff\nthat has the two copies of the password prior to freeing them.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n\nOn Sat, Jan 6, 2024 at 12:39 PM Joe Conway <[email protected]> wrote:The only code specific comments were Tom's above, which have been \naddressed. If there are no serious objections I plan to commit this \nrelatively soon.One more thing that we do in pgjdbc is to zero out the input password args so that they don't remain in memory even after being freed. It's kind of odd in Java as it makes the input interface a char[] and we have to convert them to garbage collected Strings internally (which kind of defeats the purpose of the exercise).But in libpq could be done via something like:memset(pw1, 0, strlen(pw1));memset(pw2, 0, strlen(pw2));There was some debate on our end of where to do that and we settled on doing it inside the encoding functions to ensure it always happens. So the input password char[] always gets wiped regardless of how the encoding functions are invoked.Even if it's not added to the password encoding functions (as that kind of changes the after effects if anything was relying on the password still having the password), I think it'd be good to add it to the command.c stuff that has the two copies of the password prior to freeing them.Regards,-- Sehrope SarkuniFounder & CEO | JackDB, Inc. | https://www.jackdb.com/",
"msg_date": "Sat, 6 Jan 2024 13:16:34 -0500",
"msg_from": "Sehrope Sarkuni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "Scratch that, rather than memset(...) should be explicit_bzero(...) so it\ndoesn't get optimized out. Same idea though.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n\n>\n\nScratch that, rather than memset(...) should be explicit_bzero(...) so it doesn't get optimized out. Same idea though.Regards,-- Sehrope SarkuniFounder & CEO | JackDB, Inc. | https://www.jackdb.com/",
"msg_date": "Sat, 6 Jan 2024 13:18:29 -0500",
"msg_from": "Sehrope Sarkuni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 1/6/24 13:16, Sehrope Sarkuni wrote:\n> On Sat, Jan 6, 2024 at 12:39 PM Joe Conway <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> The only code specific comments were Tom's above, which have been\n> addressed. If there are no serious objections I plan to commit this\n> relatively soon.\n> \n> \n> One more thing that we do in pgjdbc is to zero out the input password \n> args so that they don't remain in memory even after being freed. It's \n> kind of odd in Java as it makes the input interface a char[] and we have \n> to convert them to garbage collected Strings internally (which kind of \n> defeats the purpose of the exercise).\n> \n> But in libpq could be done via something like:\n> \n> memset(pw1, 0, strlen(pw1));\n> memset(pw2, 0, strlen(pw2));\n\n\nThat part is in psql not libpq\n\n> There was some debate on our end of where to do that and we settled on \n> doing it inside the encoding functions to ensure it always happens. So \n> the input password char[] always gets wiped regardless of how the \n> encoding functions are invoked.\n> \n> Even if it's not added to the password encoding functions (as that kind \n> of changes the after effects if anything was relying on the password \n> still having the password), I think it'd be good to add it to the \n> command.c stuff that has the two copies of the password prior to freeing \n> them.\n\nWhile that change might or might not be worthwhile, I see it as \nindependent of this patch.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sat, 6 Jan 2024 13:31:22 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> The only code specific comments were Tom's above, which have been \n> addressed. If there are no serious objections I plan to commit this \n> relatively soon.\n\nI had not actually read this patchset before, but now I have, and\nI have a few minor suggestions:\n\n* The API comment for PQchangePassword should specify that encryption\nis done according to the server's password_encryption setting, and\nprobably the SGML docs should too. You shouldn't have to read the\ncode to discover that.\n\n* I don't especially care for the useless initializations of\nencrypted_password, fmtpw, and fmtuser. In all those cases the\ninitial NULL is immediately replaced by a valid value. Probably\nthe compiler will figure out that the initializations are useless,\nbut human readers have to do so as well. Moreover, I think this\nstyle is more bug-prone not less so, because if you ever change\nthe logic in a way that causes some code paths to fail to set\nthe variables, you won't get use-of-possibly-uninitialized-value\nwarnings from the compiler.\n\n* Perhaps move the declaration of \"buf\" to the inner block where\nit's actually used?\n\n* This could be shortened to just \"return res\":\n\n+ \t\t\t\tif (!res)\n+ \t\t\t\t\treturn NULL;\n+ \t\t\t\telse\n+ \t\t\t\t\treturn res;\n\n* I'd make the SGML documentation a bit more explicit about the\nreturn value, say\n\n+ Returns a <structname>PGresult</structname> pointer representing\n+ the result of the <literal>ALTER USER</literal> command, or\n+ a null pointer if the routine failed before issuing any command.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Jan 2024 15:10:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Password leakage avoidance"
},
{
"msg_contents": "On 1/6/24 15:10, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> The only code specific comments were Tom's above, which have been \n>> addressed. If there are no serious objections I plan to commit this \n>> relatively soon.\n> \n> I had not actually read this patchset before, but now I have, and\n> I have a few minor suggestions:\n\nMany thanks!\n\n> * The API comment for PQchangePassword should specify that encryption\n> is done according to the server's password_encryption setting, and\n> probably the SGML docs should too. You shouldn't have to read the\n> code to discover that.\n\nCheck\n\n> * I don't especially care for the useless initializations of\n> encrypted_password, fmtpw, and fmtuser. In all those cases the\n> initial NULL is immediately replaced by a valid value. Probably\n> the compiler will figure out that the initializations are useless,\n> but human readers have to do so as well. Moreover, I think this\n> style is more bug-prone not less so, because if you ever change\n> the logic in a way that causes some code paths to fail to set\n> the variables, you won't get use-of-possibly-uninitialized-value\n> warnings from the compiler.\n> \n> * Perhaps move the declaration of \"buf\" to the inner block where\n> it's actually used?\n\nMakes sense -- fixed\n\n> * This could be shortened to just \"return res\":\n> \n> + \t\t\t\tif (!res)\n> + \t\t\t\t\treturn NULL;\n> + \t\t\t\telse\n> + \t\t\t\t\treturn res;\n\nHeh, apparently I needed more coffee at this point :-)\n\n> * I'd make the SGML documentation a bit more explicit about the\n> return value, say\n> \n> + Returns a <structname>PGresult</structname> pointer representing\n> + the result of the <literal>ALTER USER</literal> command, or\n> + a null pointer if the routine failed before issuing any command.\n\nFixed.\n\nI also ran pgindent. I was kind of surprised/unhappy when it made me \nchange this (which aligned the two var names):\n8<------------\n<tab><tab><tab><tab>PQExpBufferData<tab>buf;\n<tab><tab><tab><tab>PGresult<tab><sp><sp><sp>*res;\n8<------------\n\nto this (which leaves the var names unaligned):\n8<------------\n<tab><tab><tab><tab>PQExpBufferData<sp>buf;\n<tab><tab><tab><tab>PGresult<sp><sp><sp>*res;\n8<------------\n\nAnyway, the resulting adjustments attached.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 7 Jan 2024 13:51:50 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Password leakage avoidance"
}
] |
[
{
"msg_contents": "I happened to notice that information_schema._pg_expandarray(),\nwhich has the nigh-unreadable definition\n\n AS 'select $1[s],\n s operator(pg_catalog.-) pg_catalog.array_lower($1,1) operator(pg_catalog.+) 1\n from pg_catalog.generate_series(pg_catalog.array_lower($1,1),\n pg_catalog.array_upper($1,1),\n 1) as g(s)';\n\ncan now be implemented using unnest():\n\n AS 'SELECT * FROM pg_catalog.unnest($1) WITH ORDINALITY';\n\nIt seems to be slightly more efficient this way, but the main point\nis to make it more readable.\n\nI then realized that we could also borrow unnest's infrastructure\nfor rowcount estimation:\n\n ROWS 100 SUPPORT pg_catalog.array_unnest_support\n\nbecause array_unnest_support just looks at the array argument and\ndoesn't have any hard dependency on the function being specifically\nunnest(). I'm not sure that any of its uses in information_schema\ncan benefit from that right now, but surely it can't hurt.\n\nOne minor annoyance is that psql.sql is using _pg_expandarray\nas a test case for \\sf[+]. While we could keep doing so, I think\nthe main point of that test case is to exercise \\sf+'s line\nnumbering ability, so the new one-line body is not a great test.\nI changed that test to use _pg_index_position instead.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 23 Dec 2023 13:18:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improving information_schema._pg_expandarray()"
},
{
"msg_contents": "so 23. 12. 2023 v 19:18 odesílatel Tom Lane <[email protected]> napsal:\n\n> I happened to notice that information_schema._pg_expandarray(),\n> which has the nigh-unreadable definition\n>\n> AS 'select $1[s],\n> s operator(pg_catalog.-) pg_catalog.array_lower($1,1)\n> operator(pg_catalog.+) 1\n> from pg_catalog.generate_series(pg_catalog.array_lower($1,1),\n> pg_catalog.array_upper($1,1),\n> 1) as g(s)';\n>\n> can now be implemented using unnest():\n>\n> AS 'SELECT * FROM pg_catalog.unnest($1) WITH ORDINALITY';\n>\n> It seems to be slightly more efficient this way, but the main point\n> is to make it more readable.\n>\n> I then realized that we could also borrow unnest's infrastructure\n> for rowcount estimation:\n>\n> ROWS 100 SUPPORT pg_catalog.array_unnest_support\n>\n> because array_unnest_support just looks at the array argument and\n> doesn't have any hard dependency on the function being specifically\n> unnest(). I'm not sure that any of its uses in information_schema\n> can benefit from that right now, but surely it can't hurt.\n>\n> One minor annoyance is that psql.sql is using _pg_expandarray\n> as a test case for \\sf[+]. While we could keep doing so, I think\n> the main point of that test case is to exercise \\sf+'s line\n> numbering ability, so the new one-line body is not a great test.\n> I changed that test to use _pg_index_position instead.\n>\n\n+1\n\nregards\n\nPavel\n\n\n> regards, tom lane\n>\n>\n\nso 23. 12. 2023 v 19:18 odesílatel Tom Lane <[email protected]> napsal:I happened to notice that information_schema._pg_expandarray(),\nwhich has the nigh-unreadable definition\n\n AS 'select $1[s],\n s operator(pg_catalog.-) pg_catalog.array_lower($1,1) operator(pg_catalog.+) 1\n from pg_catalog.generate_series(pg_catalog.array_lower($1,1),\n pg_catalog.array_upper($1,1),\n 1) as g(s)';\n\ncan now be implemented using unnest():\n\n AS 'SELECT * FROM pg_catalog.unnest($1) WITH ORDINALITY';\n\nIt seems to be slightly more efficient this way, but the main point\nis to make it more readable.\n\nI then realized that we could also borrow unnest's infrastructure\nfor rowcount estimation:\n\n ROWS 100 SUPPORT pg_catalog.array_unnest_support\n\nbecause array_unnest_support just looks at the array argument and\ndoesn't have any hard dependency on the function being specifically\nunnest(). I'm not sure that any of its uses in information_schema\ncan benefit from that right now, but surely it can't hurt.\n\nOne minor annoyance is that psql.sql is using _pg_expandarray\nas a test case for \\sf[+]. While we could keep doing so, I think\nthe main point of that test case is to exercise \\sf+'s line\nnumbering ability, so the new one-line body is not a great test.\nI changed that test to use _pg_index_position instead.+1regardsPavel\n\n regards, tom lane",
"msg_date": "Sat, 23 Dec 2023 19:27:18 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving information_schema._pg_expandarray()"
},
{
"msg_contents": "[ I got distracted while writing this follow-up and only just found it\n in my list of unsent Gnus buffers, and now it's probably too late to\n make it for 17, but here it is anyway while I remember. ]\n\nTom Lane <[email protected]> writes:\n\n> I happened to notice that information_schema._pg_expandarray(),\n> which has the nigh-unreadable definition\n>\n> AS 'select $1[s],\n> s operator(pg_catalog.-) pg_catalog.array_lower($1,1) operator(pg_catalog.+) 1\n> from pg_catalog.generate_series(pg_catalog.array_lower($1,1),\n> pg_catalog.array_upper($1,1),\n> 1) as g(s)';\n>\n> can now be implemented using unnest():\n>\n> AS 'SELECT * FROM pg_catalog.unnest($1) WITH ORDINALITY';\n>\n> It seems to be slightly more efficient this way, but the main point\n> is to make it more readable.\n\nI didn't spot this until it got committed, but it got me wondering what\neliminating the wrapper function completely would look like, so I\nwhipped up the attached. It instead calls UNNEST() laterally in the\nqueries, which has the side benefit of getting rid of several\nsubselects, one of which was particularly confusing. In one place the\nlateral form eliminated the need for WITH ORDINALITY as well.\n\n\n- ilmari",
"msg_date": "Mon, 13 May 2024 17:56:07 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving information_schema._pg_expandarray()"
}
] |
[
{
"msg_contents": "I've got a small question about marking functions working with decimal\nnumber types as either IMMUTABLE or STABLE. Below are a pair of trivial\nfunctions that show what I'm guessing. An int8/int8[] seems like it's going\nto be immutable forever. However, decimal types aren't quite so crisp and\nconsistent. Does this mean that I need to mark such a function as\nSTABLE instead\nof IMMUTABLE, like below?\n\nI'm a bit hazy on exactly when some operations shift from IMMUTABLE to\nSTABLE. For example, it seems fair that many time/date operations are not\nIMMUTABLE because they vary based on the current time zone. Likewise, I\nthink that text operations are generally not IMMUTABLE since collations\nvary across versions and platforms.\n\nAny clarification would be appreciated. I've been googling around and\nchecking the archives, but haven't found these specific details addressed,\nso far.\n\nAh, and I have no clue how much difference it even makes to mark a function\nas IMMUTABLE instead of STABLE. If the difference is more theoretical than\npractical, I can feel comfortable using STABLE, when unclear.\n\nThank you!\n\n-----------------------------------\n-- array_sum(int8[]) : int8\n-----------------------------------\nCREATE OR REPLACE FUNCTION tools.array_sum(array_in int8[])\nRETURNS int8 AS\n\n$BODY$\n\n SELECT SUM(element) AS result\n FROM UNNEST(array_in) AS element;\n\n$BODY$\nLANGUAGE sql\nIMMUTABLE;\n\n-- Add a comment to describe the function\nCOMMENT ON FUNCTION tools.array_sum(int8[]) IS\n'Sum an int8[] array.';\n\n-- Set the function's owner to USER_BENDER\nALTER FUNCTION tools.array_sum(int8[]) OWNER TO user_bender;\n\n-----------------------------------\n-- array_sum(real[]]) : real\n-----------------------------------\nCREATE OR REPLACE FUNCTION tools.array_sum(array_in real[])\nRETURNS real AS\n\n$BODY$\n\n SELECT SUM(element) AS result\n FROM UNNEST(array_in) AS element;\n\n$BODY$\nLANGUAGE sql\nSTABLE; -- Decimal number types seem to change across versions and chips?\n\n-- Add a comment to describe the function\nCOMMENT ON FUNCTION tools.array_sum(real[]) IS\n'Sum an real[] array.';\n\n-- Set the function's owner to USER_BENDER\nALTER FUNCTION tools.array_sum(real[]) OWNER TO user_bender;\n\nI've got a small question about marking functions working with decimal number types as either IMMUTABLE or STABLE. Below are a pair of trivial functions that show what I'm guessing. An int8/int8[] seems like it's going to be immutable forever. However, decimal types aren't quite so crisp and consistent. Does this mean that I need to mark such a function as STABLE instead of IMMUTABLE, like below?I'm a bit hazy on exactly when some operations shift from IMMUTABLE to STABLE. For example, it seems fair that many time/date operations are not IMMUTABLE because they vary based on the current time zone. Likewise, I think that text operations are generally not IMMUTABLE since collations vary across versions and platforms.Any clarification would be appreciated. I've been googling around and checking the archives, but haven't found these specific details addressed, so far.Ah, and I have no clue how much difference it even makes to mark a function as IMMUTABLE instead of STABLE. 
If the difference is more theoretical than practical, I can feel comfortable using STABLE, when unclear.Thank you!------------------------------------- array_sum(int8[]) : int8-----------------------------------CREATE OR REPLACE FUNCTION tools.array_sum(array_in int8[])RETURNS int8 AS $BODY$ SELECT SUM(element) AS result FROM UNNEST(array_in) AS element; $BODY$ LANGUAGE sqlIMMUTABLE;-- Add a comment to describe the functionCOMMENT ON FUNCTION tools.array_sum(int8[]) IS'Sum an int8[] array.';-- Set the function's owner to USER_BENDERALTER FUNCTION tools.array_sum(int8[]) OWNER TO user_bender;------------------------------------- array_sum(real[]]) : real-----------------------------------CREATE OR REPLACE FUNCTION tools.array_sum(array_in real[])RETURNS real AS $BODY$ SELECT SUM(element) AS result FROM UNNEST(array_in) AS element; $BODY$ LANGUAGE sqlSTABLE; -- Decimal number types seem to change across versions and chips?-- Add a comment to describe the functionCOMMENT ON FUNCTION tools.array_sum(real[]) IS'Sum an real[] array.';-- Set the function's owner to USER_BENDERALTER FUNCTION tools.array_sum(real[]) OWNER TO user_bender;",
"msg_date": "Sun, 24 Dec 2023 06:56:35 +1100",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": true,
"msg_subject": "Are operations on real values IMMUTABLE or STABLE?"
},
{
"msg_contents": "Morris de Oryx <[email protected]> writes:\n> I've got a small question about marking functions working with decimal\n> number types as either IMMUTABLE or STABLE. Below are a pair of trivial\n> functions that show what I'm guessing. An int8/int8[] seems like it's going\n> to be immutable forever. However, decimal types aren't quite so crisp and\n> consistent. Does this mean that I need to mark such a function as\n> STABLE instead\n> of IMMUTABLE, like below?\n\nI think you're overthinking it. We have no hesitation about marking\nbuilt-in floating-point functions as immutable, so if you're worried\nabout some other machine hypothetically delivering different results,\nyou're in trouble anyway. (In practice, the whole world is supposedly\ncompliant with IEEE float arithmetic, so such cases shouldn't arise.)\n\n> Ah, and I have no clue how much difference it even makes to mark a function\n> as IMMUTABLE instead of STABLE. If the difference is more theoretical than\n> practical, I can feel comfortable using STABLE, when unclear.\n\nIt's entirely not theoretical. The system won't let you use a\nnon-IMMUTABLE function in an index definition or generated column,\nand there are significant query-optimization implications as well.\nSo generally people tend to err on the side of marking things\nIMMUTABLE if it's at all plausible to do so. In the worst case\nyou might end up having to reindex, or rebuild generated columns,\nshould the function's behavior actually change. Frequently that\nrisk is well worth taking.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 23 Dec 2023 17:48:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are operations on real values IMMUTABLE or STABLE?"
},
{
"msg_contents": ">\n>\n> I think you're overthinking it.\n>\n\n*Moi*? Never happens ;-)\n\nFantastic answer, thanks very much for giving me all of these details.\nComing from you, I'll take it as authoritative and run with it.\n\nI think you're overthinking it. Moi? Never happens ;-)Fantastic answer, thanks very much for giving me all of these details. Coming from you, I'll take it as authoritative and run with it.",
"msg_date": "Sun, 24 Dec 2023 13:11:07 +1100",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are operations on real values IMMUTABLE or STABLE?"
}
] |
[
{
"msg_contents": "Coverity whinged this morning about the following bit in\nthe new pg_combinebackup code:\n\n644 unsigned rb;\n645 \n646 /* Read the block from the correct source, except if dry-run. */\n647 rb = pg_pread(s->fd, buffer, BLCKSZ, offsetmap[i]);\n648 if (rb != BLCKSZ)\n649 {\n>>> CID 1559912: Control flow issues (NO_EFFECT)\n>>> This less-than-zero comparison of an unsigned value is never true. \"rb < 0U\".\n650 if (rb < 0)\n651 pg_fatal(\"could not read file \\\"%s\\\": %m\", s->filename);\n\nIt's dead right to complain of course. (I kind of think that the\nmajority of places where reconstruct.c is using \"unsigned\" variables\nare poorly-thought-through; many of them look like they should be\nsize_t, and I suspect some other ones beside this one are flat wrong\nor at least unnecessarily fragile. But I digress.)\n\nWhile looking around for other places that might've made comparable\nmistakes, I noted that we have places that are storing the result of\npg_pread[v]/pg_pwrite[v] into an \"int\" variable even though they are\npassing a size_t count argument that there is no obvious reason to\nbelieve must fit in int. This seems like trouble waiting to happen,\nso I fixed some of these in the attached. The major remaining place\nthat I think we ought to change is the newly-minted\nFileRead[V]/FileWrite[V] functions, which are declared to return int\nbut really should be returning ssize_t IMO. I didn't do that here\nthough.\n\nWe could go further by insisting that *all* uses of pg_pread/pg_pwrite\nuse ssize_t result variables. I think that's probably overkill --- in\nthe example above, which is only asking to write BLCKSZ worth of data,\nsurely an int is sufficient. But you could argue that allowing this\npattern at all creates risk of copy/paste errors.\n\nOf course the real elephant in the room is that plain old read(2)\nand write(2) also return ssize_t. I've not attempted to vet every\ncall of those, and I think it'd likely be a waste of effort, as\nwe're unlikely to ever try to shove more than INT_MAX worth of\ndata through them. But it's a bit harder to make that argument\nfor the iovec-based file APIs. I think we ought to try to keep\nour uses of those functions clean on this point.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 24 Dec 2023 13:09:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "pread, pwrite, etc return ssize_t not int"
},
{
"msg_contents": "On Mon, Dec 25, 2023 at 7:09 AM Tom Lane <[email protected]> wrote:\n> Coverity whinged this morning about the following bit in\n> the new pg_combinebackup code:\n>\n> 644 unsigned rb;\n> 645\n> 646 /* Read the block from the correct source, except if dry-run. */\n> 647 rb = pg_pread(s->fd, buffer, BLCKSZ, offsetmap[i]);\n> 648 if (rb != BLCKSZ)\n> 649 {\n> >>> CID 1559912: Control flow issues (NO_EFFECT)\n> >>> This less-than-zero comparison of an unsigned value is never true. \"rb < 0U\".\n> 650 if (rb < 0)\n> 651 pg_fatal(\"could not read file \\\"%s\\\": %m\", s->filename);\n>\n> It's dead right to complain of course. (I kind of think that the\n> majority of places where reconstruct.c is using \"unsigned\" variables\n> are poorly-thought-through; many of them look like they should be\n> size_t, and I suspect some other ones beside this one are flat wrong\n> or at least unnecessarily fragile. But I digress.)\n\nYeah.\n\n> While looking around for other places that might've made comparable\n> mistakes, I noted that we have places that are storing the result of\n> pg_pread[v]/pg_pwrite[v] into an \"int\" variable even though they are\n> passing a size_t count argument that there is no obvious reason to\n> believe must fit in int. This seems like trouble waiting to happen,\n> so I fixed some of these in the attached. The major remaining place\n> that I think we ought to change is the newly-minted\n> FileRead[V]/FileWrite[V] functions, which are declared to return int\n> but really should be returning ssize_t IMO. I didn't do that here\n> though.\n\nAgreed in theory. Note that we've only been using size_t in fd.c\nfunctions since:\n\ncommit 2d4f1ba6cfc2f0a977f1c30bda9848041343e248\nAuthor: Peter Eisentraut <[email protected]>\nDate: Thu Dec 8 08:51:38 2022 +0100\n\n Update types in File API\n\n Make the argument types of the File API match stdio better:\n\n - Change the data buffer to void *, from char *.\n - Change FileWrite() data buffer to const on top of that.\n - Change amounts to size_t, from int.\n\nI guess it was an oversight not to change the return type to match at\nthe same time. That said, I think that would only be for tidiness.\n\nSome assorted observations:\n\n1. We don't yet require \"large file\" support, meaning that we use\noff_t our fd.c and lseek()/p*() replacement functions, but we know it\nis only 32 bits on Windows, and we avoid creating large files. I\nthink that means that a hypothetical very large write would break that\nassumption, creating data whose position cannot be named in those\ncalls. We could fix that on Windows by adjusting our wrappers, either\nto work with pgoff_t instead off off_t (yuck) or redefining off_t\n(yuck), but given that both options are gross, so far we have imagined\nthat we should move towards using large files only conditionally, on\nsizeof(off_t) >= 8.\n\n2. Windows' native read() and write() functions still have prototypes\nlike 1988 POSIX, eg int read(int filedes, void *buf, unsigned int\nnbyte). It was 2001 POSIX that changed them to ssize_t read(int\nfiledesc, void *buf, size_t nbytes). I doubt there is much that can\nbe done about that, except perhaps adding a wrapper that caps it.\nSeems like overkill for a hypothetical sort of a problem...\n\n3. Windows' native ReadFile() and WriteFile() functions, which our\n'positionified' pg_p*() wrapper functions use, work in terms of DWORD\n= unsigned long which is 32 bit on that cursed ABI. 
Our wrappers\nshould probably cap.\n\nI think a number of Unixoid systems implemented the POSIX interface\nchange by capping internally, which doesn't matter much in practice\nbecause no one really tries to transfer gigabytes at once, and any\nnon-trivial transfer size probably requires handling short transfers.\nFor example, man read on Linux:\n\n On Linux, read() (and similar system calls) will transfer at most\n 0x7ffff000 (2,147,479,552) bytes, returning the number of bytes\n actually transferred. (This is true on both 32-bit and 64-bit\n systems.)\n\n> We could go further by insisting that *all* uses of pg_pread/pg_pwrite\n> use ssize_t result variables. I think that's probably overkill --- in\n> the example above, which is only asking to write BLCKSZ worth of data,\n> surely an int is sufficient. But you could argue that allowing this\n> pattern at all creates risk of copy/paste errors.\n\nYeah.\n\n> Of course the real elephant in the room is that plain old read(2)\n> and write(2) also return ssize_t. I've not attempted to vet every\n> call of those, and I think it'd likely be a waste of effort, as\n> we're unlikely to ever try to shove more than INT_MAX worth of\n> data through them. But it's a bit harder to make that argument\n> for the iovec-based file APIs. I think we ought to try to keep\n> our uses of those functions clean on this point.\n\nYeah I think it's OK for a caller that knows it's passing in an int\nvalue to (implicitly) cast the return to int. But it'd be nice to\nmake our I/O functions look and feel like standard functions and\nreturn ssize_t.\n\n\n",
"msg_date": "Tue, 26 Dec 2023 13:23:31 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pread, pwrite, etc return ssize_t not int"
},
{
"msg_contents": "Patches attached.\n\nPS Correction to my earlier statement about POSIX: the traditional K&R\ninterfaces were indeed in the original POSIX.1 1988 but it was the\n1990 edition (approximately coinciding with standard C) that adopted\nvoid, size_t, const and invented ssize_t.",
"msg_date": "Wed, 28 Feb 2024 00:21:45 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pread, pwrite, etc return ssize_t not int"
},
{
"msg_contents": "On 27.02.24 12:21, Thomas Munro wrote:\n> Patches attached.\n> \n> PS Correction to my earlier statement about POSIX: the traditional K&R\n> interfaces were indeed in the original POSIX.1 1988 but it was the\n> 1990 edition (approximately coinciding with standard C) that adopted\n> void, size_t, const and invented ssize_t.\n\n0001-Return-ssize_t-in-fd.c-I-O-functions.patch\n\nThis patch looks correct to me.\n\n0002-Fix-theoretical-overflow-in-Windows-pg_pread-pg_pwri.patch\n\nI have two comments on that:\n\nFor the overflow of the input length (size_t -> DWORD), I don't think we \nactually need to do anything. The size argument would be truncated, but \nthe callers would just repeat the calls with the remaining size, so in \neffect they will read the data in chunks of rest + N * DWORD_MAX. The \npatch just changes this to chunks of N * 1GB + rest.\n\nThe other issue, the possible overflow of size_t -> ssize_t is not \nspecific to Windows. We could install some protection against that on \nsome other layer, but it's unclear how widespread that issue is or what \nthe appropriate fix is. POSIX says that passing in a size larger than \nSSIZE_MAX has implementation-defined effect. The FreeBSD man page says \nthat this will result in an EINVAL error. So if we here truncate \ninstead of error, we'd introduce a divergence.\n\n\n\n",
"msg_date": "Fri, 1 Mar 2024 15:12:40 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pread, pwrite, etc return ssize_t not int"
},
{
"msg_contents": "On Sat, Mar 2, 2024 at 3:12 AM Peter Eisentraut <[email protected]> wrote:\n> 0001-Return-ssize_t-in-fd.c-I-O-functions.patch\n>\n> This patch looks correct to me.\n\nThanks, I'll push this one.\n\n> 0002-Fix-theoretical-overflow-in-Windows-pg_pread-pg_pwri.patch\n>\n> I have two comments on that:\n>\n> For the overflow of the input length (size_t -> DWORD), I don't think we\n> actually need to do anything. The size argument would be truncated, but\n> the callers would just repeat the calls with the remaining size, so in\n> effect they will read the data in chunks of rest + N * DWORD_MAX. The\n> patch just changes this to chunks of N * 1GB + rest.\n\nBut implicit conversion size_t -> DWORD doesn't convert large numbers\nto DWORD_MAX, it just cuts off the high bits, and that might leave you\nwith zero. Zero has a special meaning (if we assume that kernel\ndoesn't reject a zero size argument outright, I dunno): if returned by\nreads it indicates EOF, and if returned by writes a typical caller\nwould either loop forever making no progress or (in some of our code)\nconjure up a fake ENOSPC. Hence desire to impose a cap.\n\nI'm on the fence about whether it's worth wasting any more energy on\nthis, I mean we aren't really going to read/write 4GB, so I'd be OK if\nwe just left this as an observation in the archives...\n\n> The other issue, the possible overflow of size_t -> ssize_t is not\n> specific to Windows. We could install some protection against that on\n> some other layer, but it's unclear how widespread that issue is or what\n> the appropriate fix is. POSIX says that passing in a size larger than\n> SSIZE_MAX has implementation-defined effect. The FreeBSD man page says\n> that this will result in an EINVAL error. So if we here truncate\n> instead of error, we'd introduce a divergence.\n\nYeah, right, that's the caller's job to worry about on all platforms\nso I was wrong to mention ssize_t in the comment.\n\n\n",
"msg_date": "Sat, 2 Mar 2024 10:23:37 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pread, pwrite, etc return ssize_t not int"
},
{
"msg_contents": "On 01.03.24 22:23, Thomas Munro wrote:\n>> For the overflow of the input length (size_t -> DWORD), I don't think we\n>> actually need to do anything. The size argument would be truncated, but\n>> the callers would just repeat the calls with the remaining size, so in\n>> effect they will read the data in chunks of rest + N * DWORD_MAX. The\n>> patch just changes this to chunks of N * 1GB + rest.\n> \n> But implicit conversion size_t -> DWORD doesn't convert large numbers\n> to DWORD_MAX, it just cuts off the high bits, and that might leave you\n> with zero. Zero has a special meaning (if we assume that kernel\n> doesn't reject a zero size argument outright, I dunno): if returned by\n> reads it indicates EOF, and if returned by writes a typical caller\n> would either loop forever making no progress or (in some of our code)\n> conjure up a fake ENOSPC. Hence desire to impose a cap.\n\nRight, my thinko. Your patch is correct then.\n\n\n\n",
"msg_date": "Sat, 2 Mar 2024 06:16:04 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pread, pwrite, etc return ssize_t not int"
}
] |
[
{
"msg_contents": "Hello hackers!\n\nI am an Oracle/PostgreSQL DBA, I am not a PG hacker. During my daily job,\nI find a pain that should be fixed.\n\nAs you know, we can use the UP arrow key to get the previous command to\navoid extra typing. This is a wonderful feature to save the lives of every\nDBA. However, if I type the commands like this sequence: A, B, B, B, B, B,\nB, as you can see, B is the last command I execute.\n\nBut if I try to get command A, I have to press the UP key 7 times. I think\nthe best way is: when you press the UP key, plsql should show the command\nthat is different from the previous command, so the recall sequence should\nbe B -> A, not B -> B -> ... -> A. Then I only press the UP key 2 times to\nget command A.\n\nI think this should change little code in psql, but it will make all DBA's\nlives much easier. This is a strong requirement from the real DBA. Hope to\nget some feedback on this.\n\nAnother requirement is: could we use / to repeat executing the last command\nin plsql just like sqlplus in Oracle?\n\nI will try to learn how to fix it sooner or later, but if some\nproficient hacker focuses on this, it can be fixed quickly, I guess.\n\nThoughts?\n\nRegards,\n\nKevin\n\nHello hackers!I am an Oracle/PostgreSQL DBA, I am not a PG hacker. During my daily job, I find a pain that should be fixed.As you know, we can use the UP arrow key to get the previous command to avoid extra typing. This is a wonderful feature to save the lives of every DBA. However, if I type the commands like this sequence: A, B, B, B, B, B, B, as you can see, B is the last command I execute. But if I try to get command A, I have to press the UP key 7 times. I think the best way is: when you press the UP key, plsql should show the command that is different from the previous command, so the recall sequence should be B -> A, not B -> B -> ... -> A. Then I only press the UP key 2 times to get command A.I think this should change little code in psql, but it will make all DBA's lives much easier. This is a strong requirement from the real DBA. Hope to get some feedback on this.Another requirement is: could we use / to repeat executing the last command in plsql just like sqlplus in Oracle?I will try to learn how to fix it sooner or later, but if some proficient hacker focuses on this, it can be fixed quickly, I guess.Thoughts?Regards,Kevin",
"msg_date": "Sun, 24 Dec 2023 13:17:38 -0500",
"msg_from": "Kevin Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "A tiny improvement of psql"
},
{
"msg_contents": "Kevin Wang <[email protected]> writes:\n> As you know, we can use the UP arrow key to get the previous command to\n> avoid extra typing. This is a wonderful feature to save the lives of every\n> DBA. However, if I type the commands like this sequence: A, B, B, B, B, B,\n> B, as you can see, B is the last command I execute.\n\n> But if I try to get command A, I have to press the UP key 7 times. I think\n> the best way is: when you press the UP key, plsql should show the command\n> that is different from the previous command, so the recall sequence should\n> be B -> A, not B -> B -> ... -> A. Then I only press the UP key 2 times to\n> get command A.\n\nThis is driven by libreadline, not anything we control. I have\nseen the behavior you describe in some other programs, so I wonder\nwhether it's configurable.\n\n> Another requirement is: could we use / to repeat executing the last command\n> in plsql just like sqlplus in Oracle?\n\nI'm pretty certain you can configure this for yourself with readline.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Dec 2023 11:36:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A tiny improvement of psql"
},
{
"msg_contents": "On 12/26/23 17:36, Tom Lane wrote:\n> Kevin Wang <[email protected]> writes:\n>> As you know, we can use the UP arrow key to get the previous command to\n>> avoid extra typing. This is a wonderful feature to save the lives of every\n>> DBA. However, if I type the commands like this sequence: A, B, B, B, B, B,\n>> B, as you can see, B is the last command I execute.\n> \n>> But if I try to get command A, I have to press the UP key 7 times. I think\n>> the best way is: when you press the UP key, plsql should show the command\n>> that is different from the previous command, so the recall sequence should\n>> be B -> A, not B -> B -> ... -> A. Then I only press the UP key 2 times to\n>> get command A.\n> \n> This is driven by libreadline, not anything we control. I have\n> seen the behavior you describe in some other programs, so I wonder\n> whether it's configurable.\n\nIt is kind of something we control. Per the psql docs, setting\n\n HISTCONTROL=ignoredups\n\nwill do the trick.\n\nhttps://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-VARIABLES-HISTCONTROL\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 26 Dec 2023 22:45:31 +0100",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A tiny improvement of psql"
},
{
"msg_contents": "On Tue, 26 Dec 2023 at 22:45, Vik Fearing <[email protected]> wrote:\n> It is kind of something we control. Per the psql docs, setting\n>\n> HISTCONTROL=ignoredups\n>\n> will do the trick.\n\nYeah, the easiest \"fix\" (that I know of) for a user is to set\nHISTCONTROL in ~/.psqlrc to ignoredups using:\n\n\\set HISTCONTROL ignoredups\n\nBut honestly, I think that should probably be made the default. I\ncan't really think of a reason who would actually want the current\ndefault of \"none\". And while we're at it maybe there are some other\ndefaults in psql that are worth changing. The main ones from my psqlrc\nthat seem like good defaults for pretty much everyone:\n\n\\x auto\n\\pset linestyle unicode\n\nAnd maybe fixing the major pitfall I always run into with psql: Having\nON_ERROR_STOP default to on when a script is passed in using -f/--file\n\n\n",
"msg_date": "Wed, 27 Dec 2023 00:33:38 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A tiny improvement of psql"
},
{
"msg_contents": "On repeating the execution of last command in psql, we can always use below\ncommand to send current query buffer to server.\n\n\\g\n\\gx (with expanded output mode, that always come handy.)\n\nOn Tue, Dec 26, 2023 at 9:56 PM Kevin Wang <[email protected]> wrote:\n\n> Hello hackers!\n>\n> I am an Oracle/PostgreSQL DBA, I am not a PG hacker. During my daily job,\n> I find a pain that should be fixed.\n>\n> As you know, we can use the UP arrow key to get the previous command to\n> avoid extra typing. This is a wonderful feature to save the lives of every\n> DBA. However, if I type the commands like this sequence: A, B, B, B, B, B,\n> B, as you can see, B is the last command I execute.\n>\n> But if I try to get command A, I have to press the UP key 7 times. I\n> think the best way is: when you press the UP key, plsql should show the\n> command that is different from the previous command, so the recall sequence\n> should be B -> A, not B -> B -> ... -> A. Then I only press the UP key 2\n> times to get command A.\n>\n> I think this should change little code in psql, but it will make all DBA's\n> lives much easier. This is a strong requirement from the real DBA. Hope to\n> get some feedback on this.\n>\n> Another requirement is: could we use / to repeat executing the last\n> command in plsql just like sqlplus in Oracle?\n>\n> I will try to learn how to fix it sooner or later, but if some\n> proficient hacker focuses on this, it can be fixed quickly, I guess.\n>\n> Thoughts?\n>\n> Regards,\n>\n> Kevin\n>\n\nOn repeating the execution of last command in psql, we can always use below command to send current query buffer to server.\\g\\gx (with expanded output mode, that always come handy.)On Tue, Dec 26, 2023 at 9:56 PM Kevin Wang <[email protected]> wrote:Hello hackers!I am an Oracle/PostgreSQL DBA, I am not a PG hacker. During my daily job, I find a pain that should be fixed.As you know, we can use the UP arrow key to get the previous command to avoid extra typing. This is a wonderful feature to save the lives of every DBA. However, if I type the commands like this sequence: A, B, B, B, B, B, B, as you can see, B is the last command I execute. But if I try to get command A, I have to press the UP key 7 times. I think the best way is: when you press the UP key, plsql should show the command that is different from the previous command, so the recall sequence should be B -> A, not B -> B -> ... -> A. Then I only press the UP key 2 times to get command A.I think this should change little code in psql, but it will make all DBA's lives much easier. This is a strong requirement from the real DBA. Hope to get some feedback on this.Another requirement is: could we use / to repeat executing the last command in plsql just like sqlplus in Oracle?I will try to learn how to fix it sooner or later, but if some proficient hacker focuses on this, it can be fixed quickly, I guess.Thoughts?Regards,Kevin",
"msg_date": "Wed, 27 Dec 2023 12:05:31 +0530",
"msg_from": "Deepak M <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A tiny improvement of psql"
},
{
"msg_contents": "On Tue, Dec 26, 2023 at 11:26 AM Kevin Wang <[email protected]> wrote:\n\n> Hello hackers!\n>\n> I am an Oracle/PostgreSQL DBA, I am not a PG hacker. During my daily job,\n> I find a pain that should be fixed.\n>\n> As you know, we can use the UP arrow key to get the previous command to\n> avoid extra typing. This is a wonderful feature to save the lives of every\n> DBA. However, if I type the commands like this sequence: A, B, B, B, B, B,\n> B, as you can see, B is the last command I execute.\n>\n> But if I try to get command A, I have to press the UP key 7 times. I\n> think the best way is: when you press the UP key, plsql should show the\n> command that is different from the previous command, so the recall sequence\n> should be B -> A, not B -> B -> ... -> A. Then I only press the UP key 2\n> times to get command A.\n>\n> I think this should change little code in psql, but it will make all DBA's\n> lives much easier. This is a strong requirement from the real DBA. Hope to\n> get some feedback on this.\n>\n> Kevin,\n with readline, I use ctrl-r (incremental search backwards).\nbut if you are willing to modify your .inputrc you can enable the \"windows\ncmd F5/F8 keys... Search Fwd/Bwd\".\nwhere you type B<F8>\n\ninputrc:\n# Map F8 (back) F5(forward) search like CMD\n\"\\e[19~\": history-search-backward\n\"\\e[15~\": history-search-forward\n\nThere are commented out lines tying them to Page Up/Page Down... But 30\nyrs in a CMD prompt...\n\nThe upside is that this works in bash and other programs as well...\n\nHTH\n\nKirk Out!\n\n>\n\nOn Tue, Dec 26, 2023 at 11:26 AM Kevin Wang <[email protected]> wrote:Hello hackers!I am an Oracle/PostgreSQL DBA, I am not a PG hacker. During my daily job, I find a pain that should be fixed.As you know, we can use the UP arrow key to get the previous command to avoid extra typing. This is a wonderful feature to save the lives of every DBA. However, if I type the commands like this sequence: A, B, B, B, B, B, B, as you can see, B is the last command I execute. But if I try to get command A, I have to press the UP key 7 times. I think the best way is: when you press the UP key, plsql should show the command that is different from the previous command, so the recall sequence should be B -> A, not B -> B -> ... -> A. Then I only press the UP key 2 times to get command A.I think this should change little code in psql, but it will make all DBA's lives much easier. This is a strong requirement from the real DBA. Hope to get some feedback on this.Kevin, with readline, I use ctrl-r (incremental search backwards).but if you are willing to modify your .inputrc you can enable the \"windows cmd F5/F8 keys... Search Fwd/Bwd\".where you type B<F8> inputrc:# Map F8 (back) F5(forward) search like CMD\"\\e[19~\": history-search-backward\"\\e[15~\": history-search-forwardThere are commented out lines tying them to Page Up/Page Down... But 30 yrs in a CMD prompt...The upside is that this works in bash and other programs as well...HTHKirk Out!",
"msg_date": "Wed, 27 Dec 2023 02:00:04 -0500",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A tiny improvement of psql"
}
] |
[
{
"msg_contents": "I came across the 'missing braces' warning again when building master\n(0a93f803f4) on old GCC (4.8.5).\n\nblkreftable.c: In function ‘BlockRefTableSetLimitBlock’:\nblkreftable.c:268:2: warning: missing braces around initializer\n[-Wmissing-braces]\n BlockRefTableKey key = {0}; /* make sure any padding is zero */\n ^\n\nThis has popped up a few times in the past, and it seems to be GCC bug\n53119. We previously used the {{...}} approach to suppress it. Should\nwe do the same here, like attached?\n\nFWIW, in the same file we initialize BlockRefTableSerializedEntry\nvariables also with {{0}}.\n\nThanks\nRichard",
"msg_date": "Mon, 25 Dec 2023 10:42:43 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Erroneous -Werror=missing-braces on old GCC"
},
{
"msg_contents": "\nOn Mon, 25 Dec 2023 at 10:42, Richard Guo <[email protected]> wrote:\n> I came across the 'missing braces' warning again when building master\n> (0a93f803f4) on old GCC (4.8.5).\n>\n> blkreftable.c: In function ‘BlockRefTableSetLimitBlock’:\n> blkreftable.c:268:2: warning: missing braces around initializer\n> [-Wmissing-braces]\n> BlockRefTableKey key = {0}; /* make sure any padding is zero */\n> ^\n>\n\nI doubt if `key = {0}` equals `key = {{0}}`, since the second only\ninitialize the first field in `key`, it may depend on compiler to\ninitialize other fields (include padding).\n\n> This has popped up a few times in the past, and it seems to be GCC bug\n> 53119. We previously used the {{...}} approach to suppress it. Should\n> we do the same here, like attached?\n>\n> FWIW, in the same file we initialize BlockRefTableSerializedEntry\n> variables also with {{0}}.\n>\n> Thanks\n> Richard\n\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Mon, 25 Dec 2023 11:48:37 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Erroneous -Werror=missing-braces on old GCC"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> I came across the 'missing braces' warning again when building master\n> (0a93f803f4) on old GCC (4.8.5).\n\nOn the one hand, it's probably pointless to worry about buggy\nwarnings from ancient compilers ...\n\n> This has popped up a few times in the past, and it seems to be GCC bug\n> 53119. We previously used the {{...}} approach to suppress it. Should\n> we do the same here, like attached?\n> FWIW, in the same file we initialize BlockRefTableSerializedEntry\n> variables also with {{0}}.\n\n... but there is a lot to be said for maintaining stylistic consistency.\nGiven that we're doing it this way elsewhere, we should do it in these\nspots too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Dec 2023 22:57:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Erroneous -Werror=missing-braces on old GCC"
},
{
"msg_contents": "I wrote:\n> Richard Guo <[email protected]> writes:\n>> I came across the 'missing braces' warning again when building master\n>> (0a93f803f4) on old GCC (4.8.5).\n\n> On the one hand, it's probably pointless to worry about buggy\n> warnings from ancient compilers ...\n\nActually, after checking the buildfarm, I see that\n\narowana\nayu\nbatfish\nboa\nburi\ndemoiselle\ndhole\ndragonet\nidiacanthus\nlapwing\nmantid\npetalura\nrhinoceros\nshelduck\nsiskin\ntanager\ntopminnow\nxenodermus\n\nare all bitching about this (with a couple different spellings\nof the warning). So this is absolutely something to fix, and\nI'm rather surprised that nobody noticed it during the development\nof 174c48050.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Dec 2023 23:23:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Erroneous -Werror=missing-braces on old GCC"
}
] |
[
{
"msg_contents": "Hello.\n\npg_basebackup.c: got the following message lines:\n\n>\tprintf(_(\" -i, --incremental=OLDMANIFEST\\n\"));\n>\tprintf(_(\" take incremental backup\\n\"));\n\nI'd suggest merging these lines as follows (and the attached patch).\n\n> +\tprintf(_(\" -i, --incremental=OLDMANIFEST\\n\"\n> +\t\t\t \" take incremental backup\\n\"));\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 25 Dec 2023 13:47:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_basebackup has an accidentaly separated help message"
},
{
"msg_contents": "> >\tprintf(_(\" -i, --incremental=OLDMANIFEST\\n\"));\n> >\tprintf(_(\" take incremental backup\\n\"));\n> \n> I'd suggest merging these lines as follows (and the attached patch).\n> \n> > +\tprintf(_(\" -i, --incremental=OLDMANIFEST\\n\"\n> > +\t\t\t \" take incremental backup\\n\"));\n\nSorry, but I found another instance of this.\n\n>\tprintf(_(\" -T, --tablespace-mapping=OLDDIR=NEWDIR\\n\"));\n>\tprintf(_(\" relocate tablespace in OLDDIR to NEWDIR\\n\"));\n\nThe attached patch contains both of the above fixes.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 25 Dec 2023 14:39:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup has an accidentaly separated help message"
},
{
"msg_contents": "On Mon, Dec 25, 2023 at 02:39:16PM +0900, Kyotaro Horiguchi wrote:\n> The attached patch contains both of the above fixes.\n\nGood catches, let's fix them. You have noticed that while translating\nthese new messages, I guess?\n--\nMichael",
"msg_date": "Mon, 25 Dec 2023 15:42:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup has an accidentaly separated help message"
},
{
"msg_contents": "At Mon, 25 Dec 2023 15:42:41 +0900, Michael Paquier <[email protected]> wrote in \n> On Mon, Dec 25, 2023 at 02:39:16PM +0900, Kyotaro Horiguchi wrote:\n> > The attached patch contains both of the above fixes.\n> \n> Good catches, let's fix them. You have noticed that while translating\n> these new messages, I guess?\n\nYes. So, it turns out that they're found after they have been\ncommitted.\n\nBecause handling a large volume of translations all at once is\ndaunting, I am maintaining translations locally to avoid that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 25 Dec 2023 17:07:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup has an accidentaly separated help message"
},
{
"msg_contents": "On Mon, Dec 25, 2023 at 05:07:28PM +0900, Kyotaro Horiguchi wrote:\n> Yes. So, it turns out that they're found after they have been\n> committed.\n\nNo problem. I've just applied what you had. I hope this makes your\nlife a bit easier ;)\n--\nMichael",
"msg_date": "Tue, 26 Dec 2023 19:04:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup has an accidentaly separated help message"
},
{
"msg_contents": "At Tue, 26 Dec 2023 19:04:53 +0900, Michael Paquier <[email protected]> wrote in \n> On Mon, Dec 25, 2023 at 05:07:28PM +0900, Kyotaro Horiguchi wrote:\n> > Yes. So, it turns out that they're found after they have been\n> > committed.\n> \n> No problem. I've just applied what you had. I hope this makes your\n> life a bit easier ;)\n\nThanks for committing this!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Sun, 31 Dec 2023 18:36:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup has an accidentaly separated help message"
}
] |
[
{
"msg_contents": "Hi PostgreSQL Community,\nRecently I have been working on pg_dump regarding my project and wanted to\nexclude an extension from the dump generated. I wonder why it doesn't have\n--exclude-extension type of support whereas --extension exists!\nSince I needed that support, I took the initiative to contribute to the\ncommunity by adding the --exclude-extension flag.\nAttached is the patch for the same. Looking forward to your feedback.\n\nRegards\nAyush Vatsa\nAmazon Web services (AWS)",
"msg_date": "Mon, 25 Dec 2023 15:48:24 +0530",
"msg_from": "Ayush Vatsa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "Added a CF entry for the same https://commitfest.postgresql.org/46/4721/\n\nRegards\nAyush Vatsa\nAmazon Web Services (AWS)\n\nOn Mon, 25 Dec 2023 at 15:48, Ayush Vatsa <[email protected]> wrote:\n\n> Hi PostgreSQL Community,\n> Recently I have been working on pg_dump regarding my project and wanted to\n> exclude an extension from the dump generated. I wonder why it doesn't have\n> --exclude-extension type of support whereas --extension exists!\n> Since I needed that support, I took the initiative to contribute to the\n> community by adding the --exclude-extension flag.\n> Attached is the patch for the same. Looking forward to your feedback.\n>\n> Regards\n> Ayush Vatsa\n> Amazon Web services (AWS)\n>\n\nAdded a CF entry for the same https://commitfest.postgresql.org/46/4721/RegardsAyush VatsaAmazon Web Services (AWS)On Mon, 25 Dec 2023 at 15:48, Ayush Vatsa <[email protected]> wrote:Hi PostgreSQL Community,Recently I have been working on pg_dump regarding my project and wanted to exclude an extension from the dump generated. I wonder why it doesn't have --exclude-extension type of support whereas --extension exists!Since I needed that support, I took the initiative to contribute to the community by adding the --exclude-extension flag.Attached is the patch for the same. Looking forward to your feedback.RegardsAyush VatsaAmazon Web services (AWS)",
"msg_date": "Mon, 25 Dec 2023 15:52:02 +0530",
"msg_from": "Ayush Vatsa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "Hi\n\nOn Mon, Dec 25, 2023 at 6:22 PM Ayush Vatsa <[email protected]> wrote:\n>\n> Added a CF entry for the same https://commitfest.postgresql.org/46/4721/\n>\n> Regards\n> Ayush Vatsa\n> Amazon Web Services (AWS)\n>\n> On Mon, 25 Dec 2023 at 15:48, Ayush Vatsa <[email protected]> wrote:\n>>\n>> Hi PostgreSQL Community,\n>> Recently I have been working on pg_dump regarding my project and wanted to exclude an extension from the dump generated. I wonder why it doesn't have --exclude-extension type of support whereas --extension exists!\n>> Since I needed that support, I took the initiative to contribute to the community by adding the --exclude-extension flag.\n>> Attached is the patch for the same. Looking forward to your feedback.\n>>\n>> Regards\n>> Ayush Vatsa\n>> Amazon Web services (AWS)\n\n printf(_(\" -e, --extension=PATTERN dump the specified\nextension(s) only\\n\"));\n+ printf(_(\" --exclude-extension=PATTERN do NOT dump the specified\nextension(s)\\n\"));\n printf(_(\" -E, --encoding=ENCODING dump the data in encoding\nENCODING\\n\"));\n\nlong options should not mess with short options, does the following\nmake sense to you?\n\n printf(_(\" --enable-row-security enable row security (dump only\ncontent user has\\n\"\n \" access to)\\n\"));\n+ printf(_(\" --exclude-extension=PATTERN do NOT dump the specified\nextension(s)\\n\"));\n printf(_(\" --exclude-table-and-children=PATTERN\\n\"\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Mon, 25 Dec 2023 18:57:51 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "Hi,\n> long options should not mess with short options, does the following\n> make sense to you?\nYes that makes sense, a reason to keep them together is that they are of\nthe same kind\nBut I will update the patch accordingly.\n\nOne more thing I wanted to ask is, Should I separate them in the\npg_dump.sgml file too? Like writing documentation of --exclude-extension\nwith other long options?\n\nAfter your feedback I will update on both places in a single patch\n\nRegards,\nAyush Vatsa\nAmazon Web Services (AWS)\n\nOn Mon, 25 Dec 2023 at 16:28, Junwang Zhao <[email protected]> wrote:\n\n> Hi\n>\n> On Mon, Dec 25, 2023 at 6:22 PM Ayush Vatsa <[email protected]>\n> wrote:\n> >\n> > Added a CF entry for the same https://commitfest.postgresql.org/46/4721/\n> >\n> > Regards\n> > Ayush Vatsa\n> > Amazon Web Services (AWS)\n> >\n> > On Mon, 25 Dec 2023 at 15:48, Ayush Vatsa <[email protected]>\n> wrote:\n> >>\n> >> Hi PostgreSQL Community,\n> >> Recently I have been working on pg_dump regarding my project and wanted\n> to exclude an extension from the dump generated. I wonder why it doesn't\n> have --exclude-extension type of support whereas --extension exists!\n> >> Since I needed that support, I took the initiative to contribute to the\n> community by adding the --exclude-extension flag.\n> >> Attached is the patch for the same. Looking forward to your feedback.\n> >>\n> >> Regards\n> >> Ayush Vatsa\n> >> Amazon Web services (AWS)\n>\n> printf(_(\" -e, --extension=PATTERN dump the specified\n> extension(s) only\\n\"));\n> + printf(_(\" --exclude-extension=PATTERN do NOT dump the specified\n> extension(s)\\n\"));\n> printf(_(\" -E, --encoding=ENCODING dump the data in encoding\n> ENCODING\\n\"));\n>\n> long options should not mess with short options, does the following\n> make sense to you?\n>\n> printf(_(\" --enable-row-security enable row security (dump only\n> content user has\\n\"\n> \" access to)\\n\"));\n> + printf(_(\" --exclude-extension=PATTERN do NOT dump the specified\n> extension(s)\\n\"));\n> printf(_(\" --exclude-table-and-children=PATTERN\\n\"\n>\n> --\n> Regards\n> Junwang Zhao\n>\n\nHi,> long options should not mess with short options, does the following> make sense to you?Yes that makes sense, a reason to keep them together is that they are of the same kindBut I will update the patch accordingly.One more thing I wanted to ask is, Should I separate them in the pg_dump.sgml file too? Like writing documentation of --exclude-extension with other long options?After your feedback I will update on both places in a single patchRegards,Ayush VatsaAmazon Web Services (AWS)On Mon, 25 Dec 2023 at 16:28, Junwang Zhao <[email protected]> wrote:Hi\n\nOn Mon, Dec 25, 2023 at 6:22 PM Ayush Vatsa <[email protected]> wrote:\n>\n> Added a CF entry for the same https://commitfest.postgresql.org/46/4721/\n>\n> Regards\n> Ayush Vatsa\n> Amazon Web Services (AWS)\n>\n> On Mon, 25 Dec 2023 at 15:48, Ayush Vatsa <[email protected]> wrote:\n>>\n>> Hi PostgreSQL Community,\n>> Recently I have been working on pg_dump regarding my project and wanted to exclude an extension from the dump generated. I wonder why it doesn't have --exclude-extension type of support whereas --extension exists!\n>> Since I needed that support, I took the initiative to contribute to the community by adding the --exclude-extension flag.\n>> Attached is the patch for the same. 
Looking forward to your feedback.\n>>\n>> Regards\n>> Ayush Vatsa\n>> Amazon Web services (AWS)\n\n printf(_(\" -e, --extension=PATTERN dump the specified\nextension(s) only\\n\"));\n+ printf(_(\" --exclude-extension=PATTERN do NOT dump the specified\nextension(s)\\n\"));\n printf(_(\" -E, --encoding=ENCODING dump the data in encoding\nENCODING\\n\"));\n\nlong options should not mess with short options, does the following\nmake sense to you?\n\n printf(_(\" --enable-row-security enable row security (dump only\ncontent user has\\n\"\n \" access to)\\n\"));\n+ printf(_(\" --exclude-extension=PATTERN do NOT dump the specified\nextension(s)\\n\"));\n printf(_(\" --exclude-table-and-children=PATTERN\\n\"\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Mon, 25 Dec 2023 17:10:40 +0530",
"msg_from": "Ayush Vatsa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "On Mon, Dec 25, 2023 at 3:48 PM Ayush Vatsa <[email protected]> wrote:\n>\n> Hi PostgreSQL Community,\n> Recently I have been working on pg_dump regarding my project and wanted to exclude an extension from the dump generated. I wonder why it doesn't have --exclude-extension type of support whereas --extension exists!\n> Since I needed that support, I took the initiative to contribute to the community by adding the --exclude-extension flag.\n\nAren't extensions excluded by default? That's why we have --extension.\nWhy do we need to explicitly exclude extensions?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 1 Jan 2024 18:18:09 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "Hi,\n> Aren't extensions excluded by default? That's why we have --extension.\nAccording to the documentation of pg_dump when the --extension option is\nnot specified, all non-system extensions in the target database will get\ndumped.\n> Why do we need to explicitly exclude extensions?\nHence to include only a few we use --extension, but to exclude a few I am\nproposing --exclude-extension.\n\nOn Mon, 1 Jan 2024 at 18:18, Ashutosh Bapat <[email protected]>\nwrote:\n\n> On Mon, Dec 25, 2023 at 3:48 PM Ayush Vatsa <[email protected]>\n> wrote:\n> >\n> > Hi PostgreSQL Community,\n> > Recently I have been working on pg_dump regarding my project and wanted\n> to exclude an extension from the dump generated. I wonder why it doesn't\n> have --exclude-extension type of support whereas --extension exists!\n> > Since I needed that support, I took the initiative to contribute to the\n> community by adding the --exclude-extension flag.\n>\n> Aren't extensions excluded by default? That's why we have --extension.\n> Why do we need to explicitly exclude extensions?\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n\nHi,> Aren't extensions excluded by default? That's why we have --extension.According to the documentation of pg_dump when the --extension option is not specified, all non-system extensions in the target database will get dumped.> Why do we need to explicitly exclude extensions?Hence to include only a few we use --extension, but to exclude a few I am proposing --exclude-extension.On Mon, 1 Jan 2024 at 18:18, Ashutosh Bapat <[email protected]> wrote:On Mon, Dec 25, 2023 at 3:48 PM Ayush Vatsa <[email protected]> wrote:\n>\n> Hi PostgreSQL Community,\n> Recently I have been working on pg_dump regarding my project and wanted to exclude an extension from the dump generated. I wonder why it doesn't have --exclude-extension type of support whereas --extension exists!\n> Since I needed that support, I took the initiative to contribute to the community by adding the --exclude-extension flag.\n\nAren't extensions excluded by default? That's why we have --extension.\nWhy do we need to explicitly exclude extensions?\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 1 Jan 2024 18:58:14 +0530",
"msg_from": "Ayush Vatsa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "On Mon, 1 Jan 2024 at 13:28, Ayush Vatsa <[email protected]> wrote:\n>\n> According to the documentation of pg_dump when the --extension option is not specified, all non-system extensions in the target database will get dumped.\n> > Why do we need to explicitly exclude extensions?\n> Hence to include only a few we use --extension, but to exclude a few I am proposing --exclude-extension.\n>\n\nThanks for working on this. It seems like a useful feature to have.\nThe code changes look good, and it appears to work as expected.\n\nIn my opinion the order of options in pg_dump.sgml and the --help\noutput is fine. Keeping this new option together with -e/--extension\nmakes it easier to see, while otherwise it would get lost much further\ndown. Other opinions on that might differ though.\n\nThere are a couple of things missing from the patch, that should be added:\n\n1). The --filter option should be extended to support \"exclude\nextension pattern\" lines in the filter file. That syntax is already\naccepted, but it throws a not-supported error, but it's hopefully not\ntoo hard to make that work now.\n\n2). It ought to have some tests in the test script.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 6 Mar 2024 10:33:57 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "Hi,\n\n> In my opinion the order of options in pg_dump.sgml and the --help\n\n> output is fine. Keeping this new option together with -e/--extension\n\n> makes it easier to see, while otherwise it would get lost much further\n\n> down.\n\nI agree with your suggestion, so I'll maintain the original order as\nproposed.\n\n\n\n> The --filter option should be extended to support \"exclude\n\n> extension pattern\" lines in the filter file. That syntax is already\n\n> accepted, but it throws a not-supported error, but it's hopefully not\n\n> too hard to make that work now.\n\nWhile proposing the --exclude-extension flag in pg_dump, the --filter\noption\n\nwasn't there, so it got overlooked. But now that I've familiarized myself\nwith it\n\nand have tried to enhance its functionality with exclude-extension in the\n\nprovided patch. Thank you for bringing this to my attention.\n\n\n\n> It ought to have some tests in the test script.\n\nCorrect, I must have added it in the first patch itself.\n\nAs a newcomer to the database, I explored how testing works and tried\n\nto include test cases for the newly added flag. I've confirmed that all\ntest\n\ncases, including the ones I added, are passing.\n\n\n\nAttached is the complete patch with all the required code changes.\n\nLooking forward to your review and feedback.\n\n\nOn Wed, 6 Mar 2024 at 16:04, Dean Rasheed <[email protected]> wrote:\n\n> On Mon, 1 Jan 2024 at 13:28, Ayush Vatsa <[email protected]> wrote:\n> >\n> > According to the documentation of pg_dump when the --extension option is\n> not specified, all non-system extensions in the target database will get\n> dumped.\n> > > Why do we need to explicitly exclude extensions?\n> > Hence to include only a few we use --extension, but to exclude a few I\n> am proposing --exclude-extension.\n> >\n>\n> Thanks for working on this. It seems like a useful feature to have.\n> The code changes look good, and it appears to work as expected.\n>\n> In my opinion the order of options in pg_dump.sgml and the --help\n> output is fine. Keeping this new option together with -e/--extension\n> makes it easier to see, while otherwise it would get lost much further\n> down. Other opinions on that might differ though.\n>\n> There are a couple of things missing from the patch, that should be added:\n>\n> 1). The --filter option should be extended to support \"exclude\n> extension pattern\" lines in the filter file. That syntax is already\n> accepted, but it throws a not-supported error, but it's hopefully not\n> too hard to make that work now.\n>\n> 2). It ought to have some tests in the test script.\n>\n> Regards,\n> Dean\n>",
"msg_date": "Sat, 16 Mar 2024 23:06:31 +0530",
"msg_from": "Ayush Vatsa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "On Sat, 16 Mar 2024 at 17:36, Ayush Vatsa <[email protected]> wrote:\n>\n> Attached is the complete patch with all the required code changes.\n> Looking forward to your review and feedback.\n>\n\nThis looks good to me. I tested it and everything worked as expected.\n\nI ran it through pgindent to fix some whitespace issues and added\nanother test for the filter option, based on the test case you added.\n\nI'm marking this ready-for-commit (which I'll probably do myself in a\nday or two, unless anyone else claims it first).\n\nRegards,\nDean",
"msg_date": "Tue, 19 Mar 2024 11:19:25 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "> On 19 Mar 2024, at 12:19, Dean Rasheed <[email protected]> wrote:\n\n> I'm marking this ready-for-commit (which I'll probably do myself in a\n> day or two, unless anyone else claims it first).\n\nLGTM too from a read through. I did notice a few mistakes in the --filter\ndocumentation portion for other keywords but that's unrelated to this patch,\nwill fix them once this is in to avoid conflicts.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 19 Mar 2024 12:53:46 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "> I ran it through pgindent to fix some whitespace issues and added\n> another test for the filter option, based on the test case you added.\n\nThank you for addressing those whitespaces issues and adding more tests. I\nappreciate your attention to detail and will certainly be more vigilant in\nfuture.\n\n\n> I'm marking this ready-for-commit (which I'll probably do myself in a\n> day or two, unless anyone else claims it first).\n\nThank you very much, this marks my first contribution to the open-source\ncommunity, and I'm enthusiastic about making further meaningful\ncontributions to PostgreSQL in the future.\n\n\n> LGTM too from a read through. I did notice a few mistakes in the --filter\n> documentation portion for other keywords but that's unrelated to this\npatch,\n> will fix them once this is in to avoid conflicts.\n\nThanks Daniel for your review. It's gratifying to see that my patch not\nonly introduced the intended feature but also brought other minor mistakes\nto light.\n\nRegards\nAyush Vatsa\nAmazon Web services (AWS)\n\n\nOn Tue, 19 Mar 2024 at 17:23, Daniel Gustafsson <[email protected]> wrote:\n\n> > On 19 Mar 2024, at 12:19, Dean Rasheed <[email protected]> wrote:\n>\n> > I'm marking this ready-for-commit (which I'll probably do myself in a\n> > day or two, unless anyone else claims it first).\n>\n> LGTM too from a read through. I did notice a few mistakes in the --filter\n> documentation portion for other keywords but that's unrelated to this\n> patch,\n> will fix them once this is in to avoid conflicts.\n>\n> --\n> Daniel Gustafsson\n>\n>\n\n> I ran it through pgindent to fix some whitespace issues and added> another test for the filter option, based on the test case you added.Thank you for addressing those whitespaces issues and adding more tests. I appreciate your attention to detail and will certainly be more vigilant in future.> I'm marking this ready-for-commit (which I'll probably do myself in a> day or two, unless anyone else claims it first).Thank you very much, this marks my first contribution to the open-source community, and I'm enthusiastic about making further meaningful contributions to PostgreSQL in the future.> LGTM too from a read through. I did notice a few mistakes in the --filter> documentation portion for other keywords but that's unrelated to this patch,> will fix them once this is in to avoid conflicts.Thanks Daniel for your review. It's gratifying to see that my patch not only introduced the intended feature but also brought other minor mistakes to light.RegardsAyush VatsaAmazon Web services (AWS)On Tue, 19 Mar 2024 at 17:23, Daniel Gustafsson <[email protected]> wrote:> On 19 Mar 2024, at 12:19, Dean Rasheed <[email protected]> wrote:\n\n> I'm marking this ready-for-commit (which I'll probably do myself in a\n> day or two, unless anyone else claims it first).\n\nLGTM too from a read through. I did notice a few mistakes in the --filter\ndocumentation portion for other keywords but that's unrelated to this patch,\nwill fix them once this is in to avoid conflicts.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 20 Mar 2024 00:47:39 +0530",
"msg_from": "Ayush Vatsa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "On Tue, 19 Mar 2024 at 19:17, Ayush Vatsa <[email protected]> wrote:\n>\n> > I'm marking this ready-for-commit (which I'll probably do myself in a\n> > day or two, unless anyone else claims it first).\n>\n> Thank you very much, this marks my first contribution to the open-source community, and I'm enthusiastic about making further meaningful contributions to PostgreSQL in the future.\n>\n\nCommitted. Congratulations on your first contribution to PostgreSQL!\nMay it be the first of many to come.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 20 Mar 2024 08:17:52 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "On Tue, 19 Mar 2024 at 11:53, Daniel Gustafsson <[email protected]> wrote:\n>\n> I did notice a few mistakes in the --filter\n> documentation portion for other keywords but that's unrelated to this patch,\n> will fix them once this is in to avoid conflicts.\n>\n\nAttached is a patch for the --filter docs, covering the omissions I can see.\n\nRegards,\nDean",
"msg_date": "Fri, 7 Jun 2024 11:20:44 +0100",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "> Attached is a patch for the --filter docs, covering the omissions I can\nsee.\nThanks Dean for working on this.\nI have reviewed the changes and they look good to me.\n\nRegards,\nAyush Vatsa\nAmazon Web services (AWS)\n\nOn Fri, 7 Jun 2024 at 15:50, Dean Rasheed <[email protected]> wrote:\n\n> On Tue, 19 Mar 2024 at 11:53, Daniel Gustafsson <[email protected]> wrote:\n> >\n> > I did notice a few mistakes in the --filter\n> > documentation portion for other keywords but that's unrelated to this\n> patch,\n> > will fix them once this is in to avoid conflicts.\n> >\n>\n> Attached is a patch for the --filter docs, covering the omissions I can\n> see.\n>\n> Regards,\n> Dean\n>\n\n> Attached is a patch for the --filter docs, covering the omissions I can see.Thanks Dean for working on this.I have reviewed the changes and they look good to me.Regards,Ayush VatsaAmazon Web services (AWS)On Fri, 7 Jun 2024 at 15:50, Dean Rasheed <[email protected]> wrote:On Tue, 19 Mar 2024 at 11:53, Daniel Gustafsson <[email protected]> wrote:\n>\n> I did notice a few mistakes in the --filter\n> documentation portion for other keywords but that's unrelated to this patch,\n> will fix them once this is in to avoid conflicts.\n>\n\nAttached is a patch for the --filter docs, covering the omissions I can see.\n\nRegards,\nDean",
"msg_date": "Sun, 9 Jun 2024 00:08:48 +0530",
"msg_from": "Ayush Vatsa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
},
{
"msg_contents": "On Sat, 8 Jun 2024 at 19:39, Ayush Vatsa <[email protected]> wrote:\n>\n> > Attached is a patch for the --filter docs, covering the omissions I can see.\n> Thanks Dean for working on this.\n> I have reviewed the changes and they look good to me.\n>\n\nThanks for checking. I have committed this now.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 10 Jun 2024 15:21:35 +0100",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to include --exclude-extension Flag in pg_dump"
}
] |
[
{
"msg_contents": "Since dc2123400, any \"make\" in src/bin/pg_combinebackup whines\n\ncat: ./po/LINGUAS: No such file or directory\n\nif you have selected --enable-nls. This is evidently from\n\nAVAIL_LANGUAGES := $(shell cat $(srcdir)/po/LINGUAS)\n\nI don't particularly care to see that warning until whenever\nit is that the translations first get populated. Perhaps\nwe should hack up nls-global.mk to hide the warning, but\nthat might bite somebody someday. I'm inclined to just\nadd an empty LINGUAS file as a temporary measure.\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Dec 2023 12:48:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "No LINGUAS file yet for pg_combinebackup"
},
{
"msg_contents": "On Mon, Dec 25, 2023 at 12:48:17PM -0500, Tom Lane wrote:\n> I don't particularly care to see that warning until whenever\n> it is that the translations first get populated. Perhaps\n> we should hack up nls-global.mk to hide the warning, but\n> that might bite somebody someday. I'm inclined to just\n> add an empty LINGUAS file as a temporary measure.\n> Thoughts?\n\nI've noticed this noise as well, so +1 for the temporary empty file.\n--\nMichael",
"msg_date": "Tue, 26 Dec 2023 21:18:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No LINGUAS file yet for pg_combinebackup"
},
{
"msg_contents": "On 26.12.23 13:18, Michael Paquier wrote:\n> On Mon, Dec 25, 2023 at 12:48:17PM -0500, Tom Lane wrote:\n>> I don't particularly care to see that warning until whenever\n>> it is that the translations first get populated. Perhaps\n>> we should hack up nls-global.mk to hide the warning, but\n>> that might bite somebody someday. I'm inclined to just\n>> add an empty LINGUAS file as a temporary measure.\n>> Thoughts?\n> \n> I've noticed this noise as well, so +1 for the temporary empty file.\n\nI checked that meson also complains about the missing file, except that \nwe hadn't equipped pg_combinebackup with NLS support in meson yet. So I \nadded that as well. So having the empty file is the correct solution \nfor both build systems. Tom has already added the empty file.\n\n\n\n",
"msg_date": "Tue, 26 Dec 2023 21:38:47 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No LINGUAS file yet for pg_combinebackup"
}
] |
[
{
"msg_contents": "Hi,\n\nWorking on Asymmetric Join, I found slight inconsistency in the \ndescription of SpecialJoinInfo: join type JOIN_ANTI can be accompanied \nby a zero value of the ojrelid if this join was created by the \ntransformation of the NOT EXISTS subquery.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional",
"msg_date": "Tue, 26 Dec 2023 16:37:51 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Specify description of the SpecialJoinInfo structure"
}
] |
[
{
"msg_contents": "While reviewing Heikki's Omit-junk-columns patchset[1], I noticed that\nroot->upper_targets[] is used to set target for partial_distinct_rel,\nwhich is not great because root->upper_targets[] is not supposed to be\nused by the core code. The comment in grouping_planner() says:\n\n * Save the various upper-rel PathTargets we just computed into\n * root->upper_targets[]. The core code doesn't use this, but it\n * provides a convenient place for extensions to get at the info.\n\nThen while fixing this issue, I noticed an opportunity for improvement\nin how we generate Gather/GatherMerge paths for the two-phase DISTINCT.\nThe Gather/GatherMerge paths are added by generate_gather_paths(), which\ndoes not consider ordering that might be useful above the GatherMerge\nnode. This can be improved by using generate_useful_gather_paths()\ninstead. With this change I can see query plan improvement from the\nregression test \"select_distinct.sql\". For instance,\n\n-- Test parallel DISTINCT\nSET parallel_tuple_cost=0;\nSET parallel_setup_cost=0;\nSET min_parallel_table_scan_size=0;\nSET max_parallel_workers_per_gather=2;\n\n-- Ensure we get a parallel plan\nEXPLAIN (costs off)\nSELECT DISTINCT four FROM tenk1;\n\n-- on master\nEXPLAIN (costs off)\nSELECT DISTINCT four FROM tenk1;\n QUERY PLAN\n----------------------------------------------------\n Unique\n -> Sort\n Sort Key: four\n -> Gather\n Workers Planned: 2\n -> HashAggregate\n Group Key: four\n -> Parallel Seq Scan on tenk1\n(8 rows)\n\n-- on patched\nEXPLAIN (costs off)\nSELECT DISTINCT four FROM tenk1;\n QUERY PLAN\n----------------------------------------------------\n Unique\n -> Gather Merge\n Workers Planned: 2\n -> Sort\n Sort Key: four\n -> HashAggregate\n Group Key: four\n -> Parallel Seq Scan on tenk1\n(8 rows)\n\nI believe the second plan is better.\n\nAttached is a patch that includes this change and also eliminates the\nusage of root->upper_targets[] in the core code. It also makes some\ntweaks for the comment.\n\nAny thoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/2ca5865b-4693-40e5-8f78-f3b45d5378fb%40iki.fi\n\nThanks\nRichard",
"msg_date": "Tue, 26 Dec 2023 19:23:02 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "An improvement on parallel DISTINCT"
},
{
"msg_contents": "On Wed, 27 Dec 2023 at 00:23, Richard Guo <[email protected]> wrote:\n> -- on master\n> EXPLAIN (costs off)\n> SELECT DISTINCT four FROM tenk1;\n> QUERY PLAN\n> ----------------------------------------------------\n> Unique\n> -> Sort\n> Sort Key: four\n> -> Gather\n> Workers Planned: 2\n> -> HashAggregate\n> Group Key: four\n> -> Parallel Seq Scan on tenk1\n> (8 rows)\n>\n> -- on patched\n> EXPLAIN (costs off)\n> SELECT DISTINCT four FROM tenk1;\n> QUERY PLAN\n> ----------------------------------------------------\n> Unique\n> -> Gather Merge\n> Workers Planned: 2\n> -> Sort\n> Sort Key: four\n> -> HashAggregate\n> Group Key: four\n> -> Parallel Seq Scan on tenk1\n> (8 rows)\n>\n> I believe the second plan is better.\n\nI wonder if this change is worthwhile. The sort is only required at\nall because the planner opted to HashAggregate in phase1, of which the\nrows are output unordered. If phase1 was done by Group Aggregate, then\nno sorting would be needed. The only reason the planner didn't Hash\nAggregate for phase2 is because of the order we generate the distinct\npaths and because of STD_FUZZ_FACTOR.\n\nLook at the costs of the above plan:\n\nUnique (cost=397.24..397.28 rows=4 width=4)\n\nif I enable_sort=0; then I get a cheaper plan:\n\n HashAggregate (cost=397.14..397.18 rows=4 width=4)\n\nIf we add more rows then the cost of sorting will grow faster than the\ncost of hash aggregate due to the O(N log2 N) part of our sort\ncosting.\n\nIf I drop the index on tenk1(hundred), I only need to go to the\n\"hundred\" column to have it switch to Hash Aggregate on the 2nd phase.\nThis is because the number of distinct groups costs the paths for\nGroup Aggregate and Hash Aggregate more than STD_FUZZ_FACTOR apart.\nAdjusting the STD_FUZZ_FACTOR with the following means Hash Aggregate\nis used for both phases.\n\n-#define STD_FUZZ_FACTOR 1.01\n+#define STD_FUZZ_FACTOR 1.0000001\n\nIn light of this, do you still think it's worthwhile making this change?\n\nFor me, I think all it's going to result in is extra planner work\nwithout any performance gains.\n\n> Attached is a patch that includes this change and also eliminates the\n> usage of root->upper_targets[] in the core code. It also makes some\n> tweaks for the comment.\n\nWe should fix that. We can consider it independently from the other\nchange you're proposing.\n\nDavid\n\n\n",
"msg_date": "Fri, 2 Feb 2024 16:26:18 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improvement on parallel DISTINCT"
},
{
"msg_contents": "On Fri, Feb 2, 2024 at 11:26 AM David Rowley <[email protected]> wrote:\n\n> In light of this, do you still think it's worthwhile making this change?\n>\n> For me, I think all it's going to result in is extra planner work\n> without any performance gains.\n\n\nHmm, with the query below, I can see that the new plan is cheaper than\nthe old plan, and the cost difference exceeds STD_FUZZ_FACTOR.\n\ncreate table t (a int, b int);\ninsert into t select i%100000, i from generate_series(1,10000000)i;\nanalyze t;\n\n-- on master\nexplain (costs on) select distinct a from t order by a limit 1;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------\n Limit (cost=120188.50..120188.51 rows=1 width=4)\n -> Sort (cost=120188.50..120436.95 rows=99379 width=4)\n Sort Key: a\n -> HashAggregate (cost=118697.82..119691.61 rows=99379 width=4)\n Group Key: a\n -> Gather (cost=97331.33..118200.92 rows=198758 width=4)\n Workers Planned: 2\n -> HashAggregate (cost=96331.33..97325.12 rows=99379\nwidth=4)\n Group Key: a\n -> Parallel Seq Scan on t (cost=0.00..85914.67\nrows=4166667 width=4)\n(10 rows)\n\n-- on patched\nexplain (costs on) select distinct a from t order by a limit 1;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------\n Limit (cost=106573.93..106574.17 rows=1 width=4)\n -> Unique (cost=106573.93..130260.88 rows=99379 width=4)\n -> Gather Merge (cost=106573.93..129763.98 rows=198758 width=4)\n Workers Planned: 2\n -> Sort (cost=105573.91..105822.35 rows=99379 width=4)\n Sort Key: a\n -> HashAggregate (cost=96331.33..97325.12 rows=99379\nwidth=4)\n Group Key: a\n -> Parallel Seq Scan on t (cost=0.00..85914.67\nrows=4166667 width=4)\n(9 rows)\n\nIt seems that including a LIMIT clause can potentially favor the new\nplan.\n\nThanks\nRichard\n\nOn Fri, Feb 2, 2024 at 11:26 AM David Rowley <[email protected]> wrote:\nIn light of this, do you still think it's worthwhile making this change?\n\nFor me, I think all it's going to result in is extra planner work\nwithout any performance gains.Hmm, with the query below, I can see that the new plan is cheaper thanthe old plan, and the cost difference exceeds STD_FUZZ_FACTOR.create table t (a int, b int);insert into t select i%100000, i from generate_series(1,10000000)i;analyze t;-- on masterexplain (costs on) select distinct a from t order by a limit 1; QUERY PLAN-------------------------------------------------------------------------------------------------- Limit (cost=120188.50..120188.51 rows=1 width=4) -> Sort (cost=120188.50..120436.95 rows=99379 width=4) Sort Key: a -> HashAggregate (cost=118697.82..119691.61 rows=99379 width=4) Group Key: a -> Gather (cost=97331.33..118200.92 rows=198758 width=4) Workers Planned: 2 -> HashAggregate (cost=96331.33..97325.12 rows=99379 width=4) Group Key: a -> Parallel Seq Scan on t (cost=0.00..85914.67 rows=4166667 width=4)(10 rows)-- on patchedexplain (costs on) select distinct a from t order by a limit 1; QUERY PLAN-------------------------------------------------------------------------------------------------- Limit (cost=106573.93..106574.17 rows=1 width=4) -> Unique (cost=106573.93..130260.88 rows=99379 width=4) -> Gather Merge (cost=106573.93..129763.98 rows=198758 width=4) Workers Planned: 2 -> Sort (cost=105573.91..105822.35 rows=99379 width=4) Sort Key: a -> HashAggregate (cost=96331.33..97325.12 rows=99379 width=4) Group Key: a -> Parallel Seq Scan on t (cost=0.00..85914.67 
rows=4166667 width=4)(9 rows)It seems that including a LIMIT clause can potentially favor the newplan.ThanksRichard",
"msg_date": "Fri, 2 Feb 2024 15:46:58 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: An improvement on parallel DISTINCT"
},
{
"msg_contents": "On Fri, 2 Feb 2024 at 20:47, Richard Guo <[email protected]> wrote:\n>\n>\n> On Fri, Feb 2, 2024 at 11:26 AM David Rowley <[email protected]> wrote:\n>>\n>> In light of this, do you still think it's worthwhile making this change?\n>>\n>> For me, I think all it's going to result in is extra planner work\n>> without any performance gains.\n>\n>\n> Hmm, with the query below, I can see that the new plan is cheaper than\n> the old plan, and the cost difference exceeds STD_FUZZ_FACTOR.\n>\n> create table t (a int, b int);\n> insert into t select i%100000, i from generate_series(1,10000000)i;\n> analyze t;\n>\n> explain (costs on) select distinct a from t order by a limit 1;\n\nOK, a LIMIT clause... I didn't think of that. Given the test results\nbelow, I'm pretty convinced we should make the change.\n\nPerformance testing on an AMD 3990x with work_mem=4MB and hash_mem_multiplier=2.\n\n$ cat bench.sql\nselect distinct a from t order by a limit 1;\n$ pgbench -n -T 60 -f bench.sql postgres\n\n-- Master\n\nmax_parallel_workers_per_gather=2;\nlatency average = 470.310 ms\nlatency average = 468.673 ms\nlatency average = 469.463 ms\n\nmax_parallel_workers_per_gather=4;\nlatency average = 346.012 ms\nlatency average = 346.662 ms\nlatency average = 347.591 ms\n\nmax_parallel_workers_per_gather=8; + alter table t set (parallel_workers=8);\nlatency average = 300.298 ms\nlatency average = 300.029 ms\nlatency average = 300.314 ms\n\n-- Patched\n\nmax_parallel_workers_per_gather=2;\nlatency average = 424.176 ms\nlatency average = 431.870 ms\nlatency average = 431.870 ms (9.36% faster than master)\n\nmax_parallel_workers_per_gather=4;\nlatency average = 279.837 ms\nlatency average = 280.893 ms\nlatency average = 281.518 ms (23.51% faster than master)\n\nmax_parallel_workers_per_gather=8; + alter table t set (parallel_workers=8);\nlatency average = 178.585 ms\nlatency average = 178.780 ms\nlatency average = 179.768 ms (67.68% faster than master)\n\nSo the gains increase with more parallel workers due to pushing more\nwork to the worker. Amdahl's law approves of this.\n\nI'll push the patch shortly.\n\nDavid\n\n\n",
"msg_date": "Fri, 2 Feb 2024 23:39:25 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improvement on parallel DISTINCT"
},
{
"msg_contents": "On Fri, 2 Feb 2024 at 23:39, David Rowley <[email protected]> wrote:\n> I'll push the patch shortly.\n\nI've pushed the partial path sort part.\n\nNow for the other stuff you had. I didn't really like this part:\n\n+ /*\n+ * Set target for partial_distinct_rel as generate_useful_gather_paths\n+ * requires that the input rel has a valid reltarget.\n+ */\n+ partial_distinct_rel->reltarget = cheapest_partial_path->pathtarget;\n\nI think we should just make it work the same way as\ncreate_grouping_paths(), where grouping_target is passed as a\nparameter.\n\nI've done it that way in the attached.\n\nDavid",
"msg_date": "Sat, 3 Feb 2024 00:35:52 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improvement on parallel DISTINCT"
},
{
"msg_contents": "On Fri, Feb 2, 2024 at 6:39 PM David Rowley <[email protected]> wrote:\n\n> So the gains increase with more parallel workers due to pushing more\n> work to the worker. Amdahl's law approves of this.\n>\n> I'll push the patch shortly.\n\n\nThanks for the detailed testing and pushing the patch!\n\nThanks\nRichard\n\nOn Fri, Feb 2, 2024 at 6:39 PM David Rowley <[email protected]> wrote:\nSo the gains increase with more parallel workers due to pushing more\nwork to the worker. Amdahl's law approves of this.\n\nI'll push the patch shortly.Thanks for the detailed testing and pushing the patch!ThanksRichard",
"msg_date": "Mon, 5 Feb 2024 09:36:28 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: An improvement on parallel DISTINCT"
},
{
"msg_contents": "On Fri, Feb 2, 2024 at 7:36 PM David Rowley <[email protected]> wrote:\n\n> Now for the other stuff you had. I didn't really like this part:\n>\n> + /*\n> + * Set target for partial_distinct_rel as generate_useful_gather_paths\n> + * requires that the input rel has a valid reltarget.\n> + */\n> + partial_distinct_rel->reltarget = cheapest_partial_path->pathtarget;\n>\n> I think we should just make it work the same way as\n> create_grouping_paths(), where grouping_target is passed as a\n> parameter.\n>\n> I've done it that way in the attached.\n\n\nThe change looks good to me.\n\nBTW, I kind of doubt that 'create_partial_distinct_paths' is a proper\nfunction name given what it actually does. It not only generates\ndistinct paths based on input_rel's partial paths, but also adds\nGather/GatherMerge on top of these partially distinct paths, followed by\na final unique/aggregate path to ensure uniqueness of the final result.\nSo maybe 'create_parallel_distinct_paths' or something like that would\nbe better?\n\nI asked because I noticed that in create_partial_grouping_paths(), we\nonly generate partially aggregated paths, and any subsequent\nFinalizeAggregate step is called in the caller.\n\nThanks\nRichard\n\nOn Fri, Feb 2, 2024 at 7:36 PM David Rowley <[email protected]> wrote:\nNow for the other stuff you had. I didn't really like this part:\n\n+ /*\n+ * Set target for partial_distinct_rel as generate_useful_gather_paths\n+ * requires that the input rel has a valid reltarget.\n+ */\n+ partial_distinct_rel->reltarget = cheapest_partial_path->pathtarget;\n\nI think we should just make it work the same way as\ncreate_grouping_paths(), where grouping_target is passed as a\nparameter.\n\nI've done it that way in the attached.The change looks good to me.BTW, I kind of doubt that 'create_partial_distinct_paths' is a properfunction name given what it actually does. It not only generatesdistinct paths based on input_rel's partial paths, but also addsGather/GatherMerge on top of these partially distinct paths, followed bya final unique/aggregate path to ensure uniqueness of the final result.So maybe 'create_parallel_distinct_paths' or something like that wouldbe better?I asked because I noticed that in create_partial_grouping_paths(), weonly generate partially aggregated paths, and any subsequentFinalizeAggregate step is called in the caller.ThanksRichard",
"msg_date": "Mon, 5 Feb 2024 09:42:01 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: An improvement on parallel DISTINCT"
},
{
"msg_contents": "On Mon, 5 Feb 2024 at 14:42, Richard Guo <[email protected]> wrote:\n>\n>\n> On Fri, Feb 2, 2024 at 7:36 PM David Rowley <[email protected]> wrote:\n>> I think we should just make it work the same way as\n>> create_grouping_paths(), where grouping_target is passed as a\n>> parameter.\n>>\n>> I've done it that way in the attached.\n>\n>\n> The change looks good to me.\n\nI pushed the PathTarget changes.\n\n> BTW, I kind of doubt that 'create_partial_distinct_paths' is a proper\n> function name given what it actually does.\n\nI didn't make any changes here. I don't think messing with this is\nworth the trouble.\n\nDavid\n\n\n",
"msg_date": "Wed, 7 Feb 2024 21:24:08 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improvement on parallel DISTINCT"
}
] |
[
{
"msg_contents": "Hi\n\nCommit cac169d686eddb277880a0d8a760ac3007b4846a updated the default value\nof fdw_tuple_cost from 0.01 to 0.2. The attached patch updates the docs to\nreflect this change.\n\n\nThanks!\n\nUmair Shahid | Founder\n\n\nProfessional Services for PostgreSQL\nhttps://stormatics.tech/",
"msg_date": "Tue, 26 Dec 2023 20:29:50 +0500",
"msg_from": "Umair Shahid <[email protected]>",
"msg_from_op": true,
"msg_subject": "Update docs for default value of fdw_tuple_cost"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 12:27 AM Umair Shahid <[email protected]>\nwrote:\n\n> Commit cac169d686eddb277880a0d8a760ac3007b4846a updated the default value\n> of fdw_tuple_cost from 0.01 to 0.2. The attached patch updates the docs to\n> reflect this change.\n>\n\n+1. Nice catch.\n\nThanks\nRichard\n\n>\n\nOn Wed, Dec 27, 2023 at 12:27 AM Umair Shahid <[email protected]> wrote:\nCommit cac169d686eddb277880a0d8a760ac3007b4846a updated the default value of fdw_tuple_cost from 0.01 to 0.2. The attached patch updates the docs to reflect this change. +1. Nice catch.ThanksRichard",
"msg_date": "Wed, 27 Dec 2023 08:48:07 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update docs for default value of fdw_tuple_cost"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: tested, passed\n\nGood catch. This is a trivial fix and so I hope we can just get it in right away.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Wed, 03 Jan 2024 08:03:44 +0000",
"msg_from": "Chris Travers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update docs for default value of fdw_tuple_cost"
},
{
"msg_contents": "On Tue, Dec 26, 2023 at 11:27 PM Umair Shahid\n<[email protected]> wrote:\n>\n> Commit cac169d686eddb277880a0d8a760ac3007b4846a updated the default value of fdw_tuple_cost from 0.01 to 0.2. The attached patch updates the docs to reflect this change.\n\nPushed, thanks!\n\n\n",
"msg_date": "Thu, 11 Jan 2024 09:07:16 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update docs for default value of fdw_tuple_cost"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 7:07 AM John Naylor <[email protected]> wrote:\n\n> On Tue, Dec 26, 2023 at 11:27 PM Umair Shahid\n> <[email protected]> wrote:\n> >\n> > Commit cac169d686eddb277880a0d8a760ac3007b4846a updated the default\n> value of fdw_tuple_cost from 0.01 to 0.2. The attached patch updates the\n> docs to reflect this change.\n>\n> Pushed, thanks!\n>\n\nThank you, John.\n\nAnd thank you Chris Travers for the review.\n\nOn Thu, Jan 11, 2024 at 7:07 AM John Naylor <[email protected]> wrote:On Tue, Dec 26, 2023 at 11:27 PM Umair Shahid\n<[email protected]> wrote:\n>\n> Commit cac169d686eddb277880a0d8a760ac3007b4846a updated the default value of fdw_tuple_cost from 0.01 to 0.2. The attached patch updates the docs to reflect this change.\n\nPushed, thanks!Thank you, John. And thank you Chris Travers for the review.",
"msg_date": "Thu, 11 Jan 2024 11:54:21 +0500",
"msg_from": "Umair Shahid <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update docs for default value of fdw_tuple_cost"
}
] |
[
{
"msg_contents": "Quite a long time ago Robert asked me about the possibility of an \nincremental JSON parser. I wrote one, and I've tweaked it a bit, but the \nperformance is significantly worse that that of the current Recursive \nDescent parser. Nevertheless, I'm attaching my current WIP state for it, \nand I'll add it to the next CF to keep the conversation going.\n\nOne possible use would be in parsing large manifest files for \nincremental backup. However, it struck me a few days ago that this might \nnot work all that well. The current parser and the new parser both \npalloc() space for each field name and scalar token in the JSON (unless \nthey aren't used, which is normally not the case), and they don't free \nit, so that particularly if done in frontend code this amounts to a \npossible memory leak, unless the semantic routines do the freeing \nthemselves. So while we can save some memory by not having to slurp in \nthe whole JSON in one hit, we aren't saving any of that other allocation \nof memory, which amounts to almost as much space as the raw JSON.\n\nIn any case, I've had fun so it's not a total loss come what may :-)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 26 Dec 2023 11:48:25 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "WIP Incremental JSON Parser"
},
{
"msg_contents": "On Tue, Dec 26, 2023 at 11:49 AM Andrew Dunstan <[email protected]> wrote:\n> Quite a long time ago Robert asked me about the possibility of an\n> incremental JSON parser. I wrote one, and I've tweaked it a bit, but the\n> performance is significantly worse that that of the current Recursive\n> Descent parser. Nevertheless, I'm attaching my current WIP state for it,\n> and I'll add it to the next CF to keep the conversation going.\n\nThanks for doing this. I think it's useful even if it's slower than\nthe current parser, although that probably necessitates keeping both,\nwhich isn't great, but I don't see a better alternative.\n\n> One possible use would be in parsing large manifest files for\n> incremental backup. However, it struck me a few days ago that this might\n> not work all that well. The current parser and the new parser both\n> palloc() space for each field name and scalar token in the JSON (unless\n> they aren't used, which is normally not the case), and they don't free\n> it, so that particularly if done in frontend code this amounts to a\n> possible memory leak, unless the semantic routines do the freeing\n> themselves. So while we can save some memory by not having to slurp in\n> the whole JSON in one hit, we aren't saving any of that other allocation\n> of memory, which amounts to almost as much space as the raw JSON.\n\nIt seems like a pretty significant savings no matter what. Suppose the\nbackup_manifest file is 2GB, and instead of creating a 2GB buffer, you\ncreate an 1MB buffer and feed the data to the parser in 1MB chunks.\nWell, that saves 2GB less 1MB, full stop. Now if we address the issue\nyou raise here in some way, we can potentially save even more memory,\nwhich is great, but even if we don't, we still saved a bunch of memory\nthat could not have been saved in any other way.\n\nAs far as addressing that other issue, we could address the issue\neither by having the semantic routines free the memory if they don't\nneed it, or alternatively by having the parser itself free the memory\nafter invoking any callbacks to which it might be passed. The latter\napproach feels more conceptually pure, but the former might be the\nmore practical approach. I think what really matters here is that we\ndocument who must or may do which things. When a callback gets passed\na pointer, we can document either that (1) it's a palloc'd chunk that\nthe calllback can free if they want or (2) that it's a palloc'd chunk\nthat the caller must not free or (3) that it's not a palloc'd chunk.\nWe can further document the memory context in which the chunk will be\nallocated, if applicable, and when/if the parser will free it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 10:14:16 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-01-02 Tu 10:14, Robert Haas wrote:\n> On Tue, Dec 26, 2023 at 11:49 AM Andrew Dunstan <[email protected]> wrote:\n>> Quite a long time ago Robert asked me about the possibility of an\n>> incremental JSON parser. I wrote one, and I've tweaked it a bit, but the\n>> performance is significantly worse that that of the current Recursive\n>> Descent parser. Nevertheless, I'm attaching my current WIP state for it,\n>> and I'll add it to the next CF to keep the conversation going.\n> Thanks for doing this. I think it's useful even if it's slower than\n> the current parser, although that probably necessitates keeping both,\n> which isn't great, but I don't see a better alternative.\n>\n>> One possible use would be in parsing large manifest files for\n>> incremental backup. However, it struck me a few days ago that this might\n>> not work all that well. The current parser and the new parser both\n>> palloc() space for each field name and scalar token in the JSON (unless\n>> they aren't used, which is normally not the case), and they don't free\n>> it, so that particularly if done in frontend code this amounts to a\n>> possible memory leak, unless the semantic routines do the freeing\n>> themselves. So while we can save some memory by not having to slurp in\n>> the whole JSON in one hit, we aren't saving any of that other allocation\n>> of memory, which amounts to almost as much space as the raw JSON.\n> It seems like a pretty significant savings no matter what. Suppose the\n> backup_manifest file is 2GB, and instead of creating a 2GB buffer, you\n> create an 1MB buffer and feed the data to the parser in 1MB chunks.\n> Well, that saves 2GB less 1MB, full stop. Now if we address the issue\n> you raise here in some way, we can potentially save even more memory,\n> which is great, but even if we don't, we still saved a bunch of memory\n> that could not have been saved in any other way.\n>\n> As far as addressing that other issue, we could address the issue\n> either by having the semantic routines free the memory if they don't\n> need it, or alternatively by having the parser itself free the memory\n> after invoking any callbacks to which it might be passed. The latter\n> approach feels more conceptually pure, but the former might be the\n> more practical approach. I think what really matters here is that we\n> document who must or may do which things. When a callback gets passed\n> a pointer, we can document either that (1) it's a palloc'd chunk that\n> the calllback can free if they want or (2) that it's a palloc'd chunk\n> that the caller must not free or (3) that it's not a palloc'd chunk.\n> We can further document the memory context in which the chunk will be\n> allocated, if applicable, and when/if the parser will free it.\n\n\nYeah. One idea I had yesterday was to stash the field names, which in \nlarge JSON docs tent to be pretty repetitive, in a hash table instead of \npstrduping each instance. The name would be valid until the end of the \nparse, and would only need to be duplicated by the callback function if \nit were needed beyond that. That's not the case currently with the \nparse_manifest code. I'll work on using a hash table.\n\nThe parse_manifest code does seem to pfree the scalar values it no \nlonger needs fairly well, so maybe we don't need to to anything there.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 3 Jan 2024 06:57:57 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Wed, Jan 3, 2024 at 6:57 AM Andrew Dunstan <[email protected]> wrote:\n> Yeah. One idea I had yesterday was to stash the field names, which in\n> large JSON docs tent to be pretty repetitive, in a hash table instead of\n> pstrduping each instance. The name would be valid until the end of the\n> parse, and would only need to be duplicated by the callback function if\n> it were needed beyond that. That's not the case currently with the\n> parse_manifest code. I'll work on using a hash table.\n\nIMHO, this is not a good direction. Anybody who is parsing JSON\nprobably wants to discard the duplicated labels and convert other\nheavily duplicated strings to enum values or something. (e.g. if every\nrecord has {\"color\":\"red\"} or {\"color\":\"green\"}). So the hash table\nlookups will cost but won't really save anything more than just\nfreeing the memory not needed, but will probably be more expensive.\n\n> The parse_manifest code does seem to pfree the scalar values it no\n> longer needs fairly well, so maybe we don't need to to anything there.\n\nHmm. This makes me wonder if you've measured how much actual leakage there is?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Jan 2024 08:45:06 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-01-03 We 08:45, Robert Haas wrote:\n> On Wed, Jan 3, 2024 at 6:57 AM Andrew Dunstan <[email protected]> wrote:\n>> Yeah. One idea I had yesterday was to stash the field names, which in\n>> large JSON docs tent to be pretty repetitive, in a hash table instead of\n>> pstrduping each instance. The name would be valid until the end of the\n>> parse, and would only need to be duplicated by the callback function if\n>> it were needed beyond that. That's not the case currently with the\n>> parse_manifest code. I'll work on using a hash table.\n> IMHO, this is not a good direction. Anybody who is parsing JSON\n> probably wants to discard the duplicated labels and convert other\n> heavily duplicated strings to enum values or something. (e.g. if every\n> record has {\"color\":\"red\"} or {\"color\":\"green\"}). So the hash table\n> lookups will cost but won't really save anything more than just\n> freeing the memory not needed, but will probably be more expensive.\n\n\nI don't quite follow.\n\nSay we have a document with an array 1m objects, each with a field \ncalled \"color\". As it stands we'll allocate space for that field name 1m \ntimes. Using a hash table we'd allocated space for it once. And \nallocating the memory isn't free, although it might be cheaper than \ndoing hash lookups.\n\nI guess we can benchmark it and see what the performance impact of using \na hash table might be.\n\nAnother possibility would be simply to have the callback free the field \nname after use. for the parse_manifest code that could be a one-line \naddition to the code at the bottom of json_object_manifest_field_start().\n\n\n>> The parse_manifest code does seem to pfree the scalar values it no\n>> longer needs fairly well, so maybe we don't need to to anything there.\n> Hmm. This makes me wonder if you've measured how much actual leakage there is?\n\n\nNo I haven't. I have simply theorized about how much memory we might \nconsume if nothing were done by the callers to free the memory.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 3 Jan 2024 09:59:37 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Wed, Jan 3, 2024 at 9:59 AM Andrew Dunstan <[email protected]> wrote:\n> Say we have a document with an array 1m objects, each with a field\n> called \"color\". As it stands we'll allocate space for that field name 1m\n> times. Using a hash table we'd allocated space for it once. And\n> allocating the memory isn't free, although it might be cheaper than\n> doing hash lookups.\n>\n> I guess we can benchmark it and see what the performance impact of using\n> a hash table might be.\n>\n> Another possibility would be simply to have the callback free the field\n> name after use. for the parse_manifest code that could be a one-line\n> addition to the code at the bottom of json_object_manifest_field_start().\n\nYeah. So I'm arguing that allocating the memory each time and then\nfreeing it sounds cheaper than looking it up in the hash table every\ntime, discovering it's there, and thus skipping the allocate/free.\n\nI might be wrong about that. It's just that allocating and freeing a\nsmall chunk of memory should boil down to popping it off of a linked\nlist and then pushing it back on. And that sounds cheaper than hashing\nthe string and looking for it in a hash bucket.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Jan 2024 10:12:37 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-01-03 We 10:12, Robert Haas wrote:\n> On Wed, Jan 3, 2024 at 9:59 AM Andrew Dunstan <[email protected]> wrote:\n>> Say we have a document with an array 1m objects, each with a field\n>> called \"color\". As it stands we'll allocate space for that field name 1m\n>> times. Using a hash table we'd allocated space for it once. And\n>> allocating the memory isn't free, although it might be cheaper than\n>> doing hash lookups.\n>>\n>> I guess we can benchmark it and see what the performance impact of using\n>> a hash table might be.\n>>\n>> Another possibility would be simply to have the callback free the field\n>> name after use. for the parse_manifest code that could be a one-line\n>> addition to the code at the bottom of json_object_manifest_field_start().\n> Yeah. So I'm arguing that allocating the memory each time and then\n> freeing it sounds cheaper than looking it up in the hash table every\n> time, discovering it's there, and thus skipping the allocate/free.\n>\n> I might be wrong about that. It's just that allocating and freeing a\n> small chunk of memory should boil down to popping it off of a linked\n> list and then pushing it back on. And that sounds cheaper than hashing\n> the string and looking for it in a hash bucket.\n\n\n\nOK, cleaning up in the client code will be much simpler, so let's go \nwith that for now and revisit it later if necessary.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 3 Jan 2024 11:55:47 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
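To make the division of labour agreed above concrete — the parser allocates each field name, and the semantic callback frees it once it has no further use for it — here is a minimal standalone C sketch. The callback signature and the toy "parser" are invented for illustration; this is not the jsonapi.c or parse_manifest.c code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef void (*ofield_start_cb) (void *state, char *fname);

static void
toy_parse_object(const char **fields, int nfields,
                 ofield_start_cb cb, void *state)
{
    for (int i = 0; i < nfields; i++)
    {
        /* the parser allocates a copy of each field name it sees */
        char       *fname = strdup(fields[i]);

        cb(state, fname);
    }
}

static void
manifest_field_start(void *state, char *fname)
{
    int        *count = state;

    (*count)++;
    printf("field %d: %s\n", *count, fname);

    /* the callback owns the name now; free it rather than leak it */
    free(fname);
}

int
main(void)
{
    const char *fields[] = {"Path", "Size", "Last-Modified", "Checksum"};
    int         count = 0;

    toy_parse_object(fields, 4, manifest_field_start, &count);
    return 0;
}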
{
"msg_contents": "On Tue, Jan 02, 2024 at 10:14:16AM -0500, Robert Haas wrote:\n> It seems like a pretty significant savings no matter what. Suppose the\n> backup_manifest file is 2GB, and instead of creating a 2GB buffer, you\n> create an 1MB buffer and feed the data to the parser in 1MB chunks.\n> Well, that saves 2GB less 1MB, full stop. Now if we address the issue\n> you raise here in some way, we can potentially save even more memory,\n> which is great, but even if we don't, we still saved a bunch of memory\n> that could not have been saved in any other way.\n\nYou could also build a streaming incremental parser. That is, one that\noutputs a path and a leaf value (where leaf values are scalar values,\n`null`, `true`, `false`, numbers, and strings). Then if the caller is\ndoing something JSONPath-like then the caller can probably immediately\nfree almost all allocations and even terminate the parse early.\n\nNico\n-- \n\n\n",
"msg_date": "Wed, 3 Jan 2024 17:36:45 -0600",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Wed, Jan 3, 2024 at 6:36 PM Nico Williams <[email protected]> wrote:\n> On Tue, Jan 02, 2024 at 10:14:16AM -0500, Robert Haas wrote:\n> > It seems like a pretty significant savings no matter what. Suppose the\n> > backup_manifest file is 2GB, and instead of creating a 2GB buffer, you\n> > create an 1MB buffer and feed the data to the parser in 1MB chunks.\n> > Well, that saves 2GB less 1MB, full stop. Now if we address the issue\n> > you raise here in some way, we can potentially save even more memory,\n> > which is great, but even if we don't, we still saved a bunch of memory\n> > that could not have been saved in any other way.\n>\n> You could also build a streaming incremental parser. That is, one that\n> outputs a path and a leaf value (where leaf values are scalar values,\n> `null`, `true`, `false`, numbers, and strings). Then if the caller is\n> doing something JSONPath-like then the caller can probably immediately\n> free almost all allocations and even terminate the parse early.\n\nI think our current parser is event-based rather than this ... but it\nseems like this could easily be built on top of it, if someone wanted\nto.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Jan 2024 10:06:18 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Tue, Dec 26, 2023 at 8:49 AM Andrew Dunstan <[email protected]> wrote:\n> Quite a long time ago Robert asked me about the possibility of an\n> incremental JSON parser. I wrote one, and I've tweaked it a bit, but the\n> performance is significantly worse that that of the current Recursive\n> Descent parser.\n\nThe prediction stack is neat. It seems like the main loop is hit so\nmany thousands of times that micro-optimization would be necessary...\nI attached a sample diff to get rid of the strlen calls during\npush_prediction(), which speeds things up a bit (8-15%, depending on\noptimization level) on my machines.\n\nMaybe it's possible to condense some of those productions down, and\nreduce the loop count? E.g. does every \"scalar\" production need to go\nthree times through the loop/stack, or can the scalar semantic action\njust peek at the next token prediction and do all the callback work at\nonce?\n\n> + case JSON_SEM_SCALAR_CALL:\n> + {\n> + json_scalar_action sfunc = sem->scalar;\n> +\n> + if (sfunc != NULL)\n> + (*sfunc) (sem->semstate, scalar_val, scalar_tok);\n> + }\n\nIs it safe to store state (scalar_val/scalar_tok) on the stack, or\ndoes it disappear if the parser hits an incomplete token?\n\n> One possible use would be in parsing large manifest files for\n> incremental backup.\n\nI'm keeping an eye on this thread for OAuth, since the clients have to\nparse JSON as well. Those responses tend to be smaller, though, so\nyou'd have to really be hurting for resources to need this.\n\n--Jacob",
"msg_date": "Tue, 9 Jan 2024 10:46:17 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
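The strlen() point above can be illustrated with a toy table-driven parser: if each production stores its length alongside its body, pushing a prediction becomes a memcpy of a size known at compile time. The grammar, names and stack handling below are made up for the sketch and are not taken from the patch.

#include <stdio.h>
#include <string.h>

typedef struct
{
    const char *body;           /* production body, rightmost symbol first */
    size_t      len;            /* known at compile time, so no strlen() */
} td_entry;

#define TD_ENTRY(s) { (s), sizeof(s) - 1 }

/* made-up grammar: O -> { M }   and   M -> P , P */
static const td_entry prod_object = TD_ENTRY("}M{");
static const td_entry prod_members = TD_ENTRY("P,P");

static char pstack[256];
static size_t pstack_len = 0;

static void
push_prediction(const td_entry *entry)
{
    /* a memcpy of a precomputed length, not a strlen() per push */
    memcpy(pstack + pstack_len, entry->body, entry->len);
    pstack_len += entry->len;
}

int
main(void)
{
    push_prediction(&prod_object);

    while (pstack_len > 0)
    {
        char        top = pstack[--pstack_len];

        if (top == 'M')         /* nonterminal: expand it */
            push_prediction(&prod_members);
        else                    /* terminal: "match" it by printing it */
            putchar(top);
    }
    putchar('\n');              /* prints "{P,P}" */
    return 0;
}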
{
"msg_contents": "\nOn 2024-01-09 Tu 13:46, Jacob Champion wrote:\n> On Tue, Dec 26, 2023 at 8:49 AM Andrew Dunstan <[email protected]> wrote:\n>> Quite a long time ago Robert asked me about the possibility of an\n>> incremental JSON parser. I wrote one, and I've tweaked it a bit, but the\n>> performance is significantly worse that that of the current Recursive\n>> Descent parser.\n> The prediction stack is neat. It seems like the main loop is hit so\n> many thousands of times that micro-optimization would be necessary...\n> I attached a sample diff to get rid of the strlen calls during\n> push_prediction(), which speeds things up a bit (8-15%, depending on\n> optimization level) on my machines.\n\n\nThanks for looking! I've been playing around with a similar idea, but \nyours might be better.\n\n\n\n>\n> Maybe it's possible to condense some of those productions down, and\n> reduce the loop count? E.g. does every \"scalar\" production need to go\n> three times through the loop/stack, or can the scalar semantic action\n> just peek at the next token prediction and do all the callback work at\n> once?\n\n\nAlso a good suggestion. Will look and see. IIRC I had trouble with this bit.\n\n\n>\n>> + case JSON_SEM_SCALAR_CALL:\n>> + {\n>> + json_scalar_action sfunc = sem->scalar;\n>> +\n>> + if (sfunc != NULL)\n>> + (*sfunc) (sem->semstate, scalar_val, scalar_tok);\n>> + }\n> Is it safe to store state (scalar_val/scalar_tok) on the stack, or\n> does it disappear if the parser hits an incomplete token?\n\n\nGood point. In fact it might be responsible for the error I'm currently \ntrying to get to the bottom of.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 10 Jan 2024 06:27:24 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-01-09 Tu 13:46, Jacob Champion wrote:\n> On Tue, Dec 26, 2023 at 8:49 AM Andrew Dunstan <[email protected]> wrote:\n>> Quite a long time ago Robert asked me about the possibility of an\n>> incremental JSON parser. I wrote one, and I've tweaked it a bit, but the\n>> performance is significantly worse that that of the current Recursive\n>> Descent parser.\n> The prediction stack is neat. It seems like the main loop is hit so\n> many thousands of times that micro-optimization would be necessary...\n> I attached a sample diff to get rid of the strlen calls during\n> push_prediction(), which speeds things up a bit (8-15%, depending on\n> optimization level) on my machines.\n>\n> Maybe it's possible to condense some of those productions down, and\n> reduce the loop count? E.g. does every \"scalar\" production need to go\n> three times through the loop/stack, or can the scalar semantic action\n> just peek at the next token prediction and do all the callback work at\n> once?\n>\n>> + case JSON_SEM_SCALAR_CALL:\n>> + {\n>> + json_scalar_action sfunc = sem->scalar;\n>> +\n>> + if (sfunc != NULL)\n>> + (*sfunc) (sem->semstate, scalar_val, scalar_tok);\n>> + }\n> Is it safe to store state (scalar_val/scalar_tok) on the stack, or\n> does it disappear if the parser hits an incomplete token?\n>\n>> One possible use would be in parsing large manifest files for\n>> incremental backup.\n> I'm keeping an eye on this thread for OAuth, since the clients have to\n> parse JSON as well. Those responses tend to be smaller, though, so\n> you'd have to really be hurting for resources to need this.\n>\n\nI've incorporated your suggestion, and fixed the bug you identified.\n\n\nThe attached also adds support for incrementally parsing backup \nmanifests, and uses that in the three places we call the manifest parser.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 15 Jan 2024 10:48:00 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4725/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4725\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 17:29:17 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-01-22 Mo 01:29, Peter Smith wrote:\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there were CFbot test failures last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n\n\nThanks.\n\nLet's see if the attached does better.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 22 Jan 2024 14:16:07 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-01-22 Mo 14:16, Andrew Dunstan wrote:\n>\n> On 2024-01-22 Mo 01:29, Peter Smith wrote:\n>> 2024-01 Commitfest.\n>>\n>> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n>> there were CFbot test failures last time it was run [2]. Please have a\n>> look and post an updated version if necessary.\n>\n>\n> Thanks.\n>\n> Let's see if the attached does better.\n\n\n\nThis time for sure! (Watch me pull a rabbit out of my hat!)\n\n\nIt turns out that NO_TEMP_INSTALL=1 can do ugly things, so I removed it, \nand I think the test will now pass.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 22 Jan 2024 18:01:20 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-01-22 Mo 18:01, Andrew Dunstan wrote:\n>\n> On 2024-01-22 Mo 14:16, Andrew Dunstan wrote:\n>>\n>> On 2024-01-22 Mo 01:29, Peter Smith wrote:\n>>> 2024-01 Commitfest.\n>>>\n>>> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n>>> there were CFbot test failures last time it was run [2]. Please have a\n>>> look and post an updated version if necessary.\n>>\n>>\n>> Thanks.\n>>\n>> Let's see if the attached does better.\n>\n>\n>\n> This time for sure! (Watch me pull a rabbit out of my hat!)\n>\n>\n> It turns out that NO_TEMP_INSTALL=1 can do ugly things, so I removed \n> it, and I think the test will now pass.\n>\n>\n>\n\nFixed one problem but there are some others. I'm hoping this will \nsatisfy the cfbot.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 22 Jan 2024 21:02:23 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-01-22 Mo 21:02, Andrew Dunstan wrote:\n>\n> On 2024-01-22 Mo 18:01, Andrew Dunstan wrote:\n>>\n>> On 2024-01-22 Mo 14:16, Andrew Dunstan wrote:\n>>>\n>>> On 2024-01-22 Mo 01:29, Peter Smith wrote:\n>>>> 2024-01 Commitfest.\n>>>>\n>>>> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n>>>> there were CFbot test failures last time it was run [2]. Please have a\n>>>> look and post an updated version if necessary.\n>>>\n>>>\n>>> Thanks.\n>>>\n>>> Let's see if the attached does better.\n>>\n>>\n>>\n>> This time for sure! (Watch me pull a rabbit out of my hat!)\n>>\n>>\n>> It turns out that NO_TEMP_INSTALL=1 can do ugly things, so I removed \n>> it, and I think the test will now pass.\n>>\n>>\n>>\n>\n> Fixed one problem but there are some others. I'm hoping this will \n> satisfy the cfbot.\n>\n>\n>\n\nThe cfbot reports an error on a 32 bit build \n<https://api.cirrus-ci.com/v1/artifact/task/6055909135220736/testrun/build-32/testrun/pg_combinebackup/003_timeline/log/regress_log_003_timeline>:\n\n# Running: pg_basebackup -D /tmp/cirrus-ci-build/build-32/testrun/pg_combinebackup/003_timeline/data/t_003_timeline_node1_data/backup/backup2 --no-sync -cfast --incremental /tmp/cirrus-ci-build/build-32/testrun/pg_combinebackup/003_timeline/data/t_003_timeline_node1_data/backup/backup1/backup_manifest\npg_basebackup: error: could not upload manifest: ERROR: could not parse backup manifest: file size is not an integer\npg_basebackup: removing data directory \"/tmp/cirrus-ci-build/build-32/testrun/pg_combinebackup/003_timeline/data/t_003_timeline_node1_data/backup/backup2\"\n[02:41:07.830](0.073s) not ok 2 - incremental backup from node1\n[02:41:07.830](0.000s) # Failed test 'incremental backup from node1'\n\nI have set up a Debian 12 EC2 instance following the recipe at \n<https://raw.githubusercontent.com/anarazel/pg-vm-images/main/scripts/linux_debian_install_deps.sh>, \nand ran what I think are the same tests dozens of times, but the failure \ndid not reappear in my setup. Unfortunately, the test doesn't show the \nfailing manifest or log the failing field, so trying to divine what \nhappened here is more than difficult.\n\nNot sure how to address this.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-01-22 Mo 21:02, Andrew Dunstan\n wrote:\n\n\n\n On 2024-01-22 Mo 18:01, Andrew Dunstan wrote:\n \n\n\n On 2024-01-22 Mo 14:16, Andrew Dunstan wrote:\n \n\n\n On 2024-01-22 Mo 01:29, Peter Smith wrote:\n \n2024-01 Commitfest.\n \n\n Hi, This patch has a CF status of \"Needs Review\" [1], but it\n seems\n \n there were CFbot test failures last time it was run [2].\n Please have a\n \n look and post an updated version if necessary.\n \n\n\n\n Thanks.\n \n\n Let's see if the attached does better.\n \n\n\n\n\n This time for sure! (Watch me pull a rabbit out of my hat!)\n \n\n\n It turns out that NO_TEMP_INSTALL=1 can do ugly things, so I\n removed it, and I think the test will now pass.\n \n\n\n\n\n\n Fixed one problem but there are some others. 
I'm hoping this will\n satisfy the cfbot.\n \n\n\n\n\n\n\nThe cfbot reports an error on a 32 bit build\n<https://api.cirrus-ci.com/v1/artifact/task/6055909135220736/testrun/build-32/testrun/pg_combinebackup/003_timeline/log/regress_log_003_timeline>:\n# Running: pg_basebackup -D /tmp/cirrus-ci-build/build-32/testrun/pg_combinebackup/003_timeline/data/t_003_timeline_node1_data/backup/backup2 --no-sync -cfast --incremental /tmp/cirrus-ci-build/build-32/testrun/pg_combinebackup/003_timeline/data/t_003_timeline_node1_data/backup/backup1/backup_manifest\npg_basebackup: error: could not upload manifest: ERROR: could not parse backup manifest: file size is not an integer\npg_basebackup: removing data directory \"/tmp/cirrus-ci-build/build-32/testrun/pg_combinebackup/003_timeline/data/t_003_timeline_node1_data/backup/backup2\"\n[02:41:07.830](0.073s) not ok 2 - incremental backup from node1\n[02:41:07.830](0.000s) # Failed test 'incremental backup from node1'\n\n\nI have set up a Debian 12 EC2 instance following the recipe at\n<https://raw.githubusercontent.com/anarazel/pg-vm-images/main/scripts/linux_debian_install_deps.sh>,\n and ran what I think are the same tests dozens of times, but the\n failure did not reappear in my setup. Unfortunately, the test\n doesn't show the failing manifest or log the failing field, so\n trying to divine what happened here is more than difficult.\nNot sure how to address this.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 24 Jan 2024 10:04:34 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 10:04 AM Andrew Dunstan <[email protected]> wrote:\n> The cfbot reports an error on a 32 bit build <https://api.cirrus-ci.com/v1/artifact/task/6055909135220736/testrun/build-32/testrun/pg_combinebackup/003_timeline/log/regress_log_003_timeline>:\n>\n> # Running: pg_basebackup -D /tmp/cirrus-ci-build/build-32/testrun/pg_combinebackup/003_timeline/data/t_003_timeline_node1_data/backup/backup2 --no-sync -cfast --incremental /tmp/cirrus-ci-build/build-32/testrun/pg_combinebackup/003_timeline/data/t_003_timeline_node1_data/backup/backup1/backup_manifest\n> pg_basebackup: error: could not upload manifest: ERROR: could not parse backup manifest: file size is not an integer\n> pg_basebackup: removing data directory \"/tmp/cirrus-ci-build/build-32/testrun/pg_combinebackup/003_timeline/data/t_003_timeline_node1_data/backup/backup2\"\n> [02:41:07.830](0.073s) not ok 2 - incremental backup from node1\n> [02:41:07.830](0.000s) # Failed test 'incremental backup from node1'\n>\n> I have set up a Debian 12 EC2 instance following the recipe at <https://raw.githubusercontent.com/anarazel/pg-vm-images/main/scripts/linux_debian_install_deps.sh>, and ran what I think are the same tests dozens of times, but the failure did not reappear in my setup. Unfortunately, the test doesn't show the failing manifest or log the failing field, so trying to divine what happened here is more than difficult.\n>\n> Not sure how to address this.\n\nYeah, that's really odd. The backup size field is printed like this:\n\nappendStringInfo(&buf, \"\\\"Size\\\": %zu, \", size);\n\nAnd parsed like this:\n\n size = strtoul(parse->size, &ep, 10);\n if (*ep)\n json_manifest_parse_failure(parse->context,\n\n \"file size is not an integer\");\n\nI confess to bafflement -- how could the output of the first fail to\nbe parsed by the second? The manifest has to look pretty much valid in\norder not to error out before it gets to this check, with just that\none field corrupted. But I don't understand how that could happen.\n\nI agree that the error reporting could be better here, but it didn't\nseem worth spending time on when I wrote the code. I figured the only\nway we could end up with something like \"Size\": \"Walrus\" is if the\nuser was messing with us on purpose. Apparently that's not so, yet the\nmechanism eludes me. Or maybe it's not some random string, but is\nsomething like an empty string or a number with trailing garbage or a\nnumber that's out of range. But I don't see how any of those things\ncan happen either.\n\nMaybe you should adjust your patch to dump the manifests into the log\nfile with note(). Then when cfbot runs on it you can see exactly what\nthe raw file looks like. Although I wonder if it's possible that the\nmanifest itself is OK, but somehow it gets garbled when uploaded to\nthe server, either because the client code that sends it or the server\ncode that receives it does something that isn't safe in 32-bit mode.\nIf we hypothesize an off-by-one error or a buffer overrun, that could\npossibly explain how one field got garbled while the rest of the file\nis OK.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jan 2024 13:08:34 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
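For reference, a standalone sketch of the kind of strict size parsing being discussed — reject empty input, trailing garbage and out-of-range values — might look like the following. It mirrors the strtoul/endptr pattern quoted above, but it is not the parse_manifest.c code itself, and it adds checks (ep == str, ERANGE) that the quoted snippet does not have.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static bool
parse_size_field(const char *str, unsigned long *result)
{
    char       *ep;

    errno = 0;
    *result = strtoul(str, &ep, 10);

    /* no digits consumed, trailing junk, or overflow => not an integer */
    if (ep == str || *ep != '\0' || errno == ERANGE)
        return false;
    return true;
}

int
main(void)
{
    const char *samples[] = {"16384", "16384x", "", "99999999999999999999"};
    unsigned long val;

    for (int i = 0; i < 4; i++)
    {
        if (parse_size_field(samples[i], &val))
            printf("'%s' -> %lu\n", samples[i], val);
        else
            printf("'%s' -> file size is not an integer\n", samples[i]);
    }
    return 0;
}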
{
"msg_contents": "On 2024-01-24 We 13:08, Robert Haas wrote:\n>\n> Maybe you should adjust your patch to dump the manifests into the log\n> file with note(). Then when cfbot runs on it you can see exactly what\n> the raw file looks like. Although I wonder if it's possible that the\n> manifest itself is OK, but somehow it gets garbled when uploaded to\n> the server, either because the client code that sends it or the server\n> code that receives it does something that isn't safe in 32-bit mode.\n> If we hypothesize an off-by-one error or a buffer overrun, that could\n> possibly explain how one field got garbled while the rest of the file\n> is OK.\n\n\nYeah, I thought earlier today I was on the track of an off by one error, \nbut I was apparently mistaken, so here's the same patch set with an \nextra patch that logs a bunch of stuff, and might let us see what's \nupsetting the cfbot.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 26 Jan 2024 12:15:27 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-01-26 Fr 12:15, Andrew Dunstan wrote:\n>\n> On 2024-01-24 We 13:08, Robert Haas wrote:\n>>\n>> Maybe you should adjust your patch to dump the manifests into the log\n>> file with note(). Then when cfbot runs on it you can see exactly what\n>> the raw file looks like. Although I wonder if it's possible that the\n>> manifest itself is OK, but somehow it gets garbled when uploaded to\n>> the server, either because the client code that sends it or the server\n>> code that receives it does something that isn't safe in 32-bit mode.\n>> If we hypothesize an off-by-one error or a buffer overrun, that could\n>> possibly explain how one field got garbled while the rest of the file\n>> is OK.\n>\n>\n> Yeah, I thought earlier today I was on the track of an off by one \n> error, but I was apparently mistaken, so here's the same patch set \n> with an extra patch that logs a bunch of stuff, and might let us see \n> what's upsetting the cfbot.\n>\n>\n>\n\nWell, that didn't help a lot, but meanwhile the CFBot seems to have \ndecided in the last few days that it's now happy, so full steam aead! ;-)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 19 Feb 2024 23:41:07 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 2:10 PM Andrew Dunstan <[email protected]> wrote:\n> Well, that didn't help a lot, but meanwhile the CFBot seems to have\n> decided in the last few days that it's now happy, so full steam aead! ;-)\n\nI haven't been able to track down the root cause yet, but I am able to\nreproduce the failure consistently on my machine:\n\n ERROR: could not parse backup manifest: file size is not an\ninteger: '16384, \"Last-Modified\": \"2024-02-20 23:07:43 GMT\",\n\"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"66c829cd\" },\n { \"Path\": \"base/4/2606\", \"Size\": 24576, \"Last-Modified\": \"20\n\nFull log is attached.\n\n--Jacob",
"msg_date": "Tue, 20 Feb 2024 16:53:35 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-02-20 Tu 19:53, Jacob Champion wrote:\n> On Tue, Feb 20, 2024 at 2:10 PM Andrew Dunstan <[email protected]> wrote:\n>> Well, that didn't help a lot, but meanwhile the CFBot seems to have\n>> decided in the last few days that it's now happy, so full steam aead! ;-)\n> I haven't been able to track down the root cause yet, but I am able to\n> reproduce the failure consistently on my machine:\n>\n> ERROR: could not parse backup manifest: file size is not an\n> integer: '16384, \"Last-Modified\": \"2024-02-20 23:07:43 GMT\",\n> \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"66c829cd\" },\n> { \"Path\": \"base/4/2606\", \"Size\": 24576, \"Last-Modified\": \"20\n>\n> Full log is attached.\n>\n\n\n*sigh* That's weird. I wonder why you can reproduce it and I can't. Can \nyou give me details of the build? OS, compiler, path to source, build \nsetup etc.? Anything that might be remotely relevant.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 21 Feb 2024 00:32:07 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 9:32 PM Andrew Dunstan <[email protected]> wrote:\n> *sigh* That's weird. I wonder why you can reproduce it and I can't. Can\n> you give me details of the build? OS, compiler, path to source, build\n> setup etc.? Anything that might be remotely relevant.\n\nSure:\n- guest VM running in UTM (QEMU 7.2) is Ubuntu 22.04 for ARM, default\ncore count, 8GB\n- host is macOS Sonoma 14.3.1, Apple Silicon (M3 Pro), 36GB\n- it's a Meson build (plus a diff for the test utilities, attached,\nbut that's hopefully not relevant to the failure)\n- buildtype=debug, optimization=g, cassert=true\n- GCC 11.4\n- build path is nested a bit (~/src/postgres/worktree-inc-json/build-dev)\n\n--Jacob",
"msg_date": "Wed, 21 Feb 2024 06:50:21 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 6:50 AM Jacob Champion\n<[email protected]> wrote:\n> On Tue, Feb 20, 2024 at 9:32 PM Andrew Dunstan <[email protected]> wrote:\n> > *sigh* That's weird. I wonder why you can reproduce it and I can't. Can\n> > you give me details of the build? OS, compiler, path to source, build\n> > setup etc.? Anything that might be remotely relevant.\n\nThis construction seems suspect, in json_lex_number():\n\n> if (lex->incremental && !lex->inc_state->is_last_chunk &&\n> len >= lex->input_length)\n> {\n> appendStringInfoString(&lex->inc_state->partial_token,\n> lex->token_start);\n> return JSON_INCOMPLETE;\n> }\n\nappendStringInfoString() isn't respecting the end of the chunk: if\nthere's extra data after the chunk boundary (as\nAppendIncrementalManifestData() does) then all of that will be stuck\nonto the end of the partial_token.\n\nI'm about to context-switch off of this for the day, but I can work on\na patch tomorrow if that'd be helpful. It looks like this is not the\nonly call to appendStringInfoString().\n\n--Jacob\n\n\n",
"msg_date": "Wed, 21 Feb 2024 12:26:51 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
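The hazard described here can be reproduced in miniature without StringInfo at all: if the bytes after the chunk boundary are not a NUL, an append that relies on strlen() keeps copying them into the partial token, which is exactly the shape of the earlier 'file size is not an integer: 16384, "Last-Modified" ...' failure. A minimal standalone C illustration, with plain C string functions standing in for appendStringInfoString and a length-bounded append:

#include <stdio.h>
#include <string.h>

int
main(void)
{
    /*
     * Pretend the manifest text sits in one buffer, but only the first
     * 5 bytes belong to the current chunk; the rest is "future" data.
     */
    const char  buffer[] = "16384, \"Last-Modified\": ...";
    size_t      chunk_len = 5;  /* the token ends at the chunk boundary */

    char        wrong[64] = "";
    char        right[64] = "";

    /* strlen()-based append: keeps going past the chunk boundary */
    strncat(wrong, buffer, sizeof(wrong) - 1);

    /* length-bounded append: respects the chunk boundary */
    memcpy(right, buffer, chunk_len);
    right[chunk_len] = '\0';

    printf("strlen-based partial token:    '%s'\n", wrong);
    printf("length-bounded partial token:  '%s'\n", right);
    return 0;
}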
{
"msg_contents": "On 2024-02-21 We 15:26, Jacob Champion wrote:\n> On Wed, Feb 21, 2024 at 6:50 AM Jacob Champion\n> <[email protected]> wrote:\n>> On Tue, Feb 20, 2024 at 9:32 PM Andrew Dunstan <[email protected]> wrote:\n>>> *sigh* That's weird. I wonder why you can reproduce it and I can't. Can\n>>> you give me details of the build? OS, compiler, path to source, build\n>>> setup etc.? Anything that might be remotely relevant.\n> This construction seems suspect, in json_lex_number():\n>\n>> if (lex->incremental && !lex->inc_state->is_last_chunk &&\n>> len >= lex->input_length)\n>> {\n>> appendStringInfoString(&lex->inc_state->partial_token,\n>> lex->token_start);\n>> return JSON_INCOMPLETE;\n>> }\n> appendStringInfoString() isn't respecting the end of the chunk: if\n> there's extra data after the chunk boundary (as\n> AppendIncrementalManifestData() does) then all of that will be stuck\n> onto the end of the partial_token.\n>\n> I'm about to context-switch off of this for the day, but I can work on\n> a patch tomorrow if that'd be helpful. It looks like this is not the\n> only call to appendStringInfoString().\n>\n\nYeah, the issue seems to be with chunks of json that are not \nnull-terminated. We don't require that they be so this code was buggy. \nIt wasn't picked up earlier because the tests that I wrote did put a \nnull byte at the end. Patch 5 in this series fixes those issues and \nadjusts most of the tests to add some trailing junk to the pieces of \njson, so we can be sure that this is done right.\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 22 Feb 2024 04:38:32 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 1:38 AM Andrew Dunstan <[email protected]> wrote:\n> Patch 5 in this series fixes those issues and\n> adjusts most of the tests to add some trailing junk to the pieces of\n> json, so we can be sure that this is done right.\n\nThis fixes the test failure for me, thanks! I've attached my current\nmesonification diff, which just adds test_json_parser to the suite. It\nrelies on the PATH that's set up, which appears to include the build\ndirectory for both VPATH builds and Meson.\n\nAre there plans to fill out the test suite more? Since we should be\nable to control all the initial conditions, it'd be good to get fairly\ncomprehensive coverage of the new code.\n\nAs an aside, I find the behavior of need_escapes=false to be\ncompletely counterintuitive. I know the previous code did this, but it\nseems like the opposite of \"provides unescaped strings\" should be\n\"provides raw strings\", not \"all strings are now NULL\".\n\n--Jacob",
"msg_date": "Thu, 22 Feb 2024 12:29:18 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-02-22 Th 15:29, Jacob Champion wrote:\n> On Thu, Feb 22, 2024 at 1:38 AM Andrew Dunstan <[email protected]> wrote:\n>> Patch 5 in this series fixes those issues and\n>> adjusts most of the tests to add some trailing junk to the pieces of\n>> json, so we can be sure that this is done right.\n> This fixes the test failure for me, thanks! I've attached my current\n> mesonification diff, which just adds test_json_parser to the suite. It\n> relies on the PATH that's set up, which appears to include the build\n> directory for both VPATH builds and Meson.\n\n\n\nOK, thanks, will add this in the next version.\n\n\n>\n> Are there plans to fill out the test suite more? Since we should be\n> able to control all the initial conditions, it'd be good to get fairly\n> comprehensive coverage of the new code.\n\n\n\nWell, it's tested (as we know) by the backup manifest tests. During \ndevelopment, I tested by making the regular parser use the \nnon-recursive parser (see FORCE_JSON_PSTACK). That doesn't test the \nincremental piece of it, but it does check that the rest of it is doing \nthe right thing. We could probably extend the incremental test by making \nit output a stream of tokens and making sure that was correct.\n\n\n> As an aside, I find the behavior of need_escapes=false to be\n> completely counterintuitive. I know the previous code did this, but it\n> seems like the opposite of \"provides unescaped strings\" should be\n> \"provides raw strings\", not \"all strings are now NULL\".\n\n\nYes, we could possibly call it \"need_strings\" or something like that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 22 Feb 2024 18:43:01 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 3:43 PM Andrew Dunstan <[email protected]> wrote:\n> > Are there plans to fill out the test suite more? Since we should be\n> > able to control all the initial conditions, it'd be good to get fairly\n> > comprehensive coverage of the new code.\n>\n> Well, it's tested (as we know) by the backup manifest tests. During\n> development, I tested by making the regular parser use the\n> non-recursive parser (see FORCE_JSON_PSTACK). That doesn't test the\n> incremental piece of it, but it does check that the rest of it is doing\n> the right thing. We could probably extend the incremental test by making\n> it output a stream of tokens and making sure that was correct.\n\nThat would also cover all the semantic callbacks (currently,\nOFIELD_END and AELEM_* are uncovered), so +1 from me.\n\nLooking at lcov, it'd be good to\n- test failure modes as well as the happy path, so we know we're\nrejecting invalid syntax correctly\n- test the prediction stack resizing code\n- provide targeted coverage of the partial token support, since at the\nmoment we're at the mercy of the manifest format and the default chunk\nsize\n\nAs a brute force example of the latter, with the attached diff I get\ntest failures at chunk sizes 1, 2, 3, 4, 6, and 12.\n\n> > As an aside, I find the behavior of need_escapes=false to be\n> > completely counterintuitive. I know the previous code did this, but it\n> > seems like the opposite of \"provides unescaped strings\" should be\n> > \"provides raw strings\", not \"all strings are now NULL\".\n>\n> Yes, we could possibly call it \"need_strings\" or something like that.\n\n+1\n\n--Jacob\n\n\n",
"msg_date": "Mon, 26 Feb 2024 07:08:18 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 7:08 AM Jacob Champion\n<[email protected]> wrote:\n> As a brute force example of the latter, with the attached diff I get\n> test failures at chunk sizes 1, 2, 3, 4, 6, and 12.\n\nBut this time with the diff.\n\n--Jacob",
"msg_date": "Mon, 26 Feb 2024 07:10:08 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-02-26 Mo 10:10, Jacob Champion wrote:\n> On Mon, Feb 26, 2024 at 7:08 AM Jacob Champion\n> <[email protected]> wrote:\n>> As a brute force example of the latter, with the attached diff I get\n>> test failures at chunk sizes 1, 2, 3, 4, 6, and 12.\n> But this time with the diff.\n>\n\nOuch. I'll check it out. Thanks!\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 26 Feb 2024 19:20:46 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-02-26 Mo 19:20, Andrew Dunstan wrote:\n>\n> On 2024-02-26 Mo 10:10, Jacob Champion wrote:\n>> On Mon, Feb 26, 2024 at 7:08 AM Jacob Champion\n>> <[email protected]> wrote:\n>>> As a brute force example of the latter, with the attached diff I get\n>>> test failures at chunk sizes 1, 2, 3, 4, 6, and 12.\n>> But this time with the diff.\n>>\n>\n> Ouch. I'll check it out. Thanks!\n>\n>\n>\n\nThe good news is that the parser is doing fine - this issue was due to a \nthinko on my part in the test program that got triggered by the input \nfile size being an exact multiple of the chunk size. I'll have a new \npatch set later this week.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 27 Feb 2024 00:20:42 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 9:20 PM Andrew Dunstan <[email protected]> wrote:\n> The good news is that the parser is doing fine - this issue was due to a\n> thinko on my part in the test program that got triggered by the input\n> file size being an exact multiple of the chunk size. I'll have a new\n> patch set later this week.\n\nAh, feof()! :D Good to know it's not the partial token logic.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 27 Feb 2024 07:21:34 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "Some more observations as I make my way through the patch:\n\nIn src/common/jsonapi.c,\n\n> +#define JSON_NUM_NONTERMINALS 6\n\nShould this be 5 now?\n\n> + res = pg_parse_json_incremental(&(incstate->lex), &(incstate->sem),\n> + chunk, size, is_last);\n> +\n> + expected = is_last ? JSON_SUCCESS : JSON_INCOMPLETE;\n> +\n> + if (res != expected)\n> + json_manifest_parse_failure(context, \"parsing failed\");\n\nThis leads to error messages like\n\n pg_verifybackup: error: could not parse backup manifest: parsing failed\n\nwhich I would imagine is going to lead to confused support requests in\nthe event that someone does manage to corrupt their manifest. Can we\nmake use of json_errdetail() and print the line and column numbers?\nPatch 0001 over at [1] has one approach to making json_errdetail()\nworkable in frontend code.\n\nTop-level scalars like `false` or `12345` do not parse correctly if\nthe chunk size is too small; instead json_errdetail() reports 'Token\n\"\" is invalid'. With small chunk sizes, json_errdetail() additionally\nsegfaults on constructions like `[tru]` or `12zz`.\n\nFor my local testing, I'm carrying the following diff in\n001_test_json_parser_incremental.pl:\n\n> - ok($stdout =~ /SUCCESS/, \"chunk size $size: test succeeds\");\n> - ok(!$stderr, \"chunk size $size: no error output\");\n> + like($stdout, qr/SUCCESS/, \"chunk size $size: test succeeds\");\n> + is($stderr, \"\", \"chunk size $size: no error output\");\n\nThis is particularly helpful when a test fails spuriously due to code\ncoverage spray on stderr.\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/CAOYmi%2BmSSY4SvOtVN7zLyUCQ4-RDkxkzmTuPEN%2Bt-PsB7GHnZA%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 7 Mar 2024 07:28:45 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
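A minimal sketch of the chunked calling pattern implied by the parse_manifest.c excerpt quoted above; the chunk reader and error reporter are hypothetical stand-ins, and only the pg_parse_json_incremental() call shape is taken from the quote:

    /*
     * Feed the document to the incremental parser one chunk at a time.
     * Every chunk except the last is expected to yield JSON_INCOMPLETE;
     * the final chunk should yield JSON_SUCCESS.
     */
    JsonParseErrorType res;
    bool        is_last = false;

    while (!is_last)
    {
        size_t      len = read_next_chunk(buf, sizeof(buf), &is_last);  /* hypothetical reader */

        res = pg_parse_json_incremental(&lex, &sem, buf, len, is_last);
        if (res != (is_last ? JSON_SUCCESS : JSON_INCOMPLETE))
            report_parse_error(&lex, res);      /* hypothetical error path */
    }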
{
"msg_contents": "\nOn 2024-03-07 Th 10:28, Jacob Champion wrote:\n> Some more observations as I make my way through the patch:\n>\n> In src/common/jsonapi.c,\n>\n>> +#define JSON_NUM_NONTERMINALS 6\n> Should this be 5 now?\n\n\nYep.\n\n\n>\n>> + res = pg_parse_json_incremental(&(incstate->lex), &(incstate->sem),\n>> + chunk, size, is_last);\n>> +\n>> + expected = is_last ? JSON_SUCCESS : JSON_INCOMPLETE;\n>> +\n>> + if (res != expected)\n>> + json_manifest_parse_failure(context, \"parsing failed\");\n> This leads to error messages like\n>\n> pg_verifybackup: error: could not parse backup manifest: parsing failed\n>\n> which I would imagine is going to lead to confused support requests in\n> the event that someone does manage to corrupt their manifest. Can we\n> make use of json_errdetail() and print the line and column numbers?\n> Patch 0001 over at [1] has one approach to making json_errdetail()\n> workable in frontend code.\n\n\nLooks sound on a first look. Maybe we should get that pushed ASAP so we \ncan take advantage of it.\n\n\n\n>\n> Top-level scalars like `false` or `12345` do not parse correctly if\n> the chunk size is too small; instead json_errdetail() reports 'Token\n> \"\" is invalid'. With small chunk sizes, json_errdetail() additionally\n> segfaults on constructions like `[tru]` or `12zz`.\n\n\nUgh. Will investigate.\n\n\n>\n> For my local testing, I'm carrying the following diff in\n> 001_test_json_parser_incremental.pl:\n>\n>> - ok($stdout =~ /SUCCESS/, \"chunk size $size: test succeeds\");\n>> - ok(!$stderr, \"chunk size $size: no error output\");\n>> + like($stdout, qr/SUCCESS/, \"chunk size $size: test succeeds\");\n>> + is($stderr, \"\", \"chunk size $size: no error output\");\n> This is particularly helpful when a test fails spuriously due to code\n> coverage spray on stderr.\n>\n\nMakes sense, thanks.\n\n\nI'll have a fresh patch set soon which will also take care of the bitrot.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 7 Mar 2024 22:42:06 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-03-07 Th 22:42, Andrew Dunstan wrote:\n>\n>\n>\n>\n>\n>\n>\n>>\n>> Top-level scalars like `false` or `12345` do not parse correctly if\n>> the chunk size is too small; instead json_errdetail() reports 'Token\n>> \"\" is invalid'. With small chunk sizes, json_errdetail() additionally\n>> segfaults on constructions like `[tru]` or `12zz`.\n>\n>\n> Ugh. Will investigate.\n\n\nI haven't managed to reproduce this. But I'm including some tests for it.\n\n>\n>\n>\n>\n> I'll have a fresh patch set soon which will also take care of the bitrot.\n>\n>\n>\n\nSee attached.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 11 Mar 2024 02:43:17 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Sun, Mar 10, 2024 at 11:43 PM Andrew Dunstan <[email protected]> wrote:\n> I haven't managed to reproduce this. But I'm including some tests for it.\n\nIf I remove the newline from the end of the new tests:\n\n> @@ -25,7 +25,7 @@ for (my $size = 64; $size > 0; $size--)\n> foreach my $test_string (@inlinetests)\n> {\n> my ($fh, $fname) = tempfile();\n> - print $fh \"$test_string\\n\";\n> + print $fh \"$test_string\";\n> close($fh);\n>\n> foreach my $size (1..6, 10, 20, 30)\n\nthen I can reproduce the same result as my local tests. That extra\nwhitespace appears to help the partial token logic out somehow.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 11 Mar 2024 06:47:16 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-03-11 Mo 09:47, Jacob Champion wrote:\n> On Sun, Mar 10, 2024 at 11:43 PM Andrew Dunstan <[email protected]> wrote:\n>> I haven't managed to reproduce this. But I'm including some tests for it.\n> If I remove the newline from the end of the new tests:\n>\n>> @@ -25,7 +25,7 @@ for (my $size = 64; $size > 0; $size--)\n>> foreach my $test_string (@inlinetests)\n>> {\n>> my ($fh, $fname) = tempfile();\n>> - print $fh \"$test_string\\n\";\n>> + print $fh \"$test_string\";\n>> close($fh);\n>>\n>> foreach my $size (1..6, 10, 20, 30)\n> then I can reproduce the same result as my local tests. That extra\n> whitespace appears to help the partial token logic out somehow.\n>\n\n\nYep, here's a patch set with that fixed, and the tests adjusted.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 11 Mar 2024 21:43:33 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "I've been poking at the partial token logic. The json_errdetail() bug\nmentioned upthread (e.g. for an invalid input `[12zz]` and small chunk\nsize) seems to be due to the disconnection between the \"main\" lex\ninstance and the dummy_lex that's created from it. The dummy_lex\ncontains all the information about the failed token, which is\ndiscarded upon an error return:\n\n> partial_result = json_lex(&dummy_lex);\n> if (partial_result != JSON_SUCCESS)\n> return partial_result;\n\nIn these situations, there's an additional logical error:\nlex->token_start is pointing to a spot in the string after\nlex->token_terminator, which breaks an invariant that will mess up\nlater pointer math. Nothing appears to be setting lex->token_start to\npoint into the partial token buffer until _after_ the partial token is\nsuccessfully lexed, which doesn't seem right -- in addition to the\npointer math problems, if a previous chunk was freed (or on a stale\nstack frame), lex->token_start will still be pointing off into space.\nSimilarly, wherever we set token_terminator, we need to know that\ntoken_start is pointing into the same buffer.\n\nDetermining the end of a token is now done in two separate places\nbetween the partial- and full-lexer code paths, which is giving me a\nlittle heartburn. I'm concerned that those could drift apart, and if\nthe two disagree on where to end a token, we could lose data into the\npartial token buffer in a way that would be really hard to debug. Is\nthere a way to combine them?\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Thu, 14 Mar 2024 12:35:29 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 3:35 PM Jacob Champion <\[email protected]> wrote:\n\n> I've been poking at the partial token logic. The json_errdetail() bug\n> mentioned upthread (e.g. for an invalid input `[12zz]` and small chunk\n> size) seems to be due to the disconnection between the \"main\" lex\n> instance and the dummy_lex that's created from it. The dummy_lex\n> contains all the information about the failed token, which is\n> discarded upon an error return:\n>\n> > partial_result = json_lex(&dummy_lex);\n> > if (partial_result != JSON_SUCCESS)\n> > return partial_result;\n>\n> In these situations, there's an additional logical error:\n> lex->token_start is pointing to a spot in the string after\n> lex->token_terminator, which breaks an invariant that will mess up\n> later pointer math. Nothing appears to be setting lex->token_start to\n> point into the partial token buffer until _after_ the partial token is\n> successfully lexed, which doesn't seem right -- in addition to the\n> pointer math problems, if a previous chunk was freed (or on a stale\n> stack frame), lex->token_start will still be pointing off into space.\n> Similarly, wherever we set token_terminator, we need to know that\n> token_start is pointing into the same buffer.\n>\n> Determining the end of a token is now done in two separate places\n> between the partial- and full-lexer code paths, which is giving me a\n> little heartburn. I'm concerned that those could drift apart, and if\n> the two disagree on where to end a token, we could lose data into the\n> partial token buffer in a way that would be really hard to debug. Is\n> there a way to combine them?\n>\n\n\nNot very easily. But I think and hope I've fixed the issue you've\nidentified above about returning before lex->token_start is properly set.\n\n Attached is a new set of patches that does that and is updated for the\njson_errdetaiil() changes.\n\ncheers\n\nandrew",
"msg_date": "Mon, 18 Mar 2024 06:32:32 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 3:32 AM Andrew Dunstan <[email protected]> wrote:\n> Not very easily. But I think and hope I've fixed the issue you've identified above about returning before lex->token_start is properly set.\n>\n> Attached is a new set of patches that does that and is updated for the json_errdetaiil() changes.\n\nThanks!\n\n> ++ * Normally token_start would be ptok->data, but it could be later,\n> ++ * see json_lex_string's handling of invalid escapes.\n> + */\n> -+ lex->token_start = ptok->data;\n> ++ lex->token_start = dummy_lex.token_start;\n> + lex->token_terminator = ptok->data + ptok->len;\n\nBy the same token (ha), the lex->token_terminator needs to be updated\nfrom dummy_lex for some error paths. (IIUC, on success, the\ntoken_terminator should always point to the end of the buffer. If it's\nnot possible to combine the two code paths, maybe it'd be good to\ncheck that and assert/error out if we've incorrectly pulled additional\ndata into the partial token.)\n\nWith the incremental parser, I think prev_token_terminator is not\nlikely to be safe to use except in very specific circumstances, since\nit could be pointing into a stale chunk. Some documentation around how\nto use that safely in a semantic action would be good.\n\nIt looks like some of the newly added error handling paths cannot be\nhit, because the production stack makes it logically impossible to get\nthere. (For example, if it takes a successfully lexed comma to\ntransition into JSON_PROD_MORE_ARRAY_ELEMENTS to begin with, then when\nwe pull that production's JSON_TOKEN_COMMA off the stack, we can't\nsomehow fail to match that same comma.) Assuming I haven't missed a\ndifferent way to get into that situation, could the \"impossible\" cases\nhave assert calls added?\n\nI've attached two diffs. One is the group of tests I've been using\nlocally (called 002_inline.pl; I replaced the existing inline tests\nwith it), and the other is a set of potential fixes to get those tests\ngreen.\n\nThanks,\n--Jacob",
"msg_date": "Mon, 18 Mar 2024 12:34:51 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
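To make the fix being discussed concrete, a sketch (not the committed code) of the error path in the partial-token handling, using the field names from the excerpts above: the token bounds are copied back from the throwaway lexer before any early return, so json_errdetail() never points into a stale chunk.

    partial_result = json_lex(&dummy_lex);
    if (partial_result != JSON_SUCCESS)
    {
        /*
         * Report the failed token out of the partial-token buffer that
         * dummy_lex was lexing, not out of a chunk that may already be
         * gone.
         */
        lex->token_start = dummy_lex.token_start;
        lex->token_terminator = dummy_lex.token_terminator;
        return partial_result;
    }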
{
"msg_contents": "On Mon, Mar 18, 2024 at 3:35 PM Jacob Champion <\[email protected]> wrote:\n\n> On Mon, Mar 18, 2024 at 3:32 AM Andrew Dunstan <[email protected]>\n> wrote:\n> > Not very easily. But I think and hope I've fixed the issue you've\n> identified above about returning before lex->token_start is properly set.\n> >\n> > Attached is a new set of patches that does that and is updated for the\n> json_errdetaiil() changes.\n>\n> Thanks!\n>\n> > ++ * Normally token_start would be ptok->data, but it could\n> be later,\n> > ++ * see json_lex_string's handling of invalid escapes.\n> > + */\n> > -+ lex->token_start = ptok->data;\n> > ++ lex->token_start = dummy_lex.token_start;\n> > + lex->token_terminator = ptok->data + ptok->len;\n>\n> By the same token (ha), the lex->token_terminator needs to be updated\n> from dummy_lex for some error paths. (IIUC, on success, the\n> token_terminator should always point to the end of the buffer. If it's\n> not possible to combine the two code paths, maybe it'd be good to\n> check that and assert/error out if we've incorrectly pulled additional\n> data into the partial token.)\n>\n\n\nYes, good point. Will take a look at that.\n\n\n\n>\n> With the incremental parser, I think prev_token_terminator is not\n> likely to be safe to use except in very specific circumstances, since\n> it could be pointing into a stale chunk. Some documentation around how\n> to use that safely in a semantic action would be good.\n>\n\nQuite right. It's not safe. Should we ensure it's set to something like\nNULL or -1?\n\nAlso, where do you think we should put a warning about it?\n\n\n\n>\n> It looks like some of the newly added error handling paths cannot be\n> hit, because the production stack makes it logically impossible to get\n> there. (For example, if it takes a successfully lexed comma to\n> transition into JSON_PROD_MORE_ARRAY_ELEMENTS to begin with, then when\n> we pull that production's JSON_TOKEN_COMMA off the stack, we can't\n> somehow fail to match that same comma.) Assuming I haven't missed a\n> different way to get into that situation, could the \"impossible\" cases\n> have assert calls added?\n>\n\nGood idea.\n\n\n\n>\n> I've attached two diffs. One is the group of tests I've been using\n> locally (called 002_inline.pl; I replaced the existing inline tests\n> with it), and the other is a set of potential fixes to get those tests\n> green.\n>\n>\n>\n\nThanks. Here's a patch set that incorporates your two patches.\n\nIt also removes the frontend exits I had. In the case of stack depth, we\nfollow the example of the RD parser and only check stack depth for backend\ncode. In the case of the check that the lexer is set up for incremental\nparsing, the exit is replaced by an Assert. That means your test for an\nover-nested array doesn't work any more, so I have commented it out.\n\n\ncheers\n\nandrew",
"msg_date": "Tue, 19 Mar 2024 18:07:21 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 6:07 PM Andrew Dunstan <[email protected]> wrote:\n\n>\n>\n>\n>\n> It also removes the frontend exits I had. In the case of stack depth, we\n> follow the example of the RD parser and only check stack depth for backend\n> code. In the case of the check that the lexer is set up for incremental\n> parsing, the exit is replaced by an Assert.\n>\n>\nOn second thoughts, I think it might be better if we invent a new error\nreturn code for a lexer mode mismatch.\n\ncheers\n\nandrew\n\nOn Tue, Mar 19, 2024 at 6:07 PM Andrew Dunstan <[email protected]> wrote:It also removes the frontend exits I had. In the case of stack depth, we follow the example of the RD parser and only check stack depth for backend code. In the case of the check that the lexer is set up for incremental parsing, the exit is replaced by an Assert. On second thoughts, I think it might be better if we invent a new error return code for a lexer mode mismatch.cheersandrew",
"msg_date": "Wed, 20 Mar 2024 03:19:39 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 3:07 PM Andrew Dunstan <[email protected]> wrote:\n> On Mon, Mar 18, 2024 at 3:35 PM Jacob Champion <[email protected]> wrote:\n>> With the incremental parser, I think prev_token_terminator is not\n>> likely to be safe to use except in very specific circumstances, since\n>> it could be pointing into a stale chunk. Some documentation around how\n>> to use that safely in a semantic action would be good.\n>\n> Quite right. It's not safe. Should we ensure it's set to something like NULL or -1?\n\nNulling it out seems reasonable.\n\n> Also, where do you think we should put a warning about it?\n\nI was thinking in the doc comment for JsonLexContext.\n\n> It also removes the frontend exits I had. In the case of stack depth, we follow the example of the RD parser and only check stack depth for backend code. In the case of the check that the lexer is set up for incremental parsing, the exit is replaced by an Assert. That means your test for an over-nested array doesn't work any more, so I have commented it out.\n\nHm, okay. We really ought to fix the recursive parser, but that's for\na separate thread. (Probably OAuth.) The ideal behavior IMO would be\nfor the caller to configure a maximum depth in the JsonLexContext.\n\nNote that the repalloc will eventually still exit() if the pstack gets\ntoo big; is that a concern? Alternatively, could unbounded heap growth\nbe a problem for a superuser? I guess the scalars themselves aren't\ncapped for length...\n\nOn Wed, Mar 20, 2024 at 12:19 AM Andrew Dunstan <[email protected]> wrote:\n> On second thoughts, I think it might be better if we invent a new error return code for a lexer mode mismatch.\n\n+1\n\n--Jacob\n\n\n",
"msg_date": "Wed, 20 Mar 2024 10:09:33 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "This new return path...\n\n> + if (!tok_done)\n> + {\n> + if (lex->inc_state->is_last_chunk)\n> + return JSON_INVALID_TOKEN;\n\n...also needs to set the token pointers. See one approach in the\nattached diff, which additionally asserts that we've consumed the\nentire chunk in this case, along with a handful of new tests.\n\n--Jacob",
"msg_date": "Wed, 20 Mar 2024 12:06:33 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 3:06 PM Jacob Champion <\[email protected]> wrote:\n\n> This new return path...\n>\n> > + if (!tok_done)\n> > + {\n> > + if (lex->inc_state->is_last_chunk)\n> > + return JSON_INVALID_TOKEN;\n>\n> ...also needs to set the token pointers. See one approach in the\n> attached diff, which additionally asserts that we've consumed the\n> entire chunk in this case, along with a handful of new tests.\n>\n\n\nThanks, included that and attended to the other issues we discussed. I\nthink this is pretty close now.\n\ncheers\n\nandrew",
"msg_date": "Thu, 21 Mar 2024 02:56:00 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 11:56 PM Andrew Dunstan <[email protected]> wrote:\n> Thanks, included that and attended to the other issues we discussed. I think this is pretty close now.\n\nOkay, looking over the thread, there are the following open items:\n- extend the incremental test in order to exercise the semantic callbacks [1]\n- add Assert calls in impossible error cases [2]\n- error out if the non-incremental lex doesn't consume the entire token [2]\n- double-check that out of memory is an appropriate failure mode for\nthe frontend [3]\n\nJust as a general style nit:\n\n> + if (lex->incremental)\n> + {\n> + lex->input = lex->token_terminator = lex->line_start = json;\n> + lex->input_length = len;\n> + lex->inc_state->is_last_chunk = is_last;\n> + }\n> + else\n> + return JSON_INVALID_LEXER_TYPE;\n\nI think flipping this around would probably make it more readable;\nsomething like:\n\n if (!lex->incremental)\n return JSON_INVALID_LEXER_TYPE;\n\n lex->input = ...\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/CAOYmi%2BnHV55Uhz%2Bo-HKq0GNiWn2L5gMcuyRQEz_fqpGY%3DpFxKA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAD5tBcLi2ffZkktV2qrsKSBykE-N8CiYgrfbv0vZ-F7%3DxLFeqw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAOYmi%2BnY%3DrF6dJCzaOuA3d-3FbwXCcecOs_S1NutexFA3dRXAw%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 25 Mar 2024 15:14:52 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 6:15 PM Jacob Champion <\[email protected]> wrote:\n\n> On Wed, Mar 20, 2024 at 11:56 PM Andrew Dunstan <[email protected]>\n> wrote:\n> > Thanks, included that and attended to the other issues we discussed. I\n> think this is pretty close now.\n>\n> Okay, looking over the thread, there are the following open items:\n> - extend the incremental test in order to exercise the semantic callbacks\n> [1]\n>\n\n\nYeah, I'm on a super long plane trip later this week, so I might get it\ndone then :-)\n\n\n> - add Assert calls in impossible error cases [2]\n>\n\nok, will do\n\n\n> - error out if the non-incremental lex doesn't consume the entire token [2]\n>\n\nok, will do\n\n\n> - double-check that out of memory is an appropriate failure mode for\n> the frontend [3]\n>\n\n\nWell, what's the alternative? The current parser doesn't check stack depth\nin frontend code. Presumably it too will eventually just run out of memory,\npossibly rather sooner as the stack frames could be more expensive than\nthe incremental parser stack extensions.\n\n\n\n>\n> Just as a general style nit:\n>\n> > + if (lex->incremental)\n> > + {\n> > + lex->input = lex->token_terminator = lex->line_start = json;\n> > + lex->input_length = len;\n> > + lex->inc_state->is_last_chunk = is_last;\n> > + }\n> > + else\n> > + return JSON_INVALID_LEXER_TYPE;\n>\n> I think flipping this around would probably make it more readable;\n> something like:\n>\n> if (!lex->incremental)\n> return JSON_INVALID_LEXER_TYPE;\n>\n> lex->input = ...\n>\n>\n>\nNoted. will do, Thanks.\n\ncheers\n\nandrew\n\n\n\n> [1]\n> https://www.postgresql.org/message-id/CAOYmi%2BnHV55Uhz%2Bo-HKq0GNiWn2L5gMcuyRQEz_fqpGY%3DpFxKA%40mail.gmail.com\n> [2]\n> https://www.postgresql.org/message-id/CAD5tBcLi2ffZkktV2qrsKSBykE-N8CiYgrfbv0vZ-F7%3DxLFeqw%40mail.gmail.com\n> [3]\n> https://www.postgresql.org/message-id/CAOYmi%2BnY%3DrF6dJCzaOuA3d-3FbwXCcecOs_S1NutexFA3dRXAw%40mail.gmail.com\n>\n\nOn Mon, Mar 25, 2024 at 6:15 PM Jacob Champion <[email protected]> wrote:On Wed, Mar 20, 2024 at 11:56 PM Andrew Dunstan <[email protected]> wrote:\n> Thanks, included that and attended to the other issues we discussed. I think this is pretty close now.\n\nOkay, looking over the thread, there are the following open items:\n- extend the incremental test in order to exercise the semantic callbacks [1]Yeah, I'm on a super long plane trip later this week, so I might get it done then :-) \n- add Assert calls in impossible error cases [2]ok, will do \n- error out if the non-incremental lex doesn't consume the entire token [2]ok, will do \n- double-check that out of memory is an appropriate failure mode for\nthe frontend [3]Well, what's the alternative? The current parser doesn't check stack depth in frontend code. Presumably it too will eventually just run out of memory, possibly rather sooner as the stack frames could be more expensive than the incremental parser stack extensions. \n\nJust as a general style nit:\n\n> + if (lex->incremental)\n> + {\n> + lex->input = lex->token_terminator = lex->line_start = json;\n> + lex->input_length = len;\n> + lex->inc_state->is_last_chunk = is_last;\n> + }\n> + else\n> + return JSON_INVALID_LEXER_TYPE;\n\nI think flipping this around would probably make it more readable;\nsomething like:\n\n if (!lex->incremental)\n return JSON_INVALID_LEXER_TYPE;\n\n lex->input = ...\nNoted. 
will do, Thanks.cheersandrew \n[1] https://www.postgresql.org/message-id/CAOYmi%2BnHV55Uhz%2Bo-HKq0GNiWn2L5gMcuyRQEz_fqpGY%3DpFxKA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAD5tBcLi2ffZkktV2qrsKSBykE-N8CiYgrfbv0vZ-F7%3DxLFeqw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAOYmi%2BnY%3DrF6dJCzaOuA3d-3FbwXCcecOs_S1NutexFA3dRXAw%40mail.gmail.com",
"msg_date": "Mon, 25 Mar 2024 19:02:10 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 4:02 PM Andrew Dunstan <[email protected]> wrote:\n> Well, what's the alternative? The current parser doesn't check stack depth in frontend code. Presumably it too will eventually just run out of memory, possibly rather sooner as the stack frames could be more expensive than the incremental parser stack extensions.\n\nStack size should be pretty limited, at least on the platforms I'm\nfamiliar with. So yeah, the recursive descent will segfault pretty\nquickly, but it won't repalloc() an unbounded amount of heap space.\nThe alternative would just be to go back to a hardcoded limit in the\nshort term, I think.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 25 Mar 2024 16:12:00 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 4:12 PM Jacob Champion\n<[email protected]> wrote:\n> Stack size should be pretty limited, at least on the platforms I'm\n> familiar with. So yeah, the recursive descent will segfault pretty\n> quickly, but it won't repalloc() an unbounded amount of heap space.\n> The alternative would just be to go back to a hardcoded limit in the\n> short term, I think.\n\nAnd I should mention that there are other ways to consume a bunch of\nmemory, but I think they're bounded by the size of the JSON file.\nLooks like the repalloc()s amplify the JSON size by a factor of ~20\n(JS_MAX_PROD_LEN + sizeof(char*) + sizeof(bool)). That may or may not\nbe enough to be concerned about in the end, since I think it's still\nlinear, but I wanted to make sure it was known.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 25 Mar 2024 16:21:40 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 7:12 PM Jacob Champion <\[email protected]> wrote:\n\n> On Mon, Mar 25, 2024 at 4:02 PM Andrew Dunstan <[email protected]>\n> wrote:\n> > Well, what's the alternative? The current parser doesn't check stack\n> depth in frontend code. Presumably it too will eventually just run out of\n> memory, possibly rather sooner as the stack frames could be more expensive\n> than the incremental parser stack extensions.\n>\n> Stack size should be pretty limited, at least on the platforms I'm\n> familiar with. So yeah, the recursive descent will segfault pretty\n> quickly, but it won't repalloc() an unbounded amount of heap space.\n> The alternative would just be to go back to a hardcoded limit in the\n> short term, I think.\n>\n>\n>\nOK, so we invent a new error code and have the parser return that if the\nstack depth gets too big?\n\ncheers\n\nandrew\n\nOn Mon, Mar 25, 2024 at 7:12 PM Jacob Champion <[email protected]> wrote:On Mon, Mar 25, 2024 at 4:02 PM Andrew Dunstan <[email protected]> wrote:\n> Well, what's the alternative? The current parser doesn't check stack depth in frontend code. Presumably it too will eventually just run out of memory, possibly rather sooner as the stack frames could be more expensive than the incremental parser stack extensions.\n\nStack size should be pretty limited, at least on the platforms I'm\nfamiliar with. So yeah, the recursive descent will segfault pretty\nquickly, but it won't repalloc() an unbounded amount of heap space.\nThe alternative would just be to go back to a hardcoded limit in the\nshort term, I think.\nOK, so we invent a new error code and have the parser return that if the stack depth gets too big?cheersandrew",
"msg_date": "Mon, 25 Mar 2024 19:24:08 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 4:24 PM Andrew Dunstan <[email protected]> wrote:\n> OK, so we invent a new error code and have the parser return that if the stack depth gets too big?\n\nYeah, that seems reasonable. I'd potentially be able to build on that\nvia OAuth for next cycle, too, since that client needs to limit its\nmemory usage.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 25 Mar 2024 16:27:46 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
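A sketch of what that could look like (the helper name and the exact limit are illustrative; the error code is the one that appears in the later patches):

    /*
     * Depth guard: rather than letting the prediction stack grow without
     * bound, or exit()ing in frontend code, hand back a dedicated error
     * code that the caller can report.
     */
    #define JSON_MAX_NEST_LEVEL 6400

    static JsonParseErrorType
    check_nest_level(int nest_level)
    {
        if (nest_level > JSON_MAX_NEST_LEVEL)
            return JSON_NESTING_TOO_DEEP;
        return JSON_SUCCESS;
    }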
{
"msg_contents": "On Mon, Mar 25, 2024 at 3:14 PM Jacob Champion\n<[email protected]> wrote:\n> - add Assert calls in impossible error cases [2]\n\nTo expand on this one, I think these parts of the code (marked with\n`<- here`) are impossible to reach:\n\n> switch (top)\n> {\n> case JSON_TOKEN_STRING:\n> if (next_prediction(pstack) == JSON_TOKEN_COLON)\n> ctx = JSON_PARSE_STRING;\n> else\n> ctx = JSON_PARSE_VALUE; <- here\n> break;\n> case JSON_TOKEN_NUMBER: <- here\n> case JSON_TOKEN_TRUE: <- here\n> case JSON_TOKEN_FALSE: <- here\n> case JSON_TOKEN_NULL: <- here\n> case JSON_TOKEN_ARRAY_START: <- here\n> case JSON_TOKEN_OBJECT_START: <- here\n> ctx = JSON_PARSE_VALUE;\n> break;\n> case JSON_TOKEN_ARRAY_END: <- here\n> ctx = JSON_PARSE_ARRAY_NEXT;\n> break;\n> case JSON_TOKEN_OBJECT_END: <- here\n> ctx = JSON_PARSE_OBJECT_NEXT;\n> break;\n> case JSON_TOKEN_COMMA: <- here\n> if (next_prediction(pstack) == JSON_TOKEN_STRING)\n> ctx = JSON_PARSE_OBJECT_NEXT;\n> else\n> ctx = JSON_PARSE_ARRAY_NEXT;\n> break;\n\nSince none of these cases are non-terminals, the only way to get to\nthis part of the code is if (top != tok). But inspecting the\nproductions and transitions that can put these tokens on the stack, it\nlooks like the only way for them to be at the top of the stack to\nbegin with is if (tok == top). (Otherwise, we would have chosen a\ndifferent production, or else errored out on a non-terminal.)\n\nThis case is possible...\n\n> case JSON_TOKEN_STRING:\n> if (next_prediction(pstack) == JSON_TOKEN_COLON)\n> ctx = JSON_PARSE_STRING;\n\n...if we're in the middle of JSON_PROD_[MORE_]KEY_PAIRS. But the\ncorresponding else case is not, because if we're in the middle of a\n_KEY_PAIRS production, the next_prediction() _must_ be\nJSON_TOKEN_COLON.\n\nDo you agree, or am I missing a way to get there?\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 26 Mar 2024 14:52:23 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
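As an illustration of the suggestion (a sketch only, not the committed change), one of the branches the analysis above marks as unreachable could carry an Assert while keeping the defensive fallback for production builds:

    case JSON_TOKEN_COMMA:
        /*
         * Per the analysis above, a comma can only be on top of the
         * prediction stack when the lookahead token is also a comma, and
         * that case never reaches this code; assert the impossibility
         * but leave the fallback in place.
         */
        Assert(false);
        if (next_prediction(pstack) == JSON_TOKEN_STRING)
            ctx = JSON_PARSE_OBJECT_NEXT;
        else
            ctx = JSON_PARSE_ARRAY_NEXT;
        break;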
{
"msg_contents": "\nOn 2024-03-26 Tu 17:52, Jacob Champion wrote:\n> On Mon, Mar 25, 2024 at 3:14 PM Jacob Champion\n> <[email protected]> wrote:\n>> - add Assert calls in impossible error cases [2]\n> To expand on this one, I think these parts of the code (marked with\n> `<- here`) are impossible to reach:\n>\n>> switch (top)\n>> {\n>> case JSON_TOKEN_STRING:\n>> if (next_prediction(pstack) == JSON_TOKEN_COLON)\n>> ctx = JSON_PARSE_STRING;\n>> else\n>> ctx = JSON_PARSE_VALUE; <- here\n>> break;\n>> case JSON_TOKEN_NUMBER: <- here\n>> case JSON_TOKEN_TRUE: <- here\n>> case JSON_TOKEN_FALSE: <- here\n>> case JSON_TOKEN_NULL: <- here\n>> case JSON_TOKEN_ARRAY_START: <- here\n>> case JSON_TOKEN_OBJECT_START: <- here\n>> ctx = JSON_PARSE_VALUE;\n>> break;\n>> case JSON_TOKEN_ARRAY_END: <- here\n>> ctx = JSON_PARSE_ARRAY_NEXT;\n>> break;\n>> case JSON_TOKEN_OBJECT_END: <- here\n>> ctx = JSON_PARSE_OBJECT_NEXT;\n>> break;\n>> case JSON_TOKEN_COMMA: <- here\n>> if (next_prediction(pstack) == JSON_TOKEN_STRING)\n>> ctx = JSON_PARSE_OBJECT_NEXT;\n>> else\n>> ctx = JSON_PARSE_ARRAY_NEXT;\n>> break;\n> Since none of these cases are non-terminals, the only way to get to\n> this part of the code is if (top != tok). But inspecting the\n> productions and transitions that can put these tokens on the stack, it\n> looks like the only way for them to be at the top of the stack to\n> begin with is if (tok == top). (Otherwise, we would have chosen a\n> different production, or else errored out on a non-terminal.)\n>\n> This case is possible...\n>\n>> case JSON_TOKEN_STRING:\n>> if (next_prediction(pstack) == JSON_TOKEN_COLON)\n>> ctx = JSON_PARSE_STRING;\n> ...if we're in the middle of JSON_PROD_[MORE_]KEY_PAIRS. But the\n> corresponding else case is not, because if we're in the middle of a\n> _KEY_PAIRS production, the next_prediction() _must_ be\n> JSON_TOKEN_COLON.\n>\n> Do you agree, or am I missing a way to get there?\n>\n\n\nOne good way of testing would be to add the Asserts, build with \n-DFORCE_JSON_PSTACK, and run the standard regression suite, which has a \nfairly comprehensive set of JSON errors. I'll play with that.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 29 Mar 2024 12:38:38 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-03-25 Mo 19:02, Andrew Dunstan wrote:\n>\n>\n> On Mon, Mar 25, 2024 at 6:15 PM Jacob Champion \n> <[email protected]> wrote:\n>\n> On Wed, Mar 20, 2024 at 11:56 PM Andrew Dunstan\n> <[email protected]> wrote:\n> > Thanks, included that and attended to the other issues we\n> discussed. I think this is pretty close now.\n>\n> Okay, looking over the thread, there are the following open items:\n> - extend the incremental test in order to exercise the semantic\n> callbacks [1]\n>\n>\n>\n> Yeah, I'm on a super long plane trip later this week, so I might get \n> it done then :-)\n>\n> - add Assert calls in impossible error cases [2]\n>\n>\n> ok, will do\n>\n> - error out if the non-incremental lex doesn't consume the entire\n> token [2]\n>\n>\n> ok, will do\n>\n> - double-check that out of memory is an appropriate failure mode for\n> the frontend [3]\n>\n>\n>\n> Well, what's the alternative? The current parser doesn't check stack \n> depth in frontend code. Presumably it too will eventually just run out \n> of memory, possibly rather sooner as the stack frames could be more \n> expensive than the incremental parser stack extensions.\n>\n>\n> Just as a general style nit:\n>\n> > + if (lex->incremental)\n> > + {\n> > + lex->input = lex->token_terminator = lex->line_start = json;\n> > + lex->input_length = len;\n> > + lex->inc_state->is_last_chunk = is_last;\n> > + }\n> > + else\n> > + return JSON_INVALID_LEXER_TYPE;\n>\n> I think flipping this around would probably make it more readable;\n> something like:\n>\n> if (!lex->incremental)\n> return JSON_INVALID_LEXER_TYPE;\n>\n> lex->input = ...\n>\n>\n>\n> Noted. will do, Thanks.\n>\n>\n\nHere's a new set of patches, with I think everything except the error \ncase Asserts attended to. There's a change to add semantic processing to \nthe test suite in patch 4, but I'd fold that into patch 1 when committing.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Fri, 29 Mar 2024 12:42:16 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 9:42 AM Andrew Dunstan <[email protected]> wrote:\n> Here's a new set of patches, with I think everything except the error case Asserts attended to. There's a change to add semantic processing to the test suite in patch 4, but I'd fold that into patch 1 when committing.\n\nThanks! 0004 did highlight one last bug for me -- the return value\nfrom semantic callbacks is ignored, so applications can't interrupt\nthe parsing early:\n\n> + if (ostart != NULL)\n> + (*ostart) (sem->semstate);\n> + inc_lex_level(lex);\n\nAnything not JSON_SUCCESS should be returned immediately, I think, for\nthis and the other calls.\n\n> + /* make sure we've used all the input */\n> + if (lex->token_terminator - lex->token_start != ptok->len)\n> + return JSON_INVALID_TOKEN;\n\nnit: Debugging this, if anyone ever hits it, is going to be confusing.\nThe error message is going to claim that a valid token is invalid, as\nopposed to saying that the parser has a bug. Short of introducing\nsomething like JSON_INTERNAL_ERROR, maybe an Assert() should be added\nat least?\n\n> + case JSON_NESTING_TOO_DEEP:\n> + return(_(\"Json nested too deep, maximum permitted depth is 6400\"));\n\nnit: other error messages use all-caps, \"JSON\"\n\n> + char chunk_start[100],\n> + chunk_end[100];\n> +\n> + snprintf(chunk_start, 100, \"%s\", ib->buf.data);\n> + snprintf(chunk_end, 100, \"%s\", ib->buf.data + (ib->buf.len - (MIN_CHUNK + 99)));\n> + elog(NOTICE, \"incremental manifest:\\nchunk_start='%s',\\nchunk_end='%s'\", chunk_start, chunk_end);\n\nIs this NOTICE intended to be part of the final commit?\n\n> + inc_state = json_parse_manifest_incremental_init(&context);\n> +\n> + buffer = pg_malloc(chunk_size + 64);\n\nThese `+ 64`s (there are two of them) seem to be left over from\ndebugging. In v7 they were just `+ 1`.\n\n--\n\n From this point onward, I think all of my feedback is around\nmaintenance characteristics, which is firmly in YMMV territory, and I\ndon't intend to hold up a v1 of the feature for it since I'm not the\nmaintainer. :D\n\nThe complexity around the checksum handling and \"final chunk\"-ing is\nunfortunate, but not a fault of this patch -- just a weird consequence\nof the layering violation in the format, where the checksum is inside\nthe object being checksummed. I don't like it, but I don't think\nthere's much to be done about it.\n\nBy far the trickiest part of the implementation is the partial token\nlogic, because of all the new ways there are to handle different types\nof failures. I think any changes to the incremental handling in\njson_lex() are going to need intense scrutiny from someone who already\nhas the mental model loaded up. I'm going snowblind on the patch and\nI'm no longer the right person to review how hard it is to get up to\nspeed with the architecture, but I'd say it's not easy.\n\nFor something as security-critical as a JSON parser, I'd usually want\nto counteract that by sinking the observable behaviors in concrete and\ngetting both the effective test coverage *and* the code coverage as\nclose to 100% as we can stand. (By that, I mean that purposely\nintroducing observable bugs into the parser should result in test\nfailures. We're not there yet when it comes to the semantic callbacks,\nat least.) 
But I don't think the current JSON parser is held to that\nstandard currently, so it seems strange for me to ask for that here.\n\nI think it'd be good for a v1.x of this feature to focus on\nsimplification of the code, and hopefully consolidate and/or eliminate\nsome of the duplicated parsing work so that the mental model isn't\nquite so big.\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Fri, 29 Mar 2024 14:21:07 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
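For the first point, a sketch of the requested change (abbreviated context; `result` stands for a local JsonParseErrorType): each semantic callback's return value is checked and propagated instead of being dropped, so applications can abort the parse early.

    if (ostart != NULL)
    {
        result = (*ostart) (sem->semstate);
        if (result != JSON_SUCCESS)
            return result;
    }
    inc_lex_level(lex);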
{
"msg_contents": "On 2024-03-29 Fr 17:21, Jacob Champion wrote:\n> On Fri, Mar 29, 2024 at 9:42 AM Andrew Dunstan <[email protected]> wrote:\n>> Here's a new set of patches, with I think everything except the error case Asserts attended to. There's a change to add semantic processing to the test suite in patch 4, but I'd fold that into patch 1 when committing.\n> Thanks! 0004 did highlight one last bug for me -- the return value\n> from semantic callbacks is ignored, so applications can't interrupt\n> the parsing early:\n>\n>> + if (ostart != NULL)\n>> + (*ostart) (sem->semstate);\n>> + inc_lex_level(lex);\n> Anything not JSON_SUCCESS should be returned immediately, I think, for\n> this and the other calls.\n\n\nFixed\n\n\n>\n>> + /* make sure we've used all the input */\n>> + if (lex->token_terminator - lex->token_start != ptok->len)\n>> + return JSON_INVALID_TOKEN;\n> nit: Debugging this, if anyone ever hits it, is going to be confusing.\n> The error message is going to claim that a valid token is invalid, as\n> opposed to saying that the parser has a bug. Short of introducing\n> something like JSON_INTERNAL_ERROR, maybe an Assert() should be added\n> at least?\n\n\nDone\n\n\n>\n>> + case JSON_NESTING_TOO_DEEP:\n>> + return(_(\"Json nested too deep, maximum permitted depth is 6400\"));\n> nit: other error messages use all-caps, \"JSON\"\n\n\nFixed\n\n\n>\n>> + char chunk_start[100],\n>> + chunk_end[100];\n>> +\n>> + snprintf(chunk_start, 100, \"%s\", ib->buf.data);\n>> + snprintf(chunk_end, 100, \"%s\", ib->buf.data + (ib->buf.len - (MIN_CHUNK + 99)));\n>> + elog(NOTICE, \"incremental manifest:\\nchunk_start='%s',\\nchunk_end='%s'\", chunk_start, chunk_end);\n> Is this NOTICE intended to be part of the final commit?\n\n\nNo, removed.\n\n\n>\n>> + inc_state = json_parse_manifest_incremental_init(&context);\n>> +\n>> + buffer = pg_malloc(chunk_size + 64);\n> These `+ 64`s (there are two of them) seem to be left over from\n> debugging. In v7 they were just `+ 1`.\n\n\nFixed\n\n\nI've also added the Asserts to the error handling code you suggested. \nThis tests fine with the regression suite under FORCE_JSON_PSTACK.\n\n\n>\n> --\n>\n> >From this point onward, I think all of my feedback is around\n> maintenance characteristics, which is firmly in YMMV territory, and I\n> don't intend to hold up a v1 of the feature for it since I'm not the\n> maintainer. :D\n>\n> The complexity around the checksum handling and \"final chunk\"-ing is\n> unfortunate, but not a fault of this patch -- just a weird consequence\n> of the layering violation in the format, where the checksum is inside\n> the object being checksummed. I don't like it, but I don't think\n> there's much to be done about it.\n\n\nYeah. I agree it's ugly, but I don't think we should let it hold us up \nat all. Working around it doesn't involve too much extra code.\n\n\n>\n> By far the trickiest part of the implementation is the partial token\n> logic, because of all the new ways there are to handle different types\n> of failures. I think any changes to the incremental handling in\n> json_lex() are going to need intense scrutiny from someone who already\n> has the mental model loaded up. I'm going snowblind on the patch and\n> I'm no longer the right person to review how hard it is to get up to\n> speed with the architecture, but I'd say it's not easy.\n\n\njson_lex() is not really a very hot piece of code. I have added a \ncomment at the top giving a brief description of the partial token \nhandling. 
Even without the partial token bit it can be a bit scary, but \nI think the partial token piece here is not so scary that we should not \nproceed.\n\n\n>\n> For something as security-critical as a JSON parser, I'd usually want\n> to counteract that by sinking the observable behaviors in concrete and\n> getting both the effective test coverage *and* the code coverage as\n> close to 100% as we can stand. (By that, I mean that purposely\n> introducing observable bugs into the parser should result in test\n> failures. We're not there yet when it comes to the semantic callbacks,\n> at least.) But I don't think the current JSON parser is held to that\n> standard currently, so it seems strange for me to ask for that here.\n\n\nYeah.\n\n\n>\n> I think it'd be good for a v1.x of this feature to focus on\n> simplification of the code, and hopefully consolidate and/or eliminate\n> some of the duplicated parsing work so that the mental model isn't\n> quite so big.\n>\n\nI'm not sure how you think that can be done. What parts of the \nprocessing are you having difficulty in coming to grips with?\n\nAnyway, here are new patches. I've rolled the new semantic test into the \nfirst patch.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 1 Apr 2024 19:52:59 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Apr 1, 2024 at 4:53 PM Andrew Dunstan <[email protected]> wrote:\n> Anyway, here are new patches. I've rolled the new semantic test into the\n> first patch.\n\nLooks good! I've marked RfC.\n\n> json_lex() is not really a very hot piece of code.\n\nSure, but I figure if someone is trying to get the performance of the\nincremental parser to match the recursive one, so we can eventually\nreplace it, it might get a little warmer. :)\n\n> > I think it'd be good for a v1.x of this feature to focus on\n> > simplification of the code, and hopefully consolidate and/or eliminate\n> > some of the duplicated parsing work so that the mental model isn't\n> > quite so big.\n>\n> I'm not sure how you think that can be done.\n\nI think we'd need to teach the lower levels of the lexer about\nincremental parsing too, so that we don't have two separate sources of\ntruth about what ends a token. Bonus points if we could keep the parse\nstate across chunks to the extent that we didn't need to restart at\nthe beginning of the token every time. (Our current tools for this are\nkind of poor, like the restartable state machine in PQconnectPoll.\nWhile I'm dreaming, I'd like coroutines.) Now, whether the end result\nwould be more or less maintainable is left as an exercise...\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Tue, 2 Apr 2024 12:38:10 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-04-02 Tu 15:38, Jacob Champion wrote:\n> On Mon, Apr 1, 2024 at 4:53 PM Andrew Dunstan <[email protected]> wrote:\n>> Anyway, here are new patches. I've rolled the new semantic test into the\n>> first patch.\n> Looks good! I've marked RfC.\n\n\nThanks! I appreciate all the work you've done on this. I will give it \none more pass and commit RSN.\n\n>\n>> json_lex() is not really a very hot piece of code.\n> Sure, but I figure if someone is trying to get the performance of the\n> incremental parser to match the recursive one, so we can eventually\n> replace it, it might get a little warmer. :)\n\nI don't think this is where the performance penalty lies. Rather, I \nsuspect it's the stack operations in the non-recursive parser itself. \nThe speed test doesn't involve any partial token processing at all, and \nyet the non-recursive parser is significantly slower in that test.\n\n>>> I think it'd be good for a v1.x of this feature to focus on\n>>> simplification of the code, and hopefully consolidate and/or eliminate\n>>> some of the duplicated parsing work so that the mental model isn't\n>>> quite so big.\n>> I'm not sure how you think that can be done.\n> I think we'd need to teach the lower levels of the lexer about\n> incremental parsing too, so that we don't have two separate sources of\n> truth about what ends a token. Bonus points if we could keep the parse\n> state across chunks to the extent that we didn't need to restart at\n> the beginning of the token every time. (Our current tools for this are\n> kind of poor, like the restartable state machine in PQconnectPoll.\n> While I'm dreaming, I'd like coroutines.) Now, whether the end result\n> would be more or less maintainable is left as an exercise...\n>\n\nI tried to disturb the main lexer processing as little as possible. We \ncould possibly unify the two paths, but I have a strong suspicion that \nwould result in a performance hit (the main part of the lexer doesn't \ncopy any characters at all, it just keeps track of pointers into the \ninput). And while the code might not undergo lots of change, the routine \nitself is quite performance critical.\n\nAnyway, I think that's all something for another day.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 2 Apr 2024 17:12:51 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-04-02 Tu 17:12, Andrew Dunstan wrote:\n>\n> On 2024-04-02 Tu 15:38, Jacob Champion wrote:\n>> On Mon, Apr 1, 2024 at 4:53 PM Andrew Dunstan <[email protected]> \n>> wrote:\n>>> Anyway, here are new patches. I've rolled the new semantic test into \n>>> the\n>>> first patch.\n>> Looks good! I've marked RfC.\n>\n>\n> Thanks! I appreciate all the work you've done on this. I will give it \n> one more pass and commit RSN.\n\n\npushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 07:00:32 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> pushed.\n\nMy animals don't like this [1][2]:\n\nparse_manifest.c:99:3: error: redefinition of typedef 'JsonManifestParseIncrementalState' is a C11 feature [-Werror,-Wtypedef-redefinition]\n} JsonManifestParseIncrementalState;\n ^\n../../src/include/common/parse_manifest.h:23:50: note: previous definition is here\ntypedef struct JsonManifestParseIncrementalState JsonManifestParseIncrementalState;\n ^\n1 error generated.\n\n(and similarly in other places)\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2024-04-04%2013%3A08%3A02\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sifaka&dt=2024-04-04%2013%3A07%3A01\n\n\n",
"msg_date": "Thu, 04 Apr 2024 10:26:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-04-04 Th 10:26, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> pushed.\n> My animals don't like this [1][2]:\n>\n> parse_manifest.c:99:3: error: redefinition of typedef 'JsonManifestParseIncrementalState' is a C11 feature [-Werror,-Wtypedef-redefinition]\n> } JsonManifestParseIncrementalState;\n> ^\n> ../../src/include/common/parse_manifest.h:23:50: note: previous definition is here\n> typedef struct JsonManifestParseIncrementalState JsonManifestParseIncrementalState;\n> ^\n> 1 error generated.\n>\n> (and similarly in other places)\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2024-04-04%2013%3A08%3A02\n> [2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sifaka&dt=2024-04-04%2013%3A07%3A01\n\n\nDarn it. Ok, will try to fix.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 10:57:22 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2024-04-04 Th 10:26, Tom Lane wrote:\n>> My animals don't like this [1][2]:\n\n> Darn it. Ok, will try to fix.\n\nI think you just need to follow the standard pattern:\n\ntypedef struct foo foo;\n\n... foo* is ok to use here ...\n\nstruct foo {\n ... fields\n};\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Apr 2024 11:00:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
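Applied to the struct named in the warning, the pattern looks roughly like this (field list elided):

    /* src/include/common/parse_manifest.h: declare the typedef once */
    typedef struct JsonManifestParseIncrementalState JsonManifestParseIncrementalState;

    /*
     * src/common/parse_manifest.c: define the struct without repeating
     * the typedef, avoiding the C11 typedef-redefinition warning.
     */
    struct JsonManifestParseIncrementalState
    {
        /* ... fields ... */
    };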
{
"msg_contents": "I wrote:\n> I think you just need to follow the standard pattern:\n\nYeah, the attached is enough to silence it for me.\n(But personally I'd add comments saying that the typedef\nappears in thus-and-such header file; see examples in\nour tree.)\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 04 Apr 2024 11:16:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-04-04 Th 11:16, Tom Lane wrote:\n> I wrote:\n>> I think you just need to follow the standard pattern:\n> Yeah, the attached is enough to silence it for me.\n\n\nThanks, that's what I came up with too (after converting my setup to use \nclang)\n\n\n> (But personally I'd add comments saying that the typedef\n> appears in thus-and-such header file; see examples in\n> our tree.)\n\n\nDone\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 11:37:40 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "Oh, more problems: after running check-world in a non-VPATH build,\nthere are droppings all over my tree:\n\n$ git status\nOn branch master\nYour branch is up to date with 'origin/master'.\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n src/interfaces/ecpg/test/sql/sqljson_jsontable\n src/interfaces/ecpg/test/sql/sqljson_jsontable.c\n src/test/modules/test_json_parser/test_json_parser_incremental\n src/test/modules/test_json_parser/test_json_parser_perf\n src/test/modules/test_json_parser/tmp_check/\n\nnothing added to commit but untracked files present (use \"git add\" to track)\n\n\"make clean\" or \"make distclean\" removes some of that but not all:\n\n$ make -s distclean\n$ git status\nOn branch master\nYour branch is up to date with 'origin/master'.\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n src/test/modules/test_json_parser/test_json_parser_incremental\n src/test/modules/test_json_parser/test_json_parser_perf\n\nnothing added to commit but untracked files present (use \"git add\" to track)\n\nSo we're short several .gitignore entries, and apparently also\nshy a couple of make-clean rules.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Apr 2024 12:04:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-04-04 Th 12:04, Tom Lane wrote:\n> Oh, more problems: after running check-world in a non-VPATH build,\n> there are droppings all over my tree:\n>\n> $ git status\n> On branch master\n> Your branch is up to date with 'origin/master'.\n>\n> Untracked files:\n> (use \"git add <file>...\" to include in what will be committed)\n> src/interfaces/ecpg/test/sql/sqljson_jsontable\n> src/interfaces/ecpg/test/sql/sqljson_jsontable.c\n> src/test/modules/test_json_parser/test_json_parser_incremental\n> src/test/modules/test_json_parser/test_json_parser_perf\n> src/test/modules/test_json_parser/tmp_check/\n>\n> nothing added to commit but untracked files present (use \"git add\" to track)\n>\n> \"make clean\" or \"make distclean\" removes some of that but not all:\n>\n> $ make -s distclean\n> $ git status\n> On branch master\n> Your branch is up to date with 'origin/master'.\n>\n> Untracked files:\n> (use \"git add <file>...\" to include in what will be committed)\n> src/test/modules/test_json_parser/test_json_parser_incremental\n> src/test/modules/test_json_parser/test_json_parser_perf\n>\n> nothing added to commit but untracked files present (use \"git add\" to track)\n>\n> So we're short several .gitignore entries, and apparently also\n> shy a couple of make-clean rules.\n\n\nArgh. You get out of the habit when you're running with meson :-(\n\n\nThere's another issue I'm running down too.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 12:13:04 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 18:13, Andrew Dunstan <[email protected]> wrote:\n> Argh. You get out of the habit when you're running with meson :-(\n\nI'm having some issues with meson too actually. \"meson test -C build\"\nis now failing with this for me:\n\nCommand 'test_json_parser_incremental' not found in\n/home/jelte/opensource/postgres/build/tmp_install//home/jelte/.pgenv/pgsql-17beta9/bin,\n...\n\n\n",
"msg_date": "Thu, 4 Apr 2024 19:16:50 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "Apologies for piling on, but my compiler (gcc 9.4.0) is unhappy:\n\n../postgresql/src/common/jsonapi.c: In function ‘IsValidJsonNumber’:\n../postgresql/src/common/jsonapi.c:2016:30: error: ‘dummy_lex.inc_state’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n 2016 | if (lex->incremental && !lex->inc_state->is_last_chunk &&\n | ~~~^~~~~~~~~~~\n../postgresql/src/common/jsonapi.c:2020:36: error: ‘dummy_lex.token_start’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n 2020 | lex->token_start, s - lex->token_start);\n | ~~~^~~~~~~~~~~~~\n../postgresql/src/common/jsonapi.c:302:26: error: ‘numeric_error’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n 302 | return (!numeric_error) && (total_len == dummy_lex.input_length);\n | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 Apr 2024 13:06:02 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 10:17 AM Jelte Fennema-Nio <[email protected]> wrote:\n> Command 'test_json_parser_incremental' not found in\n> /home/jelte/opensource/postgres/build/tmp_install//home/jelte/.pgenv/pgsql-17beta9/bin,\n> ...\n\nI can't reproduce this locally...\n\nWhat's in the `...`? I wouldn't expect to find the test binary in your\ntmp_install.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 4 Apr 2024 11:12:18 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 11:12 AM Jacob Champion\n<[email protected]> wrote:\n> What's in the `...`? I wouldn't expect to find the test binary in your\n> tmp_install.\n\nOh, I wonder if this is just a build dependency thing? I typically run\na bare `ninja` right before testing, because I think most of those\ndependencies are missing for the tests at the moment. (For example, if\nI manually remove the `libpq_pipeline` executable and then try to run\n`meson test` without a rebuild, it fails.)\n\nWe can certainly fix that (see attached patch as a first draft) but I\nwonder if there was a reason we decided not to during the Meson port?\n\n--Jacob",
"msg_date": "Thu, 4 Apr 2024 11:48:26 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 11:06 AM Nathan Bossart <[email protected]> wrote:\n> ../postgresql/src/common/jsonapi.c: In function ‘IsValidJsonNumber’:\n> ../postgresql/src/common/jsonapi.c:2016:30: error: ‘dummy_lex.inc_state’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> 2016 | if (lex->incremental && !lex->inc_state->is_last_chunk &&\n> | ~~~^~~~~~~~~~~\n> ../postgresql/src/common/jsonapi.c:2020:36: error: ‘dummy_lex.token_start’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> 2020 | lex->token_start, s - lex->token_start);\n> | ~~~^~~~~~~~~~~~~\n\nAh, it hasn't figured out that the incremental path is disabled.\nZero-initializing the dummy_lex would be good regardless.\n\n> ../postgresql/src/common/jsonapi.c:302:26: error: ‘numeric_error’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> 302 | return (!numeric_error) && (total_len == dummy_lex.input_length);\n> | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor this case, though, I think it'd be good for API consistency if\n*numeric_error were always set on return, instead of relying on the\ncaller to initialize.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 4 Apr 2024 12:28:59 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-04-04 Th 14:06, Nathan Bossart wrote:\n> Apologies for piling on, but my compiler (gcc 9.4.0) is unhappy:\n>\n> ../postgresql/src/common/jsonapi.c: In function ‘IsValidJsonNumber’:\n> ../postgresql/src/common/jsonapi.c:2016:30: error: ‘dummy_lex.inc_state’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> 2016 | if (lex->incremental && !lex->inc_state->is_last_chunk &&\n> | ~~~^~~~~~~~~~~\n> ../postgresql/src/common/jsonapi.c:2020:36: error: ‘dummy_lex.token_start’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> 2020 | lex->token_start, s - lex->token_start);\n> | ~~~^~~~~~~~~~~~~\n> ../postgresql/src/common/jsonapi.c:302:26: error: ‘numeric_error’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> 302 | return (!numeric_error) && (total_len == dummy_lex.input_length);\n> | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>\n\nPlease pile on. I want to get this fixed.\n\n\nIt seems odd that my much later gcc didn't complain.\n\n\nDoes the attached patch fix it for you?\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 4 Apr 2024 15:31:12 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, Apr 04, 2024 at 03:31:12PM -0400, Andrew Dunstan wrote:\n> Does the attached patch fix it for you?\n\nIt clears up 2 of the 3 warnings for me:\n\n../postgresql/src/common/jsonapi.c: In function ‘IsValidJsonNumber’:\n../postgresql/src/common/jsonapi.c:2018:30: error: ‘dummy_lex.inc_state’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n 2018 | if (lex->incremental && !lex->inc_state->is_last_chunk &&\n |\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 Apr 2024 14:54:46 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 20:48, Jacob Champion\n<[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 11:12 AM Jacob Champion\n> <[email protected]> wrote:\n> > What's in the `...`? I wouldn't expect to find the test binary in your\n> > tmp_install.\n>\n> Oh, I wonder if this is just a build dependency thing? I typically run\n> a bare `ninja` right before testing, because I think most of those\n> dependencies are missing for the tests at the moment. (For example, if\n> I manually remove the `libpq_pipeline` executable and then try to run\n> `meson test` without a rebuild, it fails.)\n>\n> We can certainly fix that (see attached patch as a first draft) but I\n> wonder if there was a reason we decided not to during the Meson port?\n\nYeah, that patch fixed my problem. (as well as running plain ninja)\n\nTo clarify: I did do a rebuild. But it turns out that install-quiet\ndoesn't build everything... The full command I was having problems\nwith was:\n\nninja -C build install-quiet && meson test -C build\n\nMaybe that's something worth addressing too. I expected that\ninstall/install-quiet was a strict superset of a plain ninja\ninvocation.\n\n\n",
"msg_date": "Thu, 4 Apr 2024 22:42:09 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 1:42 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Maybe that's something worth addressing too. I expected that\n> install/install-quiet was a strict superset of a plain ninja\n> invocation.\n\nMaybe that's the intent, but I hope not, because I don't see any\nreason for `ninja install` to worry about test-only binaries that\nwon't be installed. For the tests, there's a pretty clear rationale\nfor the dependency.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 4 Apr 2024 14:34:13 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 23:34, Jacob Champion\n<[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 1:42 PM Jelte Fennema-Nio <[email protected]> wrote:\n> > Maybe that's something worth addressing too. I expected that\n> > install/install-quiet was a strict superset of a plain ninja\n> > invocation.\n>\n> Maybe that's the intent, but I hope not, because I don't see any\n> reason for `ninja install` to worry about test-only binaries that\n> won't be installed. For the tests, there's a pretty clear rationale\n> for the dependency.\n\nFair enough, I guess I'll change my invocation to include the ninja\n\"test\" target too:\nninja -C build test install-quiet && meson test -C build\n\n\n",
"msg_date": "Thu, 4 Apr 2024 23:52:14 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 23:52, Jelte Fennema-Nio <[email protected]> wrote:\n> Fair enough, I guess I'll change my invocation to include the ninja\n> \"test\" target too:\n> ninja -C build test install-quiet && meson test -C build\n\nActually that doesn't do what I want either because that actually runs\nthe tests too... And I prefer to do that using meson directly, so I\ncan add some filters to it. I'll figure something out when I'm more\nfresh tomorrow.\n\n\n",
"msg_date": "Thu, 4 Apr 2024 23:55:43 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-04-04 Th 15:54, Nathan Bossart wrote:\n> On Thu, Apr 04, 2024 at 03:31:12PM -0400, Andrew Dunstan wrote:\n>> Does the attached patch fix it for you?\n> It clears up 2 of the 3 warnings for me:\n>\n> ../postgresql/src/common/jsonapi.c: In function ‘IsValidJsonNumber’:\n> ../postgresql/src/common/jsonapi.c:2018:30: error: ‘dummy_lex.inc_state’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> 2018 | if (lex->incremental && !lex->inc_state->is_last_chunk &&\n> |\n>\n\nThanks, please try this instead.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 5 Apr 2024 10:15:45 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Fri, Apr 05, 2024 at 10:15:45AM -0400, Andrew Dunstan wrote:\n> On 2024-04-04 Th 15:54, Nathan Bossart wrote:\n>> On Thu, Apr 04, 2024 at 03:31:12PM -0400, Andrew Dunstan wrote:\n>> > Does the attached patch fix it for you?\n>> It clears up 2 of the 3 warnings for me:\n>> \n>> ../postgresql/src/common/jsonapi.c: In function ‘IsValidJsonNumber’:\n>> ../postgresql/src/common/jsonapi.c:2018:30: error: ‘dummy_lex.inc_state’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n>> 2018 | if (lex->incremental && !lex->inc_state->is_last_chunk &&\n>> |\n> \n> Thanks, please try this instead.\n\nLGTM, thanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 5 Apr 2024 10:43:22 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-04-05 Fr 11:43, Nathan Bossart wrote:\n> On Fri, Apr 05, 2024 at 10:15:45AM -0400, Andrew Dunstan wrote:\n>> On 2024-04-04 Th 15:54, Nathan Bossart wrote:\n>>> On Thu, Apr 04, 2024 at 03:31:12PM -0400, Andrew Dunstan wrote:\n>>>> Does the attached patch fix it for you?\n>>> It clears up 2 of the 3 warnings for me:\n>>>\n>>> ../postgresql/src/common/jsonapi.c: In function ‘IsValidJsonNumber’:\n>>> ../postgresql/src/common/jsonapi.c:2018:30: error: ‘dummy_lex.inc_state’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n>>> 2018 | if (lex->incremental && !lex->inc_state->is_last_chunk &&\n>>> |\n>> Thanks, please try this instead.\n> LGTM, thanks!\n>\n\nThanks for checking. Pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 5 Apr 2024 16:09:58 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "Coverity complained that this patch leaks memory:\n\n/srv/coverity/git/pgsql-git/postgresql/src/bin/pg_combinebackup/load_manifest.c: 212 in load_backup_manifest()\n206 \t\t\tbytes_left -= rc;\n207 \t\t\tjson_parse_manifest_incremental_chunk(\n208 \t\t\t\t\t\t\t\t\t\t\t\t inc_state, buffer, rc, bytes_left == 0);\n209 \t\t}\n210 \n211 \t\tclose(fd);\n>>> CID 1596259: (RESOURCE_LEAK)\n>>> Variable \"inc_state\" going out of scope leaks the storage it points to.\n212 \t}\n213 \n214 \t/* All done. */\n215 \tpfree(buffer);\n216 \treturn result;\n217 }\n\n/srv/coverity/git/pgsql-git/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c: 488 in parse_manifest_file()\n482 \t\t\tbytes_left -= rc;\n483 \t\t\tjson_parse_manifest_incremental_chunk(\n484 \t\t\t\t\t\t\t\t\t\t\t\t inc_state, buffer, rc, bytes_left == 0);\n485 \t\t}\n486 \n487 \t\tclose(fd);\n>>> CID 1596257: (RESOURCE_LEAK)\n>>> Variable \"inc_state\" going out of scope leaks the storage it points to.\n488 \t}\n489 \n490 \t/* Done with the buffer. */\n491 \tpfree(buffer);\n492 \n493 \treturn result;\n\nIt's right about that AFAICS, and not only is the \"inc_state\" itself\nleaked but so is its assorted infrastructure. Perhaps we don't care\ntoo much about that in the existing applications, but ISTM that\nisn't going to be a tenable assumption across the board. Shouldn't\nthere be a \"json_parse_manifest_incremental_shutdown()\" or the like\nto deallocate all the storage allocated by the parser?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2024 20:58:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-04-07 Su 20:58, Tom Lane wrote:\n> Coverity complained that this patch leaks memory:\n>\n> /srv/coverity/git/pgsql-git/postgresql/src/bin/pg_combinebackup/load_manifest.c: 212 in load_backup_manifest()\n> 206 \t\t\tbytes_left -= rc;\n> 207 \t\t\tjson_parse_manifest_incremental_chunk(\n> 208 \t\t\t\t\t\t\t\t\t\t\t\t inc_state, buffer, rc, bytes_left == 0);\n> 209 \t\t}\n> 210\n> 211 \t\tclose(fd);\n>>>> CID 1596259: (RESOURCE_LEAK)\n>>>> Variable \"inc_state\" going out of scope leaks the storage it points to.\n> 212 \t}\n> 213\n> 214 \t/* All done. */\n> 215 \tpfree(buffer);\n> 216 \treturn result;\n> 217 }\n>\n> /srv/coverity/git/pgsql-git/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c: 488 in parse_manifest_file()\n> 482 \t\t\tbytes_left -= rc;\n> 483 \t\t\tjson_parse_manifest_incremental_chunk(\n> 484 \t\t\t\t\t\t\t\t\t\t\t\t inc_state, buffer, rc, bytes_left == 0);\n> 485 \t\t}\n> 486\n> 487 \t\tclose(fd);\n>>>> CID 1596257: (RESOURCE_LEAK)\n>>>> Variable \"inc_state\" going out of scope leaks the storage it points to.\n> 488 \t}\n> 489\n> 490 \t/* Done with the buffer. */\n> 491 \tpfree(buffer);\n> 492\n> 493 \treturn result;\n>\n> It's right about that AFAICS, and not only is the \"inc_state\" itself\n> leaked but so is its assorted infrastructure. Perhaps we don't care\n> too much about that in the existing applications, but ISTM that\n> isn't going to be a tenable assumption across the board. Shouldn't\n> there be a \"json_parse_manifest_incremental_shutdown()\" or the like\n> to deallocate all the storage allocated by the parser?\n\n\n\nyeah, probably. Will work on it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 8 Apr 2024 09:29:57 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "Michael pointed out over at [1] that the new tiny.json is pretty\ninscrutable given its size, and I have to agree. Attached is a patch\nto pare it down 98% or so. I think people wanting to run the\nperformance comparisons will need to come up with their own gigantic\nfiles.\n\nMichael, with your \"Jacob might be a nefarious cabal of\nstate-sponsored hackers\" hat on, is this observable enough, or do we\nneed to get it smaller? I was thinking we may want to replace the URLs\nwith stuff that doesn't link randomly around the Internet. Delicious\nin its original form is long gone.\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/[email protected]",
"msg_date": "Mon, 8 Apr 2024 11:24:32 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-04-08 Mo 14:24, Jacob Champion wrote:\n> Michael pointed out over at [1] that the new tiny.json is pretty\n> inscrutable given its size, and I have to agree. Attached is a patch\n> to pare it down 98% or so. I think people wanting to run the\n> performance comparisons will need to come up with their own gigantic\n> files.\n\n\nLet's see if we can do a bit better than that. Maybe a script to \nconstruct a larger input for the speed test from the smaller file. \nShould be pretty simple.\n\n\n>\n> Michael, with your \"Jacob might be a nefarious cabal of\n> state-sponsored hackers\" hat on, is this observable enough, or do we\n> need to get it smaller? I was thinking we may want to replace the URLs\n> with stuff that doesn't link randomly around the Internet. Delicious\n> in its original form is long gone.\n>\n\nArguably the fact that it points nowhere is a good thing. But feel free \nto replace it with something else. It doesn't have to be URLs at all. \nThat happened simply because it was easy to extract from a very large \npiece of JSON I had lying around, probably from the last time I wrote a \nJSON parser :-)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 8 Apr 2024 17:42:35 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-04-08 Mo 09:29, Andrew Dunstan wrote:\n>\n> On 2024-04-07 Su 20:58, Tom Lane wrote:\n>> Coverity complained that this patch leaks memory:\n>>\n>> /srv/coverity/git/pgsql-git/postgresql/src/bin/pg_combinebackup/load_manifest.c: \n>> 212 in load_backup_manifest()\n>> 206 bytes_left -= rc;\n>> 207 json_parse_manifest_incremental_chunk(\n>> 208 inc_state, buffer, rc, bytes_left == 0);\n>> 209 }\n>> 210\n>> 211 close(fd);\n>>>>> CID 1596259: (RESOURCE_LEAK)\n>>>>> Variable \"inc_state\" going out of scope leaks the storage it \n>>>>> points to.\n>> 212 }\n>> 213\n>> 214 /* All done. */\n>> 215 pfree(buffer);\n>> 216 return result;\n>> 217 }\n>>\n>> /srv/coverity/git/pgsql-git/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c: \n>> 488 in parse_manifest_file()\n>> 482 bytes_left -= rc;\n>> 483 json_parse_manifest_incremental_chunk(\n>> 484 inc_state, buffer, rc, bytes_left == 0);\n>> 485 }\n>> 486\n>> 487 close(fd);\n>>>>> CID 1596257: (RESOURCE_LEAK)\n>>>>> Variable \"inc_state\" going out of scope leaks the storage it \n>>>>> points to.\n>> 488 }\n>> 489\n>> 490 /* Done with the buffer. */\n>> 491 pfree(buffer);\n>> 492\n>> 493 return result;\n>>\n>> It's right about that AFAICS, and not only is the \"inc_state\" itself\n>> leaked but so is its assorted infrastructure. Perhaps we don't care\n>> too much about that in the existing applications, but ISTM that\n>> isn't going to be a tenable assumption across the board. Shouldn't\n>> there be a \"json_parse_manifest_incremental_shutdown()\" or the like\n>> to deallocate all the storage allocated by the parser?\n>\n>\n>\n> yeah, probably. Will work on it.\n>\n>\n>\n\nHere's a patch. In addition to the leaks Coverity found, there was \nanother site in the backend code that should call the shutdown function, \nand a probably memory leak from a logic bug in the incremental json \nparser code. All these are fixed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 8 Apr 2024 18:20:13 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Apr 08, 2024 at 05:42:35PM -0400, Andrew Dunstan wrote:\n> Arguably the fact that it points nowhere is a good thing. But feel free to\n> replace it with something else. It doesn't have to be URLs at all. That\n> happened simply because it was easy to extract from a very large piece of\n> JSON I had lying around, probably from the last time I wrote a JSON parser\n> :-)\n\n: \"http://www.theatermania.com/broadway/\",\n\nAs as matter of fact, this one points to something that seems to be\nreal. Most of the blobs had better be trimmed down.\n\nLooking at the code paths in this module, it looks like this could be\neven simpler than what Jacob is shaving, checking for:\n- Objects start and end, including booleans, ints and strings for the\nquotations.\n- Nested objects, including arrays made of simple JSON objects.\n\nSome escape patterns are already checked, but most of them are\nmissing:\nhttps://coverage.postgresql.org/src/test/modules/test_json_parser/test_json_parser_incremental.c.gcov.html\n\nThere is no direct check on test_json_parser_perf.c, either, only a\ncustom rule in the Makefile without specifying something for meson.\nSo it looks like you could do short execution check in a TAP test, at\nleast.\n--\nMichael",
"msg_date": "Tue, 9 Apr 2024 09:48:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Tue, Apr 09, 2024 at 09:48:18AM +0900, Michael Paquier wrote:\n> There is no direct check on test_json_parser_perf.c, either, only a\n> custom rule in the Makefile without specifying something for meson.\n> So it looks like you could do short execution check in a TAP test, at\n> least.\n\nWhile reading the code, I've noticed a few things, as well:\n\n+ /* max delicious line length is less than this */\n+ char buff[6001];\n\nDelicious applies to the contents, nothing to do with this code even\nif I'd like to think that these lines of code are edible and good.\nUsing a hardcoded limit for some JSON input sounds like a bad idea to\nme even if this is a test module. Comment that applies to both the\nperf and the incremental tools. You could use a #define'd buffer size\nfor readability rather than assuming this value in many places.\n\n+++ b/src/test/modules/test_json_parser/test_json_parser_incremental.c \n+ * This progam tests incremental parsing of json. The input is fed into \n+ * full range of incement handling, especially in the lexer, is exercised. \n+++ b/src/test/modules/test_json_parser/test_json_parser_perf.c\n+ * Performancet est program for both flavors of the JSON parser \n+ * This progam tests either the standard (recursive descent) JSON parser \n+++ b/src/test/modules/test_json_parser/README \n+ reads in a file and pases it in very small chunks (60 bytes at a time) to \n\nCollection of typos across various files.\n\n+ appendStringInfoString(&json, \"1+23 trailing junk\"); \nWhat's the purpose here? Perhaps the intention should be documented\nin a comment?\n\nAt the end, having a way to generate JSON blobs randomly to test this\nstuff would be more appealing than what you have currently, with\nperhaps a few factors like:\n- Array and simple object density.\n- Max Level of nesting.\n- Limitation to ASCII characters that can be escaped.\n- Perhaps more things I cannot think about?\n\nSo the current state of things is kind of disappointing, and the size\nof the data set added to the tree is not that portable either if you\nwant to test various scenarios depending on the data set. It seems to\nme that this has been committed too hastily and that this is not ready\nfor integration, even if that's just a test module. Tom also has\nshared some concerns upthread, as far as I can see.\n--\nMichael",
"msg_date": "Tue, 9 Apr 2024 14:23:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-04-09 Tu 01:23, Michael Paquier wrote:\n> On Tue, Apr 09, 2024 at 09:48:18AM +0900, Michael Paquier wrote:\n>> There is no direct check on test_json_parser_perf.c, either, only a\n>> custom rule in the Makefile without specifying something for meson.\n>> So it looks like you could do short execution check in a TAP test, at\n>> least.\n\n\nNot adding a test for that was deliberate - any sane test takes a while, \nand I didn't want to spend that much time on it every time someone runs \n\"make check-world\" or equivalent. However, adding a test to run it with \na trivial number of iterations seems reasonable, so I'll add that. I'll \nalso add a meson target for the binary.\n\n\n> While reading the code, I've noticed a few things, as well:\n>\n> + /* max delicious line length is less than this */\n> + char buff[6001];\n>\n> Delicious applies to the contents, nothing to do with this code even\n> if I'd like to think that these lines of code are edible and good.\n> Using a hardcoded limit for some JSON input sounds like a bad idea to\n> me even if this is a test module. Comment that applies to both the\n> perf and the incremental tools. You could use a #define'd buffer size\n> for readability rather than assuming this value in many places.\n\n\nThe comment is a remnant of when I hadn't yet added support for \nincomplete tokens, and so I had to parse the input line by line. I agree \nit can go, and we can use a manifest constant for the buffer size.\n\n\n>\n> +++ b/src/test/modules/test_json_parser/test_json_parser_incremental.c\n> + * This progam tests incremental parsing of json. The input is fed into\n> + * full range of incement handling, especially in the lexer, is exercised.\n> +++ b/src/test/modules/test_json_parser/test_json_parser_perf.c\n> + * Performancet est program for both flavors of the JSON parser\n> + * This progam tests either the standard (recursive descent) JSON parser\n> +++ b/src/test/modules/test_json_parser/README\n> + reads in a file and pases it in very small chunks (60 bytes at a time) to\n>\n> Collection of typos across various files.\n\n\nWill fix. (The older I get the more typos I seem to make and the harder \nit is to notice them. It's very annoying.)\n\n\n>\n> + appendStringInfoString(&json, \"1+23 trailing junk\");\n> What's the purpose here? Perhaps the intention should be documented\n> in a comment?\n\n\nThe purpose is to ensure that if there is not a trailing '\\0' on the \njson chunk the parser will still do the right thing. I'll add a comment \nto that effect.\n\n\n>\n> At the end, having a way to generate JSON blobs randomly to test this\n> stuff would be more appealing than what you have currently, with\n> perhaps a few factors like:\n> - Array and simple object density.\n> - Max Level of nesting.\n> - Limitation to ASCII characters that can be escaped.\n> - Perhaps more things I cannot think about?\n\n\nNo, I disagree. Maybe we need those things as well, but we do need a \nstatic test where we can test the output against known results. I have \nno objection to changing the input and output files.\n\nIt's worth noting that t/002_inline.pl does generate some input and test \ne.g., the maximum nesting levels among other errors. Perhaps you missed \nthat. 
If you think we need more tests there adding them would be \nextremely simple.\n\n\n>\n> So the current state of things is kind of disappointing, and the size\n> of the data set added to the tree is not that portable either if you\n> want to test various scenarios depending on the data set. It seems to\n> me that this has been committed too hastily and that this is not ready\n> for integration, even if that's just a test module. Tom also has\n> shared some concerns upthread, as far as I can see.\n\n\nI have posted a patch already that addresses the issue Tom raised.\n\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 9 Apr 2024 07:54:25 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Tue, Apr 9, 2024 at 4:54 AM Andrew Dunstan <[email protected]> wrote:\n> On 2024-04-09 Tu 01:23, Michael Paquier wrote:\n> There is no direct check on test_json_parser_perf.c, either, only a\n> custom rule in the Makefile without specifying something for meson.\n> So it looks like you could do short execution check in a TAP test, at\n> least.\n>\n> Not adding a test for that was deliberate - any sane test takes a while, and I didn't want to spend that much time on it every time someone runs \"make check-world\" or equivalent. However, adding a test to run it with a trivial number of iterations seems reasonable, so I'll add that. I'll also add a meson target for the binary.\n\nOkay, but for what purpose? My understanding during review was that\nthis was a convenience utility for people who were actively hacking on\nthe code (and I used it for exactly that purpose a few months back, so\nI didn't question that any further). Why does the farm need to spend\nany time running it at all?\n\n--Jacob\n\n\n",
"msg_date": "Tue, 9 Apr 2024 06:45:11 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-04-09 Tu 09:45, Jacob Champion wrote:\n> On Tue, Apr 9, 2024 at 4:54 AM Andrew Dunstan <[email protected]> wrote:\n>> On 2024-04-09 Tu 01:23, Michael Paquier wrote:\n>> There is no direct check on test_json_parser_perf.c, either, only a\n>> custom rule in the Makefile without specifying something for meson.\n>> So it looks like you could do short execution check in a TAP test, at\n>> least.\n>>\n>> Not adding a test for that was deliberate - any sane test takes a while, and I didn't want to spend that much time on it every time someone runs \"make check-world\" or equivalent. However, adding a test to run it with a trivial number of iterations seems reasonable, so I'll add that. I'll also add a meson target for the binary.\n> Okay, but for what purpose? My understanding during review was that\n> this was a convenience utility for people who were actively hacking on\n> the code (and I used it for exactly that purpose a few months back, so\n> I didn't question that any further). Why does the farm need to spend\n> any time running it at all?\n>\n\nI think Michael's point was that if we carry the code we should test we \ncan run it. The other possibility would be just to remove it. I can see \narguments for both.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 9 Apr 2024 10:30:07 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Tue, Apr 9, 2024 at 7:30 AM Andrew Dunstan <[email protected]> wrote:\n> I think Michael's point was that if we carry the code we should test we\n> can run it. The other possibility would be just to remove it. I can see\n> arguments for both.\n\nHm. If it's not acceptable to carry this (as a worse-is-better smoke\ntest) without also running it during tests, then my personal vote\nwould be to tear it out and just have people write/contribute targeted\nbenchmarks when they end up working on performance. I don't think the\ncost/benefit makes sense at that point.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 9 Apr 2024 09:26:49 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 10:24 PM Michael Paquier <[email protected]> wrote:\n> At the end, having a way to generate JSON blobs randomly to test this\n> stuff would be more appealing\n\nFor the record, I'm working on an LLVM fuzzer target for the JSON\nparser. I think that would be a lot more useful than anything we can\nhand-code.\n\nBut I want it to cover both the recursive and incremental code paths,\nand we'd need to talk about where it would live. libfuzzer is seeded\nwith a bunch of huge incomprehensible blobs, which is something we're\nnow trying to avoid checking in. There's also the security aspect of\n\"what do we do when it finds something\", and at that point maybe we\nneed to look into a service like oss-fuzz.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 9 Apr 2024 09:41:44 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 5:48 PM Michael Paquier <[email protected]> wrote:\n> : \"http://www.theatermania.com/broadway/\",\n>\n> As as matter of fact, this one points to something that seems to be\n> real. Most of the blobs had better be trimmed down.\n\nThis new version keeps all the existing structure but points the\nhostnames to example.com [1].\n\n--Jacob\n\n[1] https://datatracker.ietf.org/doc/html/rfc2606#section-3",
"msg_date": "Tue, 9 Apr 2024 10:54:58 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-04-09 Tu 07:54, Andrew Dunstan wrote:\n>\n>\n> On 2024-04-09 Tu 01:23, Michael Paquier wrote:\n>> On Tue, Apr 09, 2024 at 09:48:18AM +0900, Michael Paquier wrote:\n>>> There is no direct check on test_json_parser_perf.c, either, only a\n>>> custom rule in the Makefile without specifying something for meson.\n>>> So it looks like you could do short execution check in a TAP test, at\n>>> least.\n>\n>\n> Not adding a test for that was deliberate - any sane test takes a \n> while, and I didn't want to spend that much time on it every time \n> someone runs \"make check-world\" or equivalent. However, adding a test \n> to run it with a trivial number of iterations seems reasonable, so \n> I'll add that. I'll also add a meson target for the binary.\n>\n>\n>> While reading the code, I've noticed a few things, as well:\n>>\n>> + /* max delicious line length is less than this */\n>> + char buff[6001];\n>>\n>> Delicious applies to the contents, nothing to do with this code even\n>> if I'd like to think that these lines of code are edible and good.\n>> Using a hardcoded limit for some JSON input sounds like a bad idea to\n>> me even if this is a test module. Comment that applies to both the\n>> perf and the incremental tools. You could use a #define'd buffer size\n>> for readability rather than assuming this value in many places.\n>\n>\n> The comment is a remnant of when I hadn't yet added support for \n> incomplete tokens, and so I had to parse the input line by line. I \n> agree it can go, and we can use a manifest constant for the buffer size.\n>\n>\n>> +++ b/src/test/modules/test_json_parser/test_json_parser_incremental.c\n>> + * This progam tests incremental parsing of json. The input is fed into\n>> + * full range of incement handling, especially in the lexer, is exercised.\n>> +++ b/src/test/modules/test_json_parser/test_json_parser_perf.c\n>> + * Performancet est program for both flavors of the JSON parser\n>> + * This progam tests either the standard (recursive descent) JSON parser\n>> +++ b/src/test/modules/test_json_parser/README\n>> + reads in a file and pases it in very small chunks (60 bytes at a time) to\n>>\n>> Collection of typos across various files.\n>\n>\n> Will fix. (The older I get the more typos I seem to make and the \n> harder it is to notice them. It's very annoying.)\n>\n>\n>> + appendStringInfoString(&json, \"1+23 trailing junk\");\n>> What's the purpose here? Perhaps the intention should be documented\n>> in a comment?\n>\n>\n> The purpose is to ensure that if there is not a trailing '\\0' on the \n> json chunk the parser will still do the right thing. I'll add a \n> comment to that effect.\n>\n>\n>> At the end, having a way to generate JSON blobs randomly to test this\n>> stuff would be more appealing than what you have currently, with\n>> perhaps a few factors like:\n>> - Array and simple object density.\n>> - Max Level of nesting.\n>> - Limitation to ASCII characters that can be escaped.\n>> - Perhaps more things I cannot think about?\n>\n>\n> No, I disagree. Maybe we need those things as well, but we do need a \n> static test where we can test the output against known results. I have \n> no objection to changing the input and output files.\n>\n> It's worth noting that t/002_inline.pl does generate some input and \n> test e.g., the maximum nesting levels among other errors. Perhaps you \n> missed that. 
If you think we need more tests there adding them would \n> be extremely simple.\n>\n>\n>> So the current state of things is kind of disappointing, and the size\n>> of the data set added to the tree is not that portable either if you\n>> want to test various scenarios depending on the data set. It seems to\n>> me that this has been committed too hastily and that this is not ready\n>> for integration, even if that's just a test module. Tom also has\n>> shared some concerns upthread, as far as I can see.\n>\n>\n> I have posted a patch already that addresses the issue Tom raised.\n>\n>\n\n\nHere's a consolidated set of cleanup patches, including the memory leak \npatch and Jacob's shrink-tiny patch.\n\nI think the biggest open issue is whether or not we remove the \nperformance test program.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 9 Apr 2024 15:42:28 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-04-09 Tu 15:42, Andrew Dunstan wrote:\n>\n> On 2024-04-09 Tu 07:54, Andrew Dunstan wrote:\n>>\n>>\n>> On 2024-04-09 Tu 01:23, Michael Paquier wrote:\n>>> On Tue, Apr 09, 2024 at 09:48:18AM +0900, Michael Paquier wrote:\n>>>> There is no direct check on test_json_parser_perf.c, either, only a\n>>>> custom rule in the Makefile without specifying something for meson.\n>>>> So it looks like you could do short execution check in a TAP test, at\n>>>> least.\n>>\n>>\n>> Not adding a test for that was deliberate - any sane test takes a \n>> while, and I didn't want to spend that much time on it every time \n>> someone runs \"make check-world\" or equivalent. However, adding a test \n>> to run it with a trivial number of iterations seems reasonable, so \n>> I'll add that. I'll also add a meson target for the binary.\n>>\n>>\n>>> While reading the code, I've noticed a few things, as well:\n>>>\n>>> + /* max delicious line length is less than this */\n>>> + char buff[6001];\n>>>\n>>> Delicious applies to the contents, nothing to do with this code even\n>>> if I'd like to think that these lines of code are edible and good.\n>>> Using a hardcoded limit for some JSON input sounds like a bad idea to\n>>> me even if this is a test module. Comment that applies to both the\n>>> perf and the incremental tools. You could use a #define'd buffer size\n>>> for readability rather than assuming this value in many places.\n>>\n>>\n>> The comment is a remnant of when I hadn't yet added support for \n>> incomplete tokens, and so I had to parse the input line by line. I \n>> agree it can go, and we can use a manifest constant for the buffer size.\n>>\n>>\n>>> +++ b/src/test/modules/test_json_parser/test_json_parser_incremental.c\n>>> + * This progam tests incremental parsing of json. The input is fed \n>>> into\n>>> + * full range of incement handling, especially in the lexer, is \n>>> exercised.\n>>> +++ b/src/test/modules/test_json_parser/test_json_parser_perf.c\n>>> + * Performancet est program for both flavors of the JSON parser\n>>> + * This progam tests either the standard (recursive descent) JSON \n>>> parser\n>>> +++ b/src/test/modules/test_json_parser/README\n>>> + reads in a file and pases it in very small chunks (60 bytes at a \n>>> time) to\n>>>\n>>> Collection of typos across various files.\n>>\n>>\n>> Will fix. (The older I get the more typos I seem to make and the \n>> harder it is to notice them. It's very annoying.)\n>>\n>>\n>>> + appendStringInfoString(&json, \"1+23 trailing junk\");\n>>> What's the purpose here? Perhaps the intention should be documented\n>>> in a comment?\n>>\n>>\n>> The purpose is to ensure that if there is not a trailing '\\0' on the \n>> json chunk the parser will still do the right thing. I'll add a \n>> comment to that effect.\n>>\n>>\n>>> At the end, having a way to generate JSON blobs randomly to test this\n>>> stuff would be more appealing than what you have currently, with\n>>> perhaps a few factors like:\n>>> - Array and simple object density.\n>>> - Max Level of nesting.\n>>> - Limitation to ASCII characters that can be escaped.\n>>> - Perhaps more things I cannot think about?\n>>\n>>\n>> No, I disagree. Maybe we need those things as well, but we do need a \n>> static test where we can test the output against known results. I \n>> have no objection to changing the input and output files.\n>>\n>> It's worth noting that t/002_inline.pl does generate some input and \n>> test e.g., the maximum nesting levels among other errors. 
Perhaps you \n>> missed that. If you think we need more tests there adding them would \n>> be extremely simple.\n>>\n>>\n>>> So the current state of things is kind of disappointing, and the size\n>>> of the data set added to the tree is not that portable either if you\n>>> want to test various scenarios depending on the data set. It seems to\n>>> me that this has been committed too hastily and that this is not ready\n>>> for integration, even if that's just a test module. Tom also has\n>>> shared some concerns upthread, as far as I can see.\n>>\n>>\n>> I have posted a patch already that addresses the issue Tom raised.\n>>\n>>\n>\n>\n> Here's a consolidated set of cleanup patches, including the memory \n> leak patch and Jacob's shrink-tiny patch.\n\n\nHere's v2 of the cleanup patch 4, that fixes some more typos kindly \npointed out to me by Alexander Lakhin.\n\n\ncheers\n\n\nandrew\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 10 Apr 2024 07:47:38 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Tue, Apr 09, 2024 at 09:26:49AM -0700, Jacob Champion wrote:\n> On Tue, Apr 9, 2024 at 7:30 AM Andrew Dunstan <[email protected]> wrote:\n>> I think Michael's point was that if we carry the code we should test we\n>> can run it. The other possibility would be just to remove it. I can see\n>> arguments for both.\n> \n> Hm. If it's not acceptable to carry this (as a worse-is-better smoke\n> test) without also running it during tests, then my personal vote\n> would be to tear it out and just have people write/contribute targeted\n> benchmarks when they end up working on performance. I don't think the\n> cost/benefit makes sense at that point.\n\nAnd you may catch up a couple of bugs while on it. In my experience,\nthings with custom makefile and/or meson rules tend to rot easily\nbecause everybody forgets about them. There are a few of them in the\ntree that could be ripped off, as well..\n--\nMichael",
"msg_date": "Thu, 18 Apr 2024 14:25:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 07:47:38AM -0400, Andrew Dunstan wrote:\n> Here's v2 of the cleanup patch 4, that fixes some more typos kindly pointed\n> out to me by Alexander Lakhin.\n\nI can see that this has been applied as of 42fa4b660143 with some\nextra commits.\n\nAnyway, I have noticed another thing in the surroundings that's\nannoying. 003 has this logic:\nuse File::Temp qw(tempfile);\n[...]\nmy ($fh, $fname) = tempfile();\n\nprint $fh $stdout,\"\\n\";\n\nclose($fh);\n\nThis creates a temporary file in /tmp/ that remains around, slowing\nbloating the temporary file space on a node while leaving around some\ndata. Why using File::Temp::tempfile here? Couldn't you just use a\nfile in a PostgreSQL::Test::Utils::tempdir() that would be cleaned up\nonce the test finishes?\n\nPer [1], escape_json() has no coverage outside its default path. Is\nthat intended?\n\nDocumenting all these test files with a few comments would be welcome,\nas well, with some copyright notices...\n\n json_file = fopen(testfile, \"r\");\n fstat(fileno(json_file), &statbuf);\n bytes_left = statbuf.st_size;\n\nNo checks on failure of fstat() here?\n\n json_file = fopen(argv[2], \"r\");\n\nSecond one in test_json_parser_perf.c, with more stuff for fread().\n\n[1]: https://coverage.postgresql.org/src/test/modules/test_json_parser/test_json_parser_incremental.c.gcov.html\n--\nMichael",
"msg_date": "Thu, 18 Apr 2024 15:04:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On 2024-04-18 Th 02:04, Michael Paquier wrote:\n> On Wed, Apr 10, 2024 at 07:47:38AM -0400, Andrew Dunstan wrote:\n>> Here's v2 of the cleanup patch 4, that fixes some more typos kindly pointed\n>> out to me by Alexander Lakhin.\n> I can see that this has been applied as of 42fa4b660143 with some\n> extra commits.\n>\n> Anyway, I have noticed another thing in the surroundings that's\n> annoying. 003 has this logic:\n> useFile::Temp qw(tempfile);\n> [...]\n> my ($fh, $fname) = tempfile();\n>\n> print $fh $stdout,\"\\n\";\n>\n> close($fh);\n>\n> This creates a temporary file in /tmp/ that remains around, slowing\n> bloating the temporary file space on a node while leaving around some\n> data.\n\n\nMy bad, I should have used the UNLINK option like in the other tests.\n\n\n\n> Why usingFile::Temp::tempfile here? Couldn't you just use a\n> file in a PostgreSQL::Test::Utils::tempdir() that would be cleaned up\n> once the test finishes?\n\n\nThat's another possibility, but I think the above is the simplest.\n\n\n>\n> Per [1], escape_json() has no coverage outside its default path. Is\n> that intended?\n\n\nNot particularly. I'll add some stuff to get complete coverage.\n\n\n>\n> Documenting all these test files with a few comments would be welcome,\n> as well, with some copyright notices...\n\n\nok\n\n\n>\n> json_file = fopen(testfile, \"r\");\n> fstat(fileno(json_file), &statbuf);\n> bytes_left = statbuf.st_size;\n>\n> No checks on failure of fstat() here?\n\n\nok will fix\n\n\n>\n> json_file = fopen(argv[2], \"r\");\n>\n> Second one in test_json_parser_perf.c, with more stuff for fread().\n\n\nok will fix\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-04-18 Th 02:04, Michael Paquier\n wrote:\n\n\nOn Wed, Apr 10, 2024 at 07:47:38AM -0400, Andrew Dunstan wrote:\n\n\nHere's v2 of the cleanup patch 4, that fixes some more typos kindly pointed\nout to me by Alexander Lakhin.\n\n\n\nI can see that this has been applied as of 42fa4b660143 with some\nextra commits.\n\nAnyway, I have noticed another thing in the surroundings that's\nannoying. 003 has this logic:\nuse File::Temp qw(tempfile);\n[...]\nmy ($fh, $fname) = tempfile();\n\nprint $fh $stdout,\"\\n\";\n\nclose($fh);\n\nThis creates a temporary file in /tmp/ that remains around, slowing\nbloating the temporary file space on a node while leaving around some\ndata. \n\n\n\nMy bad, I should have used the UNLINK option like in the other\n tests.\n\n\n\n\n\n\n Why using File::Temp::tempfile here? Couldn't you just use a\nfile in a PostgreSQL::Test::Utils::tempdir() that would be cleaned up\nonce the test finishes?\n\n\n\nThat's another possibility, but I think the above is the\n simplest.\n\n\n\n\n\n\nPer [1], escape_json() has no coverage outside its default path. Is\nthat intended?\n\n\n\nNot particularly. I'll add some stuff to get complete coverage.\n\n\n\n\n\n\nDocumenting all these test files with a few comments would be welcome,\nas well, with some copyright notices...\n\n\n\nok\n\n\n\n\n\n json_file = fopen(testfile, \"r\");\n fstat(fileno(json_file), &statbuf);\n bytes_left = statbuf.st_size;\n\nNo checks on failure of fstat() here?\n\n\n\nok will fix\n\n\n\n\n\n\n json_file = fopen(argv[2], \"r\");\n\nSecond one in test_json_parser_perf.c, with more stuff for fread().\n\n\n\nok will fix\n\n\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 18 Apr 2024 06:03:45 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 06:03:45AM -0400, Andrew Dunstan wrote:\n> On 2024-04-18 Th 02:04, Michael Paquier wrote:\n>> Why usingFile::Temp::tempfile here? Couldn't you just use a\n>> file in a PostgreSQL::Test::Utils::tempdir() that would be cleaned up\n>> once the test finishes?\n> \n> That's another possibility, but I think the above is the simplest.\n\nAre you sure that relying on Temp::File is a good thing overall? The\ncurrent temporary file knowledge is encapsulated within Utils.pm, with\nfiles removed or kept depending on PG_TEST_NOCLEAN. So it would be\njust more consistent to rely on the existing facilities instead?\ntest_json_parser is the only code path in the whole tree that directly\nuses File::Temp. The rest of the TAP tests relies on Utils.pm for\ntemp file paths.\n--\nMichael",
"msg_date": "Fri, 19 Apr 2024 14:34:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Fri, Apr 19, 2024 at 1:35 AM Michael Paquier <[email protected]> wrote:\n> Are you sure that relying on Temp::File is a good thing overall? The\n> current temporary file knowledge is encapsulated within Utils.pm, with\n> files removed or kept depending on PG_TEST_NOCLEAN. So it would be\n> just more consistent to rely on the existing facilities instead?\n> test_json_parser is the only code path in the whole tree that directly\n> uses File::Temp. The rest of the TAP tests relies on Utils.pm for\n> temp file paths.\n\nYeah, I think this patch invented a new solution to a problem that\nwe've solved in a different way everywhere else. I think we should\nchange it to match what we do in general.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Apr 2024 14:04:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "On Fri, Apr 19, 2024 at 02:04:40PM -0400, Robert Haas wrote:\n> Yeah, I think this patch invented a new solution to a problem that\n> we've solved in a different way everywhere else. I think we should\n> change it to match what we do in general.\n\nAs of ba3e6e2bca97, did you notice that test_json_parser_perf\ngenerates two core files because progname is not set, failing an\nassertion when using the frontend logging?\n--\nMichael",
"msg_date": "Wed, 24 Apr 2024 17:56:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Incremental JSON Parser"
},
{
"msg_contents": "\nOn 2024-04-24 We 04:56, Michael Paquier wrote:\n> On Fri, Apr 19, 2024 at 02:04:40PM -0400, Robert Haas wrote:\n>> Yeah, I think this patch invented a new solution to a problem that\n>> we've solved in a different way everywhere else. I think we should\n>> change it to match what we do in general.\n> As of ba3e6e2bca97, did you notice that test_json_parser_perf\n> generates two core files because progname is not set, failing an\n> assertion when using the frontend logging?\n\n\nNo, it didn't for me. Thanks for noticing, I've pushed a fix.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 24 Apr 2024 08:35:37 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Incremental JSON Parser"
}
] |
[
{
"msg_contents": "Hi,\n\nThe commit b437571 <http://b437571714707bc6466abde1a0af5e69aaade09c> I\nthink has an oversight.\nWhen allocate memory and initialize private spool in function:\n_brin_leader_participate_as_worker\n\nThe behavior is the bs_spool (heap and index fields)\nare left empty.\n\nThe code affected is:\n buildstate->bs_spool = (BrinSpool *) palloc0(sizeof(BrinSpool));\n- buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n- buildstate->bs_spool->index = buildstate->bs_spool->index;\n+ buildstate->bs_spool->heap = heap;\n+ buildstate->bs_spool->index = index;\n\nIs the fix correct?\n\nbest regards,\nRanier Vilela",
"msg_date": "Tue, 26 Dec 2023 15:10:30 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix Brin Private Spool Initialization\n (src/backend/access/brin/brin.c)"
},
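For readers following along: the two removed lines leave the fields empty because palloc0() zeroes the allocation, so assigning a field to itself just copies NULL back onto NULL. A minimal standalone sketch of that pattern (calloc stands in for palloc0, and the struct is a simplified stand-in for BrinSpool, not the real definition):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Spool
    {
        void   *heap;
        void   *index;
    } Spool;

    int
    main(void)
    {
        int     heap_stub = 1;
        int     index_stub = 2;
        void   *heap = &heap_stub;
        void   *index = &index_stub;

        /* calloc zeroes the struct, like palloc0 */
        Spool  *spool = calloc(1, sizeof(Spool));

        /* self-assignment: copies NULL onto NULL, the fields stay empty */
        spool->heap = spool->heap;
        spool->index = spool->index;
        printf("heap=%p index=%p\n", spool->heap, spool->index);

        /* the intended initialization */
        spool->heap = heap;
        spool->index = index;
        printf("heap=%p index=%p\n", spool->heap, spool->index);

        free(spool);
        return 0;
    }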
{
"msg_contents": "\n\nOn 12/26/23 19:10, Ranier Vilela wrote:\n> Hi,\n> \n> The commit b437571 <http://b437571714707bc6466abde1a0af5e69aaade09c> I\n> think has an oversight.\n> When allocate memory and initialize private spool in function:\n> _brin_leader_participate_as_worker\n> \n> The behavior is the bs_spool (heap and index fields)\n> are left empty.\n> \n> The code affected is:\n> buildstate->bs_spool = (BrinSpool *) palloc0(sizeof(BrinSpool));\n> - buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n> - buildstate->bs_spool->index = buildstate->bs_spool->index;\n> + buildstate->bs_spool->heap = heap;\n> + buildstate->bs_spool->index = index;\n> \n> Is the fix correct?\n> \n\nThanks for noticing this. Yes, I believe this is a bug - the assignments\nare certainly wrong, it leaves the fields set to NULL.\n\nI wonder how come this didn't fail during testing. Surely, if the leader\nparticipates as a worker, the tuplesort_begin_index_brin shall be called\nwith heap/index being NULL, leading to some failure during the sort. But\nmaybe this means we don't actually need the heap/index fields, it's just\na copy of TuplesortIndexArg, but BRIN does not need that because we sort\nthe tuples by blkno, and we don't need the descriptors for that.\n\nIn any case, the _brin_parallel_scan_and_build does not actually need\nthe separate heap/index arguments, those are already in the spool.\n\nI'll try to figure out if we want to simplify the tuplesort or remove\nthe arguments from _brin_parallel_scan_and_build.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 26 Dec 2023 23:07:09 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix Brin Private Spool Initialization\n (src/backend/access/brin/brin.c)"
},
{
"msg_contents": "Em ter., 26 de dez. de 2023 às 19:07, Tomas Vondra <\[email protected]> escreveu:\n\n>\n>\n> On 12/26/23 19:10, Ranier Vilela wrote:\n> > Hi,\n> >\n> > The commit b437571 <http://b437571714707bc6466abde1a0af5e69aaade09c> I\n> > think has an oversight.\n> > When allocate memory and initialize private spool in function:\n> > _brin_leader_participate_as_worker\n> >\n> > The behavior is the bs_spool (heap and index fields)\n> > are left empty.\n> >\n> > The code affected is:\n> > buildstate->bs_spool = (BrinSpool *) palloc0(sizeof(BrinSpool));\n> > - buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n> > - buildstate->bs_spool->index = buildstate->bs_spool->index;\n> > + buildstate->bs_spool->heap = heap;\n> > + buildstate->bs_spool->index = index;\n> >\n> > Is the fix correct?\n> >\n>\n> Thanks for noticing this.\n\nYou're welcome.\n\n\n> Yes, I believe this is a bug - the assignments\n> are certainly wrong, it leaves the fields set to NULL.\n>\n> I wonder how come this didn't fail during testing. Surely, if the leader\n> participates as a worker, the tuplesort_begin_index_brin shall be called\n> with heap/index being NULL, leading to some failure during the sort. But\n> maybe this means we don't actually need the heap/index fields, it's just\n> a copy of TuplesortIndexArg, but BRIN does not need that because we sort\n> the tuples by blkno, and we don't need the descriptors for that.\n>\nUnfortunately I can't test on Windows, since I can't build with meson on\nWindows.\n\n\n> In any case, the _brin_parallel_scan_and_build does not actually need\n> the separate heap/index arguments, those are already in the spool.\n>\nYeah, for sure.\n\n\n> I'll try to figure out if we want to simplify the tuplesort or remove\n> the arguments from _brin_parallel_scan_and_build.\n>\nThank you for your work.\n\nbest regards,\nRanier Vilela\n\nEm ter., 26 de dez. de 2023 às 19:07, Tomas Vondra <[email protected]> escreveu:\n\nOn 12/26/23 19:10, Ranier Vilela wrote:\n> Hi,\n> \n> The commit b437571 <http://b437571714707bc6466abde1a0af5e69aaade09c> I\n> think has an oversight.\n> When allocate memory and initialize private spool in function:\n> _brin_leader_participate_as_worker\n> \n> The behavior is the bs_spool (heap and index fields)\n> are left empty.\n> \n> The code affected is:\n> buildstate->bs_spool = (BrinSpool *) palloc0(sizeof(BrinSpool));\n> - buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n> - buildstate->bs_spool->index = buildstate->bs_spool->index;\n> + buildstate->bs_spool->heap = heap;\n> + buildstate->bs_spool->index = index;\n> \n> Is the fix correct?\n> \n\nThanks for noticing this.You're welcome. Yes, I believe this is a bug - the assignments\nare certainly wrong, it leaves the fields set to NULL.\n\nI wonder how come this didn't fail during testing. Surely, if the leader\nparticipates as a worker, the tuplesort_begin_index_brin shall be called\nwith heap/index being NULL, leading to some failure during the sort. 
But\nmaybe this means we don't actually need the heap/index fields, it's just\na copy of TuplesortIndexArg, but BRIN does not need that because we sort\nthe tuples by blkno, and we don't need the descriptors for that.Unfortunately I can't test on Windows, since I can't build with meson on Windows.\n\nIn any case, the _brin_parallel_scan_and_build does not actually need\nthe separate heap/index arguments, those are already in the spool.Yeah, for sure.\n\nI'll try to figure out if we want to simplify the tuplesort or remove\nthe arguments from _brin_parallel_scan_and_build.Thank you for your work.best regards,Ranier Vilela",
"msg_date": "Wed, 27 Dec 2023 08:37:25 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix Brin Private Spool Initialization\n (src/backend/access/brin/brin.c)"
},
{
"msg_contents": "On 12/27/23 12:37, Ranier Vilela wrote:\n> Em ter., 26 de dez. de 2023 às 19:07, Tomas Vondra\n> <[email protected] <mailto:[email protected]>>\n> escreveu:\n> \n> \n> \n> On 12/26/23 19:10, Ranier Vilela wrote:\n> > Hi,\n> >\n> > The commit b437571\n> <http://b437571714707bc6466abde1a0af5e69aaade09c\n> <http://b437571714707bc6466abde1a0af5e69aaade09c>> I\n> > think has an oversight.\n> > When allocate memory and initialize private spool in function:\n> > _brin_leader_participate_as_worker\n> >\n> > The behavior is the bs_spool (heap and index fields)\n> > are left empty.\n> >\n> > The code affected is:\n> > buildstate->bs_spool = (BrinSpool *) palloc0(sizeof(BrinSpool));\n> > - buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n> > - buildstate->bs_spool->index = buildstate->bs_spool->index;\n> > + buildstate->bs_spool->heap = heap;\n> > + buildstate->bs_spool->index = index;\n> >\n> > Is the fix correct?\n> >\n> \n> Thanks for noticing this.\n> \n> You're welcome.\n> \n> \n> Yes, I believe this is a bug - the assignments\n> are certainly wrong, it leaves the fields set to NULL.\n> \n> I wonder how come this didn't fail during testing. Surely, if the leader\n> participates as a worker, the tuplesort_begin_index_brin shall be called\n> with heap/index being NULL, leading to some failure during the sort. But\n> maybe this means we don't actually need the heap/index fields, it's just\n> a copy of TuplesortIndexArg, but BRIN does not need that because we sort\n> the tuples by blkno, and we don't need the descriptors for that.\n> \n> Unfortunately I can't test on Windows, since I can't build with meson on\n> Windows.\n> \n> \n> In any case, the _brin_parallel_scan_and_build does not actually need\n> the separate heap/index arguments, those are already in the spool.\n> \n> Yeah, for sure.\n> \n> \n> I'll try to figure out if we want to simplify the tuplesort or remove\n> the arguments from _brin_parallel_scan_and_build.\n> \n\nHere is a patch simplifying the BRIN parallel create code a little bit.\nAs I suspected, we don't need the heap/index in the spool at all, and we\ndon't need to pass it to tuplesort_begin_index_brin either - we only\nneed blkno, and we have that in the datum1 field. This also means we\ndon't need TuplesortIndexBrinArg.\n\nI'll push this tomorrow, probably.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 29 Dec 2023 02:16:08 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix Brin Private Spool Initialization\n (src/backend/access/brin/brin.c)"
},
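The observation that the merge step only needs the block number can be illustrated with a small standalone sketch. The struct and values below are made up for illustration; in the real code the block number travels in the tuplesort's datum1 field rather than in a struct like this:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    typedef struct RangeStub
    {
        uint32_t    blkno;      /* start block of the summarized range */
        const char *payload;    /* stands in for the summary data */
    } RangeStub;

    static int
    cmp_blkno(const void *a, const void *b)
    {
        uint32_t    ba = ((const RangeStub *) a)->blkno;
        uint32_t    bb = ((const RangeStub *) b)->blkno;

        return (ba > bb) - (ba < bb);
    }

    int
    main(void)
    {
        /* ranges arriving out of order, e.g. from different workers */
        RangeStub   ranges[] = {
            {256, "from worker 2"},
            {0, "from worker 1"},
            {128, "from worker 3"},
        };

        /* ordering needs only blkno, no heap or index descriptors */
        qsort(ranges, 3, sizeof(RangeStub), cmp_blkno);

        for (int i = 0; i < 3; i++)
            printf("%u: %s\n", (unsigned) ranges[i].blkno, ranges[i].payload);
        return 0;
    }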
{
"msg_contents": "Em qui., 28 de dez. de 2023 às 22:16, Tomas Vondra <\[email protected]> escreveu:\n\n>\n>\n> On 12/27/23 12:37, Ranier Vilela wrote:\n> > Em ter., 26 de dez. de 2023 às 19:07, Tomas Vondra\n> > <[email protected] <mailto:[email protected]>>\n> > escreveu:\n> >\n> >\n> >\n> > On 12/26/23 19:10, Ranier Vilela wrote:\n> > > Hi,\n> > >\n> > > The commit b437571\n> > <http://b437571714707bc6466abde1a0af5e69aaade09c\n> > <http://b437571714707bc6466abde1a0af5e69aaade09c>> I\n> > > think has an oversight.\n> > > When allocate memory and initialize private spool in function:\n> > > _brin_leader_participate_as_worker\n> > >\n> > > The behavior is the bs_spool (heap and index fields)\n> > > are left empty.\n> > >\n> > > The code affected is:\n> > > buildstate->bs_spool = (BrinSpool *) palloc0(sizeof(BrinSpool));\n> > > - buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n> > > - buildstate->bs_spool->index = buildstate->bs_spool->index;\n> > > + buildstate->bs_spool->heap = heap;\n> > > + buildstate->bs_spool->index = index;\n> > >\n> > > Is the fix correct?\n> > >\n> >\n> > Thanks for noticing this.\n> >\n> > You're welcome.\n> >\n> >\n> > Yes, I believe this is a bug - the assignments\n> > are certainly wrong, it leaves the fields set to NULL.\n> >\n> > I wonder how come this didn't fail during testing. Surely, if the\n> leader\n> > participates as a worker, the tuplesort_begin_index_brin shall be\n> called\n> > with heap/index being NULL, leading to some failure during the sort.\n> But\n> > maybe this means we don't actually need the heap/index fields, it's\n> just\n> > a copy of TuplesortIndexArg, but BRIN does not need that because we\n> sort\n> > the tuples by blkno, and we don't need the descriptors for that.\n> >\n> > Unfortunately I can't test on Windows, since I can't build with meson on\n> > Windows.\n> >\n> >\n> > In any case, the _brin_parallel_scan_and_build does not actually need\n> > the separate heap/index arguments, those are already in the spool.\n> >\n> > Yeah, for sure.\n> >\n> >\n> > I'll try to figure out if we want to simplify the tuplesort or remove\n> > the arguments from _brin_parallel_scan_and_build.\n> >\n>\n> Here is a patch simplifying the BRIN parallel create code a little bit.\n> As I suspected, we don't need the heap/index in the spool at all, and we\n> don't need to pass it to tuplesort_begin_index_brin either - we only\n> need blkno, and we have that in the datum1 field. This also means we\n> don't need TuplesortIndexBrinArg.\n>\nWith Windows 10, msvc 2022, compile end pass ninja test.\n\nBut, if you allow me, I would like to try another approach to\nsimplification.\nInstead of increasing the arguments in the call, wouldn't it be better to\ndecrease them\nand this way all arguments will be passed in the registers instead of on a\nstack?\n\nbs_spool may well contain this data and will probably be useful in the\nfuture.\n\nI made a v1 version, based on your patch, for your consideration.\n\nbest regards,\nRanier Vilela",
"msg_date": "Fri, 29 Dec 2023 08:53:44 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix Brin Private Spool Initialization\n (src/backend/access/brin/brin.c)"
},
{
"msg_contents": "\n\nOn 12/29/23 12:53, Ranier Vilela wrote:\n> Em qui., 28 de dez. de 2023 às 22:16, Tomas Vondra\n> <[email protected] <mailto:[email protected]>>\n> escreveu:\n> \n> \n> \n> On 12/27/23 12:37, Ranier Vilela wrote:\n> > Em ter., 26 de dez. de 2023 às 19:07, Tomas Vondra\n> > <[email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>>\n> > escreveu:\n> >\n> >\n> >\n> > On 12/26/23 19:10, Ranier Vilela wrote:\n> > > Hi,\n> > >\n> > > The commit b437571\n> > <http://b437571714707bc6466abde1a0af5e69aaade09c\n> <http://b437571714707bc6466abde1a0af5e69aaade09c>\n> > <http://b437571714707bc6466abde1a0af5e69aaade09c\n> <http://b437571714707bc6466abde1a0af5e69aaade09c>>> I\n> > > think has an oversight.\n> > > When allocate memory and initialize private spool in function:\n> > > _brin_leader_participate_as_worker\n> > >\n> > > The behavior is the bs_spool (heap and index fields)\n> > > are left empty.\n> > >\n> > > The code affected is:\n> > > buildstate->bs_spool = (BrinSpool *)\n> palloc0(sizeof(BrinSpool));\n> > > - buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n> > > - buildstate->bs_spool->index = buildstate->bs_spool->index;\n> > > + buildstate->bs_spool->heap = heap;\n> > > + buildstate->bs_spool->index = index;\n> > >\n> > > Is the fix correct?\n> > >\n> >\n> > Thanks for noticing this.\n> >\n> > You're welcome.\n> > \n> >\n> > Yes, I believe this is a bug - the assignments\n> > are certainly wrong, it leaves the fields set to NULL.\n> >\n> > I wonder how come this didn't fail during testing. Surely, if\n> the leader\n> > participates as a worker, the tuplesort_begin_index_brin shall\n> be called\n> > with heap/index being NULL, leading to some failure during the\n> sort. But\n> > maybe this means we don't actually need the heap/index fields,\n> it's just\n> > a copy of TuplesortIndexArg, but BRIN does not need that\n> because we sort\n> > the tuples by blkno, and we don't need the descriptors for that.\n> >\n> > Unfortunately I can't test on Windows, since I can't build with\n> meson on\n> > Windows.\n> >\n> >\n> > In any case, the _brin_parallel_scan_and_build does not\n> actually need\n> > the separate heap/index arguments, those are already in the spool.\n> >\n> > Yeah, for sure.\n> >\n> >\n> > I'll try to figure out if we want to simplify the tuplesort or\n> remove\n> > the arguments from _brin_parallel_scan_and_build.\n> >\n> \n> Here is a patch simplifying the BRIN parallel create code a little bit.\n> As I suspected, we don't need the heap/index in the spool at all, and we\n> don't need to pass it to tuplesort_begin_index_brin either - we only\n> need blkno, and we have that in the datum1 field. This also means we\n> don't need TuplesortIndexBrinArg.\n> \n> With Windows 10, msvc 2022, compile end pass ninja test.\n> \n> But, if you allow me, I would like to try another approach to\n> simplification.\n> Instead of increasing the arguments in the call, wouldn't it be better\n> to decrease them \n> and this way all arguments will be passed in the registers instead of on\n> a stack?\n> \n\nIf this was beneficial, we'd be passing everything through structs and\nnot as explicit arguments. But we don't. 
If you're arguing it's\nbeneficial in this case, it'd be good to see it demonstrated.\n\n> bs_spool may well contain this data and will probably be useful in the\n> future.\n> \n> I made a v1 version, based on your patch, for your consideration.\n> \n\nI did actually consider doing it this way yesterday, but I don't like\nthis approach. I don't believe it's faster (and even if it was, the\ndifference is going to be negligible), and parameters hidden in some\nstruct increase the cognitive load. I like explicit arguments.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 29 Dec 2023 14:33:07 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix Brin Private Spool Initialization\n (src/backend/access/brin/brin.c)"
},
{
"msg_contents": "Em sex., 29 de dez. de 2023 às 10:33, Tomas Vondra <\[email protected]> escreveu:\n\n>\n>\n> On 12/29/23 12:53, Ranier Vilela wrote:\n> > Em qui., 28 de dez. de 2023 às 22:16, Tomas Vondra\n> > <[email protected] <mailto:[email protected]>>\n> > escreveu:\n> >\n> >\n> >\n> > On 12/27/23 12:37, Ranier Vilela wrote:\n> > > Em ter., 26 de dez. de 2023 às 19:07, Tomas Vondra\n> > > <[email protected]\n> > <mailto:[email protected]>\n> > <mailto:[email protected]\n> > <mailto:[email protected]>>>\n> > > escreveu:\n> > >\n> > >\n> > >\n> > > On 12/26/23 19:10, Ranier Vilela wrote:\n> > > > Hi,\n> > > >\n> > > > The commit b437571\n> > > <http://b437571714707bc6466abde1a0af5e69aaade09c\n> > <http://b437571714707bc6466abde1a0af5e69aaade09c>\n> > > <http://b437571714707bc6466abde1a0af5e69aaade09c\n> > <http://b437571714707bc6466abde1a0af5e69aaade09c>>> I\n> > > > think has an oversight.\n> > > > When allocate memory and initialize private spool in\n> function:\n> > > > _brin_leader_participate_as_worker\n> > > >\n> > > > The behavior is the bs_spool (heap and index fields)\n> > > > are left empty.\n> > > >\n> > > > The code affected is:\n> > > > buildstate->bs_spool = (BrinSpool *)\n> > palloc0(sizeof(BrinSpool));\n> > > > - buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n> > > > - buildstate->bs_spool->index = buildstate->bs_spool->index;\n> > > > + buildstate->bs_spool->heap = heap;\n> > > > + buildstate->bs_spool->index = index;\n> > > >\n> > > > Is the fix correct?\n> > > >\n> > >\n> > > Thanks for noticing this.\n> > >\n> > > You're welcome.\n> > >\n> > >\n> > > Yes, I believe this is a bug - the assignments\n> > > are certainly wrong, it leaves the fields set to NULL.\n> > >\n> > > I wonder how come this didn't fail during testing. Surely, if\n> > the leader\n> > > participates as a worker, the tuplesort_begin_index_brin shall\n> > be called\n> > > with heap/index being NULL, leading to some failure during the\n> > sort. But\n> > > maybe this means we don't actually need the heap/index fields,\n> > it's just\n> > > a copy of TuplesortIndexArg, but BRIN does not need that\n> > because we sort\n> > > the tuples by blkno, and we don't need the descriptors for\n> that.\n> > >\n> > > Unfortunately I can't test on Windows, since I can't build with\n> > meson on\n> > > Windows.\n> > >\n> > >\n> > > In any case, the _brin_parallel_scan_and_build does not\n> > actually need\n> > > the separate heap/index arguments, those are already in the\n> spool.\n> > >\n> > > Yeah, for sure.\n> > >\n> > >\n> > > I'll try to figure out if we want to simplify the tuplesort or\n> > remove\n> > > the arguments from _brin_parallel_scan_and_build.\n> > >\n> >\n> > Here is a patch simplifying the BRIN parallel create code a little\n> bit.\n> > As I suspected, we don't need the heap/index in the spool at all,\n> and we\n> > don't need to pass it to tuplesort_begin_index_brin either - we only\n> > need blkno, and we have that in the datum1 field. This also means we\n> > don't need TuplesortIndexBrinArg.\n> >\n> > With Windows 10, msvc 2022, compile end pass ninja test.\n> >\n> > But, if you allow me, I would like to try another approach to\n> > simplification.\n> > Instead of increasing the arguments in the call, wouldn't it be better\n> > to decrease them\n> > and this way all arguments will be passed in the registers instead of on\n> > a stack?\n> >\n>\n> If this was beneficial, we'd be passing everything through structs and\n> not as explicit arguments. But we don't. 
If you're arguing it's\n> beneficial in this case, it'd be good to see it demonstrated.\n>\nPlease see the https://www.agner.org/optimize/optimizing_cpp.pdf\nExcerpt:\n\"Use 64-bit mode\nParameter transfer is more efficient in 64-bit mode than in 32-bit mode,\nand more efficient in 64-bit Linux than in 64-bit Windows. In 64-bit Linux,\nthe first six integer parameters and the first eight floating point\nparameters are transferred in registers, totaling up to fourteen register\nparameters. In 64-bit Windows, the first four parameters are transferred in\nregisters, regardless of whether they are integers or floating point\nnumbers.\"\n\nWith function:\n_brin_parallel_scan_and_build(buildstate, buildstate->bs_spool,\nbrinshared, sharedsort, heapRel, indexRel, sortmem, false);\nWe have:\nLinux -> six first parameters in registers and two parameters in stack\nWindows -> four parameters in registers and four parameters in stack\n\n\n> > bs_spool may well contain this data and will probably be useful in the\n> > future.\n> >\n> > I made a v1 version, based on your patch, for your consideration.\n> >\n>\n> I did actually consider doing it this way yesterday, but I don't like\n> this approach. I don't believe it's faster (and even if it was, the\n> difference is going to be negligible), and parameters hidden in some\n> struct increase the cognitive load. I like explicit arguments.\n>\nPersonally I prefer data in structs, of course,\nalways thinking about size and alignment, to optimize loading.\n\nBest regards,\nRanier Vilela\n\nEm sex., 29 de dez. de 2023 às 10:33, Tomas Vondra <[email protected]> escreveu:\n\nOn 12/29/23 12:53, Ranier Vilela wrote:\n> Em qui., 28 de dez. de 2023 às 22:16, Tomas Vondra\n> <[email protected] <mailto:[email protected]>>\n> escreveu:\n> \n> \n> \n> On 12/27/23 12:37, Ranier Vilela wrote:\n> > Em ter., 26 de dez. de 2023 às 19:07, Tomas Vondra\n> > <[email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>>\n> > escreveu:\n> >\n> >\n> >\n> > On 12/26/23 19:10, Ranier Vilela wrote:\n> > > Hi,\n> > >\n> > > The commit b437571\n> > <http://b437571714707bc6466abde1a0af5e69aaade09c\n> <http://b437571714707bc6466abde1a0af5e69aaade09c>\n> > <http://b437571714707bc6466abde1a0af5e69aaade09c\n> <http://b437571714707bc6466abde1a0af5e69aaade09c>>> I\n> > > think has an oversight.\n> > > When allocate memory and initialize private spool in function:\n> > > _brin_leader_participate_as_worker\n> > >\n> > > The behavior is the bs_spool (heap and index fields)\n> > > are left empty.\n> > >\n> > > The code affected is:\n> > > buildstate->bs_spool = (BrinSpool *)\n> palloc0(sizeof(BrinSpool));\n> > > - buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n> > > - buildstate->bs_spool->index = buildstate->bs_spool->index;\n> > > + buildstate->bs_spool->heap = heap;\n> > > + buildstate->bs_spool->index = index;\n> > >\n> > > Is the fix correct?\n> > >\n> >\n> > Thanks for noticing this.\n> >\n> > You're welcome.\n> > \n> >\n> > Yes, I believe this is a bug - the assignments\n> > are certainly wrong, it leaves the fields set to NULL.\n> >\n> > I wonder how come this didn't fail during testing. Surely, if\n> the leader\n> > participates as a worker, the tuplesort_begin_index_brin shall\n> be called\n> > with heap/index being NULL, leading to some failure during the\n> sort. 
But\n> > maybe this means we don't actually need the heap/index fields,\n> it's just\n> > a copy of TuplesortIndexArg, but BRIN does not need that\n> because we sort\n> > the tuples by blkno, and we don't need the descriptors for that.\n> >\n> > Unfortunately I can't test on Windows, since I can't build with\n> meson on\n> > Windows.\n> >\n> >\n> > In any case, the _brin_parallel_scan_and_build does not\n> actually need\n> > the separate heap/index arguments, those are already in the spool.\n> >\n> > Yeah, for sure.\n> >\n> >\n> > I'll try to figure out if we want to simplify the tuplesort or\n> remove\n> > the arguments from _brin_parallel_scan_and_build.\n> >\n> \n> Here is a patch simplifying the BRIN parallel create code a little bit.\n> As I suspected, we don't need the heap/index in the spool at all, and we\n> don't need to pass it to tuplesort_begin_index_brin either - we only\n> need blkno, and we have that in the datum1 field. This also means we\n> don't need TuplesortIndexBrinArg.\n> \n> With Windows 10, msvc 2022, compile end pass ninja test.\n> \n> But, if you allow me, I would like to try another approach to\n> simplification.\n> Instead of increasing the arguments in the call, wouldn't it be better\n> to decrease them \n> and this way all arguments will be passed in the registers instead of on\n> a stack?\n> \n\nIf this was beneficial, we'd be passing everything through structs and\nnot as explicit arguments. But we don't. If you're arguing it's\nbeneficial in this case, it'd be good to see it demonstrated.Please see the https://www.agner.org/optimize/optimizing_cpp.pdfExcerpt:\"Use 64-bit modeParameter transfer is more efficient in 64-bit mode than in 32-bit mode, and more efficient in 64-bit Linux than in 64-bit Windows. In 64-bit Linux, the first six integer parameters and the first eight floating point parameters are transferred in registers, totaling up to fourteen register parameters. In 64-bit Windows, the first four parameters are transferred in registers, regardless of whether they are integers or floating point numbers.\"With function:_brin_parallel_scan_and_build(buildstate, buildstate->bs_spool, brinshared, sharedsort, heapRel, indexRel, sortmem, false);We have:Linux -> six first parameters in registers and two parameters in stackWindows -> four \nparameters in registers and four parameters in stack \n\n> bs_spool may well contain this data and will probably be useful in the\n> future.\n> \n> I made a v1 version, based on your patch, for your consideration.\n> \n\nI did actually consider doing it this way yesterday, but I don't like\nthis approach. I don't believe it's faster (and even if it was, the\ndifference is going to be negligible), and parameters hidden in some\nstruct increase the cognitive load. I like explicit arguments.Personally I prefer data in structs, of course,always thinking about size and alignment, to optimize loading.Best regards,Ranier Vilela",
"msg_date": "Fri, 29 Dec 2023 10:53:12 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix Brin Private Spool Initialization\n (src/backend/access/brin/brin.c)"
},
{
"msg_contents": "\n\nOn 12/29/23 14:53, Ranier Vilela wrote:\n> \n> \n> Em sex., 29 de dez. de 2023 às 10:33, Tomas Vondra\n> <[email protected] <mailto:[email protected]>>\n> escreveu:\n> \n> \n> \n> On 12/29/23 12:53, Ranier Vilela wrote:\n> > Em qui., 28 de dez. de 2023 às 22:16, Tomas Vondra\n> > <[email protected]\n> <mailto:[email protected]>\n> <mailto:[email protected]\n> <mailto:[email protected]>>>\n> > escreveu:\n> >\n> >\n> >\n> > On 12/27/23 12:37, Ranier Vilela wrote:\n> > > Em ter., 26 de dez. de 2023 às 19:07, Tomas Vondra\n> > > <[email protected]\n> <mailto:[email protected]>\n> > <mailto:[email protected]\n> <mailto:[email protected]>>\n> > <mailto:[email protected]\n> <mailto:[email protected]>\n> > <mailto:[email protected]\n> <mailto:[email protected]>>>>\n> > > escreveu:\n> > >\n> > >\n> > >\n> > > On 12/26/23 19:10, Ranier Vilela wrote:\n> > > > Hi,\n> > > >\n> > > > The commit b437571\n> > > <http://b437571714707bc6466abde1a0af5e69aaade09c\n> <http://b437571714707bc6466abde1a0af5e69aaade09c>\n> > <http://b437571714707bc6466abde1a0af5e69aaade09c\n> <http://b437571714707bc6466abde1a0af5e69aaade09c>>\n> > > <http://b437571714707bc6466abde1a0af5e69aaade09c\n> <http://b437571714707bc6466abde1a0af5e69aaade09c>\n> > <http://b437571714707bc6466abde1a0af5e69aaade09c\n> <http://b437571714707bc6466abde1a0af5e69aaade09c>>>> I\n> > > > think has an oversight.\n> > > > When allocate memory and initialize private spool in\n> function:\n> > > > _brin_leader_participate_as_worker\n> > > >\n> > > > The behavior is the bs_spool (heap and index fields)\n> > > > are left empty.\n> > > >\n> > > > The code affected is:\n> > > > buildstate->bs_spool = (BrinSpool *)\n> > palloc0(sizeof(BrinSpool));\n> > > > - buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n> > > > - buildstate->bs_spool->index =\n> buildstate->bs_spool->index;\n> > > > + buildstate->bs_spool->heap = heap;\n> > > > + buildstate->bs_spool->index = index;\n> > > >\n> > > > Is the fix correct?\n> > > >\n> > >\n> > > Thanks for noticing this.\n> > >\n> > > You're welcome.\n> > > \n> > >\n> > > Yes, I believe this is a bug - the assignments\n> > > are certainly wrong, it leaves the fields set to NULL.\n> > >\n> > > I wonder how come this didn't fail during testing.\n> Surely, if\n> > the leader\n> > > participates as a worker, the tuplesort_begin_index_brin\n> shall\n> > be called\n> > > with heap/index being NULL, leading to some failure\n> during the\n> > sort. But\n> > > maybe this means we don't actually need the heap/index\n> fields,\n> > it's just\n> > > a copy of TuplesortIndexArg, but BRIN does not need that\n> > because we sort\n> > > the tuples by blkno, and we don't need the descriptors\n> for that.\n> > >\n> > > Unfortunately I can't test on Windows, since I can't build with\n> > meson on\n> > > Windows.\n> > >\n> > >\n> > > In any case, the _brin_parallel_scan_and_build does not\n> > actually need\n> > > the separate heap/index arguments, those are already in\n> the spool.\n> > >\n> > > Yeah, for sure.\n> > >\n> > >\n> > > I'll try to figure out if we want to simplify the\n> tuplesort or\n> > remove\n> > > the arguments from _brin_parallel_scan_and_build.\n> > >\n> >\n> > Here is a patch simplifying the BRIN parallel create code a\n> little bit.\n> > As I suspected, we don't need the heap/index in the spool at\n> all, and we\n> > don't need to pass it to tuplesort_begin_index_brin either -\n> we only\n> > need blkno, and we have that in the datum1 field. 
This also\n> means we\n> > don't need TuplesortIndexBrinArg.\n> >\n> > With Windows 10, msvc 2022, compile end pass ninja test.\n> >\n> > But, if you allow me, I would like to try another approach to\n> > simplification.\n> > Instead of increasing the arguments in the call, wouldn't it be better\n> > to decrease them \n> > and this way all arguments will be passed in the registers instead\n> of on\n> > a stack?\n> >\n> \n> If this was beneficial, we'd be passing everything through structs and\n> not as explicit arguments. But we don't. If you're arguing it's\n> beneficial in this case, it'd be good to see it demonstrated.\n> \n> Please see the https://www.agner.org/optimize/optimizing_cpp.pdf\n> <https://www.agner.org/optimize/optimizing_cpp.pdf>\n> Excerpt:\n> \"Use 64-bit mode\n> Parameter transfer is more efficient in 64-bit mode than in 32-bit mode,\n> and more efficient in 64-bit Linux than in 64-bit Windows. In 64-bit\n> Linux, the first six integer parameters and the first eight floating\n> point parameters are transferred in registers, totaling up to fourteen\n> register parameters. In 64-bit Windows, the first four parameters are\n> transferred in registers, regardless of whether they are integers or\n> floating point numbers.\"\n> \n> With function:\n> _brin_parallel_scan_and_build(buildstate, buildstate->bs_spool, \n> brinshared, sharedsort, heapRel, indexRel, sortmem, false);\n> We have:\n> Linux -> six first parameters in registers and two parameters in stack\n> Windows -> four parameters in registers and four parameters in stack\n> \n\nI suggested you demonstrate this actually makes a difference in\npractice. Quoting a document is not that.\n\nAlso, that document is about C++, and while C and C++ are very close, I\nwouldn't be surprised if there were differences. Furthermore, that\nsection talks about integer/floating point arguments, while we're\ndealing with pointers, and it's not clear if that changes something (the\ndocument has a separate section about pointers/references, which\nsuggests pointers and integers are not 100% the same thing).\n\nAnd finally, I haven't tried disassembling the code, but I'd be quite\nsurprised if these things were not heavily dependent on the compiler\nand/or optimization level.\n\n> \n> > bs_spool may well contain this data and will probably be useful in the\n> > future.\n> >\n> > I made a v1 version, based on your patch, for your consideration.\n> >\n> \n> I did actually consider doing it this way yesterday, but I don't like\n> this approach. I don't believe it's faster (and even if it was, the\n> difference is going to be negligible), and parameters hidden in some\n> struct increase the cognitive load. I like explicit arguments.\n> \n> Personally I prefer data in structs, of course,\n> always thinking about size and alignment, to optimize loading.\n> \n\nAs I said, I think this is quite irrelevant because we'll call the\nfunction maybe 10-times during the whole index build. With millions of\nother function calls.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 29 Dec 2023 15:32:18 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix Brin Private Spool Initialization\n (src/backend/access/brin/brin.c)"
},
{
"msg_contents": "Em sex., 29 de dez. de 2023 às 08:53, Ranier Vilela <[email protected]>\nescreveu:\n\n> Em qui., 28 de dez. de 2023 às 22:16, Tomas Vondra <\n> [email protected]> escreveu:\n>\n>>\n>>\n>> On 12/27/23 12:37, Ranier Vilela wrote:\n>> > Em ter., 26 de dez. de 2023 às 19:07, Tomas Vondra\n>> > <[email protected] <mailto:[email protected]>>\n>> > escreveu:\n>> >\n>> >\n>> >\n>> > On 12/26/23 19:10, Ranier Vilela wrote:\n>> > > Hi,\n>> > >\n>> > > The commit b437571\n>> > <http://b437571714707bc6466abde1a0af5e69aaade09c\n>> > <http://b437571714707bc6466abde1a0af5e69aaade09c>> I\n>> > > think has an oversight.\n>> > > When allocate memory and initialize private spool in function:\n>> > > _brin_leader_participate_as_worker\n>> > >\n>> > > The behavior is the bs_spool (heap and index fields)\n>> > > are left empty.\n>> > >\n>> > > The code affected is:\n>> > > buildstate->bs_spool = (BrinSpool *) palloc0(sizeof(BrinSpool));\n>> > > - buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n>> > > - buildstate->bs_spool->index = buildstate->bs_spool->index;\n>> > > + buildstate->bs_spool->heap = heap;\n>> > > + buildstate->bs_spool->index = index;\n>> > >\n>> > > Is the fix correct?\n>> > >\n>> >\n>> > Thanks for noticing this.\n>> >\n>> > You're welcome.\n>> >\n>> >\n>> > Yes, I believe this is a bug - the assignments\n>> > are certainly wrong, it leaves the fields set to NULL.\n>> >\n>> > I wonder how come this didn't fail during testing. Surely, if the\n>> leader\n>> > participates as a worker, the tuplesort_begin_index_brin shall be\n>> called\n>> > with heap/index being NULL, leading to some failure during the\n>> sort. But\n>> > maybe this means we don't actually need the heap/index fields, it's\n>> just\n>> > a copy of TuplesortIndexArg, but BRIN does not need that because we\n>> sort\n>> > the tuples by blkno, and we don't need the descriptors for that.\n>> >\n>> > Unfortunately I can't test on Windows, since I can't build with meson on\n>> > Windows.\n>> >\n>> >\n>> > In any case, the _brin_parallel_scan_and_build does not actually\n>> need\n>> > the separate heap/index arguments, those are already in the spool.\n>> >\n>> > Yeah, for sure.\n>> >\n>> >\n>> > I'll try to figure out if we want to simplify the tuplesort or\n>> remove\n>> > the arguments from _brin_parallel_scan_and_build.\n>> >\n>>\n>> Here is a patch simplifying the BRIN parallel create code a little bit.\n>> As I suspected, we don't need the heap/index in the spool at all, and we\n>> don't need to pass it to tuplesort_begin_index_brin either - we only\n>> need blkno, and we have that in the datum1 field. This also means we\n>> don't need TuplesortIndexBrinArg.\n>>\n> With Windows 10, msvc 2022, compile end pass ninja test.\n>\n> But, if you allow me, I would like to try another approach to\n> simplification.\n> Instead of increasing the arguments in the call, wouldn't it be better to\n> decrease them\n> and this way all arguments will be passed in the registers instead of on a\n> stack?\n>\n> bs_spool may well contain this data and will probably be useful in the\n> future.\n>\n> I made a v1 version, based on your patch, for your consideration.\n>\nAs I wrote, the new patch version was for consideration.\nIt seems more like a question of style, so it's better to remove it.\n\nAnyway +1 for your original patch.\n\nBest regards,\nRanier Vilela\n\nEm sex., 29 de dez. de 2023 às 08:53, Ranier Vilela <[email protected]> escreveu:Em qui., 28 de dez. 
de 2023 às 22:16, Tomas Vondra <[email protected]> escreveu:\n\nOn 12/27/23 12:37, Ranier Vilela wrote:\n> Em ter., 26 de dez. de 2023 às 19:07, Tomas Vondra\n> <[email protected] <mailto:[email protected]>>\n> escreveu:\n> \n> \n> \n> On 12/26/23 19:10, Ranier Vilela wrote:\n> > Hi,\n> >\n> > The commit b437571\n> <http://b437571714707bc6466abde1a0af5e69aaade09c\n> <http://b437571714707bc6466abde1a0af5e69aaade09c>> I\n> > think has an oversight.\n> > When allocate memory and initialize private spool in function:\n> > _brin_leader_participate_as_worker\n> >\n> > The behavior is the bs_spool (heap and index fields)\n> > are left empty.\n> >\n> > The code affected is:\n> > buildstate->bs_spool = (BrinSpool *) palloc0(sizeof(BrinSpool));\n> > - buildstate->bs_spool->heap = buildstate->bs_spool->heap;\n> > - buildstate->bs_spool->index = buildstate->bs_spool->index;\n> > + buildstate->bs_spool->heap = heap;\n> > + buildstate->bs_spool->index = index;\n> >\n> > Is the fix correct?\n> >\n> \n> Thanks for noticing this.\n> \n> You're welcome.\n> \n> \n> Yes, I believe this is a bug - the assignments\n> are certainly wrong, it leaves the fields set to NULL.\n> \n> I wonder how come this didn't fail during testing. Surely, if the leader\n> participates as a worker, the tuplesort_begin_index_brin shall be called\n> with heap/index being NULL, leading to some failure during the sort. But\n> maybe this means we don't actually need the heap/index fields, it's just\n> a copy of TuplesortIndexArg, but BRIN does not need that because we sort\n> the tuples by blkno, and we don't need the descriptors for that.\n> \n> Unfortunately I can't test on Windows, since I can't build with meson on\n> Windows.\n> \n> \n> In any case, the _brin_parallel_scan_and_build does not actually need\n> the separate heap/index arguments, those are already in the spool.\n> \n> Yeah, for sure.\n> \n> \n> I'll try to figure out if we want to simplify the tuplesort or remove\n> the arguments from _brin_parallel_scan_and_build.\n> \n\nHere is a patch simplifying the BRIN parallel create code a little bit.\nAs I suspected, we don't need the heap/index in the spool at all, and we\ndon't need to pass it to tuplesort_begin_index_brin either - we only\nneed blkno, and we have that in the datum1 field. This also means we\ndon't need TuplesortIndexBrinArg.With Windows 10, msvc 2022, compile end pass ninja test.But, if you allow me, I would like to try another approach to simplification.Instead of increasing the arguments in the call, wouldn't it be better to decrease them and this way all arguments will be passed in the registers instead of on a stack?bs_spool may well contain this data and will probably be useful in the future.I made a v1 version, based on your patch, for your consideration. As I wrote, the new patch version was for consideration.It seems more like a question of style, so it's better to remove it.Anyway +1 for your original patch. Best regards,Ranier Vilela",
"msg_date": "Fri, 29 Dec 2023 14:02:11 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix Brin Private Spool Initialization\n (src/backend/access/brin/brin.c)"
},
{
"msg_contents": "On 12/29/23 18:02, Ranier Vilela wrote:\n>\n> ...\n> \n> As I wrote, the new patch version was for consideration.\n> It seems more like a question of style, so it's better to remove it.\n> \n> Anyway +1 for your original patch.\n> \n\nI've pushed my original patch. Thanks for the report.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 30 Dec 2023 23:19:38 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix Brin Private Spool Initialization\n (src/backend/access/brin/brin.c)"
},
{
"msg_contents": "Em sáb., 30 de dez. de 2023 19:19, Tomas Vondra <\[email protected]> escreveu:\n\n> On 12/29/23 18:02, Ranier Vilela wrote:\n> >\n> > ...\n> >\n> > As I wrote, the new patch version was for consideration.\n> > It seems more like a question of style, so it's better to remove it.\n> >\n> > Anyway +1 for your original patch.\n> >\n>\n> I've pushed my original patch. Thanks for the report.\n>\nThank you.\n\nBest regards,\nRanier Vilela\n\nEm sáb., 30 de dez. de 2023 19:19, Tomas Vondra <[email protected]> escreveu:On 12/29/23 18:02, Ranier Vilela wrote:\n>\n> ...\n> \n> As I wrote, the new patch version was for consideration.\n> It seems more like a question of style, so it's better to remove it.\n> \n> Anyway +1 for your original patch.\n> \n\nI've pushed my original patch. Thanks for the report.Thank you.Best regards,Ranier Vilela",
"msg_date": "Sat, 30 Dec 2023 19:50:05 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix Brin Private Spool Initialization\n (src/backend/access/brin/brin.c)"
}
] |
[
{
"msg_contents": "I investigated the report at [1] about pg_file_settings not reporting\ninvalid values of \"log_connections\". It turns out it's broken for\nPGC_BACKEND and PGC_SU_BACKEND parameters, but not other ones.\nThe cause is a bit of premature optimization in this logic:\n\n * If a PGC_BACKEND or PGC_SU_BACKEND parameter is changed in\n * the config file, we want to accept the new value in the\n * postmaster (whence it will propagate to\n * subsequently-started backends), but ignore it in existing\n * backends. ...\n\nUpon detecting that case, set_config_option just returns -1 immediately\nwithout bothering to validate the value. It should check for invalid\ninput before returning -1, which we can mechanize with a one-line fix:\n\n- return -1;\n+ changeVal = false;\n\nWhile studying this, I also noted that the bit to prevent changes in\nparallel workers seems seriously broken:\n\n if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE)\n ereport(elevel,\n (errcode(ERRCODE_INVALID_TRANSACTION_STATE),\n errmsg(\"cannot set parameters during a parallel operation\")));\n\nThis is evidently assuming that ereport() won't return, which seems\nlike a very dubious assumption given the various values that elevel\ncan have. Maybe it's accidentally true -- I don't recall any\nreports of trouble here -- but it sure looks fragile.\n\nHence, proposed patch attached.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAK30z9T9gaF_isNquccZxi7agXCSjPjMsFXiifmkfu4VpZguxw%40mail.gmail.com",
"msg_date": "Tue, 26 Dec 2023 14:02:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Two small bugs in guc.c"
},
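The fragility Tom points out comes from ereport() only being a non-local exit when the level is ERROR or higher; with a lower elevel it simply returns. A standalone stand-in showing why code after such a report still needs an explicit return (the level values and function names here are illustrative, not guc.c itself, and this is not the attached patch):

    #include <stdio.h>
    #include <stdlib.h>

    #define WARNING 1
    #define ERROR   2

    /* Stand-in for ereport(): only a non-local exit at ERROR or above. */
    static void
    report(int elevel, const char *msg)
    {
        fprintf(stderr, "%s\n", msg);
        if (elevel >= ERROR)
            exit(1);
    }

    /* Stand-in for a check that must not fall through on failure. */
    static int
    set_option(int elevel, int in_parallel_mode)
    {
        if (in_parallel_mode)
        {
            report(elevel, "cannot set parameters during a parallel operation");
            return -1;          /* reached whenever elevel < ERROR */
        }
        return 1;               /* ... validate and apply the value ... */
    }

    int
    main(void)
    {
        /* With WARNING the report returns, so the explicit return -1 matters. */
        printf("%d\n", set_option(WARNING, 1));
        return 0;
    }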
{
"msg_contents": "On Tue Dec 26, 2023 at 1:02 PM CST, Tom Lane wrote:\n> I investigated the report at [1] about pg_file_settings not reporting\n> invalid values of \"log_connections\". It turns out it's broken for\n> PGC_BACKEND and PGC_SU_BACKEND parameters, but not other ones.\n> The cause is a bit of premature optimization in this logic:\n>\n> * If a PGC_BACKEND or PGC_SU_BACKEND parameter is changed in\n> * the config file, we want to accept the new value in the\n> * postmaster (whence it will propagate to\n> * subsequently-started backends), but ignore it in existing\n> * backends. ...\n>\n> Upon detecting that case, set_config_option just returns -1 immediately\n> without bothering to validate the value. It should check for invalid\n> input before returning -1, which we can mechanize with a one-line fix:\n>\n> - return -1;\n> + changeVal = false;\n>\n> While studying this, I also noted that the bit to prevent changes in\n> parallel workers seems seriously broken:\n>\n> if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE)\n> ereport(elevel,\n> (errcode(ERRCODE_INVALID_TRANSACTION_STATE),\n> errmsg(\"cannot set parameters during a parallel operation\")));\n>\n> This is evidently assuming that ereport() won't return, which seems\n> like a very dubious assumption given the various values that elevel\n> can have. Maybe it's accidentally true -- I don't recall any\n> reports of trouble here -- but it sure looks fragile.\n>\n> Hence, proposed patch attached.\n\nLooks good to me.\n\n--\nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 27 Dec 2023 12:26:42 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two small bugs in guc.c"
}
] |
[
{
"msg_contents": "Hello Hackers,\n\nTo improve selectivities of queries I suggest to add support of\nmultidimensional histograms as described in paper [1].\n\nTo query multidimensional histograms efficiently we can use H-trees as\ndescribed in paper [2].\n\nPostgres has limited support of multivariate statistics:\n * MCV only useful for columns with small number of distinct values;\n * functional dependencies only reflect dependencies among columns\n(not column values).\n\n[1] http://www.cs.cmu.edu/~rcarlson/docs/RyanCarlson_databases.pdf\n[2] https://dl.acm.org/doi/pdf/10.1145/50202.50205\n\n-- \nRegards,\nAlexander Cheshev\n\n\n",
"msg_date": "Wed, 27 Dec 2023 01:13:25 +0100",
"msg_from": "Alexander Cheshev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multidimensional Histograms"
},
{
"msg_contents": "Hello Alexander,\n\nWe actually had histograms in the early patches adding multivariate\nstatistics [1]. We however ended up removing histograms and only kept\nthe simpler types, for a couple reasons.\n\nIt might be worth going through the discussion - I'm sure one of the\nreasons was we needed to limit the scope of the patch, and histograms\nwere much more complex and possibly less useful compared to the other\nstatistics types.\n\nAnother reason was that the algorithm described in the two papers you\nreference (1988 paper by DeWitt and the evaluation by Carlson and\nSutherland from ~2010) is simple but has pretty obvious weaknesses. It\nprocesses the columns one by one - first build bucket on column \"a\",\nthen splits each bucket into buckets on \"b\". So it's not symmetrical,\nand it results in lower accuracy compared to an \"ideal\" histogram with\nthe same number of buckets (especially for the dimensions split early).\n\nThis does indeed produce an equi-depth histogram, which seems nice, but\nthe mesh is regular in such a way that any value of the first dimension\nintersects all buckets on the second dimension. So for example with a\nhistogram of 100x100 buckets on [a,b], any value \"a=X\" intersects with\n100 buckets on \"b\", each representing 1/10000 of tuples. But we don't\nknow how the tuples are distributed in the tuple - so we end up using\n0.5 of the bucket as unbiased estimate, but that can easily add-up in\nthe wrong dimension.\n\nHowever, this is not the only way to build an equi-depth histogram,\nthere are ways to make it more symmetrical. Also, it's not clear\nequi-depth histograms are ideal with multiple columns. There are papers\nproposing various other types of histograms (using different criteria to\nbuild buckets optimizing a different metric). The most interesting one\nseems to be V-Optimal histograms - see for example papers [1], [2], [3],\n[4], [5] and [6]. I'm sure there are more. The drawback of course is\nthat it's more expensive to build such histograms.\n\nIIRC the patch tried to do something like V-optimal histograms, but\nusing a greedy algorithm and not the exhaustive stuff described in the\nvarious papers.\n\n[1] https://www.vldb.org/conf/1998/p275.pdf\n[2]\nhttps://cs-people.bu.edu/mathan/reading-groups/papers-classics/histograms.pdf\n[3] https://dl.acm.org/doi/pdf/10.1145/304182.304200\n[4] http://www.cs.columbia.edu/~gravano/Papers/2001/sigmod01b.pdf\n[5] https://cs.gmu.edu/~carlotta/publications/vldb090.pdf\n[6] https://core.ac.uk/download/pdf/82158702.pdf\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 27 Dec 2023 22:19:57 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multidimensional Histograms"
},
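For reference, the metric the V-Optimal family minimizes is the sum of squared deviations from each bucket's mean. A small standalone sketch of that per-bucket error and of comparing two candidate splits (the values and split points below are made up):

    #include <stdio.h>

    /* Sum of squared deviations from the mean for values[lo..hi-1]; assumes hi > lo. */
    static double
    bucket_sse(const double *values, int lo, int hi)
    {
        double  sum = 0.0;
        double  mean;
        double  sse = 0.0;

        for (int i = lo; i < hi; i++)
            sum += values[i];
        mean = sum / (hi - lo);

        for (int i = lo; i < hi; i++)
            sse += (values[i] - mean) * (values[i] - mean);
        return sse;
    }

    int
    main(void)
    {
        double  values[] = {1, 2, 2, 3, 10, 11, 12, 40};

        /* Total SSE of two candidate two-bucket splits of the sorted values. */
        printf("split at 4: %.1f\n",
               bucket_sse(values, 0, 4) + bucket_sse(values, 4, 8));
        printf("split at 6: %.1f\n",
               bucket_sse(values, 0, 6) + bucket_sse(values, 6, 8));
        return 0;
    }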
{
"msg_contents": "On 12/27/23 22:19, Tomas Vondra wrote:\n> Hello Alexander,\n> \n> We actually had histograms in the early patches adding multivariate\n> statistics [1]. We however ended up removing histograms and only kept\n> the simpler types, for a couple reasons.\n> \n> It might be worth going through the discussion - I'm sure one of the\n> reasons was we needed to limit the scope of the patch, and histograms\n> were much more complex and possibly less useful compared to the other\n> statistics types.\n>\n> ...\n\nFWIW I did not intend to reject the idea of adding multi-dimensional\nhistograms, but rather to provide some insight into the history of the\npast attempt, and also point some weaknesses of the algorithm described\nin the 1988 paper. If you choose to work on this, I can do a review etc.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 28 Dec 2023 15:24:59 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multidimensional Histograms"
},
{
"msg_contents": "Hi Tomas,\n\n> Another reason was that the algorithm described in the two papers you\n> reference (1988 paper by DeWitt and the evaluation by Carlson and\n> Sutherland from ~2010) is simple but has pretty obvious weaknesses. It\n> processes the columns one by one - first build bucket on column \"a\",\n> then splits each bucket into buckets on \"b\". So it's not symmetrical,\n> and it results in lower accuracy compared to an \"ideal\" histogram with\n> the same number of buckets (especially for the dimensions split early).\n\nAs stated in [1] Sum Square Error (SSE) is one of the most natural\nerror metrics. Equi-Depth Histogram is not ideal because it doesn't\nminimize SSE. On the other hand V-Optimal Histogram indeed minimizes\nSSE and from this point of view can be considered as an ideal\nsolution.\n\n> This does indeed produce an equi-depth histogram, which seems nice, but\n> the mesh is regular in such a way that any value of the first dimension\n> intersects all buckets on the second dimension. So for example with a\n> histogram of 100x100 buckets on [a,b], any value \"a=X\" intersects with\n> 100 buckets on \"b\", each representing 1/10000 of tuples. But we don't\n> know how the tuples are distributed in the tuple - so we end up using\n> 0.5 of the bucket as unbiased estimate, but that can easily add-up in\n> the wrong dimension.\n\nSuppose that we have a query box a=X and we know how data is\ndistributed in buckets. Lets consider only the buckets which are\nintersected by the query box a=X. As we know how data is distributes\nin buckets we can exclude the buckets which have no tuples which\nintersect the query box a=X.\n\nAs we store buckets with no information about data distribution we\nhave to make reasonable assumptions. If we keep buckets relativly\nsmall then we can assume that buckets have uniform distribution.\n\nWhat I am trying to say is that the problem which you pointed out\ncomes from the fact that we store buckets with no information about\ndata distribution. Even in one dimensional case we have to assume how\ndata is distributed in buckets. By the way Postgres assumes that data\nhas uniform distribution in buckets.\n\n> However, this is not the only way to build an equi-depth histogram,\n> there are ways to make it more symmetrical. Also, it's not clear\n> equi-depth histograms are ideal with multiple columns. There are papers\n> proposing various other types of histograms (using different criteria to\n> build buckets optimizing a different metric). The most interesting one\n> seems to be V-Optimal histograms - see for example papers [1], [2], [3],\n> [4], [5] and [6]. I'm sure there are more. The drawback of course is\n> that it's more expensive to build such histograms.\n\nTomas thank you for shearing with me your ideas regarding V-Optimal\nHistogram. I read through the papers which you gave me and came up\nwith the following conclusion.\n\nThe problem can be formulated like this. We have N tuples in\nM-dimensional space. We need to partition space into buckets\niteratively until SSE is less than E or we reach the limit of buckets\nB.\n\nIn the case of M-dimensional space it seems to me like an NP-hard\nproblem. A greedy heuristic MHIST-2 is proposed in [2]. Preliminary we\nsort N tuples in ascending order. 
Then we iteratively select a bucket\nwhich leads to the largest SSE reduction and split it into two parts.\nWe repeat the process until SSE is less than E or we reach the limit\nof buckets B.\n\nIf we assume that B is significantly less than N then the time\ncomplexity of MHIST-2 can be estimated as O(M*N*B). Suppose that M\nequals 3, B equals 1000 and N equals 300*B then it will take slightly\nover 0.9*10^9 iterations to build a V-Optimal Histogram.\n\nYou can see that we have to keep B as low as possible in order to\nbuild V-Optimal Histogram in feasible time. And here is a paradox.\n From one side we use V-Optimal Histogram in order to minimize SSE but\nfrom the other side we have to keep B as low as possible which\neventually leads to increase in SSE.\n\nOn the other hand time complexity required to build an Equi-Depth\nHistogram doesn't depend on B and can be estimated as O(M*N*logN). SSE\ncan be arbitrarily reduced by increasing B which in turn is only\nlimited by the storage limit. Experimental results show low error\nmetric [3].\n\nIn Equi-Depth Histogram a bucket is represented by two vectors. The\nfirst vector points to the left bottom corner of the bucket and the\nother one point to the right top corner of the bucket. Thus space\ncomplexity of Equi-Depth Histogram can be estimated as\n2*integer_size*M*B. Assume that M equals 3, B equals 1000 and\ninteger_size equals 4 bytes then Equi-Depth Histogram will ocupy 24000\nbytes.\n\nIf a bucket is partially intersected by a query box then we assume\nthat data has uniform distribution inside of the bucket. It is a\nreasonable assumption if B is relativly large.\n\nIn order to efficianly find buckets which intersect a query box we can\nstore Equi-Depth Histogram in R-tree as proposed in [3]. On average it\ntakes O(logB) iterations to find buckets which intersect a query box.\nAs storage requirements are dominated by leaf nodes we can assume that\nit takes slightly more than 2*integer_size*M*B.\n\n> IIRC the patch tried to do something like V-optimal histograms, but\n> using a greedy algorithm and not the exhaustive stuff described in the\n> various papers.\n\nWe should only consider computationally tackable solutions. In one\ndimensional case V-Optimal Histogram is probably a good solution but\nin multi-dimensional case I would only consider Equi-Width or\nEqui-Depth Histograms. As stated in multiple papers Equi-Depth\nHistogram proves to be more accurate than Equi-Width Histogram. By the\nway Postgres uses something like Equi-Width Histogram.\n\n> FWIW I did not intend to reject the idea of adding multi-dimensional\n> histograms, but rather to provide some insight into the history of the\n> past attempt, and also point some weaknesses of the algorithm described\n> in the 1988 paper. If you choose to work on this, I can do a review etc.\n\nThank you very much Tomas. I am new in the community and I definitely\ndidn't expect to have such a warm welcome.\n\nAs I indicated above Equi-Depth Histogram proves to be more accurate\nthan Equi-Width Histogram and both have the same time and space\nrequirements. Postgres uses some sort of Equi-Width Histogram. 
Suppose\nthat:\n * I will create a patch which will replace Equi-Width Histogram with\nEqui-Depth Histogram but only in 1-dimensional case.\n * I will show experimental results which will demonstrate improvement\nof selectivity estimation.\nThen will the path be accepted by the community?\n\nIf the above path is accepted by the community then I will proceed\nfurther with M-dimensional Equi-Depth Histogram...\n\n\n[1] https://www.vldb.org/conf/1998/p275.pdf\n[2] https://www.vldb.org/conf/1997/P486.PDF\n[3] https://dl.acm.org/doi/pdf/10.1145/50202.50205\n\nRegards,\nAlexander Cheshev\n\nOn Thu, 28 Dec 2023 at 15:25, Tomas Vondra\n<[email protected]> wrote:\n>\n> On 12/27/23 22:19, Tomas Vondra wrote:\n> > Hello Alexander,\n> >\n> > We actually had histograms in the early patches adding multivariate\n> > statistics [1]. We however ended up removing histograms and only kept\n> > the simpler types, for a couple reasons.\n> >\n> > It might be worth going through the discussion - I'm sure one of the\n> > reasons was we needed to limit the scope of the patch, and histograms\n> > were much more complex and possibly less useful compared to the other\n> > statistics types.\n> >\n> > ...\n>\n> FWIW I did not intend to reject the idea of adding multi-dimensional\n> histograms, but rather to provide some insight into the history of the\n> past attempt, and also point some weaknesses of the algorithm described\n> in the 1988 paper. If you choose to work on this, I can do a review etc.\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 6 Jan 2024 01:00:04 +0100",
"msg_from": "Alexander Cheshev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multidimensional Histograms"
},
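A minimal Python sketch of the greedy splitting loop described in the message above may make the extra factor of B in the O(M*N*B) estimate easier to see. This is only an illustration of the general "split the bucket with the largest SSE reduction" idea, not the exact MHIST-2 algorithm from the cited paper; every name and the median-split rule are assumptions made for the example.

def sse(points):
    # sum of squared deviations from the per-dimension mean
    if not points:
        return 0.0
    total = 0.0
    for d in range(len(points[0])):
        vals = [p[d] for p in points]
        mean = sum(vals) / len(vals)
        total += sum((v - mean) ** 2 for v in vals)
    return total

def best_split(points):
    # try a median split on every dimension, keep the largest SSE reduction
    base = sse(points)
    best = None
    for d in range(len(points[0])):
        ordered = sorted(points, key=lambda p: p[d])
        mid = len(ordered) // 2
        left, right = ordered[:mid], ordered[mid:]
        gain = base - (sse(left) + sse(right))
        if best is None or gain > best[0]:
            best = (gain, left, right)
    return best

def greedy_histogram(points, max_buckets, sse_target):
    buckets = [points]
    while len(buckets) < max_buckets and sum(sse(b) for b in buckets) > sse_target:
        # pick the bucket whose split gives the largest SSE reduction
        candidates = [(best_split(b), i) for i, b in enumerate(buckets) if len(b) > 1]
        if not candidates:
            break
        (gain, left, right), i = max(candidates, key=lambda x: x[0][0])
        buckets[i:i + 1] = [left, right]
    return buckets

print(len(greedy_histogram([(1, 2), (2, 3), (10, 1), (11, 2), (50, 50)], 3, 0.0)))

Each pass of the while loop re-examines every candidate bucket and every dimension, which is where the dependence on the bucket budget B comes from relative to a one-shot equi-depth build.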
{
"msg_contents": "Hi Tomas,\n\nI am sorry I didn't look into the code carefully. Indeed Postgres uses\nEqui-Depth Histogram:\n\ndelta = (nvals - 1) / (num_hist - 1);\n\n\nRegards,\nAlexander Cheshev\n\nOn Sat, 6 Jan 2024 at 01:00, Alexander Cheshev <[email protected]> wrote:\n>\n> Hi Tomas,\n>\n> > Another reason was that the algorithm described in the two papers you\n> > reference (1988 paper by DeWitt and the evaluation by Carlson and\n> > Sutherland from ~2010) is simple but has pretty obvious weaknesses. It\n> > processes the columns one by one - first build bucket on column \"a\",\n> > then splits each bucket into buckets on \"b\". So it's not symmetrical,\n> > and it results in lower accuracy compared to an \"ideal\" histogram with\n> > the same number of buckets (especially for the dimensions split early).\n>\n> As stated in [1] Sum Square Error (SSE) is one of the most natural\n> error metrics. Equi-Depth Histogram is not ideal because it doesn't\n> minimize SSE. On the other hand V-Optimal Histogram indeed minimizes\n> SSE and from this point of view can be considered as an ideal\n> solution.\n>\n> > This does indeed produce an equi-depth histogram, which seems nice, but\n> > the mesh is regular in such a way that any value of the first dimension\n> > intersects all buckets on the second dimension. So for example with a\n> > histogram of 100x100 buckets on [a,b], any value \"a=X\" intersects with\n> > 100 buckets on \"b\", each representing 1/10000 of tuples. But we don't\n> > know how the tuples are distributed in the tuple - so we end up using\n> > 0.5 of the bucket as unbiased estimate, but that can easily add-up in\n> > the wrong dimension.\n>\n> Suppose that we have a query box a=X and we know how data is\n> distributed in buckets. Lets consider only the buckets which are\n> intersected by the query box a=X. As we know how data is distributes\n> in buckets we can exclude the buckets which have no tuples which\n> intersect the query box a=X.\n>\n> As we store buckets with no information about data distribution we\n> have to make reasonable assumptions. If we keep buckets relativly\n> small then we can assume that buckets have uniform distribution.\n>\n> What I am trying to say is that the problem which you pointed out\n> comes from the fact that we store buckets with no information about\n> data distribution. Even in one dimensional case we have to assume how\n> data is distributed in buckets. By the way Postgres assumes that data\n> has uniform distribution in buckets.\n>\n> > However, this is not the only way to build an equi-depth histogram,\n> > there are ways to make it more symmetrical. Also, it's not clear\n> > equi-depth histograms are ideal with multiple columns. There are papers\n> > proposing various other types of histograms (using different criteria to\n> > build buckets optimizing a different metric). The most interesting one\n> > seems to be V-Optimal histograms - see for example papers [1], [2], [3],\n> > [4], [5] and [6]. I'm sure there are more. The drawback of course is\n> > that it's more expensive to build such histograms.\n>\n> Tomas thank you for shearing with me your ideas regarding V-Optimal\n> Histogram. I read through the papers which you gave me and came up\n> with the following conclusion.\n>\n> The problem can be formulated like this. We have N tuples in\n> M-dimensional space. 
We need to partition space into buckets\n> iteratively until SSE is less than E or we reach the limit of buckets\n> B.\n>\n> In the case of M-dimensional space it seems to me like an NP-hard\n> problem. A greedy heuristic MHIST-2 is proposed in [2]. Preliminary we\n> sort N tuples in ascending order. Then we iteratively select a bucket\n> which leads to the largest SSE reduction and split it into two parts.\n> We repeat the process until SSE is less than E or we reach the limit\n> of buckets B.\n>\n> If we assume that B is significantly less than N then the time\n> complexity of MHIST-2 can be estimated as O(M*N*B). Suppose that M\n> equals 3, B equals 1000 and N equals 300*B then it will take slightly\n> over 0.9*10^9 iterations to build a V-Optimal Histogram.\n>\n> You can see that we have to keep B as low as possible in order to\n> build V-Optimal Histogram in feasible time. And here is a paradox.\n> From one side we use V-Optimal Histogram in order to minimize SSE but\n> from the other side we have to keep B as low as possible which\n> eventually leads to increase in SSE.\n>\n> On the other hand time complexity required to build an Equi-Depth\n> Histogram doesn't depend on B and can be estimated as O(M*N*logN). SSE\n> can be arbitrarily reduced by increasing B which in turn is only\n> limited by the storage limit. Experimental results show low error\n> metric [3].\n>\n> In Equi-Depth Histogram a bucket is represented by two vectors. The\n> first vector points to the left bottom corner of the bucket and the\n> other one point to the right top corner of the bucket. Thus space\n> complexity of Equi-Depth Histogram can be estimated as\n> 2*integer_size*M*B. Assume that M equals 3, B equals 1000 and\n> integer_size equals 4 bytes then Equi-Depth Histogram will ocupy 24000\n> bytes.\n>\n> If a bucket is partially intersected by a query box then we assume\n> that data has uniform distribution inside of the bucket. It is a\n> reasonable assumption if B is relativly large.\n>\n> In order to efficianly find buckets which intersect a query box we can\n> store Equi-Depth Histogram in R-tree as proposed in [3]. On average it\n> takes O(logB) iterations to find buckets which intersect a query box.\n> As storage requirements are dominated by leaf nodes we can assume that\n> it takes slightly more than 2*integer_size*M*B.\n>\n> > IIRC the patch tried to do something like V-optimal histograms, but\n> > using a greedy algorithm and not the exhaustive stuff described in the\n> > various papers.\n>\n> We should only consider computationally tackable solutions. In one\n> dimensional case V-Optimal Histogram is probably a good solution but\n> in multi-dimensional case I would only consider Equi-Width or\n> Equi-Depth Histograms. As stated in multiple papers Equi-Depth\n> Histogram proves to be more accurate than Equi-Width Histogram. By the\n> way Postgres uses something like Equi-Width Histogram.\n>\n> > FWIW I did not intend to reject the idea of adding multi-dimensional\n> > histograms, but rather to provide some insight into the history of the\n> > past attempt, and also point some weaknesses of the algorithm described\n> > in the 1988 paper. If you choose to work on this, I can do a review etc.\n>\n> Thank you very much Tomas. I am new in the community and I definitely\n> didn't expect to have such a warm welcome.\n>\n> As I indicated above Equi-Depth Histogram proves to be more accurate\n> than Equi-Width Histogram and both have the same time and space\n> requirements. 
Postgres uses some sort of Equi-Width Histogram. Suppose\n> that:\n> * I will create a patch which will replace Equi-Width Histogram with\n> Equi-Depth Histogram but only in 1-dimensional case.\n> * I will show experimental results which will demonstrate improvement\n> of selectivity estimation.\n> Then will the path be accepted by the community?\n>\n> If the above path is accepted by the community then I will proceed\n> further with M-dimensional Equi-Depth Histogram...\n>\n>\n> [1] https://www.vldb.org/conf/1998/p275.pdf\n> [2] https://www.vldb.org/conf/1997/P486.PDF\n> [3] https://dl.acm.org/doi/pdf/10.1145/50202.50205\n>\n> Regards,\n> Alexander Cheshev\n>\n> On Thu, 28 Dec 2023 at 15:25, Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > On 12/27/23 22:19, Tomas Vondra wrote:\n> > > Hello Alexander,\n> > >\n> > > We actually had histograms in the early patches adding multivariate\n> > > statistics [1]. We however ended up removing histograms and only kept\n> > > the simpler types, for a couple reasons.\n> > >\n> > > It might be worth going through the discussion - I'm sure one of the\n> > > reasons was we needed to limit the scope of the patch, and histograms\n> > > were much more complex and possibly less useful compared to the other\n> > > statistics types.\n> > >\n> > > ...\n> >\n> > FWIW I did not intend to reject the idea of adding multi-dimensional\n> > histograms, but rather to provide some insight into the history of the\n> > past attempt, and also point some weaknesses of the algorithm described\n> > in the 1988 paper. If you choose to work on this, I can do a review etc.\n> >\n> > regards\n> >\n> > --\n> > Tomas Vondra\n> > EnterpriseDB: http://www.enterprisedb.com\n> > The Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 6 Jan 2024 10:08:23 +0100",
"msg_from": "Alexander Cheshev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multidimensional Histograms"
},
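The quoted delta = (nvals - 1) / (num_hist - 1) indexing is easy to model outside the server. The sketch below (Python, illustrative only, not the actual analyze.c code) picks equi-depth boundary values from a sorted sample with the same integer-division formula, which is why denser regions end up with narrower buckets.

def equi_depth_bounds(sample, num_hist):
    vals = sorted(sample)
    nvals = len(vals)
    # num_hist boundary values delimit num_hist - 1 buckets, each holding
    # roughly the same number of sampled rows
    return [vals[(i * (nvals - 1)) // (num_hist - 1)] for i in range(num_hist)]

# a skewed sample gets narrow buckets where values are dense
print(equi_depth_bounds([1] * 50 + list(range(2, 52)), 11))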
{
"msg_contents": "\n\nOn 1/6/24 01:00, Alexander Cheshev wrote:\n> Hi Tomas,\n> \n>> Another reason was that the algorithm described in the two papers you\n>> reference (1988 paper by DeWitt and the evaluation by Carlson and\n>> Sutherland from ~2010) is simple but has pretty obvious weaknesses. It\n>> processes the columns one by one - first build bucket on column \"a\",\n>> then splits each bucket into buckets on \"b\". So it's not symmetrical,\n>> and it results in lower accuracy compared to an \"ideal\" histogram with\n>> the same number of buckets (especially for the dimensions split early).\n> \n> As stated in [1] Sum Square Error (SSE) is one of the most natural\n> error metrics. Equi-Depth Histogram is not ideal because it doesn't\n> minimize SSE. On the other hand V-Optimal Histogram indeed minimizes\n> SSE and from this point of view can be considered as an ideal\n> solution.\n> \n>> This does indeed produce an equi-depth histogram, which seems nice, but\n>> the mesh is regular in such a way that any value of the first dimension\n>> intersects all buckets on the second dimension. So for example with a\n>> histogram of 100x100 buckets on [a,b], any value \"a=X\" intersects with\n>> 100 buckets on \"b\", each representing 1/10000 of tuples. But we don't\n>> know how the tuples are distributed in the tuple - so we end up using\n>> 0.5 of the bucket as unbiased estimate, but that can easily add-up in\n>> the wrong dimension.\n> \n> Suppose that we have a query box a=X and we know how data is\n> distributed in buckets. Lets consider only the buckets which are\n> intersected by the query box a=X. As we know how data is distributes\n> in buckets we can exclude the buckets which have no tuples which\n> intersect the query box a=X.\n> \n> As we store buckets with no information about data distribution we\n> have to make reasonable assumptions. If we keep buckets relativly\n> small then we can assume that buckets have uniform distribution.\n> \n> What I am trying to say is that the problem which you pointed out\n> comes from the fact that we store buckets with no information about\n> data distribution. Even in one dimensional case we have to assume how\n> data is distributed in buckets. By the way Postgres assumes that data\n> has uniform distribution in buckets.\n> \n\nIt's not just what Postgres assumes, the assumption bucket uniformity is\nsomewhat inherent to the whole concept of a histogram. Yes, maybe we\ncould keep some \"distribution\" info about each bucket, but then maybe we\ncould simply build histogram with more bins occupying the same space?\n\nThe thing I was proposing is that it should be possible to build\nhistograms with bins adapted to density in the given region. With\nsmaller buckets in areas with high density. So that queries intersect\nwith fewer buckets in low-density parts of the histogram.\n\n>> However, this is not the only way to build an equi-depth histogram,\n>> there are ways to make it more symmetrical. Also, it's not clear\n>> equi-depth histograms are ideal with multiple columns. There are papers\n>> proposing various other types of histograms (using different criteria to\n>> build buckets optimizing a different metric). The most interesting one\n>> seems to be V-Optimal histograms - see for example papers [1], [2], [3],\n>> [4], [5] and [6]. I'm sure there are more. The drawback of course is\n>> that it's more expensive to build such histograms.\n> \n> Tomas thank you for shearing with me your ideas regarding V-Optimal\n> Histogram. 
I read through the papers which you gave me and came up\n> with the following conclusion.\n> \n> The problem can be formulated like this. We have N tuples in\n> M-dimensional space. We need to partition space into buckets\n> iteratively until SSE is less than E or we reach the limit of buckets\n> B.\n> \n\nYes. Although v-optimal histograms minimize variance of frequencies. Not\nsure if that's what you mean by SSE.\n\n> In the case of M-dimensional space it seems to me like an NP-hard\n> problem. A greedy heuristic MHIST-2 is proposed in [2]. Preliminary we\n> sort N tuples in ascending order. Then we iteratively select a bucket\n> which leads to the largest SSE reduction and split it into two parts.\n> We repeat the process until SSE is less than E or we reach the limit\n> of buckets B.\n> \n\nI don't recall all the details of the MHIST-2 algorithm, but this sounds\nabout right. Yes, building the optimal histogram would be NP-hard, so\nwe'd have to use some approximate / greedy algorithm.\n\n> If we assume that B is significantly less than N then the time\n> complexity of MHIST-2 can be estimated as O(M*N*B). Suppose that M\n> equals 3, B equals 1000 and N equals 300*B then it will take slightly\n> over 0.9*10^9 iterations to build a V-Optimal Histogram.\n> \n> You can see that we have to keep B as low as possible in order to\n> build V-Optimal Histogram in feasible time. And here is a paradox.\n> From one side we use V-Optimal Histogram in order to minimize SSE but\n> from the other side we have to keep B as low as possible which\n> eventually leads to increase in SSE.\n> \n\nI don't recall the details of the MHIST-2 scheme, but it's true\ncalculating \"perfect\" V-optimal histogram would be quadratic O(N^2*B).\n\nBut that's exactly why greedy/approximate algorithms exist. Yes, it's\nnot going to produce the optimal V-optimal histogram, but so what?\n\n> On the other hand time complexity required to build an Equi-Depth\n> Histogram doesn't depend on B and can be estimated as O(M*N*logN). SSE\n> can be arbitrarily reduced by increasing B which in turn is only\n> limited by the storage limit. Experimental results show low error\n> metric [3].\n> \n\nAnd how does this compare to the approximate/greedy algorithms, both in\nterms of construction time and accuracy?\n\nThe [3] paper does not compare those, ofc, and I'm somewhat skeptical\nabout the results in that paper. Not that they'd be wrong, but AFAICS\nthe assumptions are quite simplistic and well-suited for that particular\ntype of histogram.\n\nIt's far from clear how would it perform for less \"smooth\" data,\nnon-numeric data, etc.\n\n> In Equi-Depth Histogram a bucket is represented by two vectors. The\n> first vector points to the left bottom corner of the bucket and the\n> other one point to the right top corner of the bucket. Thus space\n> complexity of Equi-Depth Histogram can be estimated as\n> 2*integer_size*M*B. Assume that M equals 3, B equals 1000 and\n> integer_size equals 4 bytes then Equi-Depth Histogram will ocupy 24000\n> bytes.\n> \n> If a bucket is partially intersected by a query box then we assume\n> that data has uniform distribution inside of the bucket. It is a\n> reasonable assumption if B is relativly large.\n> \n> In order to efficianly find buckets which intersect a query box we can\n> store Equi-Depth Histogram in R-tree as proposed in [3]. 
On average it\n> takes O(logB) iterations to find buckets which intersect a query box.\n> As storage requirements are dominated by leaf nodes we can assume that\n> it takes slightly more than 2*integer_size*M*B.\n> \n\nBut all of this can apply to histograms in general, no? It's not somehow\nspecial to equi-depth histograms, a v-optimal histogram could be stored\nas an r-tree etc.\n\n>> IIRC the patch tried to do something like V-optimal histograms, but\n>> using a greedy algorithm and not the exhaustive stuff described in the\n>> various papers.\n> \n> We should only consider computationally tackable solutions. In one\n> dimensional case V-Optimal Histogram is probably a good solution but\n> in multi-dimensional case I would only consider Equi-Width or\n> Equi-Depth Histograms. As stated in multiple papers Equi-Depth\n> Histogram proves to be more accurate than Equi-Width Histogram. By the\n> way Postgres uses something like Equi-Width Histogram.\n> \n\nSure, but AFAIK the greedy/approximate algorithms are not intractable.\n\nAnd at some point it becomes a tradeoff between accuracy of estimates\nand resources to build the histogram. Papers from ~1988 are great, but\nmaybe not sufficient to make such decisions now.\n\n>> FWIW I did not intend to reject the idea of adding multi-dimensional\n>> histograms, but rather to provide some insight into the history of the\n>> past attempt, and also point some weaknesses of the algorithm described\n>> in the 1988 paper. If you choose to work on this, I can do a review etc.\n> \n> Thank you very much Tomas. I am new in the community and I definitely\n> didn't expect to have such a warm welcome.\n> \n> As I indicated above Equi-Depth Histogram proves to be more accurate\n> than Equi-Width Histogram and both have the same time and space\n> requirements. Postgres uses some sort of Equi-Width Histogram. Suppose\n> that:\n> * I will create a patch which will replace Equi-Width Histogram with\n> Equi-Depth Histogram but only in 1-dimensional case.\n> * I will show experimental results which will demonstrate improvement\n> of selectivity estimation.\n> Then will the path be accepted by the community?\n> \n> If the above path is accepted by the community then I will proceed\n> further with M-dimensional Equi-Depth Histogram...\n> \n\nI find it very unlikely we want (or even need) to significantly rework\nthe 1-D histograms we already have. AFAICS the current histograms work\npretty well, and if better accuracy is needed, it's easy/efficient to\njust increase the statistics target. I'm sure there are various\nestimation issues, but I'm not aware of any that'd be solved simply by\nusing a different 1-D histogram.\n\nI'm not going to reject that outright - but I think the bar you'd need\nto justify such change is damn high. Pretty much everyone is using the\ncurrent histograms, which makes it a very sensitive area. You'll need to\nshow that it helps in practice, without hurting causing harm ...\n\nIt's an interesting are for experiments, no doubt about it. And if you\nchoose to explore it, that's fine. But it's better to be aware it may\nnot end with a commit.\n\n\nFor the multi-dimensional case, I propose we first try to experiment\nwith the various algorithms, and figure out what works etc. Maybe\nimplementing them in python or something would be easier than C.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 Jan 2024 00:54:51 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multidimensional Histograms"
},
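For the estimation step both sides are discussing, a small sketch of the per-bucket uniformity assumption may help: each bucket stores its two corners and its share of rows, and a partially overlapped bucket contributes in proportion to the overlapped volume. This is an illustrative Python model, not existing Postgres code; the bucket list and query box are invented.

def overlap_fraction(lo, hi, qlo, qhi):
    # fraction of the bucket's volume covered by the axis-aligned query box
    frac = 1.0
    for l, h, ql, qh in zip(lo, hi, qlo, qhi):
        inter = min(h, qh) - max(l, ql)
        if inter <= 0:
            return 0.0
        width = h - l
        frac *= inter / width if width > 0 else 1.0
    return frac

def estimate_selectivity(buckets, qlo, qhi):
    # buckets: list of (lower_corner, upper_corner, fraction_of_rows)
    return sum(f * overlap_fraction(lo, hi, qlo, qhi) for lo, hi, f in buckets)

buckets = [((0, 0), (5, 5), 0.5), ((5, 0), (10, 5), 0.25), ((0, 5), (10, 10), 0.25)]
print(estimate_selectivity(buckets, (4, 4), (6, 6)))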
{
"msg_contents": "On 7/1/2024 06:54, Tomas Vondra wrote:\n> It's an interesting are for experiments, no doubt about it. And if you\n> choose to explore it, that's fine. But it's better to be aware it may\n> not end with a commit.\n> For the multi-dimensional case, I propose we first try to experiment\n> with the various algorithms, and figure out what works etc. Maybe\n> implementing them in python or something would be easier than C.\n\nCuriously, trying to utilize extended statistics for some problematic \ncases, I am experimenting with auto-generating such statistics by \ndefinition of indexes [1]. Doing that, I wanted to add some hand-made \nstatistics like a multidimensional histogram or just a histogram which \ncould help to perform estimation over a set of columns/expressions.\nI realized that current hooks get_relation_stats_hook and \nget_index_stats_hook are insufficient if I want to perform an estimation \nover a set of ANDed quals on different columns.\nIn your opinion, is it possible to add a hook into the extended \nstatistics to allow for an extension to propose alternative estimation?\n\n[1] https://github.com/danolivo/pg_index_stats\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Sun, 7 Jan 2024 17:22:59 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multidimensional Histograms"
},
{
"msg_contents": "\n\nOn 1/7/24 11:22, Andrei Lepikhov wrote:\n> On 7/1/2024 06:54, Tomas Vondra wrote:\n>> It's an interesting are for experiments, no doubt about it. And if you\n>> choose to explore it, that's fine. But it's better to be aware it may\n>> not end with a commit.\n>> For the multi-dimensional case, I propose we first try to experiment\n>> with the various algorithms, and figure out what works etc. Maybe\n>> implementing them in python or something would be easier than C.\n> \n> Curiously, trying to utilize extended statistics for some problematic\n> cases, I am experimenting with auto-generating such statistics by\n> definition of indexes [1]. Doing that, I wanted to add some hand-made\n> statistics like a multidimensional histogram or just a histogram which\n> could help to perform estimation over a set of columns/expressions.\n> I realized that current hooks get_relation_stats_hook and\n> get_index_stats_hook are insufficient if I want to perform an estimation\n> over a set of ANDed quals on different columns.\n> In your opinion, is it possible to add a hook into the extended\n> statistics to allow for an extension to propose alternative estimation?\n> \n> [1] https://github.com/danolivo/pg_index_stats\n> \n\nNo idea, I haven't thought about that very much. Presumably the existing\nhooks are insufficient because they're per-attnum? I guess it would make\nsense to have a hook for all the attnums of the relation, but I'm not\nsure it'd be enough to introduce a new extended statistics kind ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 Jan 2024 11:51:56 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multidimensional Histograms"
},
{
"msg_contents": "On 7/1/2024 17:51, Tomas Vondra wrote:\n> On 1/7/24 11:22, Andrei Lepikhov wrote:\n>> On 7/1/2024 06:54, Tomas Vondra wrote:\n>>> It's an interesting are for experiments, no doubt about it. And if you\n>>> choose to explore it, that's fine. But it's better to be aware it may\n>>> not end with a commit.\n>>> For the multi-dimensional case, I propose we first try to experiment\n>>> with the various algorithms, and figure out what works etc. Maybe\n>>> implementing them in python or something would be easier than C.\n>>\n>> Curiously, trying to utilize extended statistics for some problematic\n>> cases, I am experimenting with auto-generating such statistics by\n>> definition of indexes [1]. Doing that, I wanted to add some hand-made\n>> statistics like a multidimensional histogram or just a histogram which\n>> could help to perform estimation over a set of columns/expressions.\n>> I realized that current hooks get_relation_stats_hook and\n>> get_index_stats_hook are insufficient if I want to perform an estimation\n>> over a set of ANDed quals on different columns.\n>> In your opinion, is it possible to add a hook into the extended\n>> statistics to allow for an extension to propose alternative estimation?\n>>\n>> [1] https://github.com/danolivo/pg_index_stats\n>>\n> \n> No idea, I haven't thought about that very much. Presumably the existing\n> hooks are insufficient because they're per-attnum? I guess it would make\n> sense to have a hook for all the attnums of the relation, but I'm not\n> sure it'd be enough to introduce a new extended statistics kind ...\n\nI got stuck on the same problem Alexander mentioned: we usually have \nlarge tables with many uniformly distributed values. In this case, MCV \ndoesn't help a lot.\nUsually, I face problems scanning a table with a filter containing 3-6 \nANDed quals. Here, Postgres multiplies selectivities and ends up with a \nless than 1 tuple selectivity. But such scans, in reality, mostly have \nsome physical sense and return a bunch of tuples. It looks like the set \nof columns representing some value of composite type.\nSometimes extended statistics on dependency helps well, but it expensive \nfor multiple columns. And sometimes I see that even a trivial histogram \non a ROW(x1,x2,...) could predict a much more adequate value (kind of \nconservative upper estimation) for a clause like \"x1=N1 AND x2=N2 AND \n...\" if somewhere in extension we'd transform it to ROW(x1,x2,...) = \nROW(N1, N2,...).\nFor such cases we don't have an in-core solution, and introducing a hook \non clause list estimation (paired with maybe a hook on statistics \ngeneration) could help invent an extension that would deal with that \nproblem. Also, it would open a way for experiments with different types \nof extended statistics ...\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 8 Jan 2024 00:26:43 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multidimensional Histograms"
},
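A toy Python experiment along the lines of the ROW(x1,x2,...) = ROW(N1,N2,...) idea above shows the gap being described: with strongly correlated columns, multiplying per-column selectivities underestimates badly, while any statistic built over the composite row sees the true frequency. The data and numbers are synthetic and purely illustrative.

from collections import Counter
import random

random.seed(1)
rows = []
for _ in range(100_000):
    a = random.randrange(100)
    rows.append((a, a % 10, a // 10))   # columns 2 and 3 are functions of column 1

def col_selectivity(rows, col, val):
    return sum(1 for r in rows if r[col] == val) / len(rows)

target = (42, 2, 4)
independent = 1.0
for col, val in enumerate(target):
    independent *= col_selectivity(rows, col, val)

composite = Counter(rows)[target] / len(rows)

print(f"independence assumption: {independent:.6f}")   # ~0.0001
print(f"composite-row statistic: {composite:.6f}")     # ~0.01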
{
"msg_contents": "\n\nOn 1/7/24 18:26, Andrei Lepikhov wrote:\n> On 7/1/2024 17:51, Tomas Vondra wrote:\n>> On 1/7/24 11:22, Andrei Lepikhov wrote:\n>>> On 7/1/2024 06:54, Tomas Vondra wrote:\n>>>> It's an interesting are for experiments, no doubt about it. And if you\n>>>> choose to explore it, that's fine. But it's better to be aware it may\n>>>> not end with a commit.\n>>>> For the multi-dimensional case, I propose we first try to experiment\n>>>> with the various algorithms, and figure out what works etc. Maybe\n>>>> implementing them in python or something would be easier than C.\n>>>\n>>> Curiously, trying to utilize extended statistics for some problematic\n>>> cases, I am experimenting with auto-generating such statistics by\n>>> definition of indexes [1]. Doing that, I wanted to add some hand-made\n>>> statistics like a multidimensional histogram or just a histogram which\n>>> could help to perform estimation over a set of columns/expressions.\n>>> I realized that current hooks get_relation_stats_hook and\n>>> get_index_stats_hook are insufficient if I want to perform an estimation\n>>> over a set of ANDed quals on different columns.\n>>> In your opinion, is it possible to add a hook into the extended\n>>> statistics to allow for an extension to propose alternative estimation?\n>>>\n>>> [1] https://github.com/danolivo/pg_index_stats\n>>>\n>>\n>> No idea, I haven't thought about that very much. Presumably the existing\n>> hooks are insufficient because they're per-attnum? I guess it would make\n>> sense to have a hook for all the attnums of the relation, but I'm not\n>> sure it'd be enough to introduce a new extended statistics kind ...\n> \n> I got stuck on the same problem Alexander mentioned: we usually have\n> large tables with many uniformly distributed values. In this case, MCV\n> doesn't help a lot.\n>\n> Usually, I face problems scanning a table with a filter containing 3-6\n> ANDed quals. Here, Postgres multiplies selectivities and ends up with a\n> less than 1 tuple selectivity. But such scans, in reality, mostly have\n> some physical sense and return a bunch of tuples. It looks like the set\n> of columns representing some value of composite type.\n\nUnderstood. That's a fairly common scenario, and it can easily end up\nwith rather terrible plan due to the underestimate.\n\n> Sometimes extended statistics on dependency helps well, but it expensive\n> for multiple columns.\n\nExpensive in what sense? Also, how would a multidimensional histogram be\nany cheaper?\n\n> And sometimes I see that even a trivial histogram\n> on a ROW(x1,x2,...) could predict a much more adequate value (kind of\n> conservative upper estimation) for a clause like \"x1=N1 AND x2=N2 AND\n> ...\" if somewhere in extension we'd transform it to ROW(x1,x2,...) =\n> ROW(N1, N2,...).\n\nAre you guessing the histogram would help, or do you know it'd help? I\nhave no problem believing that for range queries, but I think it's far\nless clear for simple equalities. I'm not sure a histograms would work\nfor that ...\n\nMaybe it'd be possible to track more stuff for each bucket, not just the\nfrequency, but also ndistinct for combinations of columns, and then use\nthat to do better equality estimates. Or maybe we could see how the\n\"expected\" and \"actual\" bucket frequency compare, and use that to\ncorrect the equality estimate? 
Not sure.\n\nBut perhaps you have some data to demonstrate it'd actually help?\n\n> For such cases we don't have an in-core solution, and introducing a hook\n> on clause list estimation (paired with maybe a hook on statistics\n> generation) could help invent an extension that would deal with that\n> problem. Also, it would open a way for experiments with different types\n> of extended statistics ...\n> \n\nI really don't know how to do that, or what would it take. AFAICS the\nstatistics system is not very extensible from external code. Even if we\ncould add new types of attribute/extended stats, I'm not sure the code\ncalculating the estimates could use that.\n\nThat doesn't mean it's impossible or not worth exploring, but I don't\nhave any thoughts on this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 Jan 2024 19:36:17 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multidimensional Histograms"
},
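The "track ndistinct per bucket" idea could look roughly like this sketch: each bucket stores its row fraction and the number of distinct column combinations observed inside it, and an equality clause falling into the bucket is estimated as fraction / ndistinct. All structures here are invented for the illustration; whether this actually beats the current estimates in practice is exactly the open question above.

def build_bucket_stats(rows, bucket_of):
    # bucket_of(row) -> bucket id; returns {bucket: (fraction, ndistinct)}
    per_bucket = {}
    for row in rows:
        per_bucket.setdefault(bucket_of(row), []).append(row)
    n = len(rows)
    return {b: (len(rs) / n, len(set(rs))) for b, rs in per_bucket.items()}

def equality_estimate(stats, bucket_of, value):
    # assume the distinct combinations inside a bucket are equally frequent
    frac, ndistinct = stats.get(bucket_of(value), (0.0, 1))
    return frac / max(ndistinct, 1)

rows = [(x % 7, x % 3) for x in range(1000)]
bucket_of = lambda r: (r[0] // 4, r[1] // 2)        # a crude 2-D gridding
stats = build_bucket_stats(rows, bucket_of)
print(equality_estimate(stats, bucket_of, (5, 2)))  # compare with the true 1/21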
{
"msg_contents": "Hi Tomas,\n\n> The thing I was proposing is that it should be possible to build\n> histograms with bins adapted to density in the given region. With\n> smaller buckets in areas with high density. So that queries intersect\n> with fewer buckets in low-density parts of the histogram.\n\nThis is how Equi-Depth Histogram works. Postgres has maller buckets in\nareas with high density:\n\nvalues[(i * (nvals - 1)) / (num_hist - 1)]\n\n> I don't recall the details of the MHIST-2 scheme, but it's true\n> calculating \"perfect\" V-optimal histogram would be quadratic O(N^2*B).\n\nIn M-dimensional space \"perfect\" V-Optimal Histogram is an NP-hard\nproblem. In other words it is not possible to build it in polynomial\ntime. How did you come up with the estimate?!\n\n> But that's exactly why greedy/approximate algorithms exist. Yes, it's\n> not going to produce the optimal V-optimal histogram, but so what?\n\nGreedy/approximate algorithm has time complexity O(M*N*B), where M\nequals the number of dimensions. MHIST-2 is a greedy/approximate\nalgorithm.\n\n> And how does this compare to the approximate/greedy algorithms, both in\n> terms of construction time and accuracy?\n\nTime complexity of Equi-Depth Histogram has no dependence on B.\n\n> But all of this can apply to histograms in general, no? It's not somehow\n> special to equi-depth histograms, a v-optimal histogram could be stored\n> as an r-tree etc.\n\nI agree.\n\nRegards,\nAlexander Cheshev\n\nOn Sun, 7 Jan 2024 at 00:54, Tomas Vondra <[email protected]> wrote:\n>\n>\n>\n> On 1/6/24 01:00, Alexander Cheshev wrote:\n> > Hi Tomas,\n> >\n> >> Another reason was that the algorithm described in the two papers you\n> >> reference (1988 paper by DeWitt and the evaluation by Carlson and\n> >> Sutherland from ~2010) is simple but has pretty obvious weaknesses. It\n> >> processes the columns one by one - first build bucket on column \"a\",\n> >> then splits each bucket into buckets on \"b\". So it's not symmetrical,\n> >> and it results in lower accuracy compared to an \"ideal\" histogram with\n> >> the same number of buckets (especially for the dimensions split early).\n> >\n> > As stated in [1] Sum Square Error (SSE) is one of the most natural\n> > error metrics. Equi-Depth Histogram is not ideal because it doesn't\n> > minimize SSE. On the other hand V-Optimal Histogram indeed minimizes\n> > SSE and from this point of view can be considered as an ideal\n> > solution.\n> >\n> >> This does indeed produce an equi-depth histogram, which seems nice, but\n> >> the mesh is regular in such a way that any value of the first dimension\n> >> intersects all buckets on the second dimension. So for example with a\n> >> histogram of 100x100 buckets on [a,b], any value \"a=X\" intersects with\n> >> 100 buckets on \"b\", each representing 1/10000 of tuples. But we don't\n> >> know how the tuples are distributed in the tuple - so we end up using\n> >> 0.5 of the bucket as unbiased estimate, but that can easily add-up in\n> >> the wrong dimension.\n> >\n> > Suppose that we have a query box a=X and we know how data is\n> > distributed in buckets. Lets consider only the buckets which are\n> > intersected by the query box a=X. As we know how data is distributes\n> > in buckets we can exclude the buckets which have no tuples which\n> > intersect the query box a=X.\n> >\n> > As we store buckets with no information about data distribution we\n> > have to make reasonable assumptions. 
If we keep buckets relativly\n> > small then we can assume that buckets have uniform distribution.\n> >\n> > What I am trying to say is that the problem which you pointed out\n> > comes from the fact that we store buckets with no information about\n> > data distribution. Even in one dimensional case we have to assume how\n> > data is distributed in buckets. By the way Postgres assumes that data\n> > has uniform distribution in buckets.\n> >\n>\n> It's not just what Postgres assumes, the assumption bucket uniformity is\n> somewhat inherent to the whole concept of a histogram. Yes, maybe we\n> could keep some \"distribution\" info about each bucket, but then maybe we\n> could simply build histogram with more bins occupying the same space?\n>\n> The thing I was proposing is that it should be possible to build\n> histograms with bins adapted to density in the given region. With\n> smaller buckets in areas with high density. So that queries intersect\n> with fewer buckets in low-density parts of the histogram.\n>\n> >> However, this is not the only way to build an equi-depth histogram,\n> >> there are ways to make it more symmetrical. Also, it's not clear\n> >> equi-depth histograms are ideal with multiple columns. There are papers\n> >> proposing various other types of histograms (using different criteria to\n> >> build buckets optimizing a different metric). The most interesting one\n> >> seems to be V-Optimal histograms - see for example papers [1], [2], [3],\n> >> [4], [5] and [6]. I'm sure there are more. The drawback of course is\n> >> that it's more expensive to build such histograms.\n> >\n> > Tomas thank you for shearing with me your ideas regarding V-Optimal\n> > Histogram. I read through the papers which you gave me and came up\n> > with the following conclusion.\n> >\n> > The problem can be formulated like this. We have N tuples in\n> > M-dimensional space. We need to partition space into buckets\n> > iteratively until SSE is less than E or we reach the limit of buckets\n> > B.\n> >\n>\n> Yes. Although v-optimal histograms minimize variance of frequencies. Not\n> sure if that's what you mean by SSE.\n>\n> > In the case of M-dimensional space it seems to me like an NP-hard\n> > problem. A greedy heuristic MHIST-2 is proposed in [2]. Preliminary we\n> > sort N tuples in ascending order. Then we iteratively select a bucket\n> > which leads to the largest SSE reduction and split it into two parts.\n> > We repeat the process until SSE is less than E or we reach the limit\n> > of buckets B.\n> >\n>\n> I don't recall all the details of the MHIST-2 algorithm, but this sounds\n> about right. Yes, building the optimal histogram would be NP-hard, so\n> we'd have to use some approximate / greedy algorithm.\n>\n> > If we assume that B is significantly less than N then the time\n> > complexity of MHIST-2 can be estimated as O(M*N*B). Suppose that M\n> > equals 3, B equals 1000 and N equals 300*B then it will take slightly\n> > over 0.9*10^9 iterations to build a V-Optimal Histogram.\n> >\n> > You can see that we have to keep B as low as possible in order to\n> > build V-Optimal Histogram in feasible time. 
And here is a paradox.\n> > From one side we use V-Optimal Histogram in order to minimize SSE but\n> > from the other side we have to keep B as low as possible which\n> > eventually leads to increase in SSE.\n> >\n>\n> I don't recall the details of the MHIST-2 scheme, but it's true\n> calculating \"perfect\" V-optimal histogram would be quadratic O(N^2*B).\n>\n> But that's exactly why greedy/approximate algorithms exist. Yes, it's\n> not going to produce the optimal V-optimal histogram, but so what?\n>\n> > On the other hand time complexity required to build an Equi-Depth\n> > Histogram doesn't depend on B and can be estimated as O(M*N*logN). SSE\n> > can be arbitrarily reduced by increasing B which in turn is only\n> > limited by the storage limit. Experimental results show low error\n> > metric [3].\n> >\n>\n> And how does this compare to the approximate/greedy algorithms, both in\n> terms of construction time and accuracy?\n>\n> The [3] paper does not compare those, ofc, and I'm somewhat skeptical\n> about the results in that paper. Not that they'd be wrong, but AFAICS\n> the assumptions are quite simplistic and well-suited for that particular\n> type of histogram.\n>\n> It's far from clear how would it perform for less \"smooth\" data,\n> non-numeric data, etc.\n>\n> > In Equi-Depth Histogram a bucket is represented by two vectors. The\n> > first vector points to the left bottom corner of the bucket and the\n> > other one point to the right top corner of the bucket. Thus space\n> > complexity of Equi-Depth Histogram can be estimated as\n> > 2*integer_size*M*B. Assume that M equals 3, B equals 1000 and\n> > integer_size equals 4 bytes then Equi-Depth Histogram will ocupy 24000\n> > bytes.\n> >\n> > If a bucket is partially intersected by a query box then we assume\n> > that data has uniform distribution inside of the bucket. It is a\n> > reasonable assumption if B is relativly large.\n> >\n> > In order to efficianly find buckets which intersect a query box we can\n> > store Equi-Depth Histogram in R-tree as proposed in [3]. On average it\n> > takes O(logB) iterations to find buckets which intersect a query box.\n> > As storage requirements are dominated by leaf nodes we can assume that\n> > it takes slightly more than 2*integer_size*M*B.\n> >\n>\n> But all of this can apply to histograms in general, no? It's not somehow\n> special to equi-depth histograms, a v-optimal histogram could be stored\n> as an r-tree etc.\n>\n> >> IIRC the patch tried to do something like V-optimal histograms, but\n> >> using a greedy algorithm and not the exhaustive stuff described in the\n> >> various papers.\n> >\n> > We should only consider computationally tackable solutions. In one\n> > dimensional case V-Optimal Histogram is probably a good solution but\n> > in multi-dimensional case I would only consider Equi-Width or\n> > Equi-Depth Histograms. As stated in multiple papers Equi-Depth\n> > Histogram proves to be more accurate than Equi-Width Histogram. By the\n> > way Postgres uses something like Equi-Width Histogram.\n> >\n>\n> Sure, but AFAIK the greedy/approximate algorithms are not intractable.\n>\n> And at some point it becomes a tradeoff between accuracy of estimates\n> and resources to build the histogram. 
Papers from ~1988 are great, but\n> maybe not sufficient to make such decisions now.\n>\n> >> FWIW I did not intend to reject the idea of adding multi-dimensional\n> >> histograms, but rather to provide some insight into the history of the\n> >> past attempt, and also point some weaknesses of the algorithm described\n> >> in the 1988 paper. If you choose to work on this, I can do a review etc.\n> >\n> > Thank you very much Tomas. I am new in the community and I definitely\n> > didn't expect to have such a warm welcome.\n> >\n> > As I indicated above Equi-Depth Histogram proves to be more accurate\n> > than Equi-Width Histogram and both have the same time and space\n> > requirements. Postgres uses some sort of Equi-Width Histogram. Suppose\n> > that:\n> > * I will create a patch which will replace Equi-Width Histogram with\n> > Equi-Depth Histogram but only in 1-dimensional case.\n> > * I will show experimental results which will demonstrate improvement\n> > of selectivity estimation.\n> > Then will the path be accepted by the community?\n> >\n> > If the above path is accepted by the community then I will proceed\n> > further with M-dimensional Equi-Depth Histogram...\n> >\n>\n> I find it very unlikely we want (or even need) to significantly rework\n> the 1-D histograms we already have. AFAICS the current histograms work\n> pretty well, and if better accuracy is needed, it's easy/efficient to\n> just increase the statistics target. I'm sure there are various\n> estimation issues, but I'm not aware of any that'd be solved simply by\n> using a different 1-D histogram.\n>\n> I'm not going to reject that outright - but I think the bar you'd need\n> to justify such change is damn high. Pretty much everyone is using the\n> current histograms, which makes it a very sensitive area. You'll need to\n> show that it helps in practice, without hurting causing harm ...\n>\n> It's an interesting are for experiments, no doubt about it. And if you\n> choose to explore it, that's fine. But it's better to be aware it may\n> not end with a commit.\n>\n>\n> For the multi-dimensional case, I propose we first try to experiment\n> with the various algorithms, and figure out what works etc. Maybe\n> implementing them in python or something would be easier than C.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 Jan 2024 23:55:01 +0100",
"msg_from": "Alexander Cheshev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multidimensional Histograms"
},
{
"msg_contents": "\n\nOn 1/7/24 23:53, Alexander Cheshev wrote:\n> Hi Tomas,\n>\n>> The thing I was proposing is that it should be possible to build\n>> histograms with bins adapted to density in the given region. With\n>> smaller buckets in areas with high density. So that queries intersect\n>> with fewer buckets in low-density parts of the histogram.\n>\n> This is how Equi-Depth Histogram works. Postgres has maller buckets in\n> areas with high density:\n>\n> values[(i * (nvals - 1)) / (num_hist - 1)]\n>\nTrue, but the boundaries are somewhat random. Also, I was referring to\nthe multi-dimensional case, it wasn't clear to me if the proposal is to\ndo the same thing.\n\n>> I don't recall the details of the MHIST-2 scheme, but it's true\n>> calculating \"perfect\" V-optimal histogram would be quadratic O(N^2*B).\n>\n> In M-dimensional space \"perfect\" V-Optimal Histogram is an NP-hard\n> problem. In other words it is not possible to build it in polynomial\n> time. How did you come up with the estimate?!\n>\nSee section 3.2 in this paper (the \"view PDF\" does not work for me, but\nthe \"source PDF\" does download a postscript):\n\nhttps://citeseerx.ist.psu.edu/doc_view/pid/35e29cbc2bfe6662653bdae1fb89c091e2ece560\n\n>> But that's exactly why greedy/approximate algorithms exist. Yes, it's\n>> not going to produce the optimal V-optimal histogram, but so what?\n>\n> Greedy/approximate algorithm has time complexity O(M*N*B), where M\n> equals the number of dimensions. MHIST-2 is a greedy/approximate\n> algorithm.\n>\n>> And how does this compare to the approximate/greedy algorithms, both in\n>> terms of construction time and accuracy?\n>\n> Time complexity of Equi-Depth Histogram has no dependence on B.\n>\nReally? I'd expect that to build B buckets, the algorithm repeat some\nO(M*N) action B-times, roughly. I mean, it needs to pick a dimension by\nwhich to split, then do some calculation on the N tuples, etc.\n\nMaybe I'm imagining that wrong, though. It's been ages since I looked ad\nbig-O complexity and/or the histograms. I'd have to play with those\nalgorithms for a bit again.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 8 Jan 2024 00:29:01 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multidimensional Histograms"
},
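One way to picture the disagreement about the dependence on B: an equi-depth partition can be built by recursive median splits that cycle through the dimensions (k-d-tree style), as in the sketch below. Every level of the recursion processes all N tuples (a sort per node) and there are about log2(B) levels, so B enters only logarithmically. Whether this matches the construction Alexander has in mind is an assumption of this illustration; the code is a Python prototype, not Postgres code.

def equi_depth_md(points, num_buckets, depth=0):
    if num_buckets <= 1 or len(points) <= 1:
        lo = tuple(min(p[d] for p in points) for d in range(len(points[0])))
        hi = tuple(max(p[d] for p in points) for d in range(len(points[0])))
        return [(lo, hi, len(points))]
    d = depth % len(points[0])                 # split dimension, round robin
    ordered = sorted(points, key=lambda p: p[d])
    mid = len(ordered) // 2                    # equal row counts on both sides
    half = num_buckets // 2
    return (equi_depth_md(ordered[:mid], half, depth + 1) +
            equi_depth_md(ordered[mid:], num_buckets - half, depth + 1))

pts = [(x % 13, (x * 7) % 11) for x in range(1000)]
for lo, hi, cnt in equi_depth_md(pts, 8):
    print(lo, hi, cnt)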
{
"msg_contents": "Hi Tomas,\n\n> See section 3.2 in this paper (the \"view PDF\" does not work for me, but\n> the \"source PDF\" does download a postscript):\n\nI believe that you are referring to a dynamic programming approach. It\nis a 1-dimensional case! To find an optimal solution in the\nM-dimensional case is an NP-hard problem.\n\nRegards,\nAlexander Cheshev\n\nOn Mon, 8 Jan 2024 at 00:29, Tomas Vondra <[email protected]> wrote:\n>\n>\n>\n> On 1/7/24 23:53, Alexander Cheshev wrote:\n> > Hi Tomas,\n> >\n> >> The thing I was proposing is that it should be possible to build\n> >> histograms with bins adapted to density in the given region. With\n> >> smaller buckets in areas with high density. So that queries intersect\n> >> with fewer buckets in low-density parts of the histogram.\n> >\n> > This is how Equi-Depth Histogram works. Postgres has maller buckets in\n> > areas with high density:\n> >\n> > values[(i * (nvals - 1)) / (num_hist - 1)]\n> >\n> True, but the boundaries are somewhat random. Also, I was referring to\n> the multi-dimensional case, it wasn't clear to me if the proposal is to\n> do the same thing.\n>\n> >> I don't recall the details of the MHIST-2 scheme, but it's true\n> >> calculating \"perfect\" V-optimal histogram would be quadratic O(N^2*B).\n> >\n> > In M-dimensional space \"perfect\" V-Optimal Histogram is an NP-hard\n> > problem. In other words it is not possible to build it in polynomial\n> > time. How did you come up with the estimate?!\n> >\n> See section 3.2 in this paper (the \"view PDF\" does not work for me, but\n> the \"source PDF\" does download a postscript):\n>\n> https://citeseerx.ist.psu.edu/doc_view/pid/35e29cbc2bfe6662653bdae1fb89c091e2ece560\n>\n> >> But that's exactly why greedy/approximate algorithms exist. Yes, it's\n> >> not going to produce the optimal V-optimal histogram, but so what?\n> >\n> > Greedy/approximate algorithm has time complexity O(M*N*B), where M\n> > equals the number of dimensions. MHIST-2 is a greedy/approximate\n> > algorithm.\n> >\n> >> And how does this compare to the approximate/greedy algorithms, both in\n> >> terms of construction time and accuracy?\n> >\n> > Time complexity of Equi-Depth Histogram has no dependence on B.\n> >\n> Really? I'd expect that to build B buckets, the algorithm repeat some\n> O(M*N) action B-times, roughly. I mean, it needs to pick a dimension by\n> which to split, then do some calculation on the N tuples, etc.\n>\n> Maybe I'm imagining that wrong, though. It's been ages since I looked ad\n> big-O complexity and/or the histograms. I'd have to play with those\n> algorithms for a bit again.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 8 Jan 2024 00:49:11 +0100",
"msg_from": "Alexander Cheshev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multidimensional Histograms"
},
{
"msg_contents": "On 8/1/2024 01:36, Tomas Vondra wrote:\n> On 1/7/24 18:26, Andrei Lepikhov wrote:\n>> On 7/1/2024 17:51, Tomas Vondra wrote:\n>>> On 1/7/24 11:22, Andrei Lepikhov wrote:\n>>>> On 7/1/2024 06:54, Tomas Vondra wrote:\n>>>>> It's an interesting are for experiments, no doubt about it. And if you\n>>>>> choose to explore it, that's fine. But it's better to be aware it may\n>>>>> not end with a commit.\n>>>>> For the multi-dimensional case, I propose we first try to experiment\n>>>>> with the various algorithms, and figure out what works etc. Maybe\n>>>>> implementing them in python or something would be easier than C.\n>>>>\n>>>> Curiously, trying to utilize extended statistics for some problematic\n>>>> cases, I am experimenting with auto-generating such statistics by\n>>>> definition of indexes [1]. Doing that, I wanted to add some hand-made\n>>>> statistics like a multidimensional histogram or just a histogram which\n>>>> could help to perform estimation over a set of columns/expressions.\n>>>> I realized that current hooks get_relation_stats_hook and\n>>>> get_index_stats_hook are insufficient if I want to perform an estimation\n>>>> over a set of ANDed quals on different columns.\n>>>> In your opinion, is it possible to add a hook into the extended\n>>>> statistics to allow for an extension to propose alternative estimation?\n>>>>\n>>>> [1] https://github.com/danolivo/pg_index_stats\n>>>>\n>>>\n>>> No idea, I haven't thought about that very much. Presumably the existing\n>>> hooks are insufficient because they're per-attnum? I guess it would make\n>>> sense to have a hook for all the attnums of the relation, but I'm not\n>>> sure it'd be enough to introduce a new extended statistics kind ...\n>>\n>> I got stuck on the same problem Alexander mentioned: we usually have\n>> large tables with many uniformly distributed values. In this case, MCV\n>> doesn't help a lot.\n>>\n>> Usually, I face problems scanning a table with a filter containing 3-6\n>> ANDed quals. Here, Postgres multiplies selectivities and ends up with a\n>> less than 1 tuple selectivity. But such scans, in reality, mostly have\n>> some physical sense and return a bunch of tuples. It looks like the set\n>> of columns representing some value of composite type.\n> \n> Understood. That's a fairly common scenario, and it can easily end up\n> with rather terrible plan due to the underestimate.\n> \n>> Sometimes extended statistics on dependency helps well, but it expensive\n>> for multiple columns.\n> \n> Expensive in what sense? Also, how would a multidimensional histogram be\n> any cheaper?\nMaybe my wording needed to be more precise. I didn't implement \nmultidimensional histograms before, so I don't know how expensive they \nare. I meant that for dependency statistics over about six columns, we \nhave a lot of combinations to compute.\n> \n>> And sometimes I see that even a trivial histogram\n>> on a ROW(x1,x2,...) could predict a much more adequate value (kind of\n>> conservative upper estimation) for a clause like \"x1=N1 AND x2=N2 AND\n>> ...\" if somewhere in extension we'd transform it to ROW(x1,x2,...) =\n>> ROW(N1, N2,...).\n> \n> Are you guessing the histogram would help, or do you know it'd help? I\n> have no problem believing that for range queries, but I think it's far\n> less clear for simple equalities. I'm not sure a histograms would work\n> for that ...\n\nI added Teodor Sigaev to the CC of this email - He has much more user \nfeedback on this technique. 
As I see, having a histogram over a set of \ncolumns, we have top selectivity estimation for the filter. I'm not sure \nhow good a simple histogram is in that case, too, but according to my \npractice, it works, disallowing usage of too-optimistic query plans.\n\n> Maybe it'd be possible to track more stuff for each bucket, not just the\n> frequency, but also ndistinct for combinations of columns, and then use\n> that to do better equality estimates. Or maybe we could see how the\n> \"expected\" and \"actual\" bucket frequency compare, and use that to\n> correct the equality estimate? Not sure.\nYes, it is in my mind, too. Having such experimental stuff as an \nextension(s) in GitHub, we could get some community feedback.\n> \n> But perhaps you have some data to demonstrate it'd actually help?\nIt should be redirected to Teodor, but I will consider translating messy \nreal-life reports into a clear example.\n> \n>> For such cases we don't have an in-core solution, and introducing a hook\n>> on clause list estimation (paired with maybe a hook on statistics\n>> generation) could help invent an extension that would deal with that\n>> problem. Also, it would open a way for experiments with different types\n>> of extended statistics ...\n> I really don't know how to do that, or what would it take. AFAICS the\n> statistics system is not very extensible from external code. Even if we\n> could add new types of attribute/extended stats, I'm not sure the code\n> calculating the estimates could use that.\nI imagine we can add an analysis routine and directly store statistics \nin an extension for demonstration and experimental purposes, no problem.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 8 Jan 2024 10:12:25 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multidimensional Histograms"
},
{
"msg_contents": "Hi Andrei,\n\n> Maybe my wording needed to be more precise. I didn't implement\n> multidimensional histograms before, so I don't know how expensive they\n> are. I meant that for dependency statistics over about six columns, we\n> have a lot of combinations to compute.\n\nEqui-Depth Histogram in a 6 dimensional case requires 6 times more\niterations. Postgres already uses Equi-Depth Histogram. Even if you\nincrease the number of buckets from 100 to 1000 then there will be no\noverhead as the time complexity of Equi-Depth Histogram has no\ndependence on the number of buckets. So, no overhead at all!\n\nRegards,\nAlexander Cheshev\n\nOn Mon, 8 Jan 2024 at 04:12, Andrei Lepikhov <[email protected]> wrote:\n>\n> On 8/1/2024 01:36, Tomas Vondra wrote:\n> > On 1/7/24 18:26, Andrei Lepikhov wrote:\n> >> On 7/1/2024 17:51, Tomas Vondra wrote:\n> >>> On 1/7/24 11:22, Andrei Lepikhov wrote:\n> >>>> On 7/1/2024 06:54, Tomas Vondra wrote:\n> >>>>> It's an interesting are for experiments, no doubt about it. And if you\n> >>>>> choose to explore it, that's fine. But it's better to be aware it may\n> >>>>> not end with a commit.\n> >>>>> For the multi-dimensional case, I propose we first try to experiment\n> >>>>> with the various algorithms, and figure out what works etc. Maybe\n> >>>>> implementing them in python or something would be easier than C.\n> >>>>\n> >>>> Curiously, trying to utilize extended statistics for some problematic\n> >>>> cases, I am experimenting with auto-generating such statistics by\n> >>>> definition of indexes [1]. Doing that, I wanted to add some hand-made\n> >>>> statistics like a multidimensional histogram or just a histogram which\n> >>>> could help to perform estimation over a set of columns/expressions.\n> >>>> I realized that current hooks get_relation_stats_hook and\n> >>>> get_index_stats_hook are insufficient if I want to perform an estimation\n> >>>> over a set of ANDed quals on different columns.\n> >>>> In your opinion, is it possible to add a hook into the extended\n> >>>> statistics to allow for an extension to propose alternative estimation?\n> >>>>\n> >>>> [1] https://github.com/danolivo/pg_index_stats\n> >>>>\n> >>>\n> >>> No idea, I haven't thought about that very much. Presumably the existing\n> >>> hooks are insufficient because they're per-attnum? I guess it would make\n> >>> sense to have a hook for all the attnums of the relation, but I'm not\n> >>> sure it'd be enough to introduce a new extended statistics kind ...\n> >>\n> >> I got stuck on the same problem Alexander mentioned: we usually have\n> >> large tables with many uniformly distributed values. In this case, MCV\n> >> doesn't help a lot.\n> >>\n> >> Usually, I face problems scanning a table with a filter containing 3-6\n> >> ANDed quals. Here, Postgres multiplies selectivities and ends up with a\n> >> less than 1 tuple selectivity. But such scans, in reality, mostly have\n> >> some physical sense and return a bunch of tuples. It looks like the set\n> >> of columns representing some value of composite type.\n> >\n> > Understood. That's a fairly common scenario, and it can easily end up\n> > with rather terrible plan due to the underestimate.\n> >\n> >> Sometimes extended statistics on dependency helps well, but it expensive\n> >> for multiple columns.\n> >\n> > Expensive in what sense? Also, how would a multidimensional histogram be\n> > any cheaper?\n> Maybe my wording needed to be more precise. 
I didn't implement\n> multidimensional histograms before, so I don't know how expensive they\n> are. I meant that for dependency statistics over about six columns, we\n> have a lot of combinations to compute.\n> >\n> >> And sometimes I see that even a trivial histogram\n> >> on a ROW(x1,x2,...) could predict a much more adequate value (kind of\n> >> conservative upper estimation) for a clause like \"x1=N1 AND x2=N2 AND\n> >> ...\" if somewhere in extension we'd transform it to ROW(x1,x2,...) =\n> >> ROW(N1, N2,...).\n> >\n> > Are you guessing the histogram would help, or do you know it'd help? I\n> > have no problem believing that for range queries, but I think it's far\n> > less clear for simple equalities. I'm not sure a histograms would work\n> > for that ...\n>\n> I added Teodor Sigaev to the CC of this email - He has much more user\n> feedback on this technique. As I see, having a histogram over a set of\n> columns, we have top selectivity estimation for the filter. I'm not sure\n> how good a simple histogram is in that case, too, but according to my\n> practice, it works, disallowing usage of too-optimistic query plans.\n>\n> > Maybe it'd be possible to track more stuff for each bucket, not just the\n> > frequency, but also ndistinct for combinations of columns, and then use\n> > that to do better equality estimates. Or maybe we could see how the\n> > \"expected\" and \"actual\" bucket frequency compare, and use that to\n> > correct the equality estimate? Not sure.\n> Yes, it is in my mind, too. Having such experimental stuff as an\n> extension(s) in GitHub, we could get some community feedback.\n> >\n> > But perhaps you have some data to demonstrate it'd actually help?\n> It should be redirected to Teodor, but I will consider translating messy\n> real-life reports into a clear example.\n> >\n> >> For such cases we don't have an in-core solution, and introducing a hook\n> >> on clause list estimation (paired with maybe a hook on statistics\n> >> generation) could help invent an extension that would deal with that\n> >> problem. Also, it would open a way for experiments with different types\n> >> of extended statistics ...\n> > I really don't know how to do that, or what would it take. AFAICS the\n> > statistics system is not very extensible from external code. Even if we\n> > could add new types of attribute/extended stats, I'm not sure the code\n> > calculating the estimates could use that.\n> I imagine we can add an analysis routine and directly store statistics\n> in an extension for demonstration and experimental purposes, no problem.\n>\n> --\n> regards,\n> Andrei Lepikhov\n> Postgres Professional\n>\n\n\n",
"msg_date": "Mon, 8 Jan 2024 10:21:34 +0100",
"msg_from": "Alexander Cheshev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multidimensional Histograms"
},
{
"msg_contents": "On 8/1/2024 16:21, Alexander Cheshev wrote:\n> Hi Andrei,\n> \n>> Maybe my wording needed to be more precise. I didn't implement\n>> multidimensional histograms before, so I don't know how expensive they\n>> are. I meant that for dependency statistics over about six columns, we\n>> have a lot of combinations to compute.\n> \n> Equi-Depth Histogram in a 6 dimensional case requires 6 times more\n> iterations. Postgres already uses Equi-Depth Histogram. Even if you\n> increase the number of buckets from 100 to 1000 then there will be no\n> overhead as the time complexity of Equi-Depth Histogram has no\n> dependence on the number of buckets. So, no overhead at all!\n\nMaybe. For three columns, we have 9 combinations (passes) for building \ndependency statistics and 4 combinations for ndistincts; for six \ncolumns, we have 186 and 57 combinations correspondingly.\nEven remembering that dependency is just one number for one combination, \nbuilding the dependency statistics is still massive work. So, in the \nmulticolumn case, having something like a histogram may be more effective.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Wed, 10 Jan 2024 22:49:48 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multidimensional Histograms"
}
] |
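A side note on the cost question debated above: for the one-dimensional case, Alexander's claim that the bucket count adds essentially no overhead is easy to see once the sample is sorted, because equi-depth bucket boundaries are just evenly spaced order statistics. The standalone sketch below illustrates only that point; it is not PostgreSQL's ANALYZE code, and the file name, constants and random sample are made up for the illustration.

/* equi_depth_sketch.c -- illustrative only; not taken from PostgreSQL.
 * Build: cc -O2 equi_depth_sketch.c -o equi_depth_sketch
 */
#include <stdio.h>
#include <stdlib.h>

static int
cmp_double(const void *pa, const void *pb)
{
    double a = *(const double *) pa;
    double b = *(const double *) pb;

    return (a > b) - (a < b);
}

/*
 * Pick nbuckets + 1 equi-depth boundaries from an already-sorted sample:
 * boundary i is the (i * (n - 1) / nbuckets)-th order statistic, so this
 * loop is O(nbuckets) and trivially cheap compared to the sort.
 */
static void
equi_depth_bounds(const double *sorted, int n, int nbuckets, double *bounds)
{
    for (int i = 0; i <= nbuckets; i++)
        bounds[i] = sorted[(size_t) i * (n - 1) / nbuckets];
}

int
main(void)
{
    enum {N = 100000, NBUCKETS = 1000};
    double *sample = malloc(N * sizeof(double));
    double *bounds = malloc((NBUCKETS + 1) * sizeof(double));

    for (int i = 0; i < N; i++)
        sample[i] = rand() / (double) RAND_MAX;

    /* The O(n log n) sort dominates and does not depend on NBUCKETS. */
    qsort(sample, N, sizeof(double), cmp_double);
    equi_depth_bounds(sample, N, NBUCKETS, bounds);

    printf("first bucket: [%g, %g), last bucket: [%g, %g]\n",
           bounds[0], bounds[1], bounds[NBUCKETS - 1], bounds[NBUCKETS]);
    free(sample);
    free(bounds);
    return 0;
}

A d-dimensional equi-depth variant repeats the same per-dimension work over the sample, which is where the "six times more iterations" figure quoted in the thread comes from.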
[
{
"msg_contents": "Hi hackers,\n\nI've attached a patch against master that addresses a small bug in pg_dump.\n\nPreviously, pg_dump would include CREATE STATISTICS statements for\ntables that were excluded from the dump, causing reload to fail if any\nexcluded tables had extended statistics.\n\nThe patch skips the creation of the StatsExtInfo if the associated\ntable does not have the DUMP_COMPONENT_DEFINITION flag set. This is\nsimilar to how getPublicationTables behaves if a table is excluded.\n\nI've covered this with a regression test by altering one of the CREATE\nSTATISTICS examples to work with the existing 'exclude_test_table'\nrun. Without the fix, that causes the test to fail with:\n# Failed test 'exclude_test_table: should not dump CREATE STATISTICS\nextended_stats_no_options'\n# at t/002_pg_dump.pl line 4934.\n\nRegards,\nRian",
"msg_date": "Wed, 27 Dec 2023 15:44:09 +1100",
"msg_from": "Rian McGuire <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] pg_dump: Do not dump statistics for excluded tables"
},
{
"msg_contents": "Rian McGuire <[email protected]> writes:\n> I've attached a patch against master that addresses a small bug in pg_dump.\n> Previously, pg_dump would include CREATE STATISTICS statements for\n> tables that were excluded from the dump, causing reload to fail if any\n> excluded tables had extended statistics.\n\nI agree that's a bug ...\n\n> The patch skips the creation of the StatsExtInfo if the associated\n> table does not have the DUMP_COMPONENT_DEFINITION flag set. This is\n> similar to how getPublicationTables behaves if a table is excluded.\n\n... but I don't like the details of this patch (and I'm not too\nthrilled with the implementation of getPublicationTables, either).\nThe style in pg_dump is to put such decisions into separate\npolicy-setting subroutines. Also, skipping creation of the\nDumpableObject altogether is the wrong thing because it'd prevent\npg_dump from tracing or reasoning about dependencies involving the\nstats object, which can be relevant even if the object itself isn't\ndumped --- this is why all the other data-collection subroutines\noperate as they do. getPublicationTables can probably get away\nwith its low-rent approach given that publication membership isn't\nrepresented by pg_depend entries, but it's far from clear that it'll\nnever be an issue for stats.\n\nSo I think it needs to be more like the attached.\n(I did use your test case verbatim.)\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 27 Dec 2023 12:01:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: Do not dump statistics for excluded tables"
}
] |
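To make the "policy-setting subroutine" design concrete: the point is that the collector still creates the stats object (so dependency tracing keeps working) and a separate routine decides its dump components from the owning table. The miniature model below only illustrates that shape; none of the struct names, field layouts or flag values are pg_dump's real ones (DUMP_COMPONENT_DEFINITION exists in pg_dump, but the value used here is invented), and real pg_dump carries far more state per object.

/* dump_policy_toy.c -- hypothetical miniature of "create the object,
 * decide dumpability in a policy routine". Not pg_dump code; all names
 * and values below are invented for illustration.
 */
#include <stdio.h>

#define DUMP_COMPONENT_NONE        0
#define DUMP_COMPONENT_DEFINITION  (1 << 0)

typedef struct ToyTable
{
    const char *name;
    unsigned    dump;           /* components selected for this table */
} ToyTable;

typedef struct ToyExtStats
{
    const char *name;
    const ToyTable *owner;
    unsigned    dump;
} ToyExtStats;

/*
 * Policy routine: the stats object always exists (so dependency logic can
 * still reason about it), but its definition is only dumped when the
 * owning table's definition is dumped.
 */
static void
selectDumpableToyStats(ToyExtStats *st)
{
    st->dump = (st->owner->dump & DUMP_COMPONENT_DEFINITION) ?
        DUMP_COMPONENT_DEFINITION : DUMP_COMPONENT_NONE;
}

int
main(void)
{
    ToyTable    kept = {"kept_table", DUMP_COMPONENT_DEFINITION};
    ToyTable    excluded = {"excluded_table", DUMP_COMPONENT_NONE};
    ToyExtStats stats[] = {
        {"kept_table_stats", &kept, 0},
        {"excluded_table_stats", &excluded, 0},
    };

    for (int i = 0; i < 2; i++)
    {
        selectDumpableToyStats(&stats[i]);
        printf("%s: %s\n", stats[i].name,
               (stats[i].dump & DUMP_COMPONENT_DEFINITION) ? "dump" : "skip");
    }
    return 0;
}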
[
{
"msg_contents": "The Asserts added to bitmapset.c by commits 71a3e8c43b and 7d58f2342b\ncontain some duplicates, such as in bms_difference, bms_is_subset,\nbms_subset_compare, bms_int_members and bms_join. For instance,\n\n@@ -953,6 +1033,15 @@ bms_int_members(Bitmapset *a, const Bitmapset *b)\n int lastnonzero;\n int shortlen;\n int i;\n+#ifdef REALLOCATE_BITMAPSETS\n+ Bitmapset *tmp = a;\n+#endif\n+\n+ Assert(a == NULL || IsA(a, Bitmapset));\n+ Assert(b == NULL || IsA(b, Bitmapset));\n+\n+ Assert(a == NULL || IsA(a, Bitmapset));\n+ Assert(b == NULL || IsA(b, Bitmapset));\n\nSorry that I failed to notice those duplicates when reviewing the\npatchset, mainly because they were introduced in different patches.\n\nWhile removing those duplicates, I think we can add checks in the new\nAsserts to ensure that Bitmapsets should not contain trailing zero\nwords, as the old Asserts did. That makes the Asserts in the form of:\n\nAssert(a == NULL || (IsA(a, Bitmapset) && a->words[a->nwords - 1] != 0));\n\nI think we can define a new macro for this form and use it to check that\na Bitmapset is valid.\n\nIn passing, I prefer to move the Asserts to the beginning of functions,\njust for paranoia's sake.\n\nHence, proposed patch attached.\n\nThanks\nRichard",
"msg_date": "Wed, 27 Dec 2023 17:30:01 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Revise the Asserts added to bimapset manipulation functions"
},
{
"msg_contents": "On Wed, 27 Dec 2023 at 22:30, Richard Guo <[email protected]> wrote:\n> The Asserts added to bitmapset.c by commits 71a3e8c43b and 7d58f2342b\n> contain some duplicates, such as in bms_difference, bms_is_subset,\n> bms_subset_compare, bms_int_members and bms_join. For instance,\n\nI'm just learning of these changes now. I wonder why that wasn't\ndone more like:\n\n#ifdef REALLOCATE_BITMAPSETS\nstatic Bitmapset *\nbms_clone_and_free(Bitmapset *a)\n{\n Bitmapset *c = bms_copy(a);\n\n bms_free(a);\n return c;\n}\n#endif\n\nthen instead of having to do:\n\n#ifdef REALLOCATE_BITMAPSETS\na = (Bitmapset *) palloc(BITMAPSET_SIZE(tmp->nwords));\nmemcpy(a, tmp, BITMAPSET_SIZE(tmp->nwords));\npfree(tmp);\n#endif\n\nall over the place. Couldn't we do:\n\n#ifdef REALLOCATE_BITMAPSETS\n return bms_clone_and_free(a);\n#else\n return a;\n#endif\n\nI think it's best to leave at least that and not hide the behaviour\ninside a macro.\n\nIt would also be good if REALLOCATE_BITMAPSETS was documented in\nbitmapset.c to offer some guidance to people modifying the code so\nthey know under what circumstances they need to return a copy. There\nare no comments that offer any indication of what the intentions are\nwith this :-( What's written in pg_config_manual.h isn't going to\nhelp anyone that's modifying bitmapset.c\n\n> While removing those duplicates, I think we can add checks in the new\n> Asserts to ensure that Bitmapsets should not contain trailing zero\n> words, as the old Asserts did. That makes the Asserts in the form of:\n>\n> Assert(a == NULL || (IsA(a, Bitmapset) && a->words[a->nwords - 1] != 0));\n>\n> I think we can define a new macro for this form and use it to check that\n> a Bitmapset is valid.\n\nI think that's an improvement. I did have a function for this in [1],\nbut per [2], Tom wasn't a fan. I likely shouldn't have bothered with\nthe loop there. It seems fine just to ensure the final word isn't 0.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvr5O41MuUjw0DQKqmAnv7QqvmLqXReEd5o4nXTzWp8-%2Bw%40mail.gmail.com\n[2] https://postgr.es/m/2686153.1677881312%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 28 Dec 2023 23:21:50 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revise the Asserts added to bimapset manipulation functions"
},
{
"msg_contents": "On Thu, 28 Dec 2023 at 23:21, David Rowley <[email protected]> wrote:\n> then instead of having to do:\n>\n> #ifdef REALLOCATE_BITMAPSETS\n> a = (Bitmapset *) palloc(BITMAPSET_SIZE(tmp->nwords));\n> memcpy(a, tmp, BITMAPSET_SIZE(tmp->nwords));\n> pfree(tmp);\n> #endif\n>\n> all over the place. Couldn't we do:\n>\n> #ifdef REALLOCATE_BITMAPSETS\n> return bms_clone_and_free(a);\n> #else\n> return a;\n> #endif\n\nI looked into this a bit more and I just can't see why the current\ncode feels like it must do the reallocation in functions such as\nbms_del_members(). We don't repalloc the set there, ever, so why do\nwe need to do it when building with REALLOCATE_BITMAPSETS? It seems\nto me the point of REALLOCATE_BITMAPSETS is to ensure that *if* we\npossibly could end up reallocating the set that we *do* reallocate.\n\nThere's also a few cases where you could argue that the\nREALLOCATE_BITMAPSETS code has introduced bugs. For example,\nbms_del_members(), bms_join() and bms_int_members() are meant to\nguarantee that their left input (both inputs in the case of\nbms_join()) are recycled. Compiling with REALLOCATE_BITMAPSETS\ninvalidates that, so it seems about as likely that building with\nREALLOCATE_BITMAPSETS could cause bugs rather than find bugs.\n\nI've hacked a bit on this to fix these problems and also added some\ndocumentation to try to offer future people modifying bitmapset.c some\nguidance on if they need to care about REALLOCATE_BITMAPSETS.\n\nI also don't think \"hangling\" is a word. So I've adjusted the comment\nin pg_config_manual.h to fix that.\n\nDavid",
"msg_date": "Fri, 29 Dec 2023 14:15:13 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revise the Asserts added to bimapset manipulation functions"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 9:15 AM David Rowley <[email protected]> wrote:\n\n> I looked into this a bit more and I just can't see why the current\n> code feels like it must do the reallocation in functions such as\n> bms_del_members(). We don't repalloc the set there, ever, so why do\n> we need to do it when building with REALLOCATE_BITMAPSETS? It seems\n> to me the point of REALLOCATE_BITMAPSETS is to ensure that *if* we\n> possibly could end up reallocating the set that we *do* reallocate.\n\n\nI think the argument here is whether the REALLOCATE_BITMAPSETS option is\nintended to force a reallocation for every modification of a bitmapset,\nor only for modifications that could potentially require the set to be\nreallocated.\n\nIIUC, the v2 patch addresses the latter scenario. I agree that it can\nhelp find bugs in cases where there's more than one reference to a set,\nand a modification that could reallocate the bitmapset might leave the\nother references being dangling pointers.\n\nIt seems to me that the former scenario also makes sense in some cases.\nFor instance, let's say there are two pointers in two structs, s1->p and\ns2->p, both referencing the same bitmapset. If we modify the bitmapset\nvia s1->p (even if no reallocation could occur), s2 would see different\ncontent via its pointer 'p'. Sometimes this is just wrong and could\ncause problems. If we can force a reallocation for every modification\nof the bitmapset, it'd be much easier to find such bugs.\n\nHaving said that, I think the codes checked in by 71a3e8c43b and\n7d58f2342b are far from perfect. And I agree that the bms_copy_and_free\nin v2 patch is a good idea, as well as the bms_is_valid_set.\n\nThanks\nRichard\n\nOn Fri, Dec 29, 2023 at 9:15 AM David Rowley <[email protected]> wrote:\nI looked into this a bit more and I just can't see why the current\ncode feels like it must do the reallocation in functions such as\nbms_del_members(). We don't repalloc the set there, ever, so why do\nwe need to do it when building with REALLOCATE_BITMAPSETS? It seems\nto me the point of REALLOCATE_BITMAPSETS is to ensure that *if* we\npossibly could end up reallocating the set that we *do* reallocate.I think the argument here is whether the REALLOCATE_BITMAPSETS option isintended to force a reallocation for every modification of a bitmapset,or only for modifications that could potentially require the set to bereallocated.IIUC, the v2 patch addresses the latter scenario. I agree that it canhelp find bugs in cases where there's more than one reference to a set,and a modification that could reallocate the bitmapset might leave theother references being dangling pointers.It seems to me that the former scenario also makes sense in some cases.For instance, let's say there are two pointers in two structs, s1->p ands2->p, both referencing the same bitmapset. If we modify the bitmapsetvia s1->p (even if no reallocation could occur), s2 would see differentcontent via its pointer 'p'. Sometimes this is just wrong and couldcause problems. If we can force a reallocation for every modificationof the bitmapset, it'd be much easier to find such bugs.Having said that, I think the codes checked in by 71a3e8c43b and7d58f2342b are far from perfect. And I agree that the bms_copy_and_freein v2 patch is a good idea, as well as the bms_is_valid_set.ThanksRichard",
"msg_date": "Fri, 29 Dec 2023 16:06:53 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revise the Asserts added to bimapset manipulation functions"
},
{
"msg_contents": "On Fri, 29 Dec 2023 at 21:07, Richard Guo <[email protected]> wrote:\n> It seems to me that the former scenario also makes sense in some cases.\n> For instance, let's say there are two pointers in two structs, s1->p and\n> s2->p, both referencing the same bitmapset. If we modify the bitmapset\n> via s1->p (even if no reallocation could occur), s2 would see different\n> content via its pointer 'p'.\n\nThat statement simply isn't true. If there's no reallocation then\nboth pointers point to the same memory and, therefore have the same\ncontent, not different content. In the absence of a reallocation,\nthen the only time s1->p and s2->p could differ is if s1->p became an\nempty set as a result of the modification.\n\n> Sometimes this is just wrong and could\n> cause problems. If we can force a reallocation for every modification\n> of the bitmapset, it'd be much easier to find such bugs.\n\nWhether it's intended or unintended, at least it's consistent,\ntherefore isn't going to behave differently if the number of\nbitmapwords in the set changes. All REALLOCATE_BITMAPSETS does for\nbms_int_members(), bms_join() and bms_del_members() is change one\nconsistent behaviour (we never reallocate) into some other consistent\nbehaviour (we always reallocate).\n\nIf we make REALLOCATE_BITMAPSETS work how I propose in my patch then\nthe reallocation is just limited to cases where the set *could* be\nrepalloced by a modification. That change gives us consistent\nbehaviour as the set is *always* reallocated when it *could* be\nreallocated. Making it consistent to me, seems good as a debug\noption. Swapping one consistent behaviour for another (as you propose)\nseems pointless and more likely to force us to change code that works\nperfectly fine today.\n\nIn any case, the comments in bms_int_members(), bms_join() and\nbms_del_members() are now only true when REALLOCATE_BITMAPSETS is not\ndefined. Are you proposing we leave those comments outdated? or do you\npropose that we mention that the left inputs are not recycled when\nbuilding with REALLOCATE_BITMAPSETS? In my view, neither of these is\nacceptable as often the primary reason to use something like\nbms_int_members() over bms_intersect() is exactly because the left\ninput is recycled. I don't want people to have to start contorting\ncode because bms_int_members()'s left input recycling cannot be relied\non.\n\nDavid\n\n\n",
"msg_date": "Fri, 29 Dec 2023 22:22:33 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revise the Asserts added to bimapset manipulation functions"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 5:22 PM David Rowley <[email protected]> wrote:\n\n> On Fri, 29 Dec 2023 at 21:07, Richard Guo <[email protected]> wrote:\n> > It seems to me that the former scenario also makes sense in some cases.\n> > For instance, let's say there are two pointers in two structs, s1->p and\n> > s2->p, both referencing the same bitmapset. If we modify the bitmapset\n> > via s1->p (even if no reallocation could occur), s2 would see different\n> > content via its pointer 'p'.\n>\n> That statement simply isn't true. If there's no reallocation then\n> both pointers point to the same memory and, therefore have the same\n> content, not different content. In the absence of a reallocation,\n> then the only time s1->p and s2->p could differ is if s1->p became an\n> empty set as a result of the modification.\n\n\nSorry I didn't make myself clear. By \"different content via s2->p\" I\nmean different content than what came before, not than s1->p. There was\nissue caused by such modifications reported before in [1]. In that\ncase, the problematic query is\n\nexplain (costs off)\nselect * from t t1\n inner join t t2 on t1.a = t2.a\n left join t t3 on t1.b > 1 and t1.b < 2;\n\nThe 'required_relids' in the two RestrictInfos for 't1.b > 1' and 't1.b\n< 2' both reference the same bitmapset. The content of this bitmapset\nis {t1, t3}. However, as we have decided to remove the t1/t2 join by\neliminating t1, we need to substitute the Vars of t1 with the Vars of\nt2. To achieve this, we remove each of the two RestrictInfos from the\njoininfo lists it is in and perform the necessary replacements.\n\nAfter applying this process to the first RestrictInfo, the bitmapset now\nbecomes {t2, t3}. Consequently, the second RestrictInfo also perceives\n{t2, t3} as its required_relids. As a result, when attempting to remove\nit from the joininfo lists, a problem arises, because it is not in t2's\njoininfo list.\n\n\nJust to clarify, I am not objecting to your v2 patch. I just want to\nmake sure what is our purpose in introducing REALLOCATE_BITMAPSETS. I'd\nlike to confirm whether our intention aligns with the former scenario or\nthe latter one that I mentioned upthread.\n\n\nAlso, here are some review comments for the v2 patch:\n\n* There is no reallocation that happens in bms_add_members(), do we need\nto call bms_copy_and_free() there?\n\n* Do you think we can add Assert(bms_is_valid_set(a)) for bms_free()?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAMbWs4_wJthNtYBL%2BSsebpgF-5L2r5zFFk6xYbS0A78GKOTFHw%40mail.gmail.com\n\nThanks\nRichard\n\nOn Fri, Dec 29, 2023 at 5:22 PM David Rowley <[email protected]> wrote:On Fri, 29 Dec 2023 at 21:07, Richard Guo <[email protected]> wrote:\n> It seems to me that the former scenario also makes sense in some cases.\n> For instance, let's say there are two pointers in two structs, s1->p and\n> s2->p, both referencing the same bitmapset. If we modify the bitmapset\n> via s1->p (even if no reallocation could occur), s2 would see different\n> content via its pointer 'p'.\n\nThat statement simply isn't true. If there's no reallocation then\nboth pointers point to the same memory and, therefore have the same\ncontent, not different content. In the absence of a reallocation,\nthen the only time s1->p and s2->p could differ is if s1->p became an\nempty set as a result of the modification.Sorry I didn't make myself clear. By \"different content via s2->p\" Imean different content than what came before, not than s1->p. 
There wasissue caused by such modifications reported before in [1]. In thatcase, the problematic query isexplain (costs off)select * from t t1 inner join t t2 on t1.a = t2.a left join t t3 on t1.b > 1 and t1.b < 2;The 'required_relids' in the two RestrictInfos for 't1.b > 1' and 't1.b< 2' both reference the same bitmapset. The content of this bitmapsetis {t1, t3}. However, as we have decided to remove the t1/t2 join byeliminating t1, we need to substitute the Vars of t1 with the Vars oft2. To achieve this, we remove each of the two RestrictInfos from thejoininfo lists it is in and perform the necessary replacements.After applying this process to the first RestrictInfo, the bitmapset nowbecomes {t2, t3}. Consequently, the second RestrictInfo also perceives{t2, t3} as its required_relids. As a result, when attempting to removeit from the joininfo lists, a problem arises, because it is not in t2'sjoininfo list.Just to clarify, I am not objecting to your v2 patch. I just want tomake sure what is our purpose in introducing REALLOCATE_BITMAPSETS. I'dlike to confirm whether our intention aligns with the former scenario orthe latter one that I mentioned upthread.Also, here are some review comments for the v2 patch:* There is no reallocation that happens in bms_add_members(), do we needto call bms_copy_and_free() there?* Do you think we can add Assert(bms_is_valid_set(a)) for bms_free()?[1] https://www.postgresql.org/message-id/flat/CAMbWs4_wJthNtYBL%2BSsebpgF-5L2r5zFFk6xYbS0A78GKOTFHw%40mail.gmail.comThanksRichard",
"msg_date": "Fri, 29 Dec 2023 18:38:37 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revise the Asserts added to bimapset manipulation functions"
},
{
"msg_contents": "On Fri, 29 Dec 2023 at 23:38, Richard Guo <[email protected]> wrote:\n> After applying this process to the first RestrictInfo, the bitmapset now\n> becomes {t2, t3}. Consequently, the second RestrictInfo also perceives\n> {t2, t3} as its required_relids. As a result, when attempting to remove\n> it from the joininfo lists, a problem arises, because it is not in t2's\n> joininfo list.\n\nChanging the relids so they reference t2 instead of t1 requires both\nbms_add_member() and bms_del_member(). The bms_add_member() will\ncause the set to be reallocated with my patch so I don't see why this\ncase isn't covered.\n\n> Also, here are some review comments for the v2 patch:\n>\n> * There is no reallocation that happens in bms_add_members(), do we need\n> to call bms_copy_and_free() there?\n\nThe spec I put in the comment at the top of bitmapset.c says:\n\n> have the code *always* reallocate the bitmapset when the\n> * set *could* be reallocated as a result of the modification\n\nLooking at bms_add_members(), it seems to me that the set *could* be\nreallocated as a result of the modification, per:\n\nif (a->nwords < b->nwords)\n{\n result = bms_copy(b);\n other = a;\n}\n\nIn my view, that meets the spec I outlined.\n\n> * Do you think we can add Assert(bms_is_valid_set(a)) for bms_free()?\n\nI did briefly have the Assert in bms_free(), but I removed it as I\ndidn't think it was that useful to validate the set before freeing it.\nCertainly, there'd be an argument to do that, but I ended up not\nputting it there. I wondered if there could be a case where we do\nsomething like bms_int_members() which results in an empty set and\nafter checking for and finding the set is now empty, we call\nbms_free(). If we did that then we'd Assert fail. In reality, we use\npfree() instead of bms_free() as we already know the set is not NULL,\nso it wouldn't cause a problem for that particular case. I wondered if\nthere might be another one that I didn't spot, so felt it was best not\nto Assert there.\n\nI imagine that if we found some case where the bms_free() Assert\nfailed, we'd likely just remove it rather than trying to make the set\nvalid before freeing it. So it seems pretty pointless if we'd opt to\nremove the Assert() at the first sign of trouble.\n\nDavid\n\n\n",
"msg_date": "Sun, 31 Dec 2023 11:44:18 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revise the Asserts added to bimapset manipulation functions"
},
{
"msg_contents": "On Sun, Dec 31, 2023 at 6:44 AM David Rowley <[email protected]> wrote:\n\n> On Fri, 29 Dec 2023 at 23:38, Richard Guo <[email protected]> wrote:\n> > After applying this process to the first RestrictInfo, the bitmapset now\n> > becomes {t2, t3}. Consequently, the second RestrictInfo also perceives\n> > {t2, t3} as its required_relids. As a result, when attempting to remove\n> > it from the joininfo lists, a problem arises, because it is not in t2's\n> > joininfo list.\n>\n> Changing the relids so they reference t2 instead of t1 requires both\n> bms_add_member() and bms_del_member(). The bms_add_member() will\n> cause the set to be reallocated with my patch so I don't see why this\n> case isn't covered.\n\n\nHmm, you're right. This particular case is covered by your patch. I\nwondered if there might be another case where a bitmapset with more than\none reference is modified without being potentially required to be\nreallocated. I'm not sure if there is such case in reality (or could be\nintroduced in the future), but if there is, I think it would be great if\nREALLOCATE_BITMAPSETS could also help handle it.\n\n\n> > Also, here are some review comments for the v2 patch:\n> >\n> > * There is no reallocation that happens in bms_add_members(), do we need\n> > to call bms_copy_and_free() there?\n>\n> The spec I put in the comment at the top of bitmapset.c says:\n>\n> > have the code *always* reallocate the bitmapset when the\n> > * set *could* be reallocated as a result of the modification\n>\n> Looking at bms_add_members(), it seems to me that the set *could* be\n> reallocated as a result of the modification, per:\n>\n> if (a->nwords < b->nwords)\n> {\n> result = bms_copy(b);\n> other = a;\n> }\n>\n> In my view, that meets the spec I outlined.\n\n\nI think one purpose of introducing REALLOCATE_BITMAPSETS is to help find\ndangling pointers to Bitmapset's. From this point of view, I agree that\nwe should call bms_copy_and_free() in bms_add_members(), because the\nbitmapset 'a' might be recycled (in-place modified, or pfreed).\n\nAccording to this criterion, it seems to me that we should also call\nbms_copy_and_free() in bms_join(), since both inputs there might be\nrecycled. But maybe I'm not understanding it correctly.\n\n\n> > * Do you think we can add Assert(bms_is_valid_set(a)) for bms_free()?\n>\n> I did briefly have the Assert in bms_free(), but I removed it as I\n> didn't think it was that useful to validate the set before freeing it.\n> Certainly, there'd be an argument to do that, but I ended up not\n> putting it there. I wondered if there could be a case where we do\n> something like bms_int_members() which results in an empty set and\n> after checking for and finding the set is now empty, we call\n> bms_free(). If we did that then we'd Assert fail. In reality, we use\n> pfree() instead of bms_free() as we already know the set is not NULL,\n> so it wouldn't cause a problem for that particular case. I wondered if\n> there might be another one that I didn't spot, so felt it was best not\n> to Assert there.\n>\n> I imagine that if we found some case where the bms_free() Assert\n> failed, we'd likely just remove it rather than trying to make the set\n> valid before freeing it. So it seems pretty pointless if we'd opt to\n> remove the Assert() at the first sign of trouble.\n\n\nI think I understand your concern. In some bitmapset manipulation\nfunctions, like bms_int_members(), the result might be computed as\nempty. In such cases we need to free the input bitmap. 
If we use\nbms_free(), the Assert would fail.\n\nIt seems to me that this scenario can only occur within the bitmapset\nmanipulation functions. Outside of bitmapset.c, I think it should be\nquite safe to call bms_free() with this Assert. So I think it should\nnot have problem to have this Assert in bms_free() as long as we are\ncareful enough when calling bms_free() inside bitmapset.c.\n\nThanks\nRichard\n\nOn Sun, Dec 31, 2023 at 6:44 AM David Rowley <[email protected]> wrote:On Fri, 29 Dec 2023 at 23:38, Richard Guo <[email protected]> wrote:\n> After applying this process to the first RestrictInfo, the bitmapset now\n> becomes {t2, t3}. Consequently, the second RestrictInfo also perceives\n> {t2, t3} as its required_relids. As a result, when attempting to remove\n> it from the joininfo lists, a problem arises, because it is not in t2's\n> joininfo list.\n\nChanging the relids so they reference t2 instead of t1 requires both\nbms_add_member() and bms_del_member(). The bms_add_member() will\ncause the set to be reallocated with my patch so I don't see why this\ncase isn't covered.Hmm, you're right. This particular case is covered by your patch. Iwondered if there might be another case where a bitmapset with more thanone reference is modified without being potentially required to bereallocated. I'm not sure if there is such case in reality (or could beintroduced in the future), but if there is, I think it would be great ifREALLOCATE_BITMAPSETS could also help handle it. \n> Also, here are some review comments for the v2 patch:\n>\n> * There is no reallocation that happens in bms_add_members(), do we need\n> to call bms_copy_and_free() there?\n\nThe spec I put in the comment at the top of bitmapset.c says:\n\n> have the code *always* reallocate the bitmapset when the\n> * set *could* be reallocated as a result of the modification\n\nLooking at bms_add_members(), it seems to me that the set *could* be\nreallocated as a result of the modification, per:\n\nif (a->nwords < b->nwords)\n{\n result = bms_copy(b);\n other = a;\n}\n\nIn my view, that meets the spec I outlined.I think one purpose of introducing REALLOCATE_BITMAPSETS is to help finddangling pointers to Bitmapset's. From this point of view, I agree thatwe should call bms_copy_and_free() in bms_add_members(), because thebitmapset 'a' might be recycled (in-place modified, or pfreed).According to this criterion, it seems to me that we should also callbms_copy_and_free() in bms_join(), since both inputs there might berecycled. But maybe I'm not understanding it correctly. \n> * Do you think we can add Assert(bms_is_valid_set(a)) for bms_free()?\n\nI did briefly have the Assert in bms_free(), but I removed it as I\ndidn't think it was that useful to validate the set before freeing it.\nCertainly, there'd be an argument to do that, but I ended up not\nputting it there. I wondered if there could be a case where we do\nsomething like bms_int_members() which results in an empty set and\nafter checking for and finding the set is now empty, we call\nbms_free(). If we did that then we'd Assert fail. In reality, we use\npfree() instead of bms_free() as we already know the set is not NULL,\nso it wouldn't cause a problem for that particular case. I wondered if\nthere might be another one that I didn't spot, so felt it was best not\nto Assert there.\n\nI imagine that if we found some case where the bms_free() Assert\nfailed, we'd likely just remove it rather than trying to make the set\nvalid before freeing it. 
So it seems pretty pointless if we'd opt to\nremove the Assert() at the first sign of trouble.I think I understand your concern. In some bitmapset manipulationfunctions, like bms_int_members(), the result might be computed asempty. In such cases we need to free the input bitmap. If we usebms_free(), the Assert would fail.It seems to me that this scenario can only occur within the bitmapsetmanipulation functions. Outside of bitmapset.c, I think it should bequite safe to call bms_free() with this Assert. So I think it shouldnot have problem to have this Assert in bms_free() as long as we arecareful enough when calling bms_free() inside bitmapset.c.ThanksRichard",
"msg_date": "Tue, 2 Jan 2024 15:18:07 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revise the Asserts added to bimapset manipulation functions"
},
{
"msg_contents": "On Tue, 2 Jan 2024 at 20:18, Richard Guo <[email protected]> wrote:\n> I think one purpose of introducing REALLOCATE_BITMAPSETS is to help find\n> dangling pointers to Bitmapset's. From this point of view, I agree that\n> we should call bms_copy_and_free() in bms_add_members(), because the\n> bitmapset 'a' might be recycled (in-place modified, or pfreed).\n\nI spoke to Andres about this in our weekly meeting and he explained it\nin such I way that I now agree that it's useful to repalloc with all\nmodifications. I'd previously thought that when the comments mention\nthat some function \"recycle their left input\" that you could be\ncertain that the Bitmapset would be modified in-place, therefore any\nother pointers pointing to this set should remain valid. As Andres\nreminded me, that's not the case when the set becomes empty and\nthere's nothing particularly special about a set becoming empty so\nmaking a clone of the set will help identify any pointers that don't\nget updated as a result of the modification.\n\nI've now adjusted the patch to have all modifications to Bitmapsets\nreturn a newly allocated set. There are a few cases missing in master\nthat need to be fixed (items 6-10 below):\n\nThe patch now includes changes for the following:\n\n1. Document what REALLOCATE_BITMAPSETS is for.\n2. Provide a reusable function to check a set is valid and use it in\ncassert builds and use it to validate sets (Richard)\n3. Provide a reusable function to copy a set and pfree the original\nand use that instead of duplicating that code all over bitmapset.c\n4. Fix Assert duplication (Richard)\n5. Make comments in bms_union, bms_intersect, bms_difference clear\nthat a new set is allocated. (This has annoyed me for a while)\n6. Fix failure to duplicate the set in bms_add_members() when b == NULL.\n7. Fix failure to duplicate the set in bms_add_range() when upper < lower\n8. Fix failure to duplicate the set in bms_add_range() when the set\nhas enough words already.\n9. Fix failure to duplicate the set in bms_del_members() when b == NULL\n10. Fix failure to duplicate the set in bms_join() when a == NULL and\nalso fix the b == NULL case too\n11. Fix hazard in bms_add_members(), bms_int_members(),\nbms_del_members() and bms_join(), where the code added in 7d58f2342\ncould crash if a == b due to the REALLOCATE_BITMAPSETS code pfreeing\n'a'. This happens in knapsack.c:93, although I propose we adjust that\ncode now that empty sets are always represented as NULL.\n\nDavid\n\n\n\n\n\n> According to this criterion, it seems to me that we should also call\n> bms_copy_and_free() in bms_join(), since both inputs there might be\n> recycled. But maybe I'm not understanding it correctly.\n>\n>>\n>> > * Do you think we can add Assert(bms_is_valid_set(a)) for bms_free()?\n>>\n>> I did briefly have the Assert in bms_free(), but I removed it as I\n>> didn't think it was that useful to validate the set before freeing it.\n>> Certainly, there'd be an argument to do that, but I ended up not\n>> putting it there. I wondered if there could be a case where we do\n>> something like bms_int_members() which results in an empty set and\n>> after checking for and finding the set is now empty, we call\n>> bms_free(). If we did that then we'd Assert fail. In reality, we use\n>> pfree() instead of bms_free() as we already know the set is not NULL,\n>> so it wouldn't cause a problem for that particular case. 
I wondered if\n>> there might be another one that I didn't spot, so felt it was best not\n>> to Assert there.\n>>\n>> I imagine that if we found some case where the bms_free() Assert\n>> failed, we'd likely just remove it rather than trying to make the set\n>> valid before freeing it. So it seems pretty pointless if we'd opt to\n>> remove the Assert() at the first sign of trouble.\n>\n>\n> I think I understand your concern. In some bitmapset manipulation\n> functions, like bms_int_members(), the result might be computed as\n> empty. In such cases we need to free the input bitmap. If we use\n> bms_free(), the Assert would fail.\n>\n> It seems to me that this scenario can only occur within the bitmapset\n> manipulation functions. Outside of bitmapset.c, I think it should be\n> quite safe to call bms_free() with this Assert. So I think it should\n> not have problem to have this Assert in bms_free() as long as we are\n> careful enough when calling bms_free() inside bitmapset.c.\n>\n> Thanks\n> Richard",
"msg_date": "Tue, 16 Jan 2024 16:08:36 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revise the Asserts added to bimapset manipulation functions"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 11:08 AM David Rowley <[email protected]> wrote:\n\n> I've now adjusted the patch to have all modifications to Bitmapsets\n> return a newly allocated set. There are a few cases missing in master\n> that need to be fixed (items 6-10 below):\n>\n> The patch now includes changes for the following:\n>\n> 1. Document what REALLOCATE_BITMAPSETS is for.\n> 2. Provide a reusable function to check a set is valid and use it in\n> cassert builds and use it to validate sets (Richard)\n> 3. Provide a reusable function to copy a set and pfree the original\n> and use that instead of duplicating that code all over bitmapset.c\n> 4. Fix Assert duplication (Richard)\n> 5. Make comments in bms_union, bms_intersect, bms_difference clear\n> that a new set is allocated. (This has annoyed me for a while)\n> 6. Fix failure to duplicate the set in bms_add_members() when b == NULL.\n> 7. Fix failure to duplicate the set in bms_add_range() when upper < lower\n> 8. Fix failure to duplicate the set in bms_add_range() when the set\n> has enough words already.\n> 9. Fix failure to duplicate the set in bms_del_members() when b == NULL\n> 10. Fix failure to duplicate the set in bms_join() when a == NULL and\n> also fix the b == NULL case too\n> 11. Fix hazard in bms_add_members(), bms_int_members(),\n> bms_del_members() and bms_join(), where the code added in 7d58f2342\n> could crash if a == b due to the REALLOCATE_BITMAPSETS code pfreeing\n> 'a'. This happens in knapsack.c:93, although I propose we adjust that\n> code now that empty sets are always represented as NULL.\n\n\nThank you so much for all the work you have put into making this patch\nperfect. I reviewed through the v3 patch and I do not have further\ncomments. I think it's in good shape now.\n\nThanks\nRichard\n\nOn Tue, Jan 16, 2024 at 11:08 AM David Rowley <[email protected]> wrote:\nI've now adjusted the patch to have all modifications to Bitmapsets\nreturn a newly allocated set. There are a few cases missing in master\nthat need to be fixed (items 6-10 below):\n\nThe patch now includes changes for the following:\n\n1. Document what REALLOCATE_BITMAPSETS is for.\n2. Provide a reusable function to check a set is valid and use it in\ncassert builds and use it to validate sets (Richard)\n3. Provide a reusable function to copy a set and pfree the original\nand use that instead of duplicating that code all over bitmapset.c\n4. Fix Assert duplication (Richard)\n5. Make comments in bms_union, bms_intersect, bms_difference clear\nthat a new set is allocated. (This has annoyed me for a while)\n6. Fix failure to duplicate the set in bms_add_members() when b == NULL.\n7. Fix failure to duplicate the set in bms_add_range() when upper < lower\n8. Fix failure to duplicate the set in bms_add_range() when the set\nhas enough words already.\n9. Fix failure to duplicate the set in bms_del_members() when b == NULL\n10. Fix failure to duplicate the set in bms_join() when a == NULL and\nalso fix the b == NULL case too\n11. Fix hazard in bms_add_members(), bms_int_members(),\nbms_del_members() and bms_join(), where the code added in 7d58f2342\ncould crash if a == b due to the REALLOCATE_BITMAPSETS code pfreeing\n'a'. This happens in knapsack.c:93, although I propose we adjust that\ncode now that empty sets are always represented as NULL.Thank you so much for all the work you have put into making this patchperfect. I reviewed through the v3 patch and I do not have furthercomments. I think it's in good shape now.ThanksRichard",
"msg_date": "Tue, 16 Jan 2024 16:00:09 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revise the Asserts added to bimapset manipulation functions"
},
{
"msg_contents": "On Tue, 16 Jan 2024 at 21:00, Richard Guo <[email protected]> wrote:\n> Thank you so much for all the work you have put into making this patch\n> perfect. I reviewed through the v3 patch and I do not have further\n> comments. I think it's in good shape now.\n\nThanks for looking again. I pushed the patch after removing some\ncomments mentioning \"these operations\". I found these not to be very\nuseful and also misleading.\n\nDavid\n\n\n",
"msg_date": "Wed, 17 Jan 2024 09:44:57 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revise the Asserts added to bimapset manipulation functions"
}
] |
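The debugging idea this thread converges on, forcing every modifying operation to hand back a fresh allocation so that stale references get caught, is not specific to bitmapset.c. The sketch below applies it to a toy set type using plain malloc/free rather than PostgreSQL's memory contexts, so it is only an approximation of what REALLOCATE_BITMAPSETS does; the names are invented for the illustration. In the "reallocate" configuration each mutator copies the set, poisons and frees the original, so a caller that kept the old pointer now holds a clearly dangling reference that Valgrind or ASan will flag on the next access.

/* toyset_realloc.c -- standalone illustration of the reallocate-on-modify
 * debugging idea; not bitmapset.c, and plain malloc/free instead of palloc.
 */
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define REALLOCATE_TOYSETS 1    /* set to 0 for the "normal" behaviour */

typedef struct ToySet
{
    int         nwords;
    unsigned long words[1];     /* really [nwords] */
} ToySet;

#define TOYSET_SIZE(nwords) \
    (offsetof(ToySet, words) + (nwords) * sizeof(unsigned long))

static ToySet *
toyset_make(int nwords)
{
    ToySet *s = calloc(1, TOYSET_SIZE(nwords));

    s->nwords = nwords;
    return s;
}

/*
 * Return a fresh copy of 's' and poison+free the original, so any pointer
 * still referring to the old allocation is now obviously dangling.
 */
static ToySet *
toyset_copy_and_free(ToySet *s)
{
    size_t  size = TOYSET_SIZE(s->nwords);
    ToySet *copy = malloc(size);

    memcpy(copy, s, size);
    memset(s, 0x7f, size);      /* poison before freeing */
    free(s);
    return copy;
}

/* Add a member "in place"; under REALLOCATE_TOYSETS callers must use the
 * return value, exactly as they already must for bms_add_member(). */
static ToySet *
toyset_add_member(ToySet *s, int x)
{
    s->words[x / 64] |= 1UL << (x % 64);
#if REALLOCATE_TOYSETS
    return toyset_copy_and_free(s);
#else
    return s;
#endif
}

int
main(void)
{
    ToySet *a = toyset_make(2);
    ToySet *stale = a;          /* a second reference to the same set */

    a = toyset_add_member(a, 3);
    printf("set was reallocated: %s; a->words[0] = %lx\n",
           ((void *) a != (void *) stale) ? "yes" : "no", a->words[0]);
    /* Dereferencing 'stale' from here on is exactly the dangling-pointer
     * bug class this build mode is meant to expose. */
    free(a);
    return 0;
}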
[
{
"msg_contents": "Hello,\nHaving issues compiling PostgreSQL 16.1 targeting ARM32 architecture.\nUsing building, which currently has 15.5 version, and it compiles and runs well.\nCurrently using GCC 11 with binutils 2.39.\n\nDuring initdb, it gives error message:\n[356] FATAL: could not load library \"/pgsql16/lib/dict_snowball.so\": /pgsql16/lib/dict_snowball.so: undefined symbol: CurrentMemoryContext\nSTATEMENT: CREATE FUNCTION dsnowball_init(INTERNAL)\n RETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_init'\n LANGUAGE C STRICT;\n\nI've traced build log, and this is how it builds for version 13:\n/home/build/panelpi4/host/bin/arm-buildroot-linux-gnueabihf-gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -O1 -g0 -fPIC -I../../../src/include/snowball -I../../../src/include/snowball/libstemmer -I../../../src/include -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -I/home/build/panelpi4/host/arm-buildroot-linux-gnueabihf/sysroot/usr/bin/../../usr/include/libxml2 -c -o dict_snowball.o dict_snowball.c\n\nAnd this is version 16.1:\n/home/build/panelpi4/host/bin/arm-buildroot-linux-gnueabihf-gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wshadow=compatible-local -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -O1 -g0 -fPIC -fvisibility=hidden -I../../../src/include/snowball -I../../../src/include/snowball/libstemmer -I../../../src/include -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -I/home/build/panelpi4/host/arm-buildroot-linux-gnueabihf/sysroot/usr/bin/../../usr/include/libxml2 -c -o dict_snowball.o dict_snowball.c\n\nSo the new thing is: \"-fvisibility=hidden\", and there is discussion about it a year ago:\nhttps://www.postgresql.org/message-id/20220718220938.5yx7if2mru3rd6jc%40awork3.anarazel.de\n\nAny tips to go around this issue?\nThanks!\n\n\n\n",
"msg_date": "Wed, 27 Dec 2023 11:44:29 +0200",
"msg_from": "=?utf-8?Q?Vitalijus_Jefi=C5=A1ovas?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 16.1 dict_snowball.so: undefined symbol:\n CurrentMemoryContext"
}
] |
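There is no follow-up in this thread, so the actual cause in that buildroot environment is not established here. What the error shape does tell us in general is that a dlopen'd module referenced a global which the loading binary does not export, and -fvisibility=hidden is one common way a symbol drops out of an executable's dynamic symbol table. The two small files below are a generic GCC/ELF illustration of that mechanism, not a diagnosis of the report above; the file names and the host_counter symbol are invented. With the visibility attribute removed, dlopen fails with "undefined symbol: host_counter", the same shape as the dict_snowball error.

/* ---- module.c : cc -fPIC -shared module.c -o module.so ---- */
extern int  host_counter;       /* expected to come from the host binary */

int
module_bump(void)
{
    return ++host_counter;
}

/* ---- host.c : cc -fvisibility=hidden -rdynamic host.c -ldl -o host ---- */
#include <dlfcn.h>
#include <stdio.h>

/* With -fvisibility=hidden, only symbols explicitly marked "default" stay
 * in the executable's dynamic symbol table. Remove the attribute below and
 * dlopen() fails at load time with: undefined symbol: host_counter
 */
__attribute__((visibility("default")))
int host_counter = 0;

int
main(void)
{
    void   *h = dlopen("./module.so", RTLD_NOW);
    int     (*bump) (void);

    if (h == NULL)
    {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    bump = (int (*) (void)) dlsym(h, "module_bump");
    printf("host_counter is now %d\n", bump());
    return 0;
}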
[
{
"msg_contents": "Problem:\n--------\nToast works well for its claimed purposes. However the current detoast\ninfrastructure doesn't shared the detoast datum in the query's\nlifespan, which will cause some inefficiency like this:\n\nSELECT big_jsonb_col->'a', big_jsonb_col->'b', big_jsonb_col->'c'\nFROM t;\n\nIn the above case, the big_jsonb_col will be detoast 3 times for every\nrecord.\n\na more common case maybe:\n\nCREATE TABLE t(a numeric);\ninsert into t select i FROM generate_series(1, 10)i;\nSELECT * FROM t WHERE a > 3;\n\nHere we needs detoasted because of VARATT_CAN_MAKE_SHORT, and it needs to\nbe detoasted twice, one is in a > 3, the other one is in the targetlist,\nwhere we need to call numeric_out. \n\nProposal\n--------\n\nWhen we access some desired toast column in EEOP_{SCAN|INNER_OUTER}_VAR\nsteps, we can detoast it immediately and save it back to\nslot->tts_values. With this way, when any other expressions seeks for\nthe Datum, it get the detoast version. Planner decides which Vars\nshould use this feature, executor manages it detoast action and memory\nmanagement. \n\nPlanner design\n---------------\n\n1. This feature only happen at the Plan Node where the detoast would\n happen in the previous topology, for example:\n\n for example:\n \n SELECT t1.toast_col, t2.toast_col FROM t1 join t2 USING (toast_col);\n\n toast_col just happens at the Join node's slot even if we have a\n projection on t1 or t2 at the scan node (except the Parameterized path).\n\n However if\n\n SELECT t1.toast_col, t2.toast_col\n FROM t1 join t2\n USING (toast_col)\n WHERE t1.toast_col > 'a';\n\n the detoast *may* happen at the scan of level t1 since \"t1.toast_col >\n 'a'\" accesses the Var within a FuncCall ('>' operator), which will\n cause a detoast. (However it should not happen if it is under a Sort\n node, for details see Planner Design section 2).\n\n At the implementation side, I added \"Bitmapset *reference_attrs;\" to\n Scan node which show if the Var should be accessed with the\n pre-detoast way in expression execution engine. the value is\n maintained at the create_plan/set_plan_refs stage. \n\n Two similar fields are added in Join node.\n\n \tBitmapset *outer_reference_attrs;\n\tBitmapset *inner_reference_attrs;\n\n In the both case, I tracked the level of walker/mutator, if the level\n greater than 1 when we access a Var, the 'var->varattno - 1' is added\n to the bitmapset. Some special node should be ignored, see\n increase_level_for_pre_detoast for details.\n\n2. We also need to make sure the detoast datum will not increase the\n work_mem usage for the nodes like Sort/Hash etc, all of such nodes\n can be found with search 'CP_SMALL_TLIST' flags. \n\n If the a node under a Sort-Hash-like nodes, we have some extra\n checking to see if a Var is a *directly input* of such nodes. If yes,\n we can't detoast it in advance, or else, we know the Var has been\n discarded before goes to these nodes, we still can use the shared\n detoast feature.\n\n The simplest cases to show this is:\n\n For example:\n\n 2.1\n Query-1\n explain (verbose) select * from t1 where b > 'a';\n -- b can be detoast in advance.\n\n Query-2\n explain (verbose) select * from t1 where b > 'a' order by c;\n -- b can't be detoast since it will makes the Sort use more work_mem.\n\n Query-3\n explain (verbose) select a, c from t1 where b > 'a' order by c;\n -- b can be pre-detoasted, since it is discarded before goes to Sort\n node. In this case it doesn't do anything good, but for some complex\n case like Query-4, it does. 
\n\nQuery-4 \nexplain (costs off, verbose)\nselect t3.*\nfrom t1, t2, t3\nwhere t2.c > '999999999999999'\nand t2.c = t1.c\nand t3.b = t1.b;\n\n QUERY PLAN \n--------------------------------------------------------------\n Hash Join\n Output: t3.a, t3.b, t3.c\n Hash Cond: (t3.b = t1.b)\n -> Seq Scan on public.t3\n Output: t3.a, t3.b, t3.c\n -> Hash\n Output: t1.b\n -> Nested Loop\n Output: t1.b <-- Note here 3\n -> Seq Scan on public.t2\n Output: t2.a, t2.b, t2.c\n Filter: (t2.c > '9999...'::text) <--- Note Here 1\n -> Index Scan using t1_c_idx on public.t1\n Output: t1.a, t1.b, t1.c\n Index Cond: (t1.c = t2.c) <--- Note Here 2 \n(15 rows) \n \nIn this case, detoast datum for t2.c can be shared and it benefits for\nt2.c = t1.c and no harm for the Hash node.\n\n\nExecution side\n--------------\n\nOnce we decide a Var should be pre-detoasted for a given plan node, a\nspecial step named as EEOP_{INNER/OUTER/SCAN}_VAR_TOAST will be created\nduring ExecInitExpr stage. The special steps are introduced to avoid its\nimpacts on the non-related EEOP_{INNER/OUTER/SCAN}_VAR code path.\n\nslot->tts_values is used to store the detoast datum so that any other\nexpressions can access it pretty easily.\n\nBecause of the above design, the detoast datum should have a same\nlifespan as any other slot->tts_values[*], so the default\necxt_per_tuple_memory is not OK for this. At last I used slot->tts_mcxt\nto hold the memory, and maintaining these lifecycles in execTuples.c. To\nknow which datum in slot->tts_values is pre-detoasted, Bitmapset *\nslot->detoast_attrs is introduced.\n\nDuring the execution of these steps, below code like this is used:\n\nstatic inline void\nExecSlotDetoastDatum(TupleTableSlot *slot, int attnum)\n{\n\tif (!slot->tts_isnull[attnum] && VARATT_IS_EXTENDED(slot->tts_values[attnum]))\n\t{\n\t\tDatum\t\toldDatum;\n\t\tMemoryContext old = MemoryContextSwitchTo(slot->tts_mcxt);\n\n\t\toldDatum = slot->tts_values[attnum];\n\t\tslot->tts_values[attnum] = PointerGetDatum(detoast_attr(\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t(struct varlena *) oldDatum));\n\t\tAssert(slot->tts_nvalid > attnum);\n\t\tif (oldDatum != slot->tts_values[attnum])\n\t\t\tslot->pre_detoasted_attrs = bms_add_member(slot->pre_detoasted_attrs, attnum);\n\t\tMemoryContextSwitchTo(old);\n\t}\n}\n\n\nTesting\n-------\n- shared_detoast_slow.sql is used to test the planner related codes changes.\n\n 'set jit to off' will enable more INFO logs about which Var is\n pre-detoast in which node level.\n\n- the cases the patch doesn't help much.\n\ncreate table w(a int, b numeric);\ninsert into w select i, i from generate_series(1, 1000000)i;\n\nQ3:\nselect a from w where a > 0;\n\nQ4:\nselect b from w where b > 0;\n\npgbench -n -f 1.sql postgres -T 10 -M prepared\n\nrun 5 times and calculated the average value.\n\n| Qry No | Master | patched | perf | comment |\n|--------+---------+---------+-------+-----------------------------------------|\n| 3 | 309.438 | 308.411 | 0 | nearly zero impact on them |\n| 4 | 431.735 | 420.833 | +2.6% | save the detoast effort for numeric_out |\n\n- good case\n\n setup:\n\ncreate table b(big jsonb);\ninsert into b select\njsonb_object_agg( x::text,\nrandom()::text || random()::text || random()::text )\nfrom generate_series(1,600) f(x);\ninsert into b select (select big from b) from generate_series(1, 1000)i;\n\n\n workload:\n Q1:\n select big->'1', big->'2', big->'3', big->'5', big->'10' from b;\n\n Q2:\n select 1 from b where length(big->>'1') > 0 and length(big->>'2') > 2;\n\n\n| No | Master | patched 
|\n|----+--------+---------|\n| 1 | 1677 | 357 |\n| 2 | 638 | 332 |\n\n\nSome Known issues:\n------------------\n\n1. Currently only Scan & Join nodes are considered for this feature.\n2. JIT is not adapted for this purpose yet.\n3. This feature builds a strong relationship between slot->tts_values\n and slot->pre_detoast_attrs. for example slot->tts_values[1] holds a\n detoast datum, so 1 is in the slot->pre_detoast_attrs bitmapset,\n however if someone changes the value directly, like\n \"slot->tts_values[1] = 2;\" but without touching the slot's\n pre_detoast_attrs then troubles comes. The good thing is the detoast\n can only happen on Scan/Join nodes. so any other slot is impossible\n to have such issue. I run the following command to find out such\n cases, looks none of them is a slot form Scan/Join node.\n\n egrep -nri 'slot->tts_values\\[[^\\]]*\\] = *\n\nAny thought?\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 27 Dec 2023 18:45:25 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Shared detoast Datum proposal "
},
{
"msg_contents": "Andy Fan <[email protected]> writes:\n\n>\n> Some Known issues:\n> ------------------\n>\n> 1. Currently only Scan & Join nodes are considered for this feature.\n> 2. JIT is not adapted for this purpose yet.\n\nJIT is adapted for this feature in v2. Any feedback is welcome.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Mon, 01 Jan 2024 21:55:02 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "On Mon, 1 Jan 2024 at 19:26, Andy Fan <[email protected]> wrote:\n>\n>\n> Andy Fan <[email protected]> writes:\n>\n> >\n> > Some Known issues:\n> > ------------------\n> >\n> > 1. Currently only Scan & Join nodes are considered for this feature.\n> > 2. JIT is not adapted for this purpose yet.\n>\n> JIT is adapted for this feature in v2. Any feedback is welcome.\n\nOne of the tests was aborted at CFBOT [1] with:\n[09:47:00.735] dumping /tmp/cores/postgres-11-28182.core for\n/tmp/cirrus-ci-build/build/tmp_install//usr/local/pgsql/bin/postgres\n[09:47:01.035] [New LWP 28182]\n[09:47:01.748] [Thread debugging using libthread_db enabled]\n[09:47:01.748] Using host libthread_db library\n\"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n[09:47:09.392] Core was generated by `postgres: postgres regression\n[local] SELECT '.\n[09:47:09.392] Program terminated with signal SIGSEGV, Segmentation fault.\n[09:47:09.392] #0 0x00007fa4eed4a5a1 in ?? ()\n[09:47:11.123]\n[09:47:11.123] Thread 1 (Thread 0x7fa4f8050a40 (LWP 28182)):\n[09:47:11.123] #0 0x00007fa4eed4a5a1 in ?? ()\n[09:47:11.123] No symbol table info available.\n...\n...\n[09:47:11.123] #4 0x00007fa4ebc7a186 in LLVMOrcGetSymbolAddress () at\n/build/llvm-toolchain-11-HMpQvg/llvm-toolchain-11-11.0.1/llvm/lib/ExecutionEngine/Orc/OrcCBindings.cpp:124\n[09:47:11.123] No locals.\n[09:47:11.123] #5 0x00007fa4eed6fc7a in llvm_get_function\n(context=0x564b1813a8a0, funcname=0x7fa4eed4a570 \"AWAVATSH\\201\",\n<incomplete sequence \\354\\210>) at\n../src/backend/jit/llvm/llvmjit.c:460\n[09:47:11.123] addr = 94880527996960\n[09:47:11.123] __func__ = \"llvm_get_function\"\n[09:47:11.123] #6 0x00007fa4eed902e1 in ExecRunCompiledExpr\n(state=0x0, econtext=0x564b18269d20, isNull=0x7ffc11054d5f) at\n../src/backend/jit/llvm/llvmjit_expr.c:2577\n[09:47:11.123] cstate = <optimized out>\n[09:47:11.123] func = 0x564b18269d20\n[09:47:11.123] #7 0x0000564b1698e614 in ExecEvalExprSwitchContext\n(isNull=0x7ffc11054d5f, econtext=0x564b18269d20, state=0x564b182ad820)\nat ../src/include/executor/executor.h:355\n[09:47:11.123] retDatum = <optimized out>\n[09:47:11.123] oldContext = 0x564b182680d0\n[09:47:11.123] retDatum = <optimized out>\n[09:47:11.123] oldContext = <optimized out>\n[09:47:11.123] #8 ExecProject (projInfo=0x564b182ad818) at\n../src/include/executor/executor.h:389\n[09:47:11.123] econtext = 0x564b18269d20\n[09:47:11.123] state = 0x564b182ad820\n[09:47:11.123] slot = 0x564b182ad788\n[09:47:11.123] isnull = false\n[09:47:11.123] econtext = <optimized out>\n[09:47:11.123] state = <optimized out>\n[09:47:11.123] slot = <optimized out>\n[09:47:11.123] isnull = <optimized out>\n[09:47:11.123] #9 ExecMergeJoin (pstate=<optimized out>) at\n../src/backend/executor/nodeMergejoin.c:836\n[09:47:11.123] node = <optimized out>\n[09:47:11.123] joinqual = 0x0\n[09:47:11.123] otherqual = 0x0\n[09:47:11.123] qualResult = <optimized out>\n[09:47:11.123] compareResult = <optimized out>\n[09:47:11.123] innerPlan = <optimized out>\n[09:47:11.123] innerTupleSlot = <optimized out>\n[09:47:11.123] outerPlan = <optimized out>\n[09:47:11.123] outerTupleSlot = <optimized out>\n[09:47:11.123] econtext = 0x564b18269d20\n[09:47:11.123] doFillOuter = false\n[09:47:11.123] doFillInner = false\n[09:47:11.123] __func__ = \"ExecMergeJoin\"\n[09:47:11.123] #10 0x0000564b169275b9 in ExecProcNodeFirst\n(node=0x564b18269db0) at ../src/backend/executor/execProcnode.c:464\n[09:47:11.123] No locals.\n[09:47:11.123] #11 0x0000564b169a2675 in ExecProcNode\n(node=0x564b18269db0) at 
../src/include/executor/executor.h:273\n[09:47:11.123] No locals.\n[09:47:11.123] #12 ExecRecursiveUnion (pstate=0x564b182684a0) at\n../src/backend/executor/nodeRecursiveunion.c:115\n[09:47:11.123] node = 0x564b182684a0\n[09:47:11.123] outerPlan = 0x564b18268d00\n[09:47:11.123] innerPlan = 0x564b18269db0\n[09:47:11.123] plan = 0x564b182ddc78\n[09:47:11.123] slot = <optimized out>\n[09:47:11.123] isnew = false\n[09:47:11.123] #13 0x0000564b1695a421 in ExecProcNode\n(node=0x564b182684a0) at ../src/include/executor/executor.h:273\n[09:47:11.123] No locals.\n[09:47:11.123] #14 CteScanNext (node=0x564b183a6830) at\n../src/backend/executor/nodeCtescan.c:103\n[09:47:11.123] cteslot = <optimized out>\n[09:47:11.123] estate = <optimized out>\n[09:47:11.123] dir = ForwardScanDirection\n[09:47:11.123] forward = true\n[09:47:11.123] tuplestorestate = 0x564b183a5cd0\n[09:47:11.123] eof_tuplestore = <optimized out>\n[09:47:11.123] slot = 0x564b183a6be0\n[09:47:11.123] #15 0x0000564b1692e22b in ExecScanFetch\n(node=node@entry=0x564b183a6830,\naccessMtd=accessMtd@entry=0x564b1695a183 <CteScanNext>,\nrecheckMtd=recheckMtd@entry=0x564b16959db3 <CteScanRecheck>) at\n../src/backend/executor/execScan.c:132\n[09:47:11.123] estate = <optimized out>\n[09:47:11.123] #16 0x0000564b1692e332 in ExecScan\n(node=0x564b183a6830, accessMtd=accessMtd@entry=0x564b1695a183\n<CteScanNext>, recheckMtd=recheckMtd@entry=0x564b16959db3\n<CteScanRecheck>) at ../src/backend/executor/execScan.c:181\n[09:47:11.123] econtext = 0x564b183a6b50\n[09:47:11.123] qual = 0x0\n[09:47:11.123] projInfo = 0x0\n[09:47:11.123] #17 0x0000564b1695a5ea in ExecCteScan\n(pstate=<optimized out>) at ../src/backend/executor/nodeCtescan.c:164\n[09:47:11.123] node = <optimized out>\n[09:47:11.123] #18 0x0000564b169a81da in ExecProcNode\n(node=0x564b183a6830) at ../src/include/executor/executor.h:273\n[09:47:11.123] No locals.\n[09:47:11.123] #19 ExecSort (pstate=0x564b183a6620) at\n../src/backend/executor/nodeSort.c:149\n[09:47:11.123] plannode = 0x564b182dfb78\n[09:47:11.123] outerNode = 0x564b183a6830\n[09:47:11.123] tupDesc = <optimized out>\n[09:47:11.123] tuplesortopts = <optimized out>\n[09:47:11.123] node = 0x564b183a6620\n[09:47:11.123] estate = 0x564b182681d0\n[09:47:11.123] dir = ForwardScanDirection\n[09:47:11.123] tuplesortstate = 0x564b18207ff0\n[09:47:11.123] slot = <optimized out>\n[09:47:11.123] #20 0x0000564b169275b9 in ExecProcNodeFirst\n(node=0x564b183a6620) at ../src/backend/executor/execProcnode.c:464\n[09:47:11.123] No locals.\n[09:47:11.123] #21 0x0000564b16913d01 in ExecProcNode\n(node=0x564b183a6620) at ../src/include/executor/executor.h:273\n[09:47:11.123] No locals.\n[09:47:11.123] #22 ExecutePlan (estate=estate@entry=0x564b182681d0,\nplanstate=0x564b183a6620, use_parallel_mode=<optimized out>,\noperation=operation@entry=CMD_SELECT,\nsendTuples=sendTuples@entry=true, numberTuples=numberTuples@entry=0,\ndirection=ForwardScanDirection, dest=0x564b182e2728,\nexecute_once=true) at ../src/backend/executor/execMain.c:1670\n[09:47:11.123] slot = <optimized out>\n[09:47:11.123] current_tuple_count = 0\n[09:47:11.123] #23 0x0000564b16914024 in standard_ExecutorRun\n(queryDesc=0x564b181ba200, direction=ForwardScanDirection, count=0,\nexecute_once=<optimized out>) at\n../src/backend/executor/execMain.c:365\n[09:47:11.123] estate = 0x564b182681d0\n[09:47:11.123] operation = CMD_SELECT\n[09:47:11.123] dest = 0x564b182e2728\n[09:47:11.123] sendTuples = true\n[09:47:11.123] oldcontext = 0x564b181ba100\n[09:47:11.123] __func__ = 
\"standard_ExecutorRun\"\n[09:47:11.123] #24 0x0000564b1691418f in ExecutorRun\n(queryDesc=queryDesc@entry=0x564b181ba200,\ndirection=direction@entry=ForwardScanDirection, count=count@entry=0,\nexecute_once=<optimized out>) at\n../src/backend/executor/execMain.c:309\n[09:47:11.123] No locals.\n[09:47:11.123] #25 0x0000564b16d208af in PortalRunSelect\n(portal=portal@entry=0x564b1817ae10, forward=forward@entry=true,\ncount=0, count@entry=9223372036854775807,\ndest=dest@entry=0x564b182e2728) at ../src/backend/tcop/pquery.c:924\n[09:47:11.123] queryDesc = 0x564b181ba200\n[09:47:11.123] direction = <optimized out>\n[09:47:11.123] nprocessed = <optimized out>\n[09:47:11.123] __func__ = \"PortalRunSelect\"\n[09:47:11.123] #26 0x0000564b16d2405b in PortalRun\n(portal=portal@entry=0x564b1817ae10,\ncount=count@entry=9223372036854775807,\nisTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true,\ndest=dest@entry=0x564b182e2728, altdest=altdest@entry=0x564b182e2728,\nqc=0x7ffc110551e0) at ../src/backend/tcop/pquery.c:768\n[09:47:11.123] _save_exception_stack = 0x7ffc11055290\n[09:47:11.123] _save_context_stack = 0x0\n[09:47:11.123] _local_sigjmp_buf = {{__jmpbuf = {1,\n3431825231787999889, 94880528213800, 94880526221072, 94880526741008,\n94880526221000, -3433879442176195951, -8991885832768699759},\n__mask_was_saved = 0, __saved_mask = {__val = {140720594047343, 688,\n94880526749216, 94880511205302, 1, 140720594047343, 94880526999808, 8,\n94880526213088, 112, 179, 94880526221072, 94880526221048,\n94880526221000, 94880508502828, 2}}}}\n[09:47:11.123] _do_rethrow = <optimized out>\n[09:47:11.123] result = <optimized out>\n[09:47:11.123] nprocessed = <optimized out>\n[09:47:11.123] saveTopTransactionResourceOwner = 0x564b18139ad8\n[09:47:11.123] saveTopTransactionContext = 0x564b181229f0\n[09:47:11.123] saveActivePortal = 0x0\n[09:47:11.123] saveResourceOwner = 0x564b18139ad8\n[09:47:11.123] savePortalContext = 0x0\n[09:47:11.123] saveMemoryContext = 0x564b181229f0\n[09:47:11.123] __func__ = \"PortalRun\"\n[09:47:11.123] #27 0x0000564b16d1d098 in exec_simple_query\n(query_string=query_string@entry=0x564b180fa0e0 \"with recursive\nsearch_graph(f, t, label) as (\\n\\tselect * from graph0 g\\n\\tunion\nall\\n\\tselect g.*\\n\\tfrom graph0 g, search_graph sg\\n\\twhere g.f =\nsg.t\\n) search depth first by f, t set seq\\nselect * from search\"...)\nat ../src/backend/tcop/postgres.c:1273\n[09:47:11.123] cmdtaglen = 6\n[09:47:11.123] snapshot_set = <optimized out>\n[09:47:11.123] per_parsetree_context = 0x0\n[09:47:11.123] plantree_list = 0x564b182e26d8\n[09:47:11.123] parsetree = 0x564b180fbec8\n[09:47:11.123] commandTag = <optimized out>\n[09:47:11.123] qc = {commandTag = CMDTAG_UNKNOWN, nprocessed = 0}\n[09:47:11.123] querytree_list = <optimized out>\n[09:47:11.123] portal = 0x564b1817ae10\n[09:47:11.123] receiver = 0x564b182e2728\n[09:47:11.123] format = 0\n[09:47:11.123] cmdtagname = <optimized out>\n[09:47:11.123] parsetree_item__state = {l = <optimized out>, i\n= <optimized out>}\n[09:47:11.123] dest = DestRemote\n[09:47:11.123] oldcontext = 0x564b181229f0\n[09:47:11.123] parsetree_list = 0x564b180fbef8\n[09:47:11.123] parsetree_item = 0x564b180fbf10\n[09:47:11.123] save_log_statement_stats = false\n[09:47:11.123] was_logged = false\n[09:47:11.123] use_implicit_block = false\n[09:47:11.123] msec_str =\n\"\\004\\000\\000\\000\\000\\000\\000\\000\\346[\\376\\026KV\\000\\000pR\\005\\021\\374\\177\\000\\000\\335\\000\\000\\000\\000\\000\\000\"\n[09:47:11.123] __func__ = 
\"exec_simple_query\"\n[09:47:11.123] #28 0x0000564b16d1fe33 in PostgresMain\n(dbname=<optimized out>, username=<optimized out>) at\n../src/backend/tcop/postgres.c:4653\n[09:47:11.123] query_string = 0x564b180fa0e0 \"with recursive\nsearch_graph(f, t, label) as (\\n\\tselect * from graph0 g\\n\\tunion\nall\\n\\tselect g.*\\n\\tfrom graph0 g, search_graph sg\\n\\twhere g.f =\nsg.t\\n) search depth first by f, t set seq\\nselect * from search\"...\n[09:47:11.123] firstchar = <optimized out>\n[09:47:11.123] input_message = {data = 0x564b180fa0e0 \"with\nrecursive search_graph(f, t, label) as (\\n\\tselect * from graph0\ng\\n\\tunion all\\n\\tselect g.*\\n\\tfrom graph0 g, search_graph\nsg\\n\\twhere g.f = sg.t\\n) search depth first by f, t set seq\\nselect *\nfrom search\"..., len = 221, maxlen = 1024, cursor = 221}\n[09:47:11.123] local_sigjmp_buf = {{__jmpbuf =\n{94880520543032, -8991887800862096751, 0, 4, 140720594048004, 1,\n-3433879442247499119, -8991885813233991023}, __mask_was_saved = 1,\n__saved_mask = {__val = {4194304, 1, 140346553036196, 94880526187920,\n15616, 15680, 94880508418872, 0, 94880526187920, 15616,\n94880520537224, 4, 140720594048004, 1, 94880508502235, 1}}}}\n[09:47:11.123] send_ready_for_query = false\n[09:47:11.123] idle_in_transaction_timeout_enabled = false\n[09:47:11.123] idle_session_timeout_enabled = false\n[09:47:11.123] __func__ = \"PostgresMain\"\n[09:47:11.123] #29 0x0000564b16bcc4e4 in BackendRun\n(port=port@entry=0x564b18126f50) at\n../src/backend/postmaster/postmaster.c:4464\n\n[1] - https://cirrus-ci.com/task/4765094966460416\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 6 Jan 2024 20:42:42 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Hi,\n\nvignesh C <[email protected]> writes:\n\n> On Mon, 1 Jan 2024 at 19:26, Andy Fan <[email protected]> wrote:\n>>\n>>\n>> Andy Fan <[email protected]> writes:\n>>\n>> >\n>> > Some Known issues:\n>> > ------------------\n>> >\n>> > 1. Currently only Scan & Join nodes are considered for this feature.\n>> > 2. JIT is not adapted for this purpose yet.\n>>\n>> JIT is adapted for this feature in v2. Any feedback is welcome.\n>\n> One of the tests was aborted at CFBOT [1] with:\n> [09:47:00.735] dumping /tmp/cores/postgres-11-28182.core for\n> /tmp/cirrus-ci-build/build/tmp_install//usr/local/pgsql/bin/postgres\n> [09:47:01.035] [New LWP 28182]\n\nThere was a bug in JIT part, here is the fix. Thanks for taking care of\nthis!\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sun, 07 Jan 2024 09:10:13 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Andy Fan <[email protected]> writes:\n\n>>\n>> One of the tests was aborted at CFBOT [1] with:\n>> [09:47:00.735] dumping /tmp/cores/postgres-11-28182.core for\n>> /tmp/cirrus-ci-build/build/tmp_install//usr/local/pgsql/bin/postgres\n>> [09:47:01.035] [New LWP 28182]\n>\n> There was a bug in JIT part, here is the fix. Thanks for taking care of\n> this!\n\nFixed a GCC warning in cirrus-ci, hope everything is fine now.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Mon, 08 Jan 2024 17:52:57 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4759/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4759\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 17:41:56 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nHi,\n\nPeter Smith <[email protected]> writes:\n\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there were CFbot test failures last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n>\n> ======\n> [1] https://commitfest.postgresql.org/46/4759/\n> [2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4759\n\nv5 attached, it should fix the above issue. This version also introduce\na data struct called bitset, which has a similar APIs like bitmapset but\nhave the ability to reset all bits without recycle its allocated memory,\nthis is important for this feature.\n\ncommit 44754fb03accb0dec9710a962a334ee73eba3c49 (HEAD -> shared_detoast_value_v2)\nAuthor: yizhi.fzh <[email protected]>\nDate: Tue Jan 23 13:38:34 2024 +0800\n\n shared detoast feature.\n\ncommit 14a6eafef9ff4926b8b877d694de476657deee8a\nAuthor: yizhi.fzh <[email protected]>\nDate: Mon Jan 22 15:48:33 2024 +0800\n\n Introduce a Bitset data struct.\n \n While Bitmapset is designed for variable-length of bits, Bitset is\n designed for fixed-length of bits, the fixed length must be specified at\n the bitset_init stage and keep unchanged at the whole lifespan. Because\n of this, some operations on Bitset is simpler than Bitmapset.\n \n The bitset_clear unsets all the bits but kept the allocated memory, this\n capacity is impossible for bit Bitmapset for some solid reasons and this\n is the main reason to add this data struct.\n \n Also for performance aspect, the functions for Bitset removed some\n unlikely checks, instead with some Asserts.\n \n [1] https://postgr.es/m/CAApHDvpdp9LyAoMXvS7iCX-t3VonQM3fTWCmhconEvORrQ%2BZYA%40mail.gmail.com\n [2] https://postgr.es/m/875xzqxbv5.fsf%40163.com\n\n\nI didn't write a good commit message for commit 2, the people who is\ninterested with this can see the first message in this thread for\nexplaination. \n\nI think anyone whose customer uses lots of jsonb probably can get\nbenefits from this. the precondition is the toast value should be\naccessed 1+ times, including the jsonb_out function. I think this would\nbe not rare to happen.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Tue, 23 Jan 2024 13:44:50 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Hi Andy,\n\nIt looks like v5 is missing in your mail. Could you please check and \nresend it?\n\nThanks,\n Michael.\n\nOn 1/23/24 08:44, Andy Fan wrote:\n> Hi,\n>\n> Peter Smith<[email protected]> writes:\n>\n>> 2024-01 Commitfest.\n>>\n>> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n>> there were CFbot test failures last time it was run [2]. Please have a\n>> look and post an updated version if necessary.\n>>\n>> ======\n>> [1]https://commitfest.postgresql.org/46/4759/\n>> [2]https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4759\n> v5 attached, it should fix the above issue. This version also introduce\n> a data struct called bitset, which has a similar APIs like bitmapset but\n> have the ability to reset all bits without recycle its allocated memory,\n> this is important for this feature.\n>\n> commit 44754fb03accb0dec9710a962a334ee73eba3c49 (HEAD -> shared_detoast_value_v2)\n> Author: yizhi.fzh<[email protected]>\n> Date: Tue Jan 23 13:38:34 2024 +0800\n>\n> shared detoast feature.\n>\n> commit 14a6eafef9ff4926b8b877d694de476657deee8a\n> Author: yizhi.fzh<[email protected]>\n> Date: Mon Jan 22 15:48:33 2024 +0800\n>\n> Introduce a Bitset data struct.\n> \n> While Bitmapset is designed for variable-length of bits, Bitset is\n> designed for fixed-length of bits, the fixed length must be specified at\n> the bitset_init stage and keep unchanged at the whole lifespan. Because\n> of this, some operations on Bitset is simpler than Bitmapset.\n> \n> The bitset_clear unsets all the bits but kept the allocated memory, this\n> capacity is impossible for bit Bitmapset for some solid reasons and this\n> is the main reason to add this data struct.\n> \n> Also for performance aspect, the functions for Bitset removed some\n> unlikely checks, instead with some Asserts.\n> \n> [1]https://postgr.es/m/CAApHDvpdp9LyAoMXvS7iCX-t3VonQM3fTWCmhconEvORrQ%2BZYA%40mail.gmail.com\n> [2]https://postgr.es/m/875xzqxbv5.fsf%40163.com\n>\n>\n> I didn't write a good commit message for commit 2, the people who is\n> interested with this can see the first message in this thread for\n> explaination.\n>\n> I think anyone whose customer uses lots of jsonb probably can get\n> benefits from this. the precondition is the toast value should be\n> accessed 1+ times, including the jsonb_out function. I think this would\n> be not rare to happen.\n>\n\n-- \nMichael Zhilin\nPostgres Professional\n+7(925)3366270\nhttps://www.postgrespro.ru\n\n\n\n\n\n\nHi Andy,\n\n It looks like v5 is missing in your mail. Could you please check\n and resend it?\n\n Thanks,\n Michael.\n\nOn 1/23/24 08:44, Andy Fan wrote:\n\n\n\nHi,\n\nPeter Smith <[email protected]> writes:\n\n\n\n2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4759/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4759\n\n\n\nv5 attached, it should fix the above issue. 
This version also introduce\na data struct called bitset, which has a similar APIs like bitmapset but\nhave the ability to reset all bits without recycle its allocated memory,\nthis is important for this feature.\n\ncommit 44754fb03accb0dec9710a962a334ee73eba3c49 (HEAD -> shared_detoast_value_v2)\nAuthor: yizhi.fzh <[email protected]>\nDate: Tue Jan 23 13:38:34 2024 +0800\n\n shared detoast feature.\n\ncommit 14a6eafef9ff4926b8b877d694de476657deee8a\nAuthor: yizhi.fzh <[email protected]>\nDate: Mon Jan 22 15:48:33 2024 +0800\n\n Introduce a Bitset data struct.\n \n While Bitmapset is designed for variable-length of bits, Bitset is\n designed for fixed-length of bits, the fixed length must be specified at\n the bitset_init stage and keep unchanged at the whole lifespan. Because\n of this, some operations on Bitset is simpler than Bitmapset.\n \n The bitset_clear unsets all the bits but kept the allocated memory, this\n capacity is impossible for bit Bitmapset for some solid reasons and this\n is the main reason to add this data struct.\n \n Also for performance aspect, the functions for Bitset removed some\n unlikely checks, instead with some Asserts.\n \n [1] https://postgr.es/m/CAApHDvpdp9LyAoMXvS7iCX-t3VonQM3fTWCmhconEvORrQ%2BZYA%40mail.gmail.com\n [2] https://postgr.es/m/875xzqxbv5.fsf%40163.com\n\n\nI didn't write a good commit message for commit 2, the people who is\ninterested with this can see the first message in this thread for\nexplaination. \n\nI think anyone whose customer uses lots of jsonb probably can get\nbenefits from this. the precondition is the toast value should be\naccessed 1+ times, including the jsonb_out function. I think this would\nbe not rare to happen.\n\n\n\n\n-- \nMichael Zhilin\nPostgres Professional\n+7(925)3366270\nhttps://www.postgrespro.ru",
"msg_date": "Tue, 23 Jan 2024 11:59:00 +0300",
"msg_from": "Michael Zhilin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Michael Zhilin <[email protected]> writes:\n\n> Hi Andy,\n>\n> It looks like v5 is missing in your mail. Could you please check and resend it?\n\nha, yes.. v5 is really attached this time.\n\ncommit eee0b2058f912d0d56282711c5d88bc0b1b75c2f (HEAD -> shared_detoast_value_v3)\nAuthor: yizhi.fzh <[email protected]>\nDate: Tue Jan 23 13:38:34 2024 +0800\n\n shared detoast feature.\n \n details at https://postgr.es/m/87il4jrk1l.fsf%40163.com\n\ncommit eeca405f5ae87e7d4e5496de989ac7b5173bcaa9\nAuthor: yizhi.fzh <[email protected]>\nDate: Mon Jan 22 15:48:33 2024 +0800\n\n Introduce a Bitset data struct.\n \n While Bitmapset is designed for variable-length of bits, Bitset is\n designed for fixed-length of bits, the fixed length must be specified at\n the bitset_init stage and keep unchanged at the whole lifespan. Because\n of this, some operations on Bitset is simpler than Bitmapset.\n \n The bitset_clear unsets all the bits but kept the allocated memory, this\n capacity is impossible for bit Bitmapset for some solid reasons and this\n is the main reason to add this data struct.\n \n Also for performance aspect, the functions for Bitset removed some\n unlikely checks, instead with some Asserts.\n \n [1] https://postgr.es/m/CAApHDvpdp9LyAoMXvS7iCX-t3VonQM3fTWCmhconEvORrQ%2BZYA%40mail.gmail.com\n [2] https://postgr.es/m/875xzqxbv5.fsf%40163.com\n\n\nAs for the commit \"Introduce a Bitset data struct.\", the test coverage\nis 100% now. So it would be great that people can review this first. \n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 24 Jan 2024 03:18:18 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
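The fixed-length Bitset described in the commit message above can be pictured with a short standalone sketch. Everything below (names, 64-bit word size, struct layout) is an illustrative assumption rather than the patch's actual bitset code; the point it demonstrates is that clearing only zeroes the existing words instead of freeing them, which is what lets a slot reuse the same set for every tuple without touching the allocator.

/*
 * Minimal sketch of a fixed-length bit set whose "clear" keeps the
 * allocated memory.  Illustrative only; not the patch's bitset.c.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

typedef struct Bitset
{
	int			nbits;			/* fixed at bitset_init() time */
	int			nwords;			/* number of 64-bit words backing the bits */
	uint64_t	words[];		/* flexible array member */
} Bitset;

static Bitset *
bitset_init(int nbits)
{
	int			nwords = (nbits + 63) / 64;
	Bitset	   *bs;

	bs = calloc(1, offsetof(Bitset, words) + nwords * sizeof(uint64_t));
	bs->nbits = nbits;
	bs->nwords = nwords;
	return bs;
}

static void
bitset_add_member(Bitset *bs, int n)
{
	bs->words[n / 64] |= UINT64_C(1) << (n % 64);
}

static bool
bitset_is_member(const Bitset *bs, int n)
{
	return (bs->words[n / 64] & (UINT64_C(1) << (n % 64))) != 0;
}

/*
 * Unlike deleting all members of a Bitmapset, clearing does not release
 * the allocation, so the next tuple can refill the set cheaply.
 */
static void
bitset_clear(Bitset *bs)
{
	memset(bs->words, 0, bs->nwords * sizeof(uint64_t));
}

In this sketch a caller would bitset_init() once per slot, bitset_add_member() as attributes are detoasted, and bitset_clear() when the slot is cleared, so the per-tuple path never allocates or frees.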
{
"msg_contents": "Hi,\n\nI didn't another round of self-review. Comments, variable names, the\norder of function definition are improved so that it can be read as\nsmooth as possible. so v6 attached.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Tue, 20 Feb 2024 14:20:36 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Hi,\n\nI took a quick look on this thread/patch today, so let me share a couple\ninitial thoughts. I may not have a particularly coherent/consistent\nopinion on the patch or what would be a better way to do this yet, but\nperhaps it'll start a discussion ...\n\n\nThe goal of the patch (as I understand it) is essentially to cache\ndetoasted values, so that the value does not need to be detoasted\nrepeatedly in different parts of the plan. I think that's perfectly\nsensible and worthwhile goal - detoasting is not cheap, and complex\nplans may easily spend a lot of time on it.\n\nThat being said, the approach seems somewhat invasive, and touching\nparts I wouldn't have expected to need a change to implement this. For\nexample, I certainly would not have guessed the patch to need changes in\ncreateplan.c or setrefs.c.\n\nPerhaps it really needs to do these things, but neither the thread nor\nthe comments are very enlightening as for why it's needed :-( In many\ncases I can guess, but I'm not sure my guess is correct. And comments in\ncode generally describe what's happening locally / next line, not the\nbigger picture & why it's happening.\n\nIIUC we walk the plan to decide which Vars should be detoasted (and\ncached) once, and which cases should not do that because it'd inflate\nthe amount of data we need to keep in a Sort, Hash etc. Not sure if\nthere's a better way to do this - it depends on what happens in the\nupper parts of the plan, so we can't decide while building the paths.\n\nBut maybe we could decide this while transforming the paths into a plan?\n(I realize the JIT thread nearby needs to do something like that in\ncreate_plan, and in that one I suggested maybe walking the plan would be\na better approach, so I may be contradicting myself a little bit.).\n\nIn any case, set_plan_forbid_pre_detoast_vars_recurse should probably\nexplain the overall strategy / reasoning in a bit more detail. Maybe\nit's somewhere in this thread, but that's not great for reviewers.\n\nSimilar for the setrefs.c changes. It seems a bit suspicious to piggy\nback the new code into fix_scan_expr/fix_scan_list and similar code.\nThose functions have a pretty clearly defined purpose, not sure we want\nto also extend them to also deal with this new thing. (FWIW I'd 100%%\ndid it this way if I hacked on a PoC of this, to make it work. But I'm\nnot sure it's the right solution.)\n\nI don't know what to thing about the Bitset - maybe it's necessary, but\nhow would I know? I don't have any way to measure the benefits, because\nthe 0002 patch uses it right away. I think it should be done the other\nway around, i.e. the patch should introduce the main feature first\n(using the traditional Bitmapset), and then add Bitset on top of that.\nThat way we could easily measure the impact and see if it's useful.\n\nOn the whole, my biggest concern is memory usage & leaks. It's not\ndifficult to already have problems with large detoasted values, and if\nwe start keeping more of them, that may get worse. Or at least that's my\nintuition - it can't really get better by keeping the values longer, right?\n\nThe other thing is the risk of leaks (in the sense of keeping detoasted\nvalues longer than expected). I see the values are allocated in\ntts_mcxt, and maybe that's the right solution - not sure.\n\n\nFWIW while looking at the patch, I couldn't help but to think about\nexpanded datums. 
There's similarity in what these two features do - keep\ndetoasted values for a while, so that we don't need to do the expensive\nprocessing if we access them repeatedly. Of course, expanded datums are\nnot meant to be long-lived, while \"shared detoasted values\" are meant to\nexist (potentially) for the query duration. But maybe there's something\nwe could learn from expanded datums? For example how the varlena pointer\nis leveraged to point to the expanded object.\n\nFor example, what if we add a \"TOAST cache\" as a query-level hash table,\nand modify the detoasting to first check the hash table (with the TOAST\npointer as a key)? It'd be fairly trivial to enforce a memory limit on\nthe hash table, evict values from it, etc. And it wouldn't require any\nof the createplan/setrefs changes, I think ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 20 Feb 2024 19:15:34 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
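The query-level "TOAST cache" idea floated at the end of the review above could be sketched with a dynahash table keyed by the toast pointer's identity. The names, the (toastrelid, valueid) key choice, and the memory-context handling below are assumptions made for illustration; no such code appears in the posted patches.

/*
 * Rough sketch of a query-level cache of detoasted values, keyed by the
 * toast pointer's va_toastrelid/va_valueid pair.  Illustrative only.
 */
#include "postgres.h"
#include "utils/hsearch.h"

typedef struct ToastCacheKey
{
	Oid			toastrelid;		/* toast table the value lives in */
	Oid			valueid;		/* va_valueid of the toast pointer */
} ToastCacheKey;

typedef struct ToastCacheEntry
{
	ToastCacheKey key;
	struct varlena *detoasted;	/* detoasted copy, query-lifetime memory */
} ToastCacheEntry;

static HTAB *toast_cache = NULL;

static void
toast_cache_init(MemoryContext queryctx)
{
	HASHCTL		ctl;

	ctl.keysize = sizeof(ToastCacheKey);
	ctl.entrysize = sizeof(ToastCacheEntry);
	ctl.hcxt = queryctx;		/* assumed to be a query-level context */

	toast_cache = hash_create("toast value cache", 128, &ctl,
							  HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
}

static struct varlena *
toast_cache_lookup(Oid toastrelid, Oid valueid, bool *found)
{
	ToastCacheKey key;
	ToastCacheEntry *entry;

	key.toastrelid = toastrelid;
	key.valueid = valueid;
	entry = hash_search(toast_cache, &key, HASH_FIND, found);
	return entry ? entry->detoasted : NULL;
}

Detoasting would consult toast_cache_lookup() first and fall back to the real detoast only when the value is absent, which is what makes a memory limit and eviction easy to enforce in one place.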
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n\nHi Tomas, \n>\n> I took a quick look on this thread/patch today, so let me share a couple\n> initial thoughts. I may not have a particularly coherent/consistent\n> opinion on the patch or what would be a better way to do this yet, but\n> perhaps it'll start a discussion ...\n\nThank you for this!\n\n>\n> The goal of the patch (as I understand it) is essentially to cache\n> detoasted values, so that the value does not need to be detoasted\n> repeatedly in different parts of the plan. I think that's perfectly\n> sensible and worthwhile goal - detoasting is not cheap, and complex\n> plans may easily spend a lot of time on it.\n\nexactly.\n\n>\n> That being said, the approach seems somewhat invasive, and touching\n> parts I wouldn't have expected to need a change to implement this. For\n> example, I certainly would not have guessed the patch to need changes in\n> createplan.c or setrefs.c.\n>\n> Perhaps it really needs to do these things, but neither the thread nor\n> the comments are very enlightening as for why it's needed :-( In many\n> cases I can guess, but I'm not sure my guess is correct. And comments in\n> code generally describe what's happening locally / next line, not the\n> bigger picture & why it's happening.\n\nthere were explaination at [1], but it probably is too high level.\nWriting a proper comments is challenging for me, but I am pretty happy\nto try more. At the end of this writing, I explained the data workflow,\nI am feeling that would be useful for reviewers. \n\n> IIUC we walk the plan to decide which Vars should be detoasted (and\n> cached) once, and which cases should not do that because it'd inflate\n> the amount of data we need to keep in a Sort, Hash etc.\n\nExactly.\n\n> Not sure if\n> there's a better way to do this - it depends on what happens in the\n> upper parts of the plan, so we can't decide while building the paths.\n\nI'd say I did this intentionally. Deciding such things in paths will be\nmore expensive than create_plan stage IMO.\n\n> But maybe we could decide this while transforming the paths into a plan?\n> (I realize the JIT thread nearby needs to do something like that in\n> create_plan, and in that one I suggested maybe walking the plan would be\n> a better approach, so I may be contradicting myself a little bit.).\n\nI think that's pretty similar what I'm doing now. Just that I did it\n*just after* the create_plan. This is because the create_plan doesn't\ntransform the path to plan in the top->down manner all the time, the\nknown exception is create_mergejoin_plan. so I have to walk just after\nthe create_plan is \ndone.\n\nIn the create_mergejoin_plan, the Sort node is created *after* the\nsubplan for the Sort is created.\n\n/* Recursively process the path tree, demanding the correct tlist result */\nplan = create_plan_recurse(root, best_path, CP_EXACT_TLIST);\n\n+ /*\n+ * After the plan tree is built completed, we start to walk for which\n+ * expressions should not used the shared-detoast feature.\n+ */\n+ set_plan_forbid_pre_detoast_vars_recurse(plan, NIL);\n\n>\n> In any case, set_plan_forbid_pre_detoast_vars_recurse should probably\n> explain the overall strategy / reasoning in a bit more detail. 
Maybe\n> it's somewhere in this thread, but that's not great for reviewers.\n\na lession learnt, thanks.\n\na revisted version of comments from the lastest patch.\n\n/*\n * set_plan_forbid_pre_detoast_vars_recurse\n *\t Walking the Plan tree in the top-down manner to gather the vars which\n * should be as small as possible and record them in Plan.forbid_pre_detoast_vars\n * \n * plan: the plan node to walk right now.\n * small_tlist: a list of nodes which its subplan should provide them as\n * small as possible.\n */\nstatic void\nset_plan_forbid_pre_detoast_vars_recurse(Plan *plan, List *small_tlist)\n\n>\n> Similar for the setrefs.c changes. It seems a bit suspicious to piggy\n> back the new code into fix_scan_expr/fix_scan_list and similar code.\n> Those functions have a pretty clearly defined purpose, not sure we want\n> to also extend them to also deal with this new thing. (FWIW I'd 100%%\n> did it this way if I hacked on a PoC of this, to make it work. But I'm\n> not sure it's the right solution.)\n\nThe main reason of doing so is because I want to share the same walk\neffort as fix_scan_expr. otherwise I have to walk the plan for \nevery expression again. I thought this as a best practice in the past\nand thought we can treat the pre_detoast_attrs as a valuable side\neffects:(\n\n> I don't know what to thing about the Bitset - maybe it's necessary, but\n> how would I know? I don't have any way to measure the benefits, because\n> the 0002 patch uses it right away. \n\na revisted version of comments from the latest patch. graph 2 explains\nthis decision.\n\n\t/*\n\t * The attributes whose values are the detoasted version in tts_values[*],\n\t * if so these memory needs some extra clean-up. These memory can't be put\n\t * into ecxt_per_tuple_memory since many of them needs a longer life span,\n\t * for example the Datum in outer join. These memory is put into\n\t * TupleTableSlot.tts_mcxt and be clear whenever the tts_values[*] is\n\t * invalidated.\n\t *\n\t * Bitset rather than Bitmapset is chosen here because when all the members\n\t * of Bitmapset are deleted, the allocated memory will be deallocated\n\t * automatically, which is too expensive in this case since we need to\n\t * deleted all the members in each ExecClearTuple and repopulate it again\n\t * when fill the detoast datum to tts_values[*]. This situation will be run\n\t * again and again in an execution cycle.\n\t * \n\t * These values are populated by EEOP_{INNER/OUTER/SCAN}_VAR_TOAST steps.\n\t */\n\tBitset\t *pre_detoasted_attrs;\n\n> I think it should be done the other\n> way around, i.e. the patch should introduce the main feature first\n> (using the traditional Bitmapset), and then add Bitset on top of that.\n> That way we could easily measure the impact and see if it's useful.\n\nAcutally v4 used the Bitmapset, and then both perf and pgbench's tps\nindicate it is too expensive. and after talk with David at [2], I\nintroduced bitset and use it here. the test case I used comes from [1].\nIRCC, there were 5% performance difference because of this.\n\ncreate table w(a int, b numeric);\ninsert into w select i, i from generate_series(1, 1000000)i;\nselect b from w where b > 0;\n\nTo reproduce the difference, we can replace the bitset_clear() with\n\nbitset_free(slot->pre_detoasted_attrs);\nslot->pre_detoasted_attrs = bitset_init(slot->tts_tupleDescriptor->natts);\n\nin ExecFreePreDetoastDatum. then it works same as Bitmapset.\n\n\n> On the whole, my biggest concern is memory usage & leaks. 
It's not\n> difficult to already have problems with large detoasted values, and if\n> we start keeping more of them, that may get worse. Or at least that's my\n> intuition - it can't really get better by keeping the values longer, right?\n>\n> The other thing is the risk of leaks (in the sense of keeping detoasted\n> values longer than expected). I see the values are allocated in\n> tts_mcxt, and maybe that's the right solution - not sure.\n\nabout the memory usage, first it is kept as the same lifesplan as the\ntts_values[*] which can be released pretty quickly, only if the certain\nvalues of the tuples is not needed. it is true that we keep the detoast\nversion longer than before, but that's something we have to pay I\nthink. \n\nLeaks may happen since tts_mcxt is reset at the end of *executor*. So if\nwe forget to release the memory when the tts_values[*] is invalidated\nsomehow, the memory will be leaked until the end of executor. I think\nthat will be enough to cause an issue. Currently besides I release such\nmemory at the ExecClearTuple, I also relase such memory whenever we set\ntts_nvalid to 0, the theory used here is:\n\n/*\n * tts_values is treated invalidated since tts_nvalid is set to 0, so\n * let's free the pre-detoast datum.\n */\nExecFreePreDetoastDatum(slot);\n\nI will do more test on the memory leak stuff, since there are so many\noperation aginst slot like ExecCopySlot etc, I don't know how to test it\nfully. the method in my mind now is use TPCH with 10GB data size, and\nmonitor the query runtime memory usage.\n\n\n> FWIW while looking at the patch, I couldn't help but to think about\n> expanded datums. There's similarity in what these two features do - keep\n> detoasted values for a while, so that we don't need to do the expensive\n> processing if we access them repeatedly.\n\nCould you provide some keyword or function names for the expanded datum\nhere, I probably miss this.\n\n> Of course, expanded datums are\n> not meant to be long-lived, while \"shared detoasted values\" are meant to\n> exist (potentially) for the query duration.\n\nhmm, acutally the \"shared detoast value\" just live in the\nTupleTableSlot->tts_values[*], rather than the whole query duration. The\nsimple case is:\n\nSELECT * FROM t WHERE a_text LIKE 'abc%';\n\nwhen we scan to the next tuple, the detoast value for the previous tuple\nwill be relased. \n\n> But maybe there's something\n> we could learn from expanded datums? For example how the varlena pointer\n> is leveraged to point to the expanded object.\n\nmaybe. currently I just use detoast_attr to get the desired version. I'm\npleasure if we have more effective way.\n\nif (!slot->tts_isnull[attnum] &&\n VARATT_IS_EXTENDED(slot->tts_values[attnum]))\n{\n\tDatum\t\toldDatum;\n\tMemoryContext old = MemoryContextSwitchTo(slot->tts_mcxt);\n \n\toldDatum = slot->tts_values[attnum];\n\tslot->tts_values[attnum] = PointerGetDatum(detoast_attr(\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t(struct varlena *) oldDatum));\n\tAssert(slot->tts_nvalid > attnum);\n\tAssert(oldDatum != slot->tts_values[attnum]);\n\tbitset_add_member(slot->pre_detoasted_attrs, attnum);\n\tMemoryContextSwitchTo(old);\n}\n\n\n> For example, what if we add a \"TOAST cache\" as a query-level hash table,\n> and modify the detoasting to first check the hash table (with the TOAST\n> pointer as a key)? It'd be fairly trivial to enforce a memory limit on\n> the hash table, evict values from it, etc. 
And it wouldn't require any\n> of the createplan/setrefs changes, I think ...\n\nHmm, I am not sure I understand you correctly at this part. In the\ncurrent patch, to avoid the run-time (ExecExprInterp) check if we\nshould detoast and save the datum, I defined 3 extra steps so that\nthe *extra check itself* is not needed for unnecessary attributes.\nfor example an datum for int or a detoast datum should not be saved back\nto tts_values[*] due to the small_tlist reason. However these steps can\nbe generated is based on the output of createplan/setrefs changes. take\nthe INNER_VAR for example: \n\nIn ExecInitExprRec:\n\nswitch (variable->varno)\n{\n\tcase INNER_VAR:\n \tif (is_join_plan(plan) &&\n\t\t\tbms_is_member(attnum,\n\t\t\t ((JoinState *) state->parent)->inner_pre_detoast_attrs))\n\t\t{\n\t\t\tscratch.opcode = EEOP_INNER_VAR_TOAST;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tscratch.opcode = EEOP_INNER_VAR;\n\t\t}\n}\n\nThe data workflow is:\n\n1. set_plan_forbid_pre_detoast_vars_recurse (in the createplan.c)\ndecides which Vars should *not* be pre_detoasted because of small_tlist\nreason and record it in Plan.forbid_pre_detoast_vars.\n\n2. fix_scan_expr (in the setrefs.c) tracks which Vars should be\ndetoasted for the specific plan node and record them in it. Currently\nonly Scan and Join nodes support this feature.\n\ntypedef struct Scan\n{\n ...\n\t/*\n\t * Records of var's varattno - 1 where the Var is accessed indirectly by\n\t * any expression, like a > 3. However a IS [NOT] NULL is not included\n\t * since it doesn't access the tts_values[*] at all.\n\t *\n\t * This is a essential information to figure out which attrs should use\n\t * the pre-detoast-attrs logic.\n\t */\n\tBitmapset *reference_attrs;\n} Scan;\n\ntypedef struct Join\n{\n..\n\t/*\n\t * Records of var's varattno - 1 where the Var is accessed indirectly by\n\t * any expression, like a > 3. However a IS [NOT] NULL is not included\n\t * since it doesn't access the tts_values[*] at all.\n\t *\n\t * This is a essential information to figure out which attrs should use\n\t * the pre-detoast-attrs logic.\n\t */\n\tBitmapset *outer_reference_attrs;\n\tBitmapset *inner_reference_attrs;\n} Join;\n\n3. during the InitPlan stage, we maintain the\nPlanState.xxx_pre_detoast_attrs and generated different StepOp for them.\n\n4. At the ExecExprInterp stage, only the new StepOp do the extra check\nto see if the detoast should happen. Other steps doesn't need this\ncheck at all. \n\nIf we avoid the createplan/setref.c changes, probabaly some unrelated\nStepOp needs the extra check as well?\n\nWhen I worked with the UniqueKey feature, I maintained a\nUniqueKey.README to summaried all the dicussed topics in threads, the\nREADME is designed to save the effort for more reviewer, I think I\nshould apply the same logic for this feature.\n\nThank you very much for your feedback!\n\nv7 attached, just some comments and Assert changes. \n\n[1] https://www.postgresql.org/message-id/87il4jrk1l.fsf%40163.com\n[2]\nhttps://www.postgresql.org/message-id/CAApHDvpdp9LyAoMXvS7iCX-t3VonQM3fTWCmhconEvORrQ%2BZYA%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 21 Feb 2024 02:38:08 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "On 2/20/24 19:38, Andy Fan wrote:\n> \n> ...\n>\n>> I think it should be done the other\n>> way around, i.e. the patch should introduce the main feature first\n>> (using the traditional Bitmapset), and then add Bitset on top of that.\n>> That way we could easily measure the impact and see if it's useful.\n> \n> Acutally v4 used the Bitmapset, and then both perf and pgbench's tps\n> indicate it is too expensive. and after talk with David at [2], I\n> introduced bitset and use it here. the test case I used comes from [1].\n> IRCC, there were 5% performance difference because of this.\n> \n> create table w(a int, b numeric);\n> insert into w select i, i from generate_series(1, 1000000)i;\n> select b from w where b > 0;\n> \n> To reproduce the difference, we can replace the bitset_clear() with\n> \n> bitset_free(slot->pre_detoasted_attrs);\n> slot->pre_detoasted_attrs = bitset_init(slot->tts_tupleDescriptor->natts);\n> \n> in ExecFreePreDetoastDatum. then it works same as Bitmapset.\n> \n\nI understand the bitset was not introduced until v5, after noticing the\nbitmapset is not quite efficient. But I still think the patches should\nbe the other way around, i.e. the main feature first, then the bitset as\nan optimization.\n\nThat allows everyone to observe the improvement on their own (without\nhaving to tweak the code), and it also doesn't require commit of the\nbitset part before it gets actually used by anything.\n\n> \n>> On the whole, my biggest concern is memory usage & leaks. It's not\n>> difficult to already have problems with large detoasted values, and if\n>> we start keeping more of them, that may get worse. Or at least that's my\n>> intuition - it can't really get better by keeping the values longer, right?\n>>\n>> The other thing is the risk of leaks (in the sense of keeping detoasted\n>> values longer than expected). I see the values are allocated in\n>> tts_mcxt, and maybe that's the right solution - not sure.\n> \n> about the memory usage, first it is kept as the same lifesplan as the\n> tts_values[*] which can be released pretty quickly, only if the certain\n> values of the tuples is not needed. it is true that we keep the detoast\n> version longer than before, but that's something we have to pay I\n> think. \n> \n> Leaks may happen since tts_mcxt is reset at the end of *executor*. So if\n> we forget to release the memory when the tts_values[*] is invalidated\n> somehow, the memory will be leaked until the end of executor. I think\n> that will be enough to cause an issue. Currently besides I release such\n> memory at the ExecClearTuple, I also relase such memory whenever we set\n> tts_nvalid to 0, the theory used here is:\n> \n> /*\n> * tts_values is treated invalidated since tts_nvalid is set to 0, so\n> * let's free the pre-detoast datum.\n> */\n> ExecFreePreDetoastDatum(slot);\n> \n> I will do more test on the memory leak stuff, since there are so many\n> operation aginst slot like ExecCopySlot etc, I don't know how to test it\n> fully. the method in my mind now is use TPCH with 10GB data size, and\n> monitor the query runtime memory usage.\n> \n\nI think this is exactly the \"high level design\" description that should\nbe in a comment, somewhere.\n\n> \n>> FWIW while looking at the patch, I couldn't help but to think about\n>> expanded datums. 
There's similarity in what these two features do - keep\n>> detoasted values for a while, so that we don't need to do the expensive\n>> processing if we access them repeatedly.\n> \n> Could you provide some keyword or function names for the expanded datum\n> here, I probably miss this.\n> \n\nsee src/include/utils/expandeddatum.h\n\n>> Of course, expanded datums are\n>> not meant to be long-lived, while \"shared detoasted values\" are meant to\n>> exist (potentially) for the query duration.\n> \n> hmm, acutally the \"shared detoast value\" just live in the\n> TupleTableSlot->tts_values[*], rather than the whole query duration. The\n> simple case is:\n> \n> SELECT * FROM t WHERE a_text LIKE 'abc%';\n> \n> when we scan to the next tuple, the detoast value for the previous tuple\n> will be relased.\n> \n\nBut if the (detoasted) value is passed to the next executor node, it'll\nbe kept, right?\n\n>> But maybe there's something\n>> we could learn from expanded datums? For example how the varlena pointer\n>> is leveraged to point to the expanded object.\n> \n> maybe. currently I just use detoast_attr to get the desired version. I'm\n> pleasure if we have more effective way.\n> \n> if (!slot->tts_isnull[attnum] &&\n> VARATT_IS_EXTENDED(slot->tts_values[attnum]))\n> {\n> \tDatum\t\toldDatum;\n> \tMemoryContext old = MemoryContextSwitchTo(slot->tts_mcxt);\n> \n> \toldDatum = slot->tts_values[attnum];\n> \tslot->tts_values[attnum] = PointerGetDatum(detoast_attr(\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t(struct varlena *) oldDatum));\n> \tAssert(slot->tts_nvalid > attnum);\n> \tAssert(oldDatum != slot->tts_values[attnum]);\n> \tbitset_add_member(slot->pre_detoasted_attrs, attnum);\n> \tMemoryContextSwitchTo(old);\n> }\n> \n\nRight. FWIW I'm not sure if expanded objects are similar enough to be a\nuseful inspiration.\n\nUnrelated question - could this actually end up being too aggressive?\nThat is, could it detoast attributes that we end up not needing? For\nexample, what if the tuple gets eliminated for some reason (e.g. because\nof a restriction on the table, or not having a match in a join)? Won't\nwe detoast the tuple only to throw it away?\n\n> \n>> For example, what if we add a \"TOAST cache\" as a query-level hash table,\n>> and modify the detoasting to first check the hash table (with the TOAST\n>> pointer as a key)? It'd be fairly trivial to enforce a memory limit on\n>> the hash table, evict values from it, etc. And it wouldn't require any\n>> of the createplan/setrefs changes, I think ...\n> \n> Hmm, I am not sure I understand you correctly at this part. In the\n> current patch, to avoid the run-time (ExecExprInterp) check if we\n> should detoast and save the datum, I defined 3 extra steps so that\n> the *extra check itself* is not needed for unnecessary attributes.\n> for example an datum for int or a detoast datum should not be saved back\n> to tts_values[*] due to the small_tlist reason. However these steps can\n> be generated is based on the output of createplan/setrefs changes. take\n> the INNER_VAR for example: \n> \n> In ExecInitExprRec:\n> \n> switch (variable->varno)\n> {\n> \tcase INNER_VAR:\n> \tif (is_join_plan(plan) &&\n> \t\t\tbms_is_member(attnum,\n> \t\t\t ((JoinState *) state->parent)->inner_pre_detoast_attrs))\n> \t\t{\n> \t\t\tscratch.opcode = EEOP_INNER_VAR_TOAST;\n> \t\t}\n> \t\telse\n> \t\t{\n> \t\t\tscratch.opcode = EEOP_INNER_VAR;\n> \t\t}\n> }\n> \n> The data workflow is:\n> \n> 1. 
set_plan_forbid_pre_detoast_vars_recurse (in the createplan.c)\n> decides which Vars should *not* be pre_detoasted because of small_tlist\n> reason and record it in Plan.forbid_pre_detoast_vars.\n> \n> 2. fix_scan_expr (in the setrefs.c) tracks which Vars should be\n> detoasted for the specific plan node and record them in it. Currently\n> only Scan and Join nodes support this feature.\n> \n> typedef struct Scan\n> {\n> ...\n> \t/*\n> \t * Records of var's varattno - 1 where the Var is accessed indirectly by\n> \t * any expression, like a > 3. However a IS [NOT] NULL is not included\n> \t * since it doesn't access the tts_values[*] at all.\n> \t *\n> \t * This is a essential information to figure out which attrs should use\n> \t * the pre-detoast-attrs logic.\n> \t */\n> \tBitmapset *reference_attrs;\n> } Scan;\n> \n> typedef struct Join\n> {\n> ..\n> \t/*\n> \t * Records of var's varattno - 1 where the Var is accessed indirectly by\n> \t * any expression, like a > 3. However a IS [NOT] NULL is not included\n> \t * since it doesn't access the tts_values[*] at all.\n> \t *\n> \t * This is a essential information to figure out which attrs should use\n> \t * the pre-detoast-attrs logic.\n> \t */\n> \tBitmapset *outer_reference_attrs;\n> \tBitmapset *inner_reference_attrs;\n> } Join;\n> \n\nIs it actually necessary to add new fields to these nodes? Also, the\nnames are not particularly descriptive of the purpose - it'd be better\nto have \"detoast\" in the name, instead of generic \"reference\".\n\n\n> 3. during the InitPlan stage, we maintain the\n> PlanState.xxx_pre_detoast_attrs and generated different StepOp for them.\n> \n> 4. At the ExecExprInterp stage, only the new StepOp do the extra check\n> to see if the detoast should happen. Other steps doesn't need this\n> check at all. \n> \n> If we avoid the createplan/setref.c changes, probabaly some unrelated\n> StepOp needs the extra check as well?\n> \n> When I worked with the UniqueKey feature, I maintained a\n> UniqueKey.README to summaried all the dicussed topics in threads, the\n> README is designed to save the effort for more reviewer, I think I\n> should apply the same logic for this feature.\n> \n\nGood idea. Either that (a separate README), or a comment in a header of\nsome suitable .c/.h file (I prefer that, because that's kinda obvious\nwhen reading the code, I often not notice a README exists next to it).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 21 Feb 2024 13:39:29 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\n\nI see your reply when I started to write a more high level\ndocument. Thanks for the step by step help!\n\nTomas Vondra <[email protected]> writes:\n\n> On 2/20/24 19:38, Andy Fan wrote:\n>> \n>> ...\n>>\n>>> I think it should be done the other\n>>> way around, i.e. the patch should introduce the main feature first\n>>> (using the traditional Bitmapset), and then add Bitset on top of that.\n>>> That way we could easily measure the impact and see if it's useful.\n>> \n>> Acutally v4 used the Bitmapset, and then both perf and pgbench's tps\n>> indicate it is too expensive. and after talk with David at [2], I\n>> introduced bitset and use it here. the test case I used comes from [1].\n>> IRCC, there were 5% performance difference because of this.\n>> \n>> create table w(a int, b numeric);\n>> insert into w select i, i from generate_series(1, 1000000)i;\n>> select b from w where b > 0;\n>> \n>> To reproduce the difference, we can replace the bitset_clear() with\n>> \n>> bitset_free(slot->pre_detoasted_attrs);\n>> slot->pre_detoasted_attrs = bitset_init(slot->tts_tupleDescriptor->natts);\n>> \n>> in ExecFreePreDetoastDatum. then it works same as Bitmapset.\n>> \n>\n> I understand the bitset was not introduced until v5, after noticing the\n> bitmapset is not quite efficient. But I still think the patches should\n> be the other way around, i.e. the main feature first, then the bitset as\n> an optimization.\n>\n> That allows everyone to observe the improvement on their own (without\n> having to tweak the code), and it also doesn't require commit of the\n> bitset part before it gets actually used by anything.\n\nI start to think this is a better way rather than the opposite. The next\nversion will be:\n\n0001: shared detoast datum feature with high level introduction.\n0002: introduce bitset and use it shared-detoast-datum feature, with the\ntest case to show the improvement. \n\n>> I will do more test on the memory leak stuff, since there are so many\n>> operation aginst slot like ExecCopySlot etc, I don't know how to test it\n>> fully. the method in my mind now is use TPCH with 10GB data size, and\n>> monitor the query runtime memory usage.\n>> \n>\n> I think this is exactly the \"high level design\" description that should\n> be in a comment, somewhere.\n\nGot it.\n\n>>> Of course, expanded datums are\n>>> not meant to be long-lived, while \"shared detoasted values\" are meant to\n>>> exist (potentially) for the query duration.\n>> \n>> hmm, acutally the \"shared detoast value\" just live in the\n>> TupleTableSlot->tts_values[*], rather than the whole query duration. The\n>> simple case is:\n>> \n>> SELECT * FROM t WHERE a_text LIKE 'abc%';\n>> \n>> when we scan to the next tuple, the detoast value for the previous tuple\n>> will be relased.\n>> \n>\n> But if the (detoasted) value is passed to the next executor node, it'll\n> be kept, right?\n\nYes and only one copy for all the executor nodes.\n\n> Unrelated question - could this actually end up being too aggressive?\n> That is, could it detoast attributes that we end up not needing? For\n> example, what if the tuple gets eliminated for some reason (e.g. because\n> of a restriction on the table, or not having a match in a join)? Won't\n> we detoast the tuple only to throw it away?\n\nThe detoast datum will have the exactly same lifespan with other\ntts_values[*]. 
If the tuple get eliminated for any reason, those detoast\ndatum still exist until the slot is cleared for storing the next tuple.\n\n>> typedef struct Join\n>> {\n>> ..\n>> \t/*\n>> \t * Records of var's varattno - 1 where the Var is accessed indirectly by\n>> \t * any expression, like a > 3. However a IS [NOT] NULL is not included\n>> \t * since it doesn't access the tts_values[*] at all.\n>> \t *\n>> \t * This is a essential information to figure out which attrs should use\n>> \t * the pre-detoast-attrs logic.\n>> \t */\n>> \tBitmapset *outer_reference_attrs;\n>> \tBitmapset *inner_reference_attrs;\n>> } Join;\n>> \n>\n> Is it actually necessary to add new fields to these nodes? Also, the\n> names are not particularly descriptive of the purpose - it'd be better\n> to have \"detoast\" in the name, instead of generic \"reference\".\n\nBecause of the way of the data transformation, I think we must add the\nfields to keep such inforamtion. Then these information will be used\ninitilize the necessary information in PlanState. maybe I am having a\nfixed mindset, I can't think out a way to avoid that right now.\n\nI used 'reference' rather than detoast is because some implementaion\nissues. In the createplan.c and setref.c, I can't check the atttyplen\neffectively, so even a Var with int type is still hold here which may\nhave nothing with detoast.\n\n>> \n>> When I worked with the UniqueKey feature, I maintained a\n>> UniqueKey.README to summaried all the dicussed topics in threads, the\n>> README is designed to save the effort for more reviewer, I think I\n>> should apply the same logic for this feature.\n>> \n>\n> Good idea. Either that (a separate README), or a comment in a header of\n> some suitable .c/.h file (I prefer that, because that's kinda obvious\n> when reading the code, I often not notice a README exists next to it).\n\nGreat, I'd try this from tomorrow. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Wed, 21 Feb 2024 21:20:38 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Hi,\n\n>>\n>> Good idea. Either that (a separate README), or a comment in a header of\n>> some suitable .c/.h file (I prefer that, because that's kinda obvious\n>> when reading the code, I often not notice a README exists next to it).\n>\n> Great, I'd try this from tomorrow. \n\nI have made it. Currently I choose README because this feature changed\ncreateplan.c, setrefs.c, execExpr.c and execExprInterp.c, so putting the\nhigh level design to any of them looks inappropriate. So the high level\ndesign is here and detailed design for each steps is in the comments\naround the code. Hope this is helpful!\n\nThe problem:\n-------------\n\nIn the current expression engine, a toasted datum is detoasted when\nrequired, but the result is discarded immediately, either by pfree it\nimmediately or leave it for ResetExprContext. Arguments for which one to\nuse exists sometimes. More serious problem is detoasting is expensive,\nespecially for the data types like jsonb or array, which the value might\nbe very huge. In the blow example, the detoasting happens twice.\n\nSELECT jb_col->'a', jb_col->'b' FROM t;\n\nWithin the shared-detoast-datum, we just need to detoast once for each\ntuple, and discard it immediately when the tuple is not needed any\nmore. FWIW this issue may existing for small numeric, text as well\nbecause of SHORT_TOAST feature where the toast's len using 1 byte rather\nthan 4 bytes.\n\nCurrent Design\n--------------\n\nThe high level design is let createplan.c and setref.c decide which\nVars can use this feature, and then the executor save the detoast\ndatum back slot->to tts_values[*] during the ExprEvalStep of\nEEOP_{INNER|OUTER|SCAN}_VAR_TOAST. The reasons includes:\n\n- The existing expression engine read datum from tts_values[*], no any\n extra work need to be done.\n- Reuse the lifespan of TupleTableSlot system to manage memory. It\n is natural to think the detoast datum is a tts_value just that it is\n in a detoast format. Since we have a clear lifespan for TupleTableSlot\n already, like ExecClearTuple, ExecCopySlot et. We are easy to reuse\n them for managing the datoast datum's memory.\n- The existing projection method can copy the datoasted datum (int64)\n automatically to the next node's slot, but keeping the ownership\n unchanged, so only the slot where the detoast really happen take the\n charge of it's lifespan.\n\nSo the final data change is adding the below field into TubleTableSlot.\n\ntypedef struct TupleTableSlot\n{\n..\n\n /*\n * The attributes whose values are the detoasted version in tts_values[*],\n * if so these memory needs some extra clean-up. These memory can't be put\n * into ecxt_per_tuple_memory since many of them needs a longer life\n * span.\n *\n * These memory is put into TupleTableSlot.tts_mcxt and be clear\n * whenever the tts_values[*] is invalidated.\n */\n Bitmapset\t*pre_detoast_attrs;\n};\n\nAssuming which Var should use this feature has been decided in\ncreateplan.c and setref.c already. The 3 new ExprEvalSteps\nEEOP_{INNER,OUTER,SCAN}_VAR_TOAST as used. 
During the evaluating these\nsteps, the below code is used.\n\nstatic inline void\nExecSlotDetoastDatum(TupleTableSlot *slot, int attnum)\n{\n if (!slot->tts_isnull[attnum] &&\n VARATT_IS_EXTENDED(slot->tts_values[attnum]))\n {\n Datum\t\toldDatum;\n MemoryContext old = MemoryContextSwitchTo(slot->tts_mcxt);\n\n oldDatum = slot->tts_values[attnum];\n slot->tts_values[attnum] = PointerGetDatum(detoast_attr(\n (struct varlena *) oldDatum));\n Assert(slot->tts_nvalid > attnum);\n Assert(oldDatum != slot->tts_values[attnum]);\n slot->pre_detoasted_attrs= bms_add_member(slot->pre_detoasted_attrs, attnum);\n MemoryContextSwitchTo(old);\n }\n}\n\nSince I don't want to the run-time extra check to see if is a detoast\nshould happen, so introducing 3 new steps.\n\nWhen to free the detoast datum? It depends on when the slot's\ntts_values[*] is invalidated, ExecClearTuple is the clear one, but any\nTupleTableSlotOps which set the tts_nvalid = 0 tells us no one will use\nthe datum in tts_values[*] so it is time to release them based on\nslot.pre_detoast_attrs as well.\n\nNow comes to the createplan.c/setref.c part, which decides which Vars\nshould use the shared detoast feature. The guideline of this is:\n\n1. It needs a detoast for a given expression.\n2. It should not breaks the CP_SMALL_TLIST design. Since we saved the\n detoast datum back to tts_values[*], which make tuple bigger. if we\n do this blindly, it would be harmful to the ORDER / HASH style nodes.\n\nA high level data flow is:\n\n1. at the createplan.c, we walk the plan tree go gather the\n CP_SMALL_TLIST because of SORT/HASH style nodes, information and save\n it to Plan.forbid_pre_detoast_vars via the function\n set_plan_forbid_pre_detoast_vars_recurse.\n\n2. at the setrefs.c, fix_{scan|join}_expr will recurse to Var for each\n expression, so it is a good time to track the attribute number and\n see if the Var is directly or indirectly accessed. Usually the\n indirectly access a Var means a detoast would happens, for \n example an expression like a > 3. However some known expressions like\n VAR is NULL; is ignored. The output is {Scan|Join}.xxx_reference_attrs;\n\nAs a result, the final result is added into the plan node of Scan and\nJoin.\n\ntypedef struct Scan\n{\n /*\n * Records of var's varattno - 1 where the Var is accessed indirectly by\n * any expression, like a > 3. However a IS [NOT] NULL is not included\n * since it doesn't access the tts_values[*] at all.\n *\n * This is a essential information to figure out which attrs should use\n * the pre-detoast-attrs logic.\n */\n Bitmapset *reference_attrs;\n} Scan;\n\ntypedef struct Join\n{\n ..\n /*\n * Records of var's varattno - 1 where the Var is accessed indirectly by\n * any expression, like a > 3. However a IS [NOT] NULL is not included\n * since it doesn't access the tts_values[*] at all.\n *\n * This is a essential information to figure out which attrs should use\n * the pre-detoast-attrs logic.\n */\n Bitmapset *outer_reference_attrs;\n Bitmapset *inner_reference_attrs;\n} Join;\n\n\nNote that here I used '_reference_' rather than '_detoast_' is because\nat this part, I still don't know if it is a toastable attrbiute, which\nis known at the MakeTupleTableSlot stage.\n\n3. 
At the InitPlan Stage, we calculate the final xxx_pre_detoast_attrs\n in ScanState & JoinState, which will be passed into expression\n engine in the ExecInitExprRec stage and EEOP_{INNER|OUTER|SCAN}\n _VAR_TOAST steps are generated finally then everything is connected\n with ExecSlotDetoastDatum!\n\n\nTesting\n-------\n\nCase 1:\n\ncreate table t (a numeric);\ninsert into t select i from generate_series(1, 100000)i;\n\ncat 1.sql\n\nselect * from t where a > 0;\n\nIn this test, the current master run detoast twice for each datum. one\nin numeric_gt, one in numeric_out. this feature makes the detoast once.\n\npgbench -f 1.sql -n postgres -T 10 -M prepared\n\nmaster: 30.218 ms\npatched(Bitmapset): 30.881ms\n\n\nThen we can see the perf report as below:\n\n- 7.34% 0.00% postgres postgres [.] ExecSlotDetoastDatum (inlined)\n - ExecSlotDetoastDatum (inlined)\n - 3.47% bms_add_member\n - 3.06% bms_make_singleton (inlined)\n - palloc0\n 1.30% AllocSetAlloc\n\n- 5.99% 0.00% postgres postgres [.] ExecFreePreDetoastDatum (inlined)\n - ExecFreePreDetoastDatum (inlined)\n 2.64% bms_next_member\n 1.17% bms_del_members\n 0.94% AllocSetFree\n\nOne of the reasons is because Bitmapset will deallocate its memory when\nall the bits are deleted due to commit 00b41463c, then we have to\nallocate memory at the next time when adding a member to it. This kind\nof action is executed 100000 times in the above workload.\n\nThen I introduce bitset data struct (commit 0002) which is pretty like\nthe Bitmapset, but it would not deallocate the memory when all the bits\nis unset. and use it in this feature (commit 0003). Then the result\nbecame to: 28.715ms \n\n- 5.22% 0.00% postgres postgres [.] ExecFreePreDetoastDatum (inlined)\n - ExecFreePreDetoastDatum (inlined)\n - 2.82% bitset_next_member\n 1.69% bms_next_member_internal (inlined)\n 0.95% bitset_next_member\n 0.66% AllocSetFree\n\nHere we can see the expensive calls are bitset_next_member on\nslot->pre_detoast_attrs and pfree. if we put the detoast datum into\na dedicated memory context, then we can save the cost of\nbitset_next_member since can discard all the memory in once and use\nMemoryContextReset instead of AllocSetFree (commit 0004). then the\nresult became to 27.840ms. \n\nSo the final result for case 1: \n\nmaster: 30.218 ms\npatched(Bitmapset): 30.881ms\npatched(bitset): 28.715ms\nlatency average(bitset + tts_value_mctx) = 27.840 ms\n\n\nBig jsonbs test:\n\ncreate table b(big jsonb);\n\ninsert into b select\njsonb_object_agg(x::text,\nrandom()::text || random()::text || random()::text)\nfrom generate_series(1,600) f(x);\n\ninsert into b select (select big from b) from generate_series(1, 1000)i;\n\nexplain analyze\nselect big->'1', big->'2', big->'3', big->'5', big->'10' from b;\n\nmaster: 702.224 ms\npatched: 133.306 ms\n\nMemory usage test:\n\nI run the workload of tpch scale 10 on against both master and patched\nversions, the memory usage looks stable.\n\nIn progress work:\n\nI'm still running tpc-h scale 100 to see if anything interesting\nfinding, that is in progress. As for the scale 10:\n\nmaster: 55s\npatched: 56s\n\nThe reason is q9 plan changed a bit, the real reason needs some more\ntime. Since this patch doesn't impact on the best path generation, so it\nshould not reasonble for me.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Mon, 26 Feb 2024 10:04:14 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
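The design message above frees the shared detoast memory whenever the slot's tts_values[*] are invalidated, and its perf output names an ExecFreePreDetoastDatum helper without showing the body. Below is a minimal sketch of what that cleanup side could look like, assuming the Bitmapset bookkeeping described in the message; the field and function names follow the proposal, not necessarily the actual patch.

static inline void
ExecFreePreDetoastDatum(TupleTableSlot *slot)
{
    int         attnum = -1;

    /* walk the recorded attribute numbers and free each detoasted copy */
    while ((attnum = bms_next_member(slot->pre_detoasted_attrs, attnum)) >= 0)
    {
        /* the copy was allocated in slot->tts_mcxt by ExecSlotDetoastDatum */
        pfree(DatumGetPointer(slot->tts_values[attnum]));
    }

    bms_free(slot->pre_detoasted_attrs);
    slot->pre_detoasted_attrs = NULL;
}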
{
"msg_contents": "Hi!\n\nI see this to be a very promising initiative, but some issues come into my\nmind.\nWhen we store and detoast large values, say, 1Gb - that's a very likely\nscenario,\nwe have such cases from prod systems - we would end up in using a lot of\nshared\nmemory to keep these values alive, only to discard them later. Also,\ntoasted values\nare not always being used immediately and as a whole, i.e. jsonb values are\nfully\ndetoasted (we're working on this right now) to extract the smallest value\nfrom\nbig json, and these values are not worth keeping in memory. For text values\ntoo,\nwe often do not need the whole value to be detoasted and kept in memory.\n\nWhat do you think?\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!I see this to be a very promising initiative, but some issues come into my mind.When we store and detoast large values, say, 1Gb - that's a very likely scenario,we have such cases from prod systems - we would end up in using a lot of sharedmemory to keep these values alive, only to discard them later. Also, toasted valuesare not always being used immediately and as a whole, i.e. jsonb values are fullydetoasted (we're working on this right now) to extract the smallest value frombig json, and these values are not worth keeping in memory. For text values too,we often do not need the whole value to be detoasted and kept in memory.What do you think?-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Mon, 26 Feb 2024 10:12:16 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nHi, \n\n\n> When we store and detoast large values, say, 1Gb - that's a very likely scenario,\n> we have such cases from prod systems - we would end up in using a lot of shared\n> memory to keep these values alive, only to discard them later.\n\nFirst the feature can make sure if the original expression will not\ndetoast it, this feature will not detoast it as well. Based on this, if\nthere is a 1Gb datum, in the past, it took 1GB memory as well, but\nit may be discarded quicker than the patched version. but patched\nversion will *not* keep it for too long time since once the\nslot->tts_values[*] is invalidated, the memory will be released\nimmedately.\n\nFor example:\n\ncreate table t(a text, b int);\ninsert into t select '1-gb-length-text' from generate_series(1, 100);\n\nselect * from t where a like '%gb%';\n\nThe patched version will take 1gb extra memory only.\n\nAre you worried about this case or some other cases? \n\n> Also, toasted values\n> are not always being used immediately and as a whole, i.e. jsonb values are fully\n> detoasted (we're working on this right now) to extract the smallest value from\n> big json, and these values are not worth keeping in memory. For text values too,\n> we often do not need the whole value to be detoasted.\n\nI'm not sure how often this is true, espeically you metioned text data\ntype. I can't image why people acquire a piece of text and how can we\ndesign a toasting system to fulfill such case without detoast the whole\nas the first step. But for the jsonb or array data type, I think it is\nmore often. However if we design toasting like that, I'm not sure if it\nwould slow down other user case. for example detoasting the whole piece\nuse case. I am justing feeling both way has its user case, kind of heap\nstore and columna store. \n\nFWIW, as I shown in the test case, this feature is not only helpful for\nbig datum, it is also helpful for small toast datum. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Mon, 26 Feb 2024 21:22:39 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nOn 2/26/24 14:22, Andy Fan wrote:\n> \n>...\n>\n>> Also, toasted values\n>> are not always being used immediately and as a whole, i.e. jsonb values are fully\n>> detoasted (we're working on this right now) to extract the smallest value from\n>> big json, and these values are not worth keeping in memory. For text values too,\n>> we often do not need the whole value to be detoasted.\n> \n> I'm not sure how often this is true, espeically you metioned text data\n> type. I can't image why people acquire a piece of text and how can we\n> design a toasting system to fulfill such case without detoast the whole\n> as the first step. But for the jsonb or array data type, I think it is\n> more often. However if we design toasting like that, I'm not sure if it\n> would slow down other user case. for example detoasting the whole piece\n> use case. I am justing feeling both way has its user case, kind of heap\n> store and columna store. \n> \n\nAny substr/starts_with call benefits from this optimization, and such\ncalls don't seem that uncommon. I certainly can imagine people doing\nthis fairly often. In any case, it's a long-standing behavior /\noptimization, and I don't think we can just dismiss it this quickly.\n\nIs there a way to identify cases that are likely to benefit from this\nslicing, and disable the detoasting for them? We already disable it for\nother cases, so maybe we can do this here too?\n\nOr maybe there's a way to do the detoasting lazily, and only keep the\ndetoasted value when it's requesting the whole value. Or perhaps even\nbetter, remember what part we detoasted, and maybe it'll be enough for\nfuture requests?\n\nI'm not sure how difficult would this be with the proposed approach,\nwhich eagerly detoasts the value into tts_values. I think it'd be easier\nto implement with the TOAST cache I briefly described ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 26 Feb 2024 15:13:06 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
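For context on the slicing concern above: a prefix-only operation can ask the TOAST machinery for just a slice of the value instead of detoasting it whole. The function below is purely illustrative (it is not part of any patch, and the name is made up); it only demonstrates the PG_DETOAST_DATUM_SLICE pattern that substr()-style code can rely on.

#include "postgres.h"
#include "fmgr.h"
#include "varatt.h"             /* varlena macros (PostgreSQL 16+) */

PG_FUNCTION_INFO_V1(starts_with_x_example);

/* Check whether a possibly huge text value starts with 'x' by fetching
 * only its first byte, rather than detoasting the whole datum. */
Datum
starts_with_x_example(PG_FUNCTION_ARGS)
{
    text       *prefix = (text *) PG_DETOAST_DATUM_SLICE(PG_GETARG_DATUM(0), 0, 1);

    PG_RETURN_BOOL(VARSIZE_ANY_EXHDR(prefix) == 1 &&
                   *VARDATA_ANY(prefix) == 'x');
}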
{
"msg_contents": "Hi,\n\nTomas, we already have a working jsonb partial detoast prototype,\nand currently I'm porting it to the recent master. Due to the size\n of the changes and very invasive nature it takes a lot of effort,\nbut it is already done. I'm also trying to make the core patch\nless invasive. Actually, it is a subject for a separate topic, as soon\nas I make it work on the current master we'll propose it\nto the community.\n\nAndy, thank you! I'll check the last patch set out and reply in a day or\ntwo.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,Tomas, we already have a working jsonb partial detoast prototype,and currently I'm porting it to the recent master. Due to the size of the changes and very invasive nature it takes a lot of effort,but it is already done. I'm also trying to make the core patchless invasive. Actually, it is a subject for a separate topic, as soonas I make it work on the current master we'll propose itto the community.Andy, thank you! I'll check the last patch set out and reply in a day or two.--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Mon, 26 Feb 2024 17:29:49 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\n> On 2/26/24 14:22, Andy Fan wrote:\n>> \n>>...\n>>\n>>> Also, toasted values\n>>> are not always being used immediately and as a whole, i.e. jsonb values are fully\n>>> detoasted (we're working on this right now) to extract the smallest value from\n>>> big json, and these values are not worth keeping in memory. For text values too,\n>>> we often do not need the whole value to be detoasted.\n>> \n>> I'm not sure how often this is true, espeically you metioned text data\n>> type. I can't image why people acquire a piece of text and how can we\n>> design a toasting system to fulfill such case without detoast the whole\n>> as the first step. But for the jsonb or array data type, I think it is\n>> more often. However if we design toasting like that, I'm not sure if it\n>> would slow down other user case. for example detoasting the whole piece\n>> use case. I am justing feeling both way has its user case, kind of heap\n>> store and columna store. \n>> \n>\n> Any substr/starts_with call benefits from this optimization, and such\n> calls don't seem that uncommon. I certainly can imagine people doing\n> this fairly often.\n\nThis leads me to pay more attention to pg_detoast_datum_slice user case\nwhich I did overlook and caused some regression in this case. Thanks!\n\nAs you said later:\n\n> Is there a way to identify cases that are likely to benefit from this\n> slicing, and disable the detoasting for them? We already disable it for\n> other cases, so maybe we can do this here too?\n\nI think the answer is yes, if our goal is to disable the whole toast for\nsome speical calls like substr/starts_with. In the current patch, we have:\n\n/*\n * increase_level_for_pre_detoast\n *\tCheck if the given Expr could detoast a Var directly, if yes,\n * increase the level and return true. otherwise return false;\n */\nstatic inline void\nincrease_level_for_pre_detoast(Node *node, intermediate_level_context *ctx)\n{\n\t/* The following nodes is impossible to detoast a Var directly. */\n\tif (IsA(node, List) || IsA(node, TargetEntry) || IsA(node, NullTest))\n\t{\n\t\tctx->level_added = false;\n\t}\n\telse if (IsA(node, FuncExpr) && castNode(FuncExpr, node)->funcid == F_PG_COLUMN_COMPRESSION)\n\t{\n\t\t/* let's not detoast first so that pg_column_compression works. */\n\t\tctx->level_added = false;\n\t}\n\nwhile it works, but the blacklist looks not awesome.\n\n> In any case, it's a long-standing behavior /\n> optimization, and I don't think we can just dismiss it this quickly.\n\nI agree. So I said both have their user case. and when I say the *both*, I\nmean the other method is \"partial detoast prototype\", which Nikita has\ntold me before. while I am sure it has many user case, I'm also feeling\nit is complex and have many questions in my mind, but I'd like to see\nthe design before asking them.\n\n> Or maybe there's a way to do the detoasting lazily, and only keep the\n> detoasted value when it's requesting the whole value. Or perhaps even\n> better, remember what part we detoasted, and maybe it'll be enough for\n> future requests?\n>\n> I'm not sure how difficult would this be with the proposed approach,\n> which eagerly detoasts the value into tts_values. I think it'd be\n> easier to implement with the TOAST cache I briefly described ... \n\nI can understand the benefits of the TOAST cache, but the following\nissues looks not good to me IIUC: \n\n1). we can't put the result to tts_values[*] since without the planner\ndecision, we don't know if this will break small_tlist logic. 
But if we\nput it into the cache only, a cache lookup becomes an unavoidable overhead.\n\n2). It is hard to decide which entry should be evicted without attaching\nit to the TupleTableSlot's life-cycle, so we can't guarantee that the entry\nwe keep is the entry we will reuse soon.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Mon, 26 Feb 2024 23:29:33 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nNikita Malakhov <[email protected]> writes:\n\n> Hi,\n>\n> Tomas, we already have a working jsonb partial detoast prototype,\n> and currently I'm porting it to the recent master.\n\nThis is really awesome! Acutally when I talked to MySQL guys, they said\nMySQL already did this and I admit it can resolve some different issues\nthan the current patch. Just I have many implemetion questions in my\nmind. \n\n> Andy, thank you! I'll check the last patch set out and reply in a day\n> or two.\n\nThank you for your attention!\n\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Tue, 27 Feb 2024 00:08:33 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Hi, Andy!\n\nSorry for the delay, I have had long flights this week.\nI've reviewed the patch set, thank you for your efforts.\nI have several notes about patch set code, but first of\nI'm not sure the overall approach is the best for the task.\n\nAs Tomas wrote above, the approach is very invasive\nand spreads code related to detoasting among many\nparts of code.\n\nHave you considered another one - to alter pg_detoast_datum\n(actually, it would be detoast_attr function) and save\ndetoasted datums in the detoast context derived\nfrom the query context?\n\nWe have just enough information at this step to identify\nthe datum - toast relation id and value id, and could\nkeep links to these detoasted values in a, say, linked list\n or hash table. Thus we would avoid altering the executor\ncode and all detoast-related code would reside within\nthe detoast source files?\n\nI'd check this approach in several days and would\nreport on the result here.\n\nThere are also comments on the code itself, I'd write them\na bit later.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi, Andy!Sorry for the delay, I have had long flights this week.I've reviewed the patch set, thank you for your efforts.I have several notes about patch set code, but first ofI'm not sure the overall approach is the best for the task.As Tomas wrote above, the approach is very invasiveand spreads code related to detoasting among manyparts of code.Have you considered another one - to alter pg_detoast_datum(actually, it would be detoast_attr function) and savedetoasted datums in the detoast context derivedfrom the query context? We have just enough information at this step to identifythe datum - toast relation id and value id, and couldkeep links to these detoasted values in a, say, linked list or hash table. Thus we would avoid altering the executorcode and all detoast-related code would reside withinthe detoast source files?I'd check this approach in several days and wouldreport on the result here.There are also comments on the code itself, I'd write thema bit later.--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Sat, 2 Mar 2024 15:33:26 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nHi Nikita,\n\n>\n> Have you considered another one - to alter pg_detoast_datum\n> (actually, it would be detoast_attr function) and save\n> detoasted datums in the detoast context derived\n> from the query context? \n>\n> We have just enough information at this step to identify\n> the datum - toast relation id and value id, and could\n> keep links to these detoasted values in a, say, linked list\n> or hash table. Thus we would avoid altering the executor\n> code and all detoast-related code would reside within\n> the detoast source files?\n\nI think you are talking about the way Tomas provided. I am really\nafraid that I was thought of too self-opinionated, but I do have some\nconcerns about this approch as I stated here [1], looks my concerns is\nstill not addressed, or the concerns itself are too absurd which is\nreally possible I think? \n\n[1] https://www.postgresql.org/message-id/875xyb1a6q.fsf%40163.com\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Sun, 03 Mar 2024 09:52:16 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Hi,\n\nHere is a updated version, the main changes are:\n\n1. an shared_detoast_datum.org file which shows the latest desgin and\npending items during discussion. \n2. I removed the slot->pre_detoast_attrs totally.\n3. handle some pg_detoast_datum_slice use case.\n4. Some implementation improvement.\n\ncommit 66c64c197a5dab97a563be5a291127e4c5d6841d (HEAD -> shared_detoast_value)\nAuthor: yizhi.fzh <[email protected]>\nDate: Sun Mar 3 13:48:25 2024 +0800\n\n shared detoast datum\n \n See the overall design & alternative design & testing in\n shared_detoast_datum.org\n\nIn the shared_detoast_datum.org, I added the alternative design part for\nthe idea of TOAST cache.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sun, 03 Mar 2024 14:10:57 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "On 3/3/24 02:52, Andy Fan wrote:\n> \n> Hi Nikita,\n> \n>>\n>> Have you considered another one - to alter pg_detoast_datum\n>> (actually, it would be detoast_attr function) and save\n>> detoasted datums in the detoast context derived\n>> from the query context? \n>>\n>> We have just enough information at this step to identify\n>> the datum - toast relation id and value id, and could\n>> keep links to these detoasted values in a, say, linked list\n>> or hash table. Thus we would avoid altering the executor\n>> code and all detoast-related code would reside within\n>> the detoast source files?\n> \n> I think you are talking about the way Tomas provided. I am really\n> afraid that I was thought of too self-opinionated, but I do have some\n> concerns about this approch as I stated here [1], looks my concerns is\n> still not addressed, or the concerns itself are too absurd which is\n> really possible I think? \n> \n\nI'm not sure I understand your concerns. I can't speak for others, but I\ndid not consider you and your proposals too self-opinionated. You did\npropose a solution that you consider the right one. That's fine. People\nwill question that and suggest possible alternatives. That's fine too,\nit's why we have this list in the first place.\n\nFWIW I'm not somehow sure the approach I suggested is guaranteed to be\nbetter than \"your\" approach. Maybe there's some issue that I missed, for\nexample.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 3 Mar 2024 21:08:20 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\n\nOn 2/26/24 16:29, Andy Fan wrote:\n>\n> ...>\n> I can understand the benefits of the TOAST cache, but the following\n> issues looks not good to me IIUC: \n> \n> 1). we can't put the result to tts_values[*] since without the planner\n> decision, we don't know if this will break small_tlist logic. But if we\n> put it into the cache only, which means a cache-lookup as a overhead is\n> unavoidable.\n\nTrue - if you're comparing having the detoasted value in tts_values[*]\ndirectly with having to do a lookup in a cache, then yes, there's a bit\nof an overhead.\n\nBut I think from the discussion it's clear having to detoast the value\ninto tts_values[*] has some weaknesses too, in particular:\n\n- It requires decisions which attributes to detoast eagerly, which is\nquite invasive (having to walk the plan, ...).\n\n- I'm sure there will be cases where we choose to not detoast, but it\nwould be beneficial to detoast.\n\n- Detoasting just the initial slices does not seem compatible with this.\n\nIMHO the overhead of the cache lookup would be negligible compared to\nthe repeated detoasting of the value (which is the current baseline). I\nsomewhat doubt the difference compared to tts_values[*] will be even\nmeasurable.\n\n> \n> 2). It is hard to decide which entry should be evicted without attaching\n> it to the TupleTableSlot's life-cycle. then we can't grantee the entry\n> we keep is the entry we will reuse soon?\n> \n\nTrue. But is that really a problem? I imagined we'd set some sort of\nmemory limit on the cache (work_mem?), and evict oldest entries. So the\nentries would eventually get evicted, and the memory limit would ensure\nwe don't consume arbitrary amounts of memory.\n\nWe could also add some \"slot callback\" but that seems unnecessary.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 3 Mar 2024 21:29:42 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
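To make the cache idea discussed above a bit more concrete, here is a minimal sketch of a backend-local hash keyed by the toast value identity, roughly as it might be wired into detoast.c. The type and function names, the sizing, and the lack of any eviction are assumptions for illustration only; Tomas's actual PoC patches appear later in the thread.

#include "postgres.h"
#include "utils/hsearch.h"
#include "utils/memutils.h"

typedef struct ToastCacheKey
{
    Oid         toastrelid;     /* OID of the toast relation */
    Oid         valueid;        /* va_valueid from the toast pointer */
} ToastCacheKey;

typedef struct ToastCacheEntry
{
    ToastCacheKey key;
    struct varlena *value;      /* detoasted (and decompressed) copy */
} ToastCacheEntry;

static HTAB *toast_cache = NULL;

static void
toast_cache_init(void)
{
    HASHCTL     ctl;

    ctl.keysize = sizeof(ToastCacheKey);
    ctl.entrysize = sizeof(ToastCacheEntry);
    ctl.hcxt = CacheMemoryContext;

    toast_cache = hash_create("toast cache", 128, &ctl,
                              HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
}

/* return the cached detoasted copy, or NULL on a cache miss */
static struct varlena *
toast_cache_lookup(Oid toastrelid, Oid valueid)
{
    ToastCacheKey key = {toastrelid, valueid};
    ToastCacheEntry *entry;

    if (toast_cache == NULL)
        toast_cache_init();

    entry = (ToastCacheEntry *) hash_search(toast_cache, &key, HASH_FIND, NULL);
    return entry ? entry->value : NULL;
}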
{
"msg_contents": "On 3/3/24 07:10, Andy Fan wrote:\n> \n> Hi,\n> \n> Here is a updated version, the main changes are:\n> \n> 1. an shared_detoast_datum.org file which shows the latest desgin and\n> pending items during discussion. \n> 2. I removed the slot->pre_detoast_attrs totally.\n> 3. handle some pg_detoast_datum_slice use case.\n> 4. Some implementation improvement.\n> \n\nI only very briefly skimmed the patch, and I guess most of my earlier\ncomments still apply. But I'm a bit surprised the patch needs to pass a\nMemoryContext to so many places as a function argument - that seems to\ngo against how we work with memory contexts. Doubly so because it seems\nto only ever pass CurrentMemoryContext anyway. So what's that about?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 3 Mar 2024 21:41:18 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nTomas Vondra <[email protected]> writes:\n\n> On 3/3/24 02:52, Andy Fan wrote:\n>> \n>> Hi Nikita,\n>> \n>>>\n>>> Have you considered another one - to alter pg_detoast_datum\n>>> (actually, it would be detoast_attr function) and save\n>>> detoasted datums in the detoast context derived\n>>> from the query context? \n>>>\n>>> We have just enough information at this step to identify\n>>> the datum - toast relation id and value id, and could\n>>> keep links to these detoasted values in a, say, linked list\n>>> or hash table. Thus we would avoid altering the executor\n>>> code and all detoast-related code would reside within\n>>> the detoast source files?\n>> \n>> I think you are talking about the way Tomas provided. I am really\n>> afraid that I was thought of too self-opinionated, but I do have some\n>> concerns about this approch as I stated here [1], looks my concerns is\n>> still not addressed, or the concerns itself are too absurd which is\n>> really possible I think? \n>> \n>\n> I can't speak for others, but I did not consider you and your\n> proposals too self-opinionated. \n\nThis would be really great to know:)\n\n> You did\n> propose a solution that you consider the right one. That's fine. People\n> will question that and suggest possible alternatives. That's fine too,\n> it's why we have this list in the first place.\n\nI agree.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Mon, 04 Mar 2024 09:17:27 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nTomas Vondra <[email protected]> writes:\n\n> On 2/26/24 16:29, Andy Fan wrote:\n>>\n>> ...>\n>> I can understand the benefits of the TOAST cache, but the following\n>> issues looks not good to me IIUC: \n>> \n>> 1). we can't put the result to tts_values[*] since without the planner\n>> decision, we don't know if this will break small_tlist logic. But if we\n>> put it into the cache only, which means a cache-lookup as a overhead is\n>> unavoidable.\n>\n> True - if you're comparing having the detoasted value in tts_values[*]\n> directly with having to do a lookup in a cache, then yes, there's a bit\n> of an overhead.\n>\n> But I think from the discussion it's clear having to detoast the value\n> into tts_values[*] has some weaknesses too, in particular:\n>\n> - It requires decisions which attributes to detoast eagerly, which is\n> quite invasive (having to walk the plan, ...).\n>\n> - I'm sure there will be cases where we choose to not detoast, but it\n> would be beneficial to detoast.\n>\n> - Detoasting just the initial slices does not seem compatible with this.\n>\n> IMHO the overhead of the cache lookup would be negligible compared to\n> the repeated detoasting of the value (which is the current baseline). I\n> somewhat doubt the difference compared to tts_values[*] will be even\n> measurable.\n>\n>> \n>> 2). It is hard to decide which entry should be evicted without attaching\n>> it to the TupleTableSlot's life-cycle. then we can't grantee the entry\n>> we keep is the entry we will reuse soon?\n>> \n>\n> True. But is that really a problem? I imagined we'd set some sort of\n> memory limit on the cache (work_mem?), and evict oldest entries. So the\n> entries would eventually get evicted, and the memory limit would ensure\n> we don't consume arbitrary amounts of memory.\n>\n\nHere is a copy from the shared_detoast_datum.org in the patch. I am\nfeeling about when / which entry to free is a key problem and run-time\n(detoast_attr) overhead vs createplan.c overhead is a small difference\nas well. the overhead I paid for createplan.c/setref.c looks not huge as\nwell. \n\n\"\"\"\nA alternative design: toast cache\n---------------------------------\n\nThis method is provided by Tomas during the review process. IIUC, this\nmethod would maintain a local HTAB which map a toast datum to a detoast\ndatum and the entry is maintained / used in detoast_attr\nfunction. Within this method, the overall design is pretty clear and the\ncode modification can be controlled in toasting system only.\n\nI assumed that releasing all of the memory at the end of executor once\nis not an option since it may consumed too many memory. Then, when and\nwhich entry to release becomes a trouble for me. For example:\n\n QUERY PLAN\n------------------------------\n Nested Loop\n Join Filter: (t1.a = t2.a)\n -> Seq Scan on t1\n -> Seq Scan on t2\n(4 rows)\n\nIn this case t1.a needs a longer lifespan than t2.a since it is\nin outer relation. Without the help from slot's life-cycle system, I\ncan't think out a answer for the above question.\n\nAnother difference between the 2 methods is my method have many\nmodification on createplan.c/setref.c/execExpr.c/execExprInterp.c, but\nit can save some run-time effort like hash_search find / enter run-time\nin method 2 since I put them directly into tts_values[*].\n\nI'm not sure the factor 2 makes some real measurable difference in real\ncase, so my current concern mainly comes from factor 1.\n\"\"\"\n\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Mon, 04 Mar 2024 09:23:56 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nTomas Vondra <[email protected]> writes:\n\n> On 3/3/24 07:10, Andy Fan wrote:\n>> \n>> Hi,\n>> \n>> Here is a updated version, the main changes are:\n>> \n>> 1. an shared_detoast_datum.org file which shows the latest desgin and\n>> pending items during discussion. \n>> 2. I removed the slot->pre_detoast_attrs totally.\n>> 3. handle some pg_detoast_datum_slice use case.\n>> 4. Some implementation improvement.\n>> \n>\n> I only very briefly skimmed the patch, and I guess most of my earlier\n> comments still apply.\n\nYes, the overall design is not changed.\n\n> But I'm a bit surprised the patch needs to pass a\n> MemoryContext to so many places as a function argument - that seems to\n> go against how we work with memory contexts. Doubly so because it seems\n> to only ever pass CurrentMemoryContext anyway. So what's that about?\n\nI think you are talking about the argument like this:\n \n /* ----------\n- * detoast_attr -\n+ * detoast_attr_ext -\n *\n *\tPublic entry point to get back a toasted value from compression\n *\tor external storage. The result is always non-extended varlena form.\n *\n+ * ctx: The memory context which the final value belongs to.\n+ *\n * Note some callers assume that if the input is an EXTERNAL or COMPRESSED\n * datum, the result will be a pfree'able chunk.\n * ----------\n */\n\n+extern struct varlena *\n+detoast_attr_ext(struct varlena *attr, MemoryContext ctx)\n\nThis is mainly because 'detoast_attr' will apply more memory than the\n\"final detoast datum\" , for example the code to scan toast relation\nneeds some memory as well, and what I want is just keeping the memory\nfor the final detoast datum so that other memory can be released sooner,\nso I added the function argument for that. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Mon, 04 Mar 2024 09:29:53 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\n\nOn 3/4/24 02:29, Andy Fan wrote:\n> \n> Tomas Vondra <[email protected]> writes:\n> \n>> On 3/3/24 07:10, Andy Fan wrote:\n>>>\n>>> Hi,\n>>>\n>>> Here is a updated version, the main changes are:\n>>>\n>>> 1. an shared_detoast_datum.org file which shows the latest desgin and\n>>> pending items during discussion. \n>>> 2. I removed the slot->pre_detoast_attrs totally.\n>>> 3. handle some pg_detoast_datum_slice use case.\n>>> 4. Some implementation improvement.\n>>>\n>>\n>> I only very briefly skimmed the patch, and I guess most of my earlier\n>> comments still apply.\n> \n> Yes, the overall design is not changed.\n> \n>> But I'm a bit surprised the patch needs to pass a\n>> MemoryContext to so many places as a function argument - that seems to\n>> go against how we work with memory contexts. Doubly so because it seems\n>> to only ever pass CurrentMemoryContext anyway. So what's that about?\n> \n> I think you are talking about the argument like this:\n> \n> /* ----------\n> - * detoast_attr -\n> + * detoast_attr_ext -\n> *\n> *\tPublic entry point to get back a toasted value from compression\n> *\tor external storage. The result is always non-extended varlena form.\n> *\n> + * ctx: The memory context which the final value belongs to.\n> + *\n> * Note some callers assume that if the input is an EXTERNAL or COMPRESSED\n> * datum, the result will be a pfree'able chunk.\n> * ----------\n> */\n> \n> +extern struct varlena *\n> +detoast_attr_ext(struct varlena *attr, MemoryContext ctx)\n> \n> This is mainly because 'detoast_attr' will apply more memory than the\n> \"final detoast datum\" , for example the code to scan toast relation\n> needs some memory as well, and what I want is just keeping the memory\n> for the final detoast datum so that other memory can be released sooner,\n> so I added the function argument for that. \n> \n\nWhat exactly does detoast_attr allocate in order to scan toast relation?\nDoes that happen in master, or just with the patch? If with master, I\nsuggest to ignore that / treat that as a separate issue and leave it for\na different patch.\n\nIn any case, the custom is to allocate results in the context that is\nset in CurrentMemoryContext at the moment of the call, and if there's\nsubstantial amount of allocations that we want to free soon, we either\ndo that by explicit pfree() calls, or create a small temporary context\nin the function (but that's more expensive).\n\nI don't think we should invent a new convention where the context is\npassed as an argument, unless absolutely necessary.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 4 Mar 2024 13:07:58 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
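For reference, the convention described above, allocating the result in whatever CurrentMemoryContext is at the moment of the call, usually looks like the hypothetical call-site fragment below rather than threading a MemoryContext argument into detoast_attr(). It is a sketch of the usual pattern, not a statement about what the patch must do; as Andy points out next, its downside is that intermediate allocations made while fetching toast chunks land in the same context.

/* the caller picks the lifespan by switching contexts around the call */
{
    MemoryContext oldcxt = MemoryContextSwitchTo(slot->tts_mcxt);
    struct varlena *detoasted;

    detoasted = detoast_attr((struct varlena *) DatumGetPointer(slot->tts_values[attnum]));
    slot->tts_values[attnum] = PointerGetDatum(detoasted);

    MemoryContextSwitchTo(oldcxt);
}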
{
"msg_contents": "\nTomas Vondra <[email protected]> writes:\n\n>>> But I'm a bit surprised the patch needs to pass a\n>>> MemoryContext to so many places as a function argument - that seems to\n>>> go against how we work with memory contexts. Doubly so because it seems\n>>> to only ever pass CurrentMemoryContext anyway. So what's that about?\n>> \n>> I think you are talking about the argument like this:\n>> \n>> /* ----------\n>> - * detoast_attr -\n>> + * detoast_attr_ext -\n>> *\n>> *\tPublic entry point to get back a toasted value from compression\n>> *\tor external storage. The result is always non-extended varlena form.\n>> *\n>> + * ctx: The memory context which the final value belongs to.\n>> + *\n>> * Note some callers assume that if the input is an EXTERNAL or COMPRESSED\n>> * datum, the result will be a pfree'able chunk.\n>> * ----------\n>> */\n>> \n>> +extern struct varlena *\n>> +detoast_attr_ext(struct varlena *attr, MemoryContext ctx)\n>> \n>> This is mainly because 'detoast_attr' will apply more memory than the\n>> \"final detoast datum\" , for example the code to scan toast relation\n>> needs some memory as well, and what I want is just keeping the memory\n>> for the final detoast datum so that other memory can be released sooner,\n>> so I added the function argument for that. \n>> \n>\n> What exactly does detoast_attr allocate in order to scan toast relation?\n> Does that happen in master, or just with the patch?\n\nIt is in the current master, for example the TupleTableSlot creation\nneeded by scanning the toast relation needs a memory allocating. \n\n> If with master, I\n> suggest to ignore that / treat that as a separate issue and leave it for\n> a different patch.\n\nOK, I can make it as a seperate commit in the next version. \n\n> In any case, the custom is to allocate results in the context that is\n> set in CurrentMemoryContext at the moment of the call, and if there's\n> substantial amount of allocations that we want to free soon, we either\n> do that by explicit pfree() calls, or create a small temporary context\n> in the function (but that's more expensive).\n>\n> I don't think we should invent a new convention where the context is\n> passed as an argument, unless absolutely necessary.\n\nHmm, in this case, if we don't add the new argument, we have to get the\ndetoast datum in Context-1 and copy it to the desired memory context,\nwhich is the thing I want to avoid. I think we have a same decision to\nmake on the TOAST cache method as well.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Mon, 04 Mar 2024 20:13:20 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "On 3/4/24 02:23, Andy Fan wrote:\n> \n> Tomas Vondra <[email protected]> writes:\n> \n>> On 2/26/24 16:29, Andy Fan wrote:\n>>>\n>>> ...>\n>>> I can understand the benefits of the TOAST cache, but the following\n>>> issues looks not good to me IIUC: \n>>>\n>>> 1). we can't put the result to tts_values[*] since without the planner\n>>> decision, we don't know if this will break small_tlist logic. But if we\n>>> put it into the cache only, which means a cache-lookup as a overhead is\n>>> unavoidable.\n>>\n>> True - if you're comparing having the detoasted value in tts_values[*]\n>> directly with having to do a lookup in a cache, then yes, there's a bit\n>> of an overhead.\n>>\n>> But I think from the discussion it's clear having to detoast the value\n>> into tts_values[*] has some weaknesses too, in particular:\n>>\n>> - It requires decisions which attributes to detoast eagerly, which is\n>> quite invasive (having to walk the plan, ...).\n>>\n>> - I'm sure there will be cases where we choose to not detoast, but it\n>> would be beneficial to detoast.\n>>\n>> - Detoasting just the initial slices does not seem compatible with this.\n>>\n>> IMHO the overhead of the cache lookup would be negligible compared to\n>> the repeated detoasting of the value (which is the current baseline). I\n>> somewhat doubt the difference compared to tts_values[*] will be even\n>> measurable.\n>>\n>>>\n>>> 2). It is hard to decide which entry should be evicted without attaching\n>>> it to the TupleTableSlot's life-cycle. then we can't grantee the entry\n>>> we keep is the entry we will reuse soon?\n>>>\n>>\n>> True. But is that really a problem? I imagined we'd set some sort of\n>> memory limit on the cache (work_mem?), and evict oldest entries. So the\n>> entries would eventually get evicted, and the memory limit would ensure\n>> we don't consume arbitrary amounts of memory.\n>>\n> \n> Here is a copy from the shared_detoast_datum.org in the patch. I am\n> feeling about when / which entry to free is a key problem and run-time\n> (detoast_attr) overhead vs createplan.c overhead is a small difference\n> as well. the overhead I paid for createplan.c/setref.c looks not huge as\n> well. \n\nI decided to whip up a PoC patch implementing this approach, to get a\nbetter idea of how it compares to the original proposal, both in terms\nof performance and code complexity.\n\nAttached are two patches that both add a simple cache in detoast.c, but\ndiffer in when exactly the caching happens.\n\ntoast-cache-v1 - caching happens in toast_fetch_datum, which means this\nhappens before decompression\n\ntoast-cache-v2 - caching happens in detoast_attr, after decompression\n\nI started with v1, but that did not help with the example workloads\n(from the original post) at all. Only after I changed this to cache\ndecompressed data (in v2) it became competitive.\n\nThere's GUC to enable the cache (enable_toast_cache, off by default), to\nmake experimentation easier.\n\nThe cache is invalidated at the end of a transaction - I think this\nshould be OK, because the rows may be deleted but can't be cleaned up\n(or replaced with a new TOAST value). This means the cache could cover\nmultiple queries - not sure if that's a good thing or bad thing.\n\nI haven't implemented any sort of cache eviction. I agree that's a hard\nproblem in general, but I doubt we can do better than LRU. I also didn't\nimplement any memory limit.\n\nFWIW I think it's a fairly serious issue Andy's approach does not have\nany way to limit the memory. 
It will detoasts the values eagerly, but\nwhat if the rows have multiple large values? What if we end up keeping\nmultiple such rows (from different parts of the plan) at once?\n\nI also haven't implemented caching for sliced data. I think modifying\nthe code to support this should not be difficult - it'd require tracking\nhow much data we have in the cache, and considering that during lookup\nand when adding entries to cache.\n\nI've implemented the cache as \"global\". Maybe it should be tied to query\nor executor state, but then it'd be harder to access it from detoast.c\n(we'd have to pass the cache pointer in some way, and it wouldn't work\nfor use cases that don't go through executor).\n\nI think implementation-wise this is pretty non-invasive.\n\nPerformance-wise, these patches are slower than with Andy's patch. For\nexample for the jsonb Q1, I see master ~500ms, Andy's patch ~100ms and\nv2 at ~150ms. I'm sure there's a number of optimization opportunities\nand v2 could get v2 closer to the 100ms.\n\nv1 is not competitive at all in this jsonb/Q1 test - it's just as slow\nas master, because the data set is small enough to be fully cached, so\nthere's no I/O and it's the decompression is the actual bottleneck.\n\nThat being said, I'm not convinced v1 would be a bad choice for some\ncases. If there's memory pressure enough to evict TOAST, it's quite\npossible the I/O would become the bottleneck. At which point it might be\na good trade off to cache compressed data, because then we'd cache more\ndetoasted values.\n\nOTOH it's likely the access to TOAST values is localized (in temporal\nsense), i.e. we read it from disk, detoast it a couple times in a short\ntime interval, and then move to the next row. That'd mean v2 is probably\nthe correct trade off.\n\nNot sure.\n\n> \n> \"\"\"\n> A alternative design: toast cache\n> ---------------------------------\n> \n> This method is provided by Tomas during the review process. IIUC, this\n> method would maintain a local HTAB which map a toast datum to a detoast\n> datum and the entry is maintained / used in detoast_attr\n> function. Within this method, the overall design is pretty clear and the\n> code modification can be controlled in toasting system only.\n> \n\nRight.\n\n> I assumed that releasing all of the memory at the end of executor once\n> is not an option since it may consumed too many memory. Then, when and\n> which entry to release becomes a trouble for me. For example:\n> \n> QUERY PLAN\n> ------------------------------\n> Nested Loop\n> Join Filter: (t1.a = t2.a)\n> -> Seq Scan on t1\n> -> Seq Scan on t2\n> (4 rows)\n> \n> In this case t1.a needs a longer lifespan than t2.a since it is\n> in outer relation. Without the help from slot's life-cycle system, I\n> can't think out a answer for the above question.\n> \n\nThis is true, but how likely such plans are? I mean, surely no one would\ndo nested loop with sequential scans on reasonably large tables, so how\nrepresentative this example is?\n\nAlso, this leads me to the need of having some sort of memory limit. If\nwe may be keeping entries for extended periods of time, and we don't\nhave any way to limit the amount of memory, that does not seem great.\n\nAFAIK if we detoast everything into tts_values[] there's no way to\nimplement and enforce such limit. What happens if there's a row with\nmultiple large-ish TOAST values? 
What happens if those rows are in\ndifferent (and distant) parts of the plan?\n\nIt seems far easier to limit the memory with the toast cache.\n\n\n> Another difference between the 2 methods is my method have many\n> modification on createplan.c/setref.c/execExpr.c/execExprInterp.c, but\n> it can save some run-time effort like hash_search find / enter run-time\n> in method 2 since I put them directly into tts_values[*].\n> \n> I'm not sure the factor 2 makes some real measurable difference in real\n> case, so my current concern mainly comes from factor 1.\n> \"\"\"\n> \n\nThis seems a bit dismissive of the (possible) issue. It'd be good to do\nsome measurements, especially on simple queries that can't benefit from\nthe detoasting (e.g. because there's no varlena attribute).\n\nIn any case, my concern is more about having to do this when creating\nthe plan at all, the code complexity etc. Not just because it might have\nperformance impact.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 4 Mar 2024 17:06:23 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
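The end-of-transaction invalidation mentioned above could be wired up with a transaction callback along these lines, building on the toast_cache sketch shown earlier in the thread; the callback name and where it gets registered are assumptions, not code taken from the attached patches.

#include "access/xact.h"

static void
toast_cache_xact_callback(XactEvent event, void *arg)
{
    /* drop the whole cache once the transaction ends, either way */
    if (event == XACT_EVENT_COMMIT || event == XACT_EVENT_ABORT)
    {
        if (toast_cache != NULL)
        {
            hash_destroy(toast_cache);
            toast_cache = NULL;
        }
    }
}

void
toast_cache_setup(void)
{
    /* called once, e.g. during backend/module initialization */
    RegisterXactCallback(toast_cache_xact_callback, NULL);
}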
{
"msg_contents": "\n>\n> I decided to whip up a PoC patch implementing this approach, to get a\n> better idea of how it compares to the original proposal, both in terms\n> of performance and code complexity.\n>\n> Attached are two patches that both add a simple cache in detoast.c, but\n> differ in when exactly the caching happens.\n>\n> toast-cache-v1 - caching happens in toast_fetch_datum, which means this\n> happens before decompression\n>\n> toast-cache-v2 - caching happens in detoast_attr, after decompression\n...\n>\n> I think implementation-wise this is pretty non-invasive.\n..\n>\n> Performance-wise, these patches are slower than with Andy's patch. For\n> example for the jsonb Q1, I see master ~500ms, Andy's patch ~100ms and\n> v2 at ~150ms. I'm sure there's a number of optimization opportunities\n> and v2 could get v2 closer to the 100ms.\n>\n> v1 is not competitive at all in this jsonb/Q1 test - it's just as slow\n> as master, because the data set is small enough to be fully cached, so\n> there's no I/O and it's the decompression is the actual bottleneck.\n>\n> That being said, I'm not convinced v1 would be a bad choice for some\n> cases. If there's memory pressure enough to evict TOAST, it's quite\n> possible the I/O would become the bottleneck. At which point it might be\n> a good trade off to cache compressed data, because then we'd cache more\n> detoasted values.\n>\n> OTOH it's likely the access to TOAST values is localized (in temporal\n> sense), i.e. we read it from disk, detoast it a couple times in a short\n> time interval, and then move to the next row. That'd mean v2 is probably\n> the correct trade off.\n\nI don't have different thought about the above statement and Thanks for\nthe PoC code which would make later testing much easier. \n\n>> \n>> \"\"\"\n>> A alternative design: toast cache\n>> ---------------------------------\n>> \n>> This method is provided by Tomas during the review process. IIUC, this\n>> method would maintain a local HTAB which map a toast datum to a detoast\n>> datum and the entry is maintained / used in detoast_attr\n>> function. Within this method, the overall design is pretty clear and the\n>> code modification can be controlled in toasting system only.\n>> \n>\n> Right.\n>\n>> I assumed that releasing all of the memory at the end of executor once\n>> is not an option since it may consumed too many memory. Then, when and\n>> which entry to release becomes a trouble for me. For example:\n>> \n>> QUERY PLAN\n>> ------------------------------\n>> Nested Loop\n>> Join Filter: (t1.a = t2.a)\n>> -> Seq Scan on t1\n>> -> Seq Scan on t2\n>> (4 rows)\n>> \n>> In this case t1.a needs a longer lifespan than t2.a since it is\n>> in outer relation. Without the help from slot's life-cycle system, I\n>> can't think out a answer for the above question.\n>> \n>\n> This is true, but how likely such plans are? I mean, surely no one would\n> do nested loop with sequential scans on reasonably large tables, so how\n> representative this example is?\n\nAcutally this is a simplest Join case, we still have same problem like\nNested Loop + Index Scan which will be pretty common. \n\n> Also, this leads me to the need of having some sort of memory limit. If\n> we may be keeping entries for extended periods of time, and we don't\n> have any way to limit the amount of memory, that does not seem great.\n>\n> AFAIK if we detoast everything into tts_values[] there's no way to\n> implement and enforce such limit. 
What happens if there's a row with\n> multiple large-ish TOAST values? What happens if those rows are in\n> different (and distant) parts of the plan?\n\nI think this can be done by tracking the memory usage on EState level\nor global variable level and disable it if it exceeds the limits and\nresume it when we free the detoast datum when we don't need it. I think\nno other changes need to be done.\n\n> It seems far easier to limit the memory with the toast cache.\n\nI think the memory limit and entry eviction is the key issue now. IMO,\nthere are still some difference when both methods can support memory\nlimit. The reason is my patch can grantee the cached memory will be\nreused, so if we set the limit to 10MB, we know all the 10MB is\nuseful, but the TOAST cache method, probably can't grantee that, then\nwhen we want to make it effecitvely, we have to set a higher limit for\nthis.\n\n\n>> Another difference between the 2 methods is my method have many\n>> modification on createplan.c/setref.c/execExpr.c/execExprInterp.c, but\n>> it can save some run-time effort like hash_search find / enter run-time\n>> in method 2 since I put them directly into tts_values[*].\n>> \n>> I'm not sure the factor 2 makes some real measurable difference in real\n>> case, so my current concern mainly comes from factor 1.\n>> \"\"\"\n>> \n>\n> This seems a bit dismissive of the (possible) issue.\n\nHmm, sorry about this, that is not my intention:(\n\n> It'd be good to do\n> some measurements, especially on simple queries that can't benefit from\n> the detoasting (e.g. because there's no varlena attribute).\n\nThis testing for the queries which have no varlena attribute was done at\nthe very begining of this thread, and for now, the code is much more\nnicer for this sistuation. all the changes in execExpr.c\nexecExprInterp.c has no impact on this. Only the plan walker codes\nmatter. Here is a test based on tenk1. \n\nq1: explain select count(*) from tenk1 where odd > 10 and even > 30;\nq2: select count(*) from tenk1 where odd > 10 and even > 30;\n\npgbench -n -f qx.sql regression -T10\n\n| Query | master (ms) | patched (ms) |\n|-------+-------------+--------------|\n| q1 | 0.129 | 0.129 |\n| q2 | 1.876 | 1.870 |\n\nthere are some error for patched-q2 combination, but at least it can\nshow it doesn't cause noticable regression.\n\n> In any case, my concern is more about having to do this when creating\n> the plan at all, the code complexity etc. Not just because it might have\n> performance impact.\n\nI think the main trade-off is TOAST cache method is pretty non-invasive\nbut can't control the eviction well, the impacts includes:\n1. may evicting the datum we want and kept the datum we don't need.\n2. more likely to use up all the memory which is allowed. for example:\nif we set the limit to 1MB, then we kept more data which will be not\nused and then consuming all of the 1MB. \n\nMy method is resolving this with some helps from other modules (kind of\ninvasive) but can control the eviction well and use the memory as less\nas possible.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Tue, 05 Mar 2024 01:08:29 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
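A bare-bones version of the accounting Andy sketches here could look like the fragment below. The counter, the shared_detoast_limit value standing in for a (hypothetical) GUC, and the function names are all made up for illustration; the point is only that the bookkeeping itself is small.

/* process-local accounting of memory held by shared detoast datums */
static Size shared_detoast_bytes = 0;
static Size shared_detoast_limit = 64 * 1024 * 1024;    /* stand-in for a GUC */

/* may we keep another detoasted copy of this size in tts_values[*]? */
static bool
shared_detoast_allowed(Size add_bytes)
{
    return shared_detoast_bytes + add_bytes <= shared_detoast_limit;
}

static void
shared_detoast_acquire(Size bytes)
{
    shared_detoast_bytes += bytes;
}

static void
shared_detoast_release(Size bytes)
{
    Assert(shared_detoast_bytes >= bytes);
    shared_detoast_bytes -= bytes;
}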
{
"msg_contents": "On 3/4/24 18:08, Andy Fan wrote:\n> ...\n>>\n>>> I assumed that releasing all of the memory at the end of executor once\n>>> is not an option since it may consumed too many memory. Then, when and\n>>> which entry to release becomes a trouble for me. For example:\n>>>\n>>> QUERY PLAN\n>>> ------------------------------\n>>> Nested Loop\n>>> Join Filter: (t1.a = t2.a)\n>>> -> Seq Scan on t1\n>>> -> Seq Scan on t2\n>>> (4 rows)\n>>>\n>>> In this case t1.a needs a longer lifespan than t2.a since it is\n>>> in outer relation. Without the help from slot's life-cycle system, I\n>>> can't think out a answer for the above question.\n>>>\n>>\n>> This is true, but how likely such plans are? I mean, surely no one would\n>> do nested loop with sequential scans on reasonably large tables, so how\n>> representative this example is?\n> \n> Acutally this is a simplest Join case, we still have same problem like\n> Nested Loop + Index Scan which will be pretty common. \n> \n\nYes, I understand there are cases where LRU eviction may not be the best\nchoice - like here, where the \"t1\" should stay in the case. But there\nare also cases where this is the wrong choice, and LRU would be better.\n\nFor example a couple paragraphs down you suggest to enforce the memory\nlimit by disabling detoasting if the memory limit is reached. That means\nthe detoasting can get disabled because there's a single access to the\nattribute somewhere \"up the plan tree\". But what if the other attributes\n(which now won't be detoasted) are accessed many times until then?\n\nI think LRU is a pretty good \"default\" algorithm if we don't have a very\ngood idea of the exact life span of the values, etc. Which is why other\nnodes (e.g. Memoize) use LRU too.\n\nBut I wonder if there's a way to count how many times an attribute is\naccessed (or is likely to be). That might be used to inform a better\neviction strategy.\n\nAlso, we don't need to evict the whole entry - we could evict just the\ndata part (guaranteed to be fairly large), but keep the header, and keep\nthe counts, expected number of hits, and other info. And use this to\ne.g. release entries that reached the expected number of hits. But I'd\nprobably start with LRU and only do this as an improvement later.\n\n>> Also, this leads me to the need of having some sort of memory limit. If\n>> we may be keeping entries for extended periods of time, and we don't\n>> have any way to limit the amount of memory, that does not seem great.\n>>\n>> AFAIK if we detoast everything into tts_values[] there's no way to\n>> implement and enforce such limit. What happens if there's a row with\n>> multiple large-ish TOAST values? What happens if those rows are in\n>> different (and distant) parts of the plan?\n> \n> I think this can be done by tracking the memory usage on EState level\n> or global variable level and disable it if it exceeds the limits and\n> resume it when we free the detoast datum when we don't need it. I think\n> no other changes need to be done.\n> \n\nThat seems like a fair amount of additional complexity. And what if the\ntoasted values are accessed in context without EState (I haven't checked\nhow common / important that is)?\n\nAnd I'm not sure just disabling the detoast as a way to enforce a memory\nlimit, as explained earlier.\n\n>> It seems far easier to limit the memory with the toast cache.\n> \n> I think the memory limit and entry eviction is the key issue now. IMO,\n> there are still some difference when both methods can support memory\n> limit. 
The reason is my patch can grantee the cached memory will be\n> reused, so if we set the limit to 10MB, we know all the 10MB is\n> useful, but the TOAST cache method, probably can't grantee that, then\n> when we want to make it effecitvely, we have to set a higher limit for\n> this.\n> \n\nCan it actually guarantee that? It can guarantee the slot may be used,\nbut I don't see how could it guarantee the detoasted value will be used.\nWe may be keeping the slot for other reasons. And even if it could\nguarantee the detoasted value will be used, does that actually prove\nit's better to keep that value? What if it's only used once, but it's\nblocking detoasting of values that will be used 10x that?\n\nIf we detoast a 10MB value in the outer side of the Nest Loop, what if\nthe inner path has multiple accesses to another 10MB value that now\ncan't be detoasted (as a shared value)?\n\n> \n>>> Another difference between the 2 methods is my method have many\n>>> modification on createplan.c/setref.c/execExpr.c/execExprInterp.c, but\n>>> it can save some run-time effort like hash_search find / enter run-time\n>>> in method 2 since I put them directly into tts_values[*].\n>>>\n>>> I'm not sure the factor 2 makes some real measurable difference in real\n>>> case, so my current concern mainly comes from factor 1.\n>>> \"\"\"\n>>>\n>>\n>> This seems a bit dismissive of the (possible) issue.\n> \n> Hmm, sorry about this, that is not my intention:(\n> \n\nNo need to apologize.\n\n>> It'd be good to do\n>> some measurements, especially on simple queries that can't benefit from\n>> the detoasting (e.g. because there's no varlena attribute).\n> \n> This testing for the queries which have no varlena attribute was done at\n> the very begining of this thread, and for now, the code is much more\n> nicer for this sistuation. all the changes in execExpr.c\n> execExprInterp.c has no impact on this. Only the plan walker codes\n> matter. Here is a test based on tenk1. \n> \n> q1: explain select count(*) from tenk1 where odd > 10 and even > 30;\n> q2: select count(*) from tenk1 where odd > 10 and even > 30;\n> \n> pgbench -n -f qx.sql regression -T10\n> \n> | Query | master (ms) | patched (ms) |\n> |-------+-------------+--------------|\n> | q1 | 0.129 | 0.129 |\n> | q2 | 1.876 | 1.870 |\n> \n> there are some error for patched-q2 combination, but at least it can\n> show it doesn't cause noticable regression.\n> \n\nOK, I'll try to do some experiments with these cases too.\n\n>> In any case, my concern is more about having to do this when creating\n>> the plan at all, the code complexity etc. Not just because it might have\n>> performance impact.\n> \n> I think the main trade-off is TOAST cache method is pretty non-invasive\n> but can't control the eviction well, the impacts includes:\n> 1. may evicting the datum we want and kept the datum we don't need.\n\nThis applies to any eviction algorithm, not just LRU. Ultimately what\nmatters is whether we have in the cache the most often used values, i.e.\nthe hit ratio (perhaps in combination with how expensive detoasting that\nparticular entry was).\n\n> 2. more likely to use up all the memory which is allowed. for example:\n> if we set the limit to 1MB, then we kept more data which will be not\n> used and then consuming all of the 1MB. \n> \n> My method is resolving this with some helps from other modules (kind of\n> invasive) but can control the eviction well and use the memory as less\n> as possible.\n> \n\nIs the memory usage really an issue? 
Sure, it'd be nice to evict entries\nas soon as we can prove they are not needed anymore, but if the cache\nlimit is set to 1MB it's not really a problem to use 1MB.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 4 Mar 2024 21:07:04 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
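To make the query-lifespan TOAST cache idea discussed above a bit more concrete, here is a minimal sketch of what an LRU-managed cache entry and lookup could look like. It assumes a dynahash table keyed by the TOAST value's (toastrelid, valueid) pair and a dlist-based LRU list; ToastCacheKey, ToastCacheEntry and toast_cache_lookup are illustrative names only, not code from either patch.

    #include "postgres.h"
    #include "lib/ilist.h"
    #include "utils/hsearch.h"

    /* Identity of one external TOAST value. */
    typedef struct ToastCacheKey
    {
        Oid         toastrelid;     /* TOAST table the value lives in */
        Oid         valueid;        /* va_valueid from the TOAST pointer */
    } ToastCacheKey;

    typedef struct ToastCacheEntry
    {
        ToastCacheKey key;          /* hash key; must be first */
        dlist_node  lru_node;       /* position in the LRU list */
        Size        size;           /* bytes used by the detoasted copy */
        struct varlena *data;       /* detoasted value, or NULL until filled */
    } ToastCacheEntry;

    /*
     * Return the cached detoasted value, or NULL on a miss.  On a miss a
     * fresh entry is created and pushed to the LRU head; the caller is
     * expected to detoast the value, fill entry->data/size, and evict from
     * the LRU tail until the configured memory limit is respected again.
     */
    static struct varlena *
    toast_cache_lookup(HTAB *cache, dlist_head *lru, const ToastCacheKey *key,
                       ToastCacheEntry **entry_p)
    {
        bool        found;
        ToastCacheEntry *entry;

        entry = (ToastCacheEntry *) hash_search(cache, key, HASH_ENTER, &found);
        if (!found)
        {
            entry->size = 0;
            entry->data = NULL;
            dlist_push_head(lru, &entry->lru_node);
        }
        else
        {
            /* refresh the entry's LRU position on every access */
            dlist_delete(&entry->lru_node);
            dlist_push_head(lru, &entry->lru_node);
        }

        *entry_p = entry;
        return entry->data;
    }

Evicting only the data part while keeping the header and counters, as suggested in the message above, would then just mean freeing entry->data and leaving the entry itself in the table.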
{
"msg_contents": "\n>\n>> 2. more likely to use up all the memory which is allowed. for example:\n>> if we set the limit to 1MB, then we kept more data which will be not\n>> used and then consuming all of the 1MB. \n>> \n>> My method is resolving this with some helps from other modules (kind of\n>> invasive) but can control the eviction well and use the memory as less\n>> as possible.\n>> \n>\n> Is the memory usage really an issue? Sure, it'd be nice to evict entries\n> as soon as we can prove they are not needed anymore, but if the cache\n> limit is set to 1MB it's not really a problem to use 1MB.\n\nThis might be a key point which leads us to some different directions, so\nI want to explain more about this, to see if we can get some agreement\nhere.\n\nIt is a bit hard to decide which memory limit to set, 1MB, 10MB or 40MB,\n100MB. In my current case it is 40MB at least. Less memory limit \ncauses cache ineffecitve, high memory limit cause potential memory\nuse-up issue in the TOAST cache design. But in my method, even we set a\nhigher value, it just limits the user case it really (nearly) needed,\nand it would not cache more values util the limit is hit. This would\nmake a noticable difference when we want to set a high limit and we have\nsome high active sessions, like 100 * 40MB = 4GB. \n\n> On 3/4/24 18:08, Andy Fan wrote:\n>> ...\n>>>\n>>>> I assumed that releasing all of the memory at the end of executor once\n>>>> is not an option since it may consumed too many memory. Then, when and\n>>>> which entry to release becomes a trouble for me. For example:\n>>>>\n>>>> QUERY PLAN\n>>>> ------------------------------\n>>>> Nested Loop\n>>>> Join Filter: (t1.a = t2.a)\n>>>> -> Seq Scan on t1\n>>>> -> Seq Scan on t2\n>>>> (4 rows)\n>>>>\n>>>> In this case t1.a needs a longer lifespan than t2.a since it is\n>>>> in outer relation. Without the help from slot's life-cycle system, I\n>>>> can't think out a answer for the above question.\n>>>>\n>>>\n>>> This is true, but how likely such plans are? I mean, surely no one would\n>>> do nested loop with sequential scans on reasonably large tables, so how\n>>> representative this example is?\n>> \n>> Acutally this is a simplest Join case, we still have same problem like\n>> Nested Loop + Index Scan which will be pretty common. \n>> \n>\n> Yes, I understand there are cases where LRU eviction may not be the best\n> choice - like here, where the \"t1\" should stay in the case. But there\n> are also cases where this is the wrong choice, and LRU would be better.\n>\n> For example a couple paragraphs down you suggest to enforce the memory\n> limit by disabling detoasting if the memory limit is reached. That means\n> the detoasting can get disabled because there's a single access to the\n> attribute somewhere \"up the plan tree\". But what if the other attributes\n> (which now won't be detoasted) are accessed many times until then?\n\nI am not sure I can't follow up here, but I want to explain more about\nthe disable-detoast-sharing logic when the memory limit is hit. When\nthis happen, the detoast sharing is disabled, but since the detoast\ndatum will be released every soon when the slot->tts_values[*] is\ndiscard, then the 'disable' turn to 'enable' quickly. So It is not \ntrue that once it is get disabled, it can't be enabled any more for the\ngiven query.\n\n> I think LRU is a pretty good \"default\" algorithm if we don't have a very\n> good idea of the exact life span of the values, etc. Which is why other\n> nodes (e.g. 
Memoize) use LRU too.\n\n> But I wonder if there's a way to count how many times an attribute is\n> accessed (or is likely to be). That might be used to inform a better\n> eviction strategy.\n\nYes, but in current issue we can get a better esitimation with the help\nof plan shape and Memoize depends on some planner information as well.\nIf we bypass the planner information and try to resolve it at the \ncache level, the code may become to complex as well and all the cost is\nrun time overhead while the other way is a planning timie overhead.\n\n> Also, we don't need to evict the whole entry - we could evict just the\n> data part (guaranteed to be fairly large), but keep the header, and keep\n> the counts, expected number of hits, and other info. And use this to\n> e.g. release entries that reached the expected number of hits. But I'd\n> probably start with LRU and only do this as an improvement later.\n\nA great lession learnt here, thanks for sharing this!\n\nAs for the current user case what I want to highlight is in the current\nuser case, we are \"caching\" \"user data\" \"locally\".\n\nUSER DATA indicates it might be very huge, it is not common to have a\n1M tables, but it is much common we have 1M Tuples in one scan, so\nkeeping the header might extra memory usage as well, like 10M * 24 bytes\n= 240MB. LOCALLY means it is not friend to multi active sessions. CACHE\nindicates it is hard to evict correctly. My method also have the USER\nDATA, LOCALLY attributes, but it would be better at eviction. eviction\nthen have lead to memory usage issue which is discribed at the beginning\nof this writing. \n\n>>> Also, this leads me to the need of having some sort of memory limit. If\n>>> we may be keeping entries for extended periods of time, and we don't\n>>> have any way to limit the amount of memory, that does not seem great.\n>>>\n>>> AFAIK if we detoast everything into tts_values[] there's no way to\n>>> implement and enforce such limit. What happens if there's a row with\n>>> multiple large-ish TOAST values? What happens if those rows are in\n>>> different (and distant) parts of the plan?\n>> \n>> I think this can be done by tracking the memory usage on EState level\n>> or global variable level and disable it if it exceeds the limits and\n>> resume it when we free the detoast datum when we don't need it. I think\n>> no other changes need to be done.\n>> \n>\n> That seems like a fair amount of additional complexity. And what if the\n> toasted values are accessed in context without EState (I haven't checked\n> how common / important that is)?\n>\n> And I'm not sure just disabling the detoast as a way to enforce a memory\n> limit, as explained earlier.\n>\n>>> It seems far easier to limit the memory with the toast cache.\n>> \n>> I think the memory limit and entry eviction is the key issue now. IMO,\n>> there are still some difference when both methods can support memory\n>> limit. The reason is my patch can grantee the cached memory will be\n>> reused, so if we set the limit to 10MB, we know all the 10MB is\n>> useful, but the TOAST cache method, probably can't grantee that, then\n>> when we want to make it effecitvely, we have to set a higher limit for\n>> this.\n>> \n>\n> Can it actually guarantee that? It can guarantee the slot may be used,\n> but I don't see how could it guarantee the detoasted value will be used.\n> We may be keeping the slot for other reasons. 
And even if it could\n> guarantee the detoasted value will be used, does that actually prove\n> it's better to keep that value? What if it's only used once, but it's\n> blocking detoasting of values that will be used 10x that?\n>\n> If we detoast a 10MB value in the outer side of the Nest Loop, what if\n> the inner path has multiple accesses to another 10MB value that now\n> can't be detoasted (as a shared value)?\n\nGrarantee may be wrong word. The difference in my mind are:\n1. plan shape have better potential to know the user case of datum,\nsince we know the plan tree and knows the rows pass to a given node.\n2. Planning time effort is cheaper than run-time effort.\n3. eviction in my method is not as important as it is in TOAST cache\nmethod since it is reset per slot, so usually it doesn't hit limit in\nfact. But as a cache, it does. \n4. use up to memory limit we set in TOAST cache case. \n\n>>> In any case, my concern is more about having to do this when creating\n>>> the plan at all, the code complexity etc. Not just because it might have\n>>> performance impact.\n>> \n>> I think the main trade-off is TOAST cache method is pretty non-invasive\n>> but can't control the eviction well, the impacts includes:\n>> 1. may evicting the datum we want and kept the datum we don't need.\n>\n> This applies to any eviction algorithm, not just LRU. Ultimately what\n> matters is whether we have in the cache the most often used values, i.e.\n> the hit ratio (perhaps in combination with how expensive detoasting that\n> particular entry was).\n\nCorrect, just that I am doubtful about design a LOCAL CACHE for USER\nDATA with the reason I described above.\n\nAt last, thanks for your attention, really appreciated about it!\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Tue, 05 Mar 2024 09:44:33 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Hi,\n\nTomas, sorry for confusion, in my previous message I meant exactly\nthe same approach you've posted above, and came up with almost\nthe same implementation.\n\nThank you very much for your attention to this thread!\n\nI've asked Andy about this approach because of the same reasons you\nmentioned - it keeps cache code small, localized and easy to maintain.\n\nThe question that worries me is the memory limit, and I think that cache\nlookup and entry invalidation should be done in toast_tuple_externalize\ncode, for the case the value has been detoasted previously is updated\nin the same query, like\nUPDATE test SET value = value || '...';\n\nI've added cache entry invalidation and data removal on delete and update\nof the toasted values and am currently experimenting with large values.\n\nOn Tue, Mar 5, 2024 at 5:59 AM Andy Fan <[email protected]> wrote:\n\n>\n> >\n> >> 2. more likely to use up all the memory which is allowed. for example:\n> >> if we set the limit to 1MB, then we kept more data which will be not\n> >> used and then consuming all of the 1MB.\n> >>\n> >> My method is resolving this with some helps from other modules (kind of\n> >> invasive) but can control the eviction well and use the memory as less\n> >> as possible.\n> >>\n> >\n> > Is the memory usage really an issue? Sure, it'd be nice to evict entries\n> > as soon as we can prove they are not needed anymore, but if the cache\n> > limit is set to 1MB it's not really a problem to use 1MB.\n>\n> This might be a key point which leads us to some different directions, so\n> I want to explain more about this, to see if we can get some agreement\n> here.\n>\n> It is a bit hard to decide which memory limit to set, 1MB, 10MB or 40MB,\n> 100MB. In my current case it is 40MB at least. Less memory limit\n> causes cache ineffecitve, high memory limit cause potential memory\n> use-up issue in the TOAST cache design. But in my method, even we set a\n> higher value, it just limits the user case it really (nearly) needed,\n> and it would not cache more values util the limit is hit. This would\n> make a noticable difference when we want to set a high limit and we have\n> some high active sessions, like 100 * 40MB = 4GB.\n>\n> > On 3/4/24 18:08, Andy Fan wrote:\n> >> ...\n> >>>\n> >>>> I assumed that releasing all of the memory at the end of executor once\n> >>>> is not an option since it may consumed too many memory. Then, when and\n> >>>> which entry to release becomes a trouble for me. For example:\n> >>>>\n> >>>> QUERY PLAN\n> >>>> ------------------------------\n> >>>> Nested Loop\n> >>>> Join Filter: (t1.a = t2.a)\n> >>>> -> Seq Scan on t1\n> >>>> -> Seq Scan on t2\n> >>>> (4 rows)\n> >>>>\n> >>>> In this case t1.a needs a longer lifespan than t2.a since it is\n> >>>> in outer relation. Without the help from slot's life-cycle system, I\n> >>>> can't think out a answer for the above question.\n> >>>>\n> >>>\n> >>> This is true, but how likely such plans are? I mean, surely no one\n> would\n> >>> do nested loop with sequential scans on reasonably large tables, so how\n> >>> representative this example is?\n> >>\n> >> Acutally this is a simplest Join case, we still have same problem like\n> >> Nested Loop + Index Scan which will be pretty common.\n> >>\n> >\n> > Yes, I understand there are cases where LRU eviction may not be the best\n> > choice - like here, where the \"t1\" should stay in the case. 
But there\n> > are also cases where this is the wrong choice, and LRU would be better.\n> >\n> > For example a couple paragraphs down you suggest to enforce the memory\n> > limit by disabling detoasting if the memory limit is reached. That means\n> > the detoasting can get disabled because there's a single access to the\n> > attribute somewhere \"up the plan tree\". But what if the other attributes\n> > (which now won't be detoasted) are accessed many times until then?\n>\n> I am not sure I can't follow up here, but I want to explain more about\n> the disable-detoast-sharing logic when the memory limit is hit. When\n> this happen, the detoast sharing is disabled, but since the detoast\n> datum will be released every soon when the slot->tts_values[*] is\n> discard, then the 'disable' turn to 'enable' quickly. So It is not\n> true that once it is get disabled, it can't be enabled any more for the\n> given query.\n>\n> > I think LRU is a pretty good \"default\" algorithm if we don't have a very\n> > good idea of the exact life span of the values, etc. Which is why other\n> > nodes (e.g. Memoize) use LRU too.\n>\n> > But I wonder if there's a way to count how many times an attribute is\n> > accessed (or is likely to be). That might be used to inform a better\n> > eviction strategy.\n>\n> Yes, but in current issue we can get a better esitimation with the help\n> of plan shape and Memoize depends on some planner information as well.\n> If we bypass the planner information and try to resolve it at the\n> cache level, the code may become to complex as well and all the cost is\n> run time overhead while the other way is a planning timie overhead.\n>\n> > Also, we don't need to evict the whole entry - we could evict just the\n> > data part (guaranteed to be fairly large), but keep the header, and keep\n> > the counts, expected number of hits, and other info. And use this to\n> > e.g. release entries that reached the expected number of hits. But I'd\n> > probably start with LRU and only do this as an improvement later.\n>\n> A great lession learnt here, thanks for sharing this!\n>\n> As for the current user case what I want to highlight is in the current\n> user case, we are \"caching\" \"user data\" \"locally\".\n>\n> USER DATA indicates it might be very huge, it is not common to have a\n> 1M tables, but it is much common we have 1M Tuples in one scan, so\n> keeping the header might extra memory usage as well, like 10M * 24 bytes\n> = 240MB. LOCALLY means it is not friend to multi active sessions. CACHE\n> indicates it is hard to evict correctly. My method also have the USER\n> DATA, LOCALLY attributes, but it would be better at eviction. eviction\n> then have lead to memory usage issue which is discribed at the beginning\n> of this writing.\n>\n> >>> Also, this leads me to the need of having some sort of memory limit. If\n> >>> we may be keeping entries for extended periods of time, and we don't\n> >>> have any way to limit the amount of memory, that does not seem great.\n> >>>\n> >>> AFAIK if we detoast everything into tts_values[] there's no way to\n> >>> implement and enforce such limit. What happens if there's a row with\n> >>> multiple large-ish TOAST values? What happens if those rows are in\n> >>> different (and distant) parts of the plan?\n> >>\n> >> I think this can be done by tracking the memory usage on EState level\n> >> or global variable level and disable it if it exceeds the limits and\n> >> resume it when we free the detoast datum when we don't need it. 
I think\n> >> no other changes need to be done.\n> >>\n> >\n> > That seems like a fair amount of additional complexity. And what if the\n> > toasted values are accessed in context without EState (I haven't checked\n> > how common / important that is)?\n> >\n> > And I'm not sure just disabling the detoast as a way to enforce a memory\n> > limit, as explained earlier.\n> >\n> >>> It seems far easier to limit the memory with the toast cache.\n> >>\n> >> I think the memory limit and entry eviction is the key issue now. IMO,\n> >> there are still some difference when both methods can support memory\n> >> limit. The reason is my patch can grantee the cached memory will be\n> >> reused, so if we set the limit to 10MB, we know all the 10MB is\n> >> useful, but the TOAST cache method, probably can't grantee that, then\n> >> when we want to make it effecitvely, we have to set a higher limit for\n> >> this.\n> >>\n> >\n> > Can it actually guarantee that? It can guarantee the slot may be used,\n> > but I don't see how could it guarantee the detoasted value will be used.\n> > We may be keeping the slot for other reasons. And even if it could\n> > guarantee the detoasted value will be used, does that actually prove\n> > it's better to keep that value? What if it's only used once, but it's\n> > blocking detoasting of values that will be used 10x that?\n> >\n> > If we detoast a 10MB value in the outer side of the Nest Loop, what if\n> > the inner path has multiple accesses to another 10MB value that now\n> > can't be detoasted (as a shared value)?\n>\n> Grarantee may be wrong word. The difference in my mind are:\n> 1. plan shape have better potential to know the user case of datum,\n> since we know the plan tree and knows the rows pass to a given node.\n> 2. Planning time effort is cheaper than run-time effort.\n> 3. eviction in my method is not as important as it is in TOAST cache\n> method since it is reset per slot, so usually it doesn't hit limit in\n> fact. But as a cache, it does.\n> 4. use up to memory limit we set in TOAST cache case.\n>\n> >>> In any case, my concern is more about having to do this when creating\n> >>> the plan at all, the code complexity etc. Not just because it might\n> have\n> >>> performance impact.\n> >>\n> >> I think the main trade-off is TOAST cache method is pretty non-invasive\n> >> but can't control the eviction well, the impacts includes:\n> >> 1. may evicting the datum we want and kept the datum we don't need.\n> >\n> > This applies to any eviction algorithm, not just LRU. 
Ultimately what\n> > matters is whether we have in the cache the most often used values, i.e.\n> > the hit ratio (perhaps in combination with how expensive detoasting that\n> > particular entry was).\n>\n> Correct, just that I am doubtful about design a LOCAL CACHE for USER\n> DATA with the reason I described above.\n>\n> At last, thanks for your attention, really appreciated about it!\n>\n> --\n> Best Regards\n> Andy Fan\n>\n>\n>\n>\n\n-- \nRegards,\n\n--\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/",
"msg_date": "Tue, 5 Mar 2024 12:03:35 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
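A minimal sketch of the invalidation step mentioned above, assuming the cache is a dynahash table keyed by (toastrelid, valueid) as in the earlier sketch; toast_cache_invalidate and the trimmed-down entry layout are illustrative (LRU bookkeeping omitted) and not necessarily how the PoC patch implements it.

    #include "postgres.h"
    #include "utils/hsearch.h"

    /* Same illustrative key layout as the earlier sketch. */
    typedef struct ToastCacheKey
    {
        Oid         toastrelid;
        Oid         valueid;
    } ToastCacheKey;

    typedef struct ToastCacheEntry
    {
        ToastCacheKey key;          /* hash key; must be first */
        struct varlena *data;       /* detoasted copy, possibly NULL */
    } ToastCacheEntry;

    /*
     * Drop any cached detoasted copy of a value that is being deleted or
     * replaced (e.g. called from the TOAST delete path), so the cache does
     * not keep data for dead TOAST values around until end of query.
     */
    static void
    toast_cache_invalidate(HTAB *cache, Oid toastrelid, Oid valueid)
    {
        ToastCacheKey key;
        ToastCacheEntry *entry;
        bool        found;

        key.toastrelid = toastrelid;
        key.valueid = valueid;

        entry = (ToastCacheEntry *) hash_search(cache, &key, HASH_FIND, &found);
        if (!found)
            return;

        if (entry->data)
            pfree(entry->data);     /* release the detoasted copy */

        (void) hash_search(cache, &key, HASH_REMOVE, NULL);
    }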
{
"msg_contents": "Hi,\n\nIn addition to the previous message - for the toast_fetch_datum_slice\nthe first (seems obvious) way is to detoast the whole value, save it\nto cache and get slices from it on demand. I have another one on my\nmind, but have to play with it first.\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,In addition to the previous message - for the toast_fetch_datum_slicethe first (seems obvious) way is to detoast the whole value, save itto cache and get slices from it on demand. I have another one on mymind, but have to play with it first.-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Wed, 6 Mar 2024 09:09:33 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "On 3/6/24 07:09, Nikita Malakhov wrote:\n> Hi,\n> \n> In addition to the previous message - for the toast_fetch_datum_slice\n> the first (seems obvious) way is to detoast the whole value, save it\n> to cache and get slices from it on demand. I have another one on my\n> mind, but have to play with it first.\n> \n\nWhat if you only need the first slice? In that case decoding everything\nwill be a clear regression.\n\nIMHO this needs to work like this:\n\n1) check if the cache already has sufficiently large slice detoasted\n(and use the cached value if yes)\n\n2) if there's no slice detoasted, or if it's too small, detoast a\nsufficiently large slice and store it in the cache (possibly evicting\nthe already detoasted slice)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 6 Mar 2024 21:42:38 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
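The two-step slice logic described above could look roughly like the following sketch. It caches only a leading prefix of the value (TOAST slices are requested as offset/length); fetch_prefix stands in for the real toast_fetch_datum_slice machinery and CachedSlice is an illustrative structure, not an existing API.

    #include "postgres.h"

    /* Illustrative holder for a cached prefix of one detoasted value. */
    typedef struct CachedSlice
    {
        int32       have_len;       /* bytes already detoasted, from offset 0 */
        char       *data;           /* palloc'd buffer of have_len bytes */
    } CachedSlice;

    /*
     * 1) If the cached prefix already covers [0, offset + length), serve
     *    the request from it.  2) Otherwise fetch a sufficiently large
     *    prefix, replace the cached one, and then serve the request.
     */
    static char *
    get_cached_slice(CachedSlice *slice, int32 offset, int32 length,
                     char *(*fetch_prefix) (int32 want_len, void *arg),
                     void *arg)
    {
        int32       want_len = offset + length;

        if (slice->data == NULL || slice->have_len < want_len)
        {
            if (slice->data)
                pfree(slice->data);     /* evict the too-short prefix */
            slice->data = fetch_prefix(want_len, arg);
            slice->have_len = want_len;
        }

        return slice->data + offset;
    }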
{
"msg_contents": "\n\nOn 3/5/24 10:03, Nikita Malakhov wrote:\n> Hi,\n> \n> Tomas, sorry for confusion, in my previous message I meant exactly\n> the same approach you've posted above, and came up with almost\n> the same implementation.\n> \n> Thank you very much for your attention to this thread!\n> \n> I've asked Andy about this approach because of the same reasons you\n> mentioned - it keeps cache code small, localized and easy to maintain.\n> \n> The question that worries me is the memory limit, and I think that cache\n> lookup and entry invalidation should be done in toast_tuple_externalize\n> code, for the case the value has been detoasted previously is updated\n> in the same query, like\n> \n\nI haven't thought very much about when we need to invalidate entries and\nwhere exactly should that happen. My patch is a very rough PoC that I\nwrote in ~1h, and it's quite possible it will have to move or should be\nrefactored in some way.\n\nBut I don't quite understand the problem with this query:\n\n> UPDATE test SET value = value || '...';\n\nSurely the update creates a new TOAST value, with a completely new TOAST\npointer, new rows in the TOAST table etc. It's not updated in-place. So\nit'd end up with two independent entries in the TOAST cache.\n\nOr are you interested just in evicting the old value simply to free the\nmemory, because we're not going to need that (now invisible) value? That\nseems interesting, but if it's too invasive or complex I'd just leave it\nup to the LRU eviction (at least for v1).\n\nI'm not sure why should anything happen in toast_tuple_externalize, that\nseems like a very low-level piece of code. But I realized there's also\ntoast_delete_external, and maybe that's a good place to invalidate the\ndeleted TOAST values (which would also cover the UPDATE).\n\n> I've added cache entry invalidation and data removal on delete and update\n> of the toasted values and am currently experimenting with large values.\n> \n\nI haven't done anything about large values in the PoC patch, but I think\nit might be a good idea to either ignore values that are too large (so\nthat it does not push everything else from the cache) or store them in\ncompressed form (assuming it's small enough, to at least save the I/O).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 6 Mar 2024 21:58:48 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Hi!\n\nTomas, I thought about this issue -\n>What if you only need the first slice? In that case decoding everything\n>will be a clear regression.\nAnd completely agree with you, I'm currently working on it and will post\na patch a bit later. Another issue we have here - if we have multiple\nslices requested - then we have to update cached slice from both sides,\nthe beginning and the end.\n\nOn update, yep, you're right\n>Surely the update creates a new TOAST value, with a completely new TOAST\n>pointer, new rows in the TOAST table etc. It's not updated in-place. So\n>it'd end up with two independent entries in the TOAST cache.\n\n>Or are you interested just in evicting the old value simply to free the\n>memory, because we're not going to need that (now invisible) value? That\n>seems interesting, but if it's too invasive or complex I'd just leave it\n>up to the LRU eviction (at least for v1).\nAgain, yes, we do not need the old value after it was updated and\nit is better to take care of it explicitly. It's a simple and not invasive\naddition to your patch.\n\nCould not agree with you about large values - it makes sense\nto cache large values because the larger it is the more benefit\nwe will have from not detoasting it again and again. One way\nis to keep them compressed, but what if we have data with very\nlow compression rate?\n\nI also changed HASHACTION field value from HASH_ENTER to\nHASH_ENTER_NULL to softly deal with case when we do not\nhave enough memory and added checks for if the value was stored\nor not.\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!Tomas, I thought about this issue ->What if you only need the first slice? In that case decoding everything>will be a clear regression.And completely agree with you, I'm currently working on it and will posta patch a bit later. Another issue we have here - if we have multipleslices requested - then we have to update cached slice from both sides,the beginning and the end.On update, yep, you're right>Surely the update creates a new TOAST value, with a completely new TOAST>pointer, new rows in the TOAST table etc. It's not updated in-place. So>it'd end up with two independent entries in the TOAST cache.>Or are you interested just in evicting the old value simply to free the>memory, because we're not going to need that (now invisible) value? That>seems interesting, but if it's too invasive or complex I'd just leave it>up to the LRU eviction (at least for v1).Again, yes, we do not need the old value after it was updated andit is better to take care of it explicitly. It's a simple and not invasiveaddition to your patch.Could not agree with you about large values - it makes senseto cache large values because the larger it is the more benefitwe will have from not detoasting it again and again. One wayis to keep them compressed, but what if we have data with verylow compression rate?I also changed HASHACTION field value from HASH_ENTER toHASH_ENTER_NULL to softly deal with case when we do nothave enough memory and added checks for if the value was storedor not.-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Thu, 7 Mar 2024 10:33:18 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "On 3/7/24 08:33, Nikita Malakhov wrote:\n> Hi!\n> \n> Tomas, I thought about this issue -\n>> What if you only need the first slice? In that case decoding everything\n>> will be a clear regression.\n> And completely agree with you, I'm currently working on it and will post\n> a patch a bit later.\n\nCool. I don't plan to work on improving my patch - it was merely a PoC,\nso if you're interested in working on that, that's good.\n\n> Another issue we have here - if we have multiple slices requested -\n> then we have to update cached slice from both sides, the beginning\n> and the end.\n> \n\nNo opinion. I haven't thought about how to handle slices too much.\n\n> On update, yep, you're right\n>> Surely the update creates a new TOAST value, with a completely new TOAST\n>> pointer, new rows in the TOAST table etc. It's not updated in-place. So\n>> it'd end up with two independent entries in the TOAST cache.\n> \n>> Or are you interested just in evicting the old value simply to free the\n>> memory, because we're not going to need that (now invisible) value? That\n>> seems interesting, but if it's too invasive or complex I'd just leave it\n>> up to the LRU eviction (at least for v1).\n> Again, yes, we do not need the old value after it was updated and\n> it is better to take care of it explicitly. It's a simple and not invasive\n> addition to your patch.\n> \n\nOK\n\n> Could not agree with you about large values - it makes sense\n> to cache large values because the larger it is the more benefit\n> we will have from not detoasting it again and again. One way\n> is to keep them compressed, but what if we have data with very\n> low compression rate?\n> \n\nI'm not against caching large values, but I also think it's not\ndifficult to construct cases where it's a losing strategy.\n\n> I also changed HASHACTION field value from HASH_ENTER to\n> HASH_ENTER_NULL to softly deal with case when we do not\n> have enough memory and added checks for if the value was stored\n> or not.\n> \n\nI'm not sure about this. HASH_ENTER_NULL is only being used in three\nvery specific places (two are lock management, one is shmeminitstruct).\nThis hash table is not unique in any way, so why not to use HASH_ENTER\nlike 99% other hash tables?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 7 Mar 2024 12:44:41 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Hi!\n\nHere's a slightly improved version of patch Tomas provided above (v2),\nwith cache invalidations and slices caching added, still as PoC.\n\nThe main issue I've encountered during tests is that when the same query\nretrieves both slices and full value - slices, like substring(t,...) the\norder of\nthe values is not predictable, with text fields substrings are retrieved\nbefore the full value and we cannot benefit from cache full value first\nand use slices from cached value.\n\nYet the cache code is still very compact and affects only sources related\nto detoasting.\n\nTomas, about HASH_ENTER - according to comments it could throw\nan OOM error, so I've changed it to HASH_ENTER_NULL to avoid\nnew errors. In this case we would just have the value not cached\nwithout an error.\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/",
"msg_date": "Fri, 15 Mar 2024 11:21:41 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Hi,\n\nAndy Fan asked me off-list for some feedback about this proposal. I\nhave hesitated to comment on it for lack of having studied the matter\nin any detail, but since I've been asked for my input, here goes:\n\nI agree that there's a problem here, but I suspect this is not the\nright way to solve it. Let's compare this to something like the\nsyscache. Specifically, let's think about the syscache on\npg_class.relname.\n\nIn the case of the syscache on pg_class.relname, there are two reasons\nwhy we might repeatedly look up the same values in the cache. One is\nthat the code might do multiple name lookups when it really ought to\ndo only one. Doing multiple lookups is bad for security and bad for\nperformance, but we have code like that in some places and some of it\nis hard to fix. The other reason is that it is likely that the user\nwill mention the same table names repeatedly; each time they do, they\nwill trigger a lookup on the same name. By caching the result of the\nlookup, we can make it much faster. An important point to note here is\nthat the queries that the user will issue in the future are unknown to\nus. In a certain sense, we cannot even know whether the same table\nname will be mentioned more than once in the same query: when we reach\nthe first instance of the table name and look it up, we do not have\nany good way of knowing whether it will be mentioned again later, say\ndue to a self-join. Hence, the pattern of cache lookups is\nfundamentally unknowable.\n\nBut that's not the case in the motivating example that started this\nthread. In that case, the target list includes three separate\nreferences to the same column. We can therefore predict that if the\ncolumn value is toasted, we're going to de-TOAST it exactly three\ntimes. If, on the other hand, the column value were mentioned only\nonce, it would be detoasted just once. In that latter case, which is\nprobably quite common, this whole cache idea seems like it ought to\njust waste cycles, which makes me suspect that no matter what is done\nto this patch, there will always be cases where it causes significant\nregressions. In the former case, where the column reference is\nrepeated, it will win, but it will also hold onto the detoasted value\nafter the third instance of detoasting, even though there will never\nbe any more cache hits. Here, unlike the syscache case, the cache is\nthrown away at the end of the query, so future queries can't benefit.\nEven if we could find a way to preserve the cache in some cases, it's\nnot very likely to pay off. People are much more likely to hit the\nsame table more than once than to hit the same values in the same\ntable more than once in the same session.\n\nSuppose we had a design where, when a value got detoasted, the\ndetoasted value went into the place where the toasted value had come\nfrom. Then this problem would be handled pretty much perfectly: the\ndetoasted value would last until the scan advanced to the next row,\nand then it would be thrown out. So there would be no query-lifespan\nmemory leak. Now, I don't really know how this would work, because the\nTOAST pointer is coming out of a slot and we can't necessarily write\none attribute of any arbitrary slot type without a bunch of extra\noverhead, but maybe there's some way. If it could be done cheaply\nenough, it would gain all the benefits of this proposal a lot more\ncheaply and with fewer downsides. 
Alternatively, maybe there's a way\nto notice the multiple references and introduce some kind of\nintermediate slot or other holding area where the detoasted value can\nbe stored.\n\nIn other words, instead of computing\n\nbig_jsonb_col->'a', big_jsonb_col->'b', big_jsonb_col->'c'\n\nMaybe we ought to be trying to compute this:\n\nbig_jsonb_col=detoast(big_jsonb_col)\nbig_jsonb_col->'a', big_jsonb_col->'b', big_jsonb_col->'c'\n\nOr perhaps this:\n\ntmp=detoast(big_jsonb_col)\ntmp->'a', tmp->'b', tmp->'c'\n\nIn still other words, a query-lifespan cache for this information\nfeels like it's tackling the problem at the wrong level. I do suspect\nthere may be query shapes where the same value gets detoasted multiple\ntimes in ways that the proposals I just made wouldn't necessarily\ncatch, such as in the case of a self-join. But I think those cases are\nrare. In most cases, repeated detoasting is probably very localized,\nwith the same exact TOAST pointer being de-TOASTed a couple of times\nin quick succession, so a query-lifespan cache seems like the wrong\nthing. Also, in most cases, repeated detoasting is probably quite\npredictable: we could infer it from static analysis of the query. So\nthrowing ANY kind of cache at the problem seems like the wrong\napproach unless the cache is super-cheap, which doesn't seem to be the\ncase with this proposal, because a cache caters to cases where we\ncan't know whether we're going to recompute the same result, and here,\nthat seems like something we could probably figure out.\n\nBecause I don't see any way to avoid regressions in the common case\nwhere detoasting isn't repeated, I think this patch is likely never\ngoing to be committed no matter how much time anyone spends on it.\n\nBut, to repeat the disclaimer from the top, all of the above is just a\nrelatively uneducated opinion without any detailed study of the\nmatter.\n\n...Robert\n\n\n",
"msg_date": "Mon, 20 May 2024 11:12:34 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nHi Robert, \n\n> Andy Fan asked me off-list for some feedback about this proposal. I\n> have hesitated to comment on it for lack of having studied the matter\n> in any detail, but since I've been asked for my input, here goes:\n\nThanks for doing this! Since we have two totally different designs\nbetween Tomas and me and I think we both understand the two sides of\neach design, just that the following things are really puzzled to me.\n\n- which factors we should care more.\n- the possiblility to improve the design-A/B for its badness.\n\nBut for the point 2, we probably needs some prefer design first.\n\nfor now let's highlight that there are 2 totally different method to\nhandle this problem. Let's call:\n\ndesign a: detoast the datum into slot->tts_values (and manage the\nmemory with the lifespan of *slot*). From me.\n\ndesign b: detost the datum into a CACHE (and manage the memory with\nCACHE eviction and at released the end-of-query). From Tomas.\n\nCurrently I pefer design a, I'm not sure if I want desgin a because a is\nreally good or just because design a is my own design. So I will\nsummarize the the current state for the easier discussion. I summarize\nthem so that you don't need to go to the long discussion, and I summarize\nthe each point with the link to the original post since I may\nmisunderstand some points and the original post can be used as a double\ncheck. \n\n> I agree that there's a problem here, but I suspect this is not the\n> right way to solve it. Let's compare this to something like the\n> syscache. Specifically, let's think about the syscache on\n> pg_class.relname.\n\nOK.\n\n> In the case of the syscache on pg_class.relname, there are two reasons\n> why we might repeatedly look up the same values in the cache. One is\n> that the code might do multiple name lookups when it really ought to\n> do only one. Doing multiple lookups is bad for security and bad for\n> performance, but we have code like that in some places and some of it\n> is hard to fix. The other reason is that it is likely that the user\n> will mention the same table names repeatedly; each time they do, they\n> will trigger a lookup on the same name. By caching the result of the\n> lookup, we can make it much faster. An important point to note here is\n> that the queries that the user will issue in the future are unknown to\n> us. In a certain sense, we cannot even know whether the same table\n> name will be mentioned more than once in the same query: when we reach\n> the first instance of the table name and look it up, we do not have\n> any good way of knowing whether it will be mentioned again later, say\n> due to a self-join. Hence, the pattern of cache lookups is\n> fundamentally unknowable.\n\nTrue. \n\n> But that's not the case in the motivating example that started this\n> thread. In that case, the target list includes three separate\n> references to the same column. We can therefore predict that if the\n> column value is toasted, we're going to de-TOAST it exactly three\n> times.\n\nTrue.\n\n> If, on the other hand, the column value were mentioned only\n> once, it would be detoasted just once. 
In that latter case, which is\n> probably quite common, this whole cache idea seems like it ought to\n> just waste cycles, which makes me suspect that no matter what is done\n> to this patch, there will always be cases where it causes significant\n> regressions.\n\nI agree.\n\n> In the former case, where the column reference is\n> repeated, it will win, but it will also hold onto the detoasted value\n> after the third instance of detoasting, even though there will never\n> be any more cache hits.\n\nTrue.\n\n> Here, unlike the syscache case, the cache is\n> thrown away at the end of the query, so future queries can't benefit.\n> Even if we could find a way to preserve the cache in some cases, it's\n> not very likely to pay off. People are much more likely to hit the\n> same table more than once than to hit the same values in the same\n> table more than once in the same session.\n\nI agree.\n\nOne more things I want to highlight it \"syscache\" is used for metadata\nand *detoast cache* is used for user data. user data is more \nlikely bigger than metadata, so cache size control (hence eviction logic\nrun more often) is more necessary in this case. The first graph in [1]\n(including the quota text) highlight this idea.\n\n> Suppose we had a design where, when a value got detoasted, the\n> detoasted value went into the place where the toasted value had come\n> from. Then this problem would be handled pretty much perfectly: the\n> detoasted value would last until the scan advanced to the next row,\n> and then it would be thrown out. So there would be no query-lifespan\n> memory leak.\n\nI think that is same idea as design a.\n\n> Now, I don't really know how this would work, because the\n> TOAST pointer is coming out of a slot and we can't necessarily write\n> one attribute of any arbitrary slot type without a bunch of extra\n> overhead, but maybe there's some way.\n\nI think the key points includes:\n\n1. Where to put the *detoast value* so that ExprEval system can find out\nit cheaply. The soluation in A slot->tts_values[*].\n\n2. How to manage the memory for holding the detoast value.\n\nThe soluation in A is:\n\n- using a lazy created MemoryContext in slot.\n- let the detoast system detoast the datum into the designed\nMemoryContext.\n\n3. How to let Expr evalution run the extra cycles when needed.\n\nThe soluation in A is:\n\nEEOP_XXX_TOAST step is designed to get riding of some *run-time* *if\n(condition)\" . I found the existing ExprEval system have many\nspecialized steps.\n\nI will post a detailed overall design later and explain the its known\nbadness so far.\n\n> If it could be done cheaply enough, it would gain all the benefits of\n> this proposal a lot more cheaply and with fewer\n> downsides.\n\nI agree. 
\n\n> Alternatively, maybe there's a way \n> to notice the multiple references and introduce some kind of\n> intermediate slot or other holding area where the detoasted value can\n> be stored.\n>\n> In other words, instead of computing\n>\n> big_jsonb_col->'a', big_jsonb_col->'b', big_jsonb_col->'c'\n>\n> Maybe we ought to be trying to compute this:\n>\n> big_jsonb_col=detoast(big_jsonb_col)\n> big_jsonb_col->'a', big_jsonb_col->'b', big_jsonb_col->'c'\n>\n> Or perhaps this:\n>\n> tmp=detoast(big_jsonb_col)\n> tmp->'a', tmp->'b', tmp->'c'\n\nCurrent the design a doesn't use 'tmp' so that the existing ExprEval\nengine can find out \"detoast datum\" easily, so it is more like:\n\n> big_jsonb_col=detoast(big_jsonb_col)\n> big_jsonb_col->'a', big_jsonb_col->'b', big_jsonb_col->'c'\n\n\n[1] https://www.postgresql.org/message-id/87ttllbd00.fsf%40163.com\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Wed, 22 May 2024 09:02:55 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
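For readers trying to picture "design a", here is a minimal sketch of the detoast-into-the-slot idea under the assumptions described above: the detoasted copy replaces the TOAST pointer in tts_values[*], and its memory lives in a context owned by the slot (slot_mctx here) so that it goes away when the slot's values are invalidated. slot_detoast_attr and slot_mctx are illustrative names; the actual patch wires this through dedicated EEOP_*_TOAST expression steps instead.

    #include "postgres.h"
    #include "access/detoast.h"
    #include "executor/tuptable.h"
    #include "varatt.h"

    /*
     * Detoast varlena attribute "attnum" (0-based) of the slot, if it is
     * not already a plain in-line value, and write the detoasted copy back
     * into tts_values so that later references see it without detoasting
     * again.
     */
    static void
    slot_detoast_attr(TupleTableSlot *slot, int attnum, MemoryContext slot_mctx)
    {
        Datum       val = slot->tts_values[attnum];
        MemoryContext oldcxt;
        struct varlena *detoasted;

        if (slot->tts_isnull[attnum] ||
            !VARATT_IS_EXTENDED(DatumGetPointer(val)))
            return;                 /* nothing to do */

        /* allocate the detoasted copy in memory owned by the slot */
        oldcxt = MemoryContextSwitchTo(slot_mctx);
        detoasted = detoast_attr((struct varlena *) DatumGetPointer(val));
        MemoryContextSwitchTo(oldcxt);

        slot->tts_values[attnum] = PointerGetDatum(detoasted);
    }

Under this sketch, the places that mark tts_values invalid (ExecClearTuple and friends setting tts_nvalid back to 0) would be the natural points to reset slot_mctx, which is exactly the memory-management question discussed above.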
{
"msg_contents": "\nAndy Fan <[email protected]> writes:\n\n> Hi Robert, \n>\n>> Andy Fan asked me off-list for some feedback about this proposal. I\n>> have hesitated to comment on it for lack of having studied the matter\n>> in any detail, but since I've been asked for my input, here goes:\n>\n> Thanks for doing this! Since we have two totally different designs\n> between Tomas and me and I think we both understand the two sides of\n> each design, just that the following things are really puzzled to me.\n\nmy emacs was stuck and somehow this incompleted message was sent out,\nPlease ignore this post for now. Sorry for this.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Wed, 22 May 2024 10:47:43 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Hi,\n\nRobert, thank you very much for your response to this thread.\nI agree with most of things you've mentioned, but this proposal\nis a PoC, and anyway has a long way to go to be presented\n(if it ever would) as something to be committed.\n\nAndy, glad you've not lost interest in this work, I'm looking\nforward to your improvements!\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,Robert, thank you very much for your response to this thread.I agree with most of things you've mentioned, but this proposalis a PoC, and anyway has a long way to go to be presented(if it ever would) as something to be committed.Andy, glad you've not lost interest in this work, I'm lookingforward to your improvements!-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Wed, 22 May 2024 15:46:21 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "On Tue, May 21, 2024 at 10:02 PM Andy Fan <[email protected]> wrote:\n> One more things I want to highlight it \"syscache\" is used for metadata\n> and *detoast cache* is used for user data. user data is more\n> likely bigger than metadata, so cache size control (hence eviction logic\n> run more often) is more necessary in this case. The first graph in [1]\n> (including the quota text) highlight this idea.\n\nYes.\n\n> I think that is same idea as design a.\n\nSounds similar.\n\n> I think the key points includes:\n>\n> 1. Where to put the *detoast value* so that ExprEval system can find out\n> it cheaply. The soluation in A slot->tts_values[*].\n>\n> 2. How to manage the memory for holding the detoast value.\n>\n> The soluation in A is:\n>\n> - using a lazy created MemoryContext in slot.\n> - let the detoast system detoast the datum into the designed\n> MemoryContext.\n>\n> 3. How to let Expr evalution run the extra cycles when needed.\n\nI don't understand (3) but I agree that (1) and (2) are key points. In\nmany - nearly all - circumstances just overwriting slot->tts_values is\nunsafe and will cause problems. To make this work, we've got to find\nsome way of doing this that is compatible with general design of\nTupleTableSlot, and I think in that API, just about the only time you\ncan touch slot->tts_values is when you're trying to populate a virtual\nslot. But often we'll have some other kind of slot. And even if we do\nhave a virtual slot, I think you're only allowed to manipulate\nslot->tts_values up until you call ExecStoreVirtualTuple.\n\nThat's why I mentioned using something temporary. You could legally\n(a) materialize the existing slot, (b) create a new virtual slot, (c)\ncopy over the tts_values and tts_isnull arrays, (c) detoast any datums\nyou like from tts_values and store the new datums back into the array,\n(d) call ExecStoreVirtualTuple, and then (e) use the new slot in place\nof the original one. That would avoid repeated detoasting, but it\nwould also have a bunch of overhead. Having to fully materialize the\nexisting slot is a cost that you wouldn't always need to pay if you\ndidn't do this, but you have to do it to make this work. Creating the\nnew virtual slot costs something. Copying the arrays costs something.\nDetoasting costs quite a lot potentially, if you don't end up using\nthe values. If you end up needing a heap tuple representation of the\nslot, you're going to now have to make a new one rather than just\nusing the one that came straight from the buffer.\n\nSo I don't think we could do something like this unconditionally.\nWe'd have to do it only when there was a reasonable expectation that\nit would work out, which means we'd have to be able to predict fairly\naccurately whether it was going to work out. But I do agree with you\nthat *if* we could make it work, it would probably be better than a\nlonger-lived cache.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 May 2024 11:16:01 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n\n> On Tue, May 21, 2024 at 10:02 PM Andy Fan <[email protected]> wrote:\n>> One more things I want to highlight it \"syscache\" is used for metadata\n>> and *detoast cache* is used for user data. user data is more\n>> likely bigger than metadata, so cache size control (hence eviction logic\n>> run more often) is more necessary in this case. The first graph in [1]\n>> (including the quota text) highlight this idea.\n>\n> Yes.\n>\n>> I think that is same idea as design a.\n>\n> Sounds similar.\n>\n\nThis is really great to know!\n\n>> I think the key points includes:\n>>\n>> 1. Where to put the *detoast value* so that ExprEval system can find out\n>> it cheaply. The soluation in A slot->tts_values[*].\n>>\n>> 2. How to manage the memory for holding the detoast value.\n>>\n>> The soluation in A is:\n>>\n>> - using a lazy created MemoryContext in slot.\n>> - let the detoast system detoast the datum into the designed\n>> MemoryContext.\n>>\n>> 3. How to let Expr evalution run the extra cycles when needed.\n>\n> I don't understand (3) but I agree that (1) and (2) are key points. In\n> many - nearly all - circumstances just overwriting slot->tts_values is\n> unsafe and will cause problems. To make this work, we've got to find\n> some way of doing this that is compatible with general design of\n> TupleTableSlot, and I think in that API, just about the only time you\n> can touch slot->tts_values is when you're trying to populate a virtual\n> slot. But often we'll have some other kind of slot. And even if we do\n> have a virtual slot, I think you're only allowed to manipulate\n> slot->tts_values up until you call ExecStoreVirtualTuple.\n\nPlease give me one more chance to explain this. What I mean is:\n\nTake SELECT f(a) FROM t1 join t2...; for example,\n\nWhen we read the Datum of a Var, we read it from tts->tts_values[*], no\nmatter what kind of TupleTableSlot is. So if we can put the \"detoast\ndatum\" back to the \"original \" tts_values, then nothing need to be\nchanged. \n\nAside from efficiency considerations, security and rationality are also\nimportant considerations. So what I think now is:\n\n- tts_values[*] means the Datum from the slot, and it has its operations\n like slot_getsomeattrs, ExecMaterializeSlot, ExecCopySlot,\n ExecClearTuple and so on. All of the characters have nothing with what\n kind of slot it is. \n\n- Saving the \"detoast datum\" version into tts_values[*] doesn't break\n the above semantic and the ExprEval engine just get a detoast version\n so it doesn't need to detoast it again.\n\n- The keypoint is the memory management and effeiciency. for now I think\n all the places where \"slot->tts_nvalid\" is set to 0 means the\n tts_values[*] is no longer validate, then this is the place we should\n release the memory for the \"detoast datum\". All the other places like\n ExecMaterializeSlot or ExecCopySlot also need to think about the\n \"detoast datum\" as well.\n\n> That's why I mentioned using something temporary. You could legally\n> (a) materialize the existing slot, (b) create a new virtual slot, (c)\n> copy over the tts_values and tts_isnull arrays, (c) detoast any datums\n> you like from tts_values and store the new datums back into the array,\n> (d) call ExecStoreVirtualTuple, and then (e) use the new slot in place\n> of the original one. 
That would avoid repeated detoasting, but it\n> would also have a bunch of overhead.\n\nYes, I agree with the overhead, so I perfer to know if the my current\nstrategy has principle issue first.\n\n> Having to fully materialize the\n> existing slot is a cost that you wouldn't always need to pay if you\n> didn't do this, but you have to do it to make this work. Creating the\n> new virtual slot costs something. Copying the arrays costs something.\n> Detoasting costs quite a lot potentially, if you don't end up using\n> the values. If you end up needing a heap tuple representation of the\n> slot, you're going to now have to make a new one rather than just\n> using the one that came straight from the buffer.\n\n> But I do agree with you\n> that *if* we could make it work, it would probably be better than a\n> longer-lived cache.\n\n\nI noticed the \"if\" and great to know that:), speically for the efficiecy\n& memory usage control purpose. \n\nAnother big challenge of this is how to decide if a Var need this\npre-detoasting strategy, we have to consider:\n\n- The in-place tts_values[*] update makes the slot bigger, which is\n harmful for CP_SMALL_TLIST (createplan.c) purpose. \n- ExprEval runtime checking if the Var is toastable is an overhead. It\n is must be decided ExecInitExpr time. \n \nThe code complexity because of the above 2 factors makes people (include\nme) unhappy.\n\n===\nYesterday I did some worst case testing for the current strategy, and\nthe first case show me ~1% *unexpected* performance regression and I\ntried to find out it with perf record/report, and no lucky. that's the\nreason why I didn't send a complete post yesterday.\n\nAs for now, since we are talking about the high level design, I think\nit is OK to post the improved design document and the incompleted worst\ncase testing data and my planning. \n\nCurrent Design\n--------------\n\nThe high level design is let createplan.c and setref.c decide which\nVars can use this feature, and then the executor save the detoast\ndatum back slot->to tts_values[*] during the ExprEvalStep of\nEEOP_{INNER|OUTER|SCAN}_VAR_TOAST. The reasons includes:\n\n- The existing expression engine read datum from tts_values[*], no any\n extra work need to be done.\n- Reuse the lifespan of TupleTableSlot system to manage memory. It\n is natural to think the detoast datum is a tts_value just that it is\n in a detoast format. Since we have a clear lifespan for TupleTableSlot\n already, like ExecClearTuple, ExecCopySlot et. We are easy to reuse\n them for managing the datoast datum's memory.\n- The existing projection method can copy the datoasted datum (int64)\n automatically to the next node's slot, but keeping the ownership\n unchanged, so only the slot where the detoast really happen take the\n charge of it's lifespan.\n\nAssuming which Var should use this feature has been decided in\ncreateplan.c and setref.c already. The following strategy is used to\nreduce the run time overhead.\n\n1. Putting the detoast datum into tts->tts_values[*]. 
then the ExprEval\n system doesn't need any extra effort to find them.\n\nstatic inline void\nExecSlotDetoastDatum(TupleTableSlot *slot, int attnum)\n{\n if (!slot->tts_isnull[attnum] &&\n VARATT_IS_EXTENDED(slot->tts_values[attnum]))\n {\n if (unlikely(slot->tts_data_mctx == NULL))\n slot->tts_data_mctx = GenerationContextCreate(slot->tts_mcxt,\n \"tts_value_ctx\",\n ALLOCSET_START_SMALL_SIZES);\n slot->tts_values[attnum] = PointerGetDatum(detoast_attr_ext((struct varlena *) slot->tts_values[attnum],\n /* save the detoast value to the given MemoryContext. */\n slot->tts_data_mctx));\n }\n}\n\n2. Using a dedicated lazy created memory context under TupleTableSlot so\n that the memory can be released effectively.\n\nstatic inline void\nExecFreePreDetoastDatum(TupleTableSlot *slot)\n{\n if (slot->tts_data_mctx)\n MemoryContextResetOnly(slot->tts_data_mctx);\n}\n\n3. Control the memory usage by releasing them whenever the\n slot->tts_values[*] is deemed as invalidate, so the lifespan is\n likely short.\n\n4. Reduce the run-time overhead for checking if a VAR should use\n pre-detoast feature, 3 new steps are introduced.\n EEOP_{INNER,OUTER,SCAN}_VAR_TOAST, so that other Var is totally non\n related.\n\nNow comes to the createplan.c/setref.c part, which decides which Vars\nshould use the shared detoast feature. The guideline of this is:\n\n1. It needs a detoast for a given expression in the previous logic.\n2. It should not breaks the CP_SMALL_TLIST design. Since we saved the\n detoast datum back to tts_values[*], which make tuple bigger. if we\n do this blindly, it would be harmful to the ORDER / HASH style nodes.\n\nA high level data flow is:\n\n1. at the createplan.c, we walk the plan tree go gather the\n CP_SMALL_TLIST because of SORT/HASH style nodes, information and save\n it to Plan.forbid_pre_detoast_vars via the function\n set_plan_forbid_pre_detoast_vars_recurse.\n\n2. at the setrefs.c, fix_{scan|join}_expr will recurse to Var for each\n expression, so it is a good time to track the attribute number and\n see if the Var is directly or indirectly accessed. Usually the\n indirectly access a Var means a detoast would happens, for\n example an expression like a > 3. However some known expressions is\n ignored. for example: NullTest, pg_column_compression which needs the\n raw datum, start_with/sub_string which needs a slice\n detoasting. Currently there is some hard code here, we may needs a\n pg_proc.detoasting_requirement flags to make this generic. The\n output is {Scan|Join}.xxx_reference_attrs;\n\nNote that here I used '_reference_' rather than '_detoast_' is because\nat this part, I still don't know if it is a toastable attrbiute, which\nis known at the MakeTupleTableSlot stage.\n\n3. 
At the InitPlan Stage, we calculate the final xxx_pre_detoast_attrs\n in ScanState & JoinState, which will be passed into expression\n engine in the ExecInitExprRec stage and EEOP_{INNER|OUTER|SCAN}\n _VAR_TOAST steps are generated finally then everything is connected\n with ExecSlotDetoastDatum!\n\nBad case testing of current design:\n====================================\n\n- The extra effort of createplan.c & setref.c & execExpr.c & InitPlan :\n\nCREATE TABLE t(a int, b numeric);\n\nq1:\nexplain select * from t where a > 3;\n\nq2:\nset enable_hashjoin to off;\nset enable_mergejoin to off;\nset enable_nestloop to on;\n\nexplain select * from t join t t2 using(a);\n\nq3:\nset enable_hashjoin to off;\nset enable_mergejoin to on;\nset enable_nestloop to off;\nexplain select * from t join t t2 using(a);\n\n\nq4:\nset enable_hashjoin to on;\nset enable_mergejoin to off;\nset enable_nestloop to off;\nexplain select * from t join t t2 using(a);\n\n\npgbench -f x.sql -n postgres -M simple -T10\n\n| query ID | patched (ms) | master (ms) | regression(%) |\n|----------+-------------------------------------+-------------------------------------+---------------|\n| 1 | {0.088, 0.088, 0.088, 0.087, 0.089} | {0.087, 0.086, 0.086, 0.087, 0.087} | 1.14% |\n| 2 | not-done-yet | | |\n| 3 | not-done-yet | | |\n| 4 | not-done-yet | | |\n\n- The extra effort of ExecInterpExpr\n\ninsert into t select i, i FROM generate_series(1, 100000) i;\n\nSELECT count(b) from t WHERE b > 0;\n\nIn this query, the extra execution code is run but it doesn't buy anything.\n\n| master | patched | regression |\n|--------+---------+------------|\n| | | |\n\nWhat I am planning now is:\n- Gather more feedback on the overall strategy.\n- figure out the 1% unexpected regression. I don't have a clear\ndirection for now, my current expression is analyzing the perf report,\nand I can't find out the root cause. and reading the code can't find out\nthe root cause as well.\n- Simplifying the \"planner part\" for deciding which Var should use this\nstrategy. I don't have clear direction as well.\n- testing and improving more worst case.\n\nAttached is a appliable and testingable version against current master\n(e87e73245).\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Thu, 23 May 2024 08:27:41 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nNikita Malakhov <[email protected]> writes:\n\n> Hi,\n\n> Andy, glad you've not lost interest in this work, I'm looking\n> forward to your improvements!\n\nThanks for your words, I've adjusted to the rhythm of the community and\nwelcome more feedback:)\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Thu, 23 May 2024 09:46:15 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "On Wed, May 22, 2024 at 9:46 PM Andy Fan <[email protected]> wrote:\n> Please give me one more chance to explain this. What I mean is:\n>\n> Take SELECT f(a) FROM t1 join t2...; for example,\n>\n> When we read the Datum of a Var, we read it from tts->tts_values[*], no\n> matter what kind of TupleTableSlot is. So if we can put the \"detoast\n> datum\" back to the \"original \" tts_values, then nothing need to be\n> changed.\n\nYeah, I don't think you can do that.\n\n> - Saving the \"detoast datum\" version into tts_values[*] doesn't break\n> the above semantic and the ExprEval engine just get a detoast version\n> so it doesn't need to detoast it again.\n\nI don't think this is true. If it is true, it needs to be extremely\nwell-justified. Even if this seems to work in simple cases, I suspect\nthere will be cases where it breaks badly, resulting in memory leaks\nor server crashes or some other kind of horrible problem that I can't\nquite imagine. Unfortunately, my knowledge of this code isn't\nfantastic, so I can't say exactly what bad thing will happen, and I\ncan't even say with 100% certainty that anything bad will happen, but\nI bet it will. It seems like it goes against everything I understand\nabout how TupleTableSlots are supposed to be used. (Andres would be\nthe best person to give a definitive answer here.)\n\n> - The keypoint is the memory management and effeiciency. for now I think\n> all the places where \"slot->tts_nvalid\" is set to 0 means the\n> tts_values[*] is no longer validate, then this is the place we should\n> release the memory for the \"detoast datum\". All the other places like\n> ExecMaterializeSlot or ExecCopySlot also need to think about the\n> \"detoast datum\" as well.\n\nThis might be a way of addressing some of the issues with this idea,\nbut I doubt it will be acceptable. I don't think we want to complicate\nthe slot API for this feature. There's too much downside to doing\nthat, in terms of performance and understandability.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 May 2024 09:29:08 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nRobert Haas <[email protected]> writes:\n\n> On Wed, May 22, 2024 at 9:46 PM Andy Fan <[email protected]> wrote:\n>> Please give me one more chance to explain this. What I mean is:\n>>\n>> Take SELECT f(a) FROM t1 join t2...; for example,\n>>\n>> When we read the Datum of a Var, we read it from tts->tts_values[*], no\n>> matter what kind of TupleTableSlot is. So if we can put the \"detoast\n>> datum\" back to the \"original \" tts_values, then nothing need to be\n>> changed.\n>\n> Yeah, I don't think you can do that.\n>\n>> - Saving the \"detoast datum\" version into tts_values[*] doesn't break\n>> the above semantic and the ExprEval engine just get a detoast version\n>> so it doesn't need to detoast it again.\n>\n> I don't think this is true. If it is true, it needs to be extremely\n> well-justified. Even if this seems to work in simple cases, I suspect\n> there will be cases where it breaks badly, resulting in memory leaks\n> or server crashes or some other kind of horrible problem that I can't\n> quite imagine. Unfortunately, my knowledge of this code isn't\n> fantastic, so I can't say exactly what bad thing will happen, and I\n> can't even say with 100% certainty that anything bad will happen, but\n> I bet it will. It seems like it goes against everything I understand\n> about how TupleTableSlots are supposed to be used. (Andres would be\n> the best person to give a definitive answer here.)\n>\n>> - The keypoint is the memory management and effeiciency. for now I think\n>> all the places where \"slot->tts_nvalid\" is set to 0 means the\n>> tts_values[*] is no longer validate, then this is the place we should\n>> release the memory for the \"detoast datum\". All the other places like\n>> ExecMaterializeSlot or ExecCopySlot also need to think about the\n>> \"detoast datum\" as well.\n>\n> This might be a way of addressing some of the issues with this idea,\n> but I doubt it will be acceptable. I don't think we want to complicate\n> the slot API for this feature. There's too much downside to doing\n> that, in terms of performance and understandability.\n\nThanks for the doubt! Let's see if Andres has time to look at this. I\nfeel great about the current state. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Fri, 24 May 2024 02:46:38 +0000",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
},
{
"msg_contents": "\nHi,\n\nLet's me clarify the current state of this patch and what kind of help\nis needed. You can check [1] for what kind of problem it try to solve\nand what is the current approach.\n\nThe current blocker is if the approach is safe (potential to memory leak\ncrash etc.). And I want to have some agreement on this step at least.\n\n\"\"\"\nWhen we access [some desired] toast column in EEOP_{SCAN|INNER_OUTER}_VAR\nsteps, we can detoast it immediately and save it back to\nslot->tts_values. so the later ExprState can use the same detoast version\nall the time. this should be more the most effective and simplest way.\n\"\"\"\n\nIMO, it doesn't break any design of TupleTableSlot and should be\nsafe. Here is my understanding of TupleTableSlot, please correct me if\nanything I missed.\n\n(a). TupleTableSlot define tts_values[*] / tts_isnulls[*] at the high level.\n When the upper layer want a attnum, it just access tts_values/nulls[n].\n\n(b). TupleTableSlot has many different variants, depends on how to get\n tts_values[*] from them, like VirtualTupleTableSlot, HeapTupleTableSlot,\n how to manage its resource (mainly memory) when the life cycle is\n done, for example BufferHeapTupleTableSlot. \n \n(c). TupleTableSlot defines a lot operation against on that, like\ngetsomeattrs, copy, clear and so on, for the different variants. \n \nthe impacts of my proposal on the above factor:\n\n(a). it doesn't break it clearly,\n(b). it adds a 'detoast' step for step (b) when fill the tts_values[*],\n(this is done conditionally, see [1] for the overall design, we\ncurrently focus the safety). What's more, I only added the step in \nEEOP_{SCAN|INNER_OUTER}_VAR_TOAST step, it reduce the impact in another\nstep.\n(c). a special MemoryContext in TupleTableSlot is introduced for the\ndetoast datum, it is reset whenever the TupleTableSlot.tts_nvalid\n= 0. and I also double check every operation defined in\nTupleTableSlotOps, looks nothing should be specially handled.\n\nRobert has expressed his concern as below and wanted Andres to have a\nlook at, but unluckly Andres doesn't have such time so far:( \n\n> Robert Haas <[email protected]> writes:\n>\n>> On Wed, May 22, 2024 at 9:46 PM Andy Fan <[email protected]> wrote:\n>>> Please give me one more chance to explain this. What I mean is:\n>>>\n>>> Take SELECT f(a) FROM t1 join t2...; for example,\n>>>\n>>> When we read the Datum of a Var, we read it from tts->tts_values[*], no\n>>> matter what kind of TupleTableSlot is. So if we can put the \"detoast\n>>> datum\" back to the \"original \" tts_values, then nothing need to be\n>>> changed.\n>>\n>> Yeah, I don't think you can do that.\n>>\n>>> - Saving the \"detoast datum\" version into tts_values[*] doesn't break\n>>> the above semantic and the ExprEval engine just get a detoast version\n>>> so it doesn't need to detoast it again.\n>>\n>> I don't think this is true. If it is true, it needs to be extremely\n>> well-justified. Even if this seems to work in simple cases, I suspect\n>> there will be cases where it breaks badly, resulting in memory leaks\n>> or server crashes or some other kind of horrible problem that I can't\n>> quite imagine. Unfortunately, my knowledge of this code isn't\n>> fantastic, so I can't say exactly what bad thing will happen, and I\n>> can't even say with 100% certainty that anything bad will happen, but\n>> I bet it will. It seems like it goes against everything I understand\n>> about how TupleTableSlots are supposed to be used. 
(Andres would be\n>> the best person to give a definitive answer here.)\n\nI think having a discussion of the design of TupleTableSlots would be\nhelpful for the progress since in my knowldege it doesn't against\nanything of the TupleTaleSlots :(, as I stated above. I'm sure I'm\npretty open on this. \n\n>>\n>>> - The keypoint is the memory management and effeiciency. for now I think\n>>> all the places where \"slot->tts_nvalid\" is set to 0 means the\n>>> tts_values[*] is no longer validate, then this is the place we should\n>>> release the memory for the \"detoast datum\". All the other places like\n>>> ExecMaterializeSlot or ExecCopySlot also need to think about the\n>>> \"detoast datum\" as well.\n>>\n>> This might be a way of addressing some of the issues with this idea,\n>> but I doubt it will be acceptable. I don't think we want to complicate\n>> the slot API for this feature. There's too much downside to doing\n>> that, in terms of performance and understandability.\n\nComplexity: I double check the code, it has very limitted changes on the\nTupleTupleSlot APIs (less than 10 rows). see tuptable.h.\n\nPerformance: by design, it just move the chance of detoast action\n*conditionly* and put it back to tts_values[*], there is no extra\noverhead to find out the detoast version for sharing.\n\nUnderstandability: based on how do we understand TupleTableSlot:) \n\n[1] https://www.postgresql.org/message-id/87il4jrk1l.fsf%40163.com\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Sat, 22 Jun 2024 11:10:41 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared detoast Datum proposal"
}
] |
[
{
"msg_contents": "Postgres currently requires all variables to be declared at the top of\nthe function, because it specifies -Wdeclaration-after-statement. One\nof the reasons that we had this warning was because C89 required this\nstyle of declaration. Requiring it everywhere made backporting easier,\nsince some of our older supported PG versions needed to compile on\nC89. Now that we have dropped support for PG11 that reason goes away,\nsince now all supported Postgres versions require C99. So, I think\nit's worth reconsidering if we want this warning to be enabled or not.\n\nPersonally I would very much prefer the warning to be disabled for the\nfollowing reasons:\n1. It allows Asserts, ereports and other checks at the top of a\nfunction, making it clear to the reader if there are any requirements\non the arguments that the caller should uphold.\n2. By declaring variables at their first usage you limit the scope of\nthe variable. This reduces the amount of code that a reader of the\ncode has to look at to see if the variable was changed between its\ndeclaration and the usage location that they are interested in.\n3. By declaring variables at their first usage, often you can\nimmediately see the type of the variable that you are looking at.\nSince most variables are not used in the whole function, their first\nusage is pretty much their only usage.\n4. By declaring variables at their first usage, you can often\ninitialize them with a useful value right away. That way you don't\nhave to worry about using it uninitialized. It also reduces the lines\nof code, since initialization and declaration can be done in the same\nline.\n5. By declaring variables at their first usage, it is made clear to\nthe reader why the variable needs to exist. Oftentimes when I read a\npostgres function from top to bottom, it's unclear to me what purpose\nsome of the declared variables at the top have.\n6. I run into this warning over and over again when writing code in\npostgres. This is because all of the other programming languages I\nwrite code in don't have this restriction. Even many C projects I work\nin don't have this restriction on where variables can be declared.\n\n\n",
"msg_date": "Wed, 27 Dec 2023 12:48:40 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> Postgres currently requires all variables to be declared at the top of\n> the function, because it specifies -Wdeclaration-after-statement. One\n> of the reasons that we had this warning was because C89 required this\n> style of declaration. Requiring it everywhere made backporting easier,\n> since some of our older supported PG versions needed to compile on\n> C89. Now that we have dropped support for PG11 that reason goes away,\n> since now all supported Postgres versions require C99. So, I think\n> it's worth reconsidering if we want this warning to be enabled or not.\n\nThis has already been debated, and the conclusion was that we would\nstick to the existing style for consistency reasons. The fact that\nback-portable patches required it was only one of the arguments, and\nnot the decisive one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Dec 2023 10:05:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On Wed, 27 Dec 2023 at 16:05, Tom Lane <[email protected]> wrote:\n> This has already been debated, and the conclusion was that we would\n> stick to the existing style for consistency reasons.\n\nI looked through the archives quite a bit, but I couldn't find any\nconclusive debate about the current declaration style. Definitely not\none with \"consistency reasons\" as being the conclusion. Could you\npoint me to the place where that conclusion was reached? Or could you\nat least clarify what consistency you believe is lost by removing the\nwarning? The threads discussing this warning that I did find were the\nfollowing:\n\nThe initial addition of the warning flag[1], which has very little discussion.\n\nIntroducing the C99 requirement[2]. Robert and you both preferred the\ncurrent declaration style. Andrew and Andres both would want to accept\nthe new declaration style.\n\nAnother where removal of this warning was suggested[3], and where\nAndres said he was in favor of removing the warning. But he didn't\nthink fighting for it was worth the effort at the time to fight you\nand Robert, when he was trying to get the general C99 requirement in.\n\nAnd finally, one that was started by me, where I suggest an automated\nrefactor[4]. This change got shot down because it would cause lots of\nbackpatching problems (and because it was using perl regexes instead\nof an AST parser to do the automated refactor). Ranier and you were\nproponents of the current declaration style. Chapman was in favor of\nthe new declaration style. Andrew seems neutral.\n\nP.S. Note, that I'm not suggesting a complete refactor this time. I'm\nonly proposing to relax the rules, and disable the warning, so newly\nwritten code can benefit. But if the only reason not to remove the\nwarning is that then there would be two styles of declaration in the\ncodebase, then I'm happy to create another refactoring script that\nmoves declarations down to their first usage. (Which could then be run\non all backbranches to make sure there is no backpatching pain)\n\n[1]: https://www.postgresql.org/message-id/flat/417263F8.4060102%40samurai.com\n[2]: https://www.postgresql.org/message-id/flat/CA%2BTgmoYvHzFkwChsamwbBrLNJRcRq%2BfyTwveFaN_YOWUsRnfpw%40mail.gmail.com#931f4c68237caf4c60b4dc298236aef1\n[3]: https://www.postgresql.org/message-id/flat/20181213210012.i7iihamlbi7vfdiw%40alap3.anarazel.de#00304f9dfc039da87383fed30be62cff\n[4]: https://www.postgresql.org/message-id/flat/AM5PR83MB0178E68E4FF1BAF9C66DF0D1F7C09%40AM5PR83MB0178.EURPRD83.prod.outlook.com\n\n\n",
"msg_date": "Wed, 27 Dec 2023 18:59:47 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "I feel like this is the type of change where there's not much\ndiscussion to be had. And the only way to resolve it is to use some\nvoting to gauge community opinion.\n\nSo my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n+1 to indicate support against/for the change.\n\nI'll start: +1\n\nAttached is a patch that removes -Wdeclaration-after-statement in the\ncodebase. This is mainly to be able to add it to the commitfest, to\nhopefully get a decent amount of responses.",
"msg_date": "Mon, 29 Jan 2024 16:03:44 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> I feel like this is the type of change where there's not much\n> discussion to be had. And the only way to resolve it is to use some\n> voting to gauge community opinion.\n\n> So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n> +1 to indicate support against/for the change.\n\n> I'll start: +1\n\n[ shrug... ] -1 here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jan 2024 10:23:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "\n\n> On Jan 29, 2024, at 7:03 AM, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n> +1 to indicate support against/for the change.\n\n-1 for me.\n\n-Infinity for refactoring the entire codebase and backpatching.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 29 Jan 2024 07:31:38 -0800",
"msg_from": "Mark Dilger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On Mon, 29 Jan 2024 at 10:31, Mark Dilger <[email protected]>\nwrote:\n\n>\n>\n> > On Jan 29, 2024, at 7:03 AM, Jelte Fennema-Nio <[email protected]>\n> wrote:\n> >\n> > So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n> > +1 to indicate support against/for the change.\n>\n> -1 for me.\n>\n> -Infinity for refactoring the entire codebase and backpatching.\n>\n\nI don't think anybody is proposing re-working the existing codebase. I\nunderstand this to be only about allowing new code to use the newer style.\nPersonally, I like, as much as possible, to use initializations to const\nvariables and avoid assignment operators, so I much prefer to declare at\nfirst (and usually only) assignment.\n\nBut not voting because the amount of code I’ve actually contributed is\nminuscule.\n\nOn Mon, 29 Jan 2024 at 10:31, Mark Dilger <[email protected]> wrote:\n\n> On Jan 29, 2024, at 7:03 AM, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n> +1 to indicate support against/for the change.\n\n-1 for me.\n\n-Infinity for refactoring the entire codebase and backpatching.I don't think anybody is proposing re-working the existing codebase. I understand this to be only about allowing new code to use the newer style. Personally, I like, as much as possible, to use initializations to const variables and avoid assignment operators, so I much prefer to declare at first (and usually only) assignment.But not voting because the amount of code I’ve actually contributed is minuscule.",
"msg_date": "Mon, 29 Jan 2024 10:35:03 -0500",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 10:23:38AM -0500, Tom Lane wrote:\n> Jelte Fennema-Nio <[email protected]> writes:\n>> I feel like this is the type of change where there's not much\n>> discussion to be had. And the only way to resolve it is to use some\n>> voting to gauge community opinion.\n> \n>> So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n>> +1 to indicate support against/for the change.\n> \n>> I'll start: +1\n> \n> [ shrug... ] -1 here.\n\n-1 here, too. I don't think one style is enormously better than the other,\nbut I do like having the declarations in a predictable location. I\nactually find the alternative less readable, but that could just be because\nI spend so much time looking at Postgres code.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 29 Jan 2024 09:37:57 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "\n\n> On Jan 29, 2024, at 7:35 AM, Isaac Morland <[email protected]> wrote:\n> \n> On Mon, 29 Jan 2024 at 10:31, Mark Dilger <[email protected]> wrote:\n> \n> -Infinity for refactoring the entire codebase and backpatching.\n> \n> I don't think anybody is proposing re-working the existing codebase. I understand this to be only about allowing new code to use the newer style. Personally, I like, as much as possible, to use initializations to const variables and avoid assignment operators, so I much prefer to declare at first (and usually only) assignment.\n\nI was responding to Jelte's paragraph upthread:\n\n> On Dec 27, 2023, at 9:59 AM, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> But if the only reason not to remove the\n> warning is that then there would be two styles of declaration in the\n> codebase, then I'm happy to create another refactoring script that\n> moves declarations down to their first usage. (Which could then be run\n> on all backbranches to make sure there is no backpatching pain)\n\nI find that kind of gratuitous code churn highly unpleasant.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 29 Jan 2024 07:42:00 -0800",
"msg_from": "Mark Dilger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "Em seg., 29 de jan. de 2024 às 12:03, Jelte Fennema-Nio <[email protected]>\nescreveu:\n\n> I feel like this is the type of change where there's not much\n> discussion to be had. And the only way to resolve it is to use some\n> voting to gauge community opinion.\n>\n> So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n> +1 to indicate support against/for the change.\n>\n> I'll start: +1\n>\n> Attached is a patch that removes -Wdeclaration-after-statement in the\n> codebase.\n\n-1\nC89 IMO is the best for C readability.\nNone surprise, once you read the block declaration,\nYou know all about the vars in the function.\n\nBest regards,\nRanier Vilela\n\nEm seg., 29 de jan. de 2024 às 12:03, Jelte Fennema-Nio <[email protected]> escreveu:I feel like this is the type of change where there's not much\ndiscussion to be had. And the only way to resolve it is to use some\nvoting to gauge community opinion.\n\nSo my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n+1 to indicate support against/for the change.\n\nI'll start: +1\n\nAttached is a patch that removes -Wdeclaration-after-statement in the\ncodebase. -1C89 IMO is the best for C readability.None surprise, once you read the block declaration,You know all about the vars in the function.Best regards,Ranier Vilela",
"msg_date": "Mon, 29 Jan 2024 13:39:32 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 10:03 AM Jelte Fennema-Nio <[email protected]> wrote:\n> I feel like this is the type of change where there's not much\n> discussion to be had. And the only way to resolve it is to use some\n> voting to gauge community opinion.\n>\n> So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n> +1 to indicate support against/for the change.\n>\n> I'll start: +1\n\n-1. I occasionally run into situations where I'm like \"ah, it would be\nnicer to declare this in the middle of the block\". But for every 1\ntime that happens, there are probably 10 times where it's helpful to\nme to be able to look at the top of the block and see all of the\nvariable declarations in one place. Plus, a lot of times, this urge to\ndeclare mid-block is a sign that I've made that block too big and\ncomplex and I need to refactor and simplify.\n\nThe fact that all of our code uses a consistent style is awfully nice, too.\n\nThe main argument I see for changing anything is that we do a lot of\nthings on this project that many people consider old-fashioned, and it\nmay discourage some younger developers from getting involved in the\nproject. I doubt that this is anywhere close to the biggest problem we\nhave in that area, but if we do end up changing it I'll console myself\nwith the thought that maybe we're usefully modernizing something.\n\nPersonally, though, I prefer the status quo, where the correct\nlocation of a declaration for a particular identifier is largely an\nobjective question rather than a subjective question. We are not in\nneed of more bikeshedding targets.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Jan 2024 12:07:35 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On Mon, 29 Jan 2024 at 10:42, Mark Dilger <[email protected]>\nwrote:\n\n> I don't think anybody is proposing re-working the existing codebase. I\n> understand this to be only about allowing new code to use the newer style.\n> Personally, I like, as much as possible, to use initializations to const\n> variables and avoid assignment operators, so I much prefer to declare at\n> first (and usually only) assignment.\n>\n> I was responding to Jelte's paragraph upthread:\n>\n> > On Dec 27, 2023, at 9:59 AM, Jelte Fennema-Nio <[email protected]>\n> wrote:\n> >\n> > But if the only reason not to remove the\n> > warning is that then there would be two styles of declaration in the\n> > codebase, then I'm happy to create another refactoring script that\n> > moves declarations down to their first usage. (Which could then be run\n> > on all backbranches to make sure there is no backpatching pain)\n>\n> I find that kind of gratuitous code churn highly unpleasant.\n>\n\nI stand corrected, and agree completely. It’s hard to imagine a change of\nsuch a global nature that would be a big enough improvement that it would\nbe a good idea to apply to existing code. Personally I’m fine with code of\ndifferent vintages using different styles, as long as it’s understood why\nthe difference exists — in this case because tons of code has already been\nwritten and isn’t going to be re-styled except possibly as part of other\nchanges.\n\nOn Mon, 29 Jan 2024 at 10:42, Mark Dilger <[email protected]> wrote:> I don't think anybody is proposing re-working the existing codebase. I understand this to be only about allowing new code to use the newer style. Personally, I like, as much as possible, to use initializations to const variables and avoid assignment operators, so I much prefer to declare at first (and usually only) assignment.\n\nI was responding to Jelte's paragraph upthread:\n\n> On Dec 27, 2023, at 9:59 AM, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> But if the only reason not to remove the\n> warning is that then there would be two styles of declaration in the\n> codebase, then I'm happy to create another refactoring script that\n> moves declarations down to their first usage. (Which could then be run\n> on all backbranches to make sure there is no backpatching pain)\n\nI find that kind of gratuitous code churn highly unpleasant.I stand corrected, and agree completely. It’s hard to imagine a change of such a global nature that would be a big enough improvement that it would be a good idea to apply to existing code. Personally I’m fine with code of different vintages using different styles, as long as it’s understood why the difference exists — in this case because tons of code has already been written and isn’t going to be re-styled except possibly as part of other changes.",
"msg_date": "Mon, 29 Jan 2024 12:12:24 -0500",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On 29/01/2024 19:07, Robert Haas wrote:\n> On Mon, Jan 29, 2024 at 10:03 AM Jelte Fennema-Nio <[email protected]> wrote:\n>> I feel like this is the type of change where there's not much\n>> discussion to be had. And the only way to resolve it is to use some\n>> voting to gauge community opinion.\n>>\n>> So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n>> +1 to indicate support against/for the change.\n>>\n>> I'll start: +1\n> \n> -1. I occasionally run into situations where I'm like \"ah, it would be\n> nicer to declare this in the middle of the block\". But for every 1\n> time that happens, there are probably 10 times where it's helpful to\n> me to be able to look at the top of the block and see all of the\n> variable declarations in one place. Plus, a lot of times, this urge to\n> declare mid-block is a sign that I've made that block too big and\n> complex and I need to refactor and simplify.\n\n-0.5 from me, for exactly those reasons Robert said. I wouldn't mind \nremoving the compiler flag as long as we mostly keep the current style \nof declarations at top, with exceptions when it really makes sense. But \nin practice it would open the floodgates and make things worse overall.\n\nYou can also add curly braces to introduce a block like this:\n\n do_stuff();\n {\n int i = 123;\n\n do_more_stuff(i);\n ...\n }\n\nI know many people dislike that too, though. I think it's usually better \nthan declaring a variable in the middle of a block, because it also \nmakes you think how long the variable needs to be in scope.\n\n> The main argument I see for changing anything is that we do a lot of\n> things on this project that many people consider old-fashioned, and it\n> may discourage some younger developers from getting involved in the\n> project. I doubt that this is anywhere close to the biggest problem we\n> have in that area, but if we do end up changing it I'll console myself\n> with the thought that maybe we're usefully modernizing something.\n\nPeople writing extensions or hacking on their own forks of PostgreSQL \nare free to do whatever they like. And I don't mind reviewing patches \nthat don't follow the usual guidelines to the letter, stylistic things \nlike this are easy to fix before committing. I don't feel like we're \nforcing these rules upon others.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 29 Jan 2024 20:37:58 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On Mon, Jan 29, 2024, at 12:03 PM, Jelte Fennema-Nio wrote:\n> I feel like this is the type of change where there's not much\n> discussion to be had. And the only way to resolve it is to use some\n> voting to gauge community opinion.\n> \n> So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n> +1 to indicate support against/for the change.\n> \n> I'll start: +1\n\n-1 here. Unless you have a huge amount of variables, (1) is an issue. The most\nannoying argument against the current style (declarations at the top of the\nfunction) is (2). However, people usually say it is a code smell to have a big\nchunk of code into a single function and that you need to split the code into\nmultiple functions. Having said that, it is easier to check if the variable V\nis used because the number of lines you need to read is small. And it imposes\nsimilar effort to inspect the code than your argument (3), (4), and (5).\n\nOne argument against it is if you use the \"declare on first use\" style, you\nwill spend more time inspecting the code if you are refactoring it. You\nidentify the code block you want to move and than starts the saga. Check every\nsingle variable used by this code block and search for its declaration. It is\nsuch a time consuming task. However, if you are using the \"declare at the top\"\nstyle, it is a matter of checking at the top.\n\nKeep both styles can be rather confusing (in particular for newbies). And as\nNathan said I don't see huge benefits moving from one style to the other.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Mon, Jan 29, 2024, at 12:03 PM, Jelte Fennema-Nio wrote:I feel like this is the type of change where there's not muchdiscussion to be had. And the only way to resolve it is to use somevoting to gauge community opinion.So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or+1 to indicate support against/for the change.I'll start: +1-1 here. Unless you have a huge amount of variables, (1) is an issue. The mostannoying argument against the current style (declarations at the top of thefunction) is (2). However, people usually say it is a code smell to have a bigchunk of code into a single function and that you need to split the code intomultiple functions. Having said that, it is easier to check if the variable Vis used because the number of lines you need to read is small. And it imposessimilar effort to inspect the code than your argument (3), (4), and (5).One argument against it is if you use the \"declare on first use\" style, youwill spend more time inspecting the code if you are refactoring it. Youidentify the code block you want to move and than starts the saga. Check everysingle variable used by this code block and search for its declaration. It issuch a time consuming task. However, if you are using the \"declare at the top\"style, it is a matter of checking at the top.Keep both styles can be rather confusing (in particular for newbies). And asNathan said I don't see huge benefits moving from one style to the other.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 29 Jan 2024 16:30:59 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-27 12:48:40 +0100, Jelte Fennema-Nio wrote:\n> Postgres currently requires all variables to be declared at the top of\n> the function, because it specifies -Wdeclaration-after-statement. One\n> of the reasons that we had this warning was because C89 required this\n> style of declaration. Requiring it everywhere made backporting easier,\n> since some of our older supported PG versions needed to compile on\n> C89. Now that we have dropped support for PG11 that reason goes away,\n> since now all supported Postgres versions require C99. So, I think\n> it's worth reconsidering if we want this warning to be enabled or not.\n\n+1 for allowing declarations to be intermixed with code, -infinity for\nchanging existing code to do so.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Jan 2024 11:58:04 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 1:38 PM Heikki Linnakangas <[email protected]> wrote:\n> -0.5 from me, for exactly those reasons Robert said. I wouldn't mind\n> removing the compiler flag as long as we mostly keep the current style\n> of declarations at top, with exceptions when it really makes sense. But\n> in practice it would open the floodgates and make things worse overall.\n\nYeah, this, for sure. If it were done judiciously I wouldn't care, but\nin practice it wouldn't be. Different people would do wildly different\nthings, ranging from never putting anything mid-block at all, at one\nextreme, to rearranging the whole flow of the function to allow for\nmore mid-block declarations, at the other.\n\nTBH, I wish we could get more consistent about our coding style\noverall, and clean up some of our historical baggage. We have such\nbeautiful code in so many places, and such ugly code in others. I\nstill can't get over how ugly xlog.c is in particular. Multiple people\nhave attempted to split that file up, or clean it up in other ways,\nbut it's still a soup of unclear global variables and identifier names\npulled out of a hat. And it still baffles me why we allow everyone to\npick their own system for capitalizing identifiers out of a hat,\nwithout even insisting on consistency from one end of the same\nidentifier to the other.\n\n> You can also add curly braces to introduce a block like this:\n>\n> do_stuff();\n> {\n> int i = 123;\n>\n> do_more_stuff(i);\n> ...\n> }\n>\n> I know many people dislike that too, though. I think it's usually better\n> than declaring a variable in the middle of a block, because it also\n> makes you think how long the variable needs to be in scope.\n\nI think this style can be appropriate for assertions or debugging\ncode, where you only need the variable if some compiler symbol is\ndefined, and you isolate it to the same block where it's used. I don't\ntend to like this style for other cases. It looks like you were too\nlazy to go back to the top of the function and just add the\ndeclaration there. I've also found that when I'm uncomfortable moving\nthe variable to the beginning of the block because it doesn't seem to\nfit with the other stuff declared there, it's usually a sign that I'm\ngoing to end up making the block conditional or turning it into a\nseparate function -- and of course if I do either of those things,\nthen suddenly I have a natural scope for my declarations.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Jan 2024 15:01:06 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "\nOn 2024-01-29 Mo 14:58, Andres Freund wrote:\n> Hi,\n>\n> On 2023-12-27 12:48:40 +0100, Jelte Fennema-Nio wrote:\n>> Postgres currently requires all variables to be declared at the top of\n>> the function, because it specifies -Wdeclaration-after-statement. One\n>> of the reasons that we had this warning was because C89 required this\n>> style of declaration. Requiring it everywhere made backporting easier,\n>> since some of our older supported PG versions needed to compile on\n>> C89. Now that we have dropped support for PG11 that reason goes away,\n>> since now all supported Postgres versions require C99. So, I think\n>> it's worth reconsidering if we want this warning to be enabled or not.\n> +1 for allowing declarations to be intermixed with code,\n\n\nI'm about +0.5.\n\nMany Java, C++, Perl, and indeed C programmers might find it surprising \nthat we're having this debate. On the more modern language front the \nsame goes for Go and Rust. It seems clear that the language trend is \nmostly in this direction.\n\nBut it's not something worth having a long and contentious debate over. \nWe have plenty of better candidates for that :-)\n\n\n> -infinity for\n> changing existing code to do so.\n\n\nditto. On that at least I think there's close to unanimous agreement.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 29 Jan 2024 15:18:10 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-29 15:01:06 -0500, Robert Haas wrote:\n> And it still baffles me why we allow everyone to pick their own system for\n> capitalizing identifiers out of a hat, without even insisting on consistency\n> from one end of the same identifier to the other.\n\nYes. Please. I hate some capitalization/underscore styles, but I hate spending\ntime feeling out which capitalization style I should use so much more. Let's\nat least define some minimal naming guidelines for new code.\n\nPersonally I like under_score_style for functions and variables and CamelCase\nfor types.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Jan 2024 12:38:50 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-01-29 15:01:06 -0500, Robert Haas wrote:\n>> And it still baffles me why we allow everyone to pick their own system for\n>> capitalizing identifiers out of a hat, without even insisting on consistency\n>> from one end of the same identifier to the other.\n\n> Yes. Please. I hate some capitalization/underscore styles, but I hate spending\n> time feeling out which capitalization style I should use so much more. Let's\n> at least define some minimal naming guidelines for new code.\n\nI'm for this for entirely-new code, but I think when adding code in\nexisting modules we're better off with the rule of \"make it match\nnearby code\". I admit it might be hard to draw a clear line between\nthe two cases, plus there might be local inconsistency already.\nBut let's try to avoid making local style inconsistencies worse.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jan 2024 17:09:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "Hi, \n\nOn January 29, 2024 2:09:23 PM PST, Tom Lane <[email protected]> wrote:\n>Andres Freund <[email protected]> writes:\n>> On 2024-01-29 15:01:06 -0500, Robert Haas wrote:\n>>> And it still baffles me why we allow everyone to pick their own system for\n>>> capitalizing identifiers out of a hat, without even insisting on consistency\n>>> from one end of the same identifier to the other.\n>\n>> Yes. Please. I hate some capitalization/underscore styles, but I hate spending\n>> time feeling out which capitalization style I should use so much more. Let's\n>> at least define some minimal naming guidelines for new code.\n>\n>I'm for this for entirely-new code, but I think when adding code in\n>existing modules we're better off with the rule of \"make it match\n>nearby code\". I admit it might be hard to draw a clear line between\n>the two cases, plus there might be local inconsistency already.\n>But let's try to avoid making local style inconsistencies worse.\n\nYeah, completely agreed. I think using it as a tie breaker when extending already inconsistent code, of which we have plenty, is the extent of the role it should have when extending existing code.\n\nAndres \n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Mon, 29 Jan 2024 14:26:05 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On 29.01.24 16:03, Jelte Fennema-Nio wrote:\n> I feel like this is the type of change where there's not much\n> discussion to be had. And the only way to resolve it is to use some\n> voting to gauge community opinion.\n> \n> So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n> +1 to indicate support against/for the change.\n> \n> I'll start: +1\n> \n> Attached is a patch that removes -Wdeclaration-after-statement in the\n> codebase. This is mainly to be able to add it to the commitfest, to\n> hopefully get a decent amount of responses.\n\n-1, mostly for the various reasons explained by others.\n\n\n\n",
"msg_date": "Wed, 31 Jan 2024 07:48:51 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 11:04 PM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> I feel like this is the type of change where there's not much\n> discussion to be had. And the only way to resolve it is to use some\n> voting to gauge community opinion.\n>\n> So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n> +1 to indicate support against/for the change.\n>\n> I'll start: +1\n>\n> Attached is a patch that removes -Wdeclaration-after-statement in the\n> codebase. This is mainly to be able to add it to the commitfest, to\n> hopefully get a decent amount of responses.\n\n\nI am new to c.\ngcc has many options (https://man7.org/linux/man-pages/man1/gcc.1.html)\nIIUC, postgres have some required CFLAGS,\nWhile doing the test (http://cfbot.cputube.org/next.html), we also\nhave some CFLAGS\nI am not sure they are the same.\n\nif we can list these CFLAGS information in the\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\nwould be helpful.\n\nBut not voting because I am not sure of the implication.\n\n\n",
"msg_date": "Sun, 4 Feb 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 04:03:44PM +0100, Jelte Fennema-Nio wrote:\n> I feel like this is the type of change where there's not much\n> discussion to be had. And the only way to resolve it is to use some\n> voting to gauge community opinion.\n> \n> So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n> +1 to indicate support against/for the change.\n\nI'm +1 for the change, for these reasons:\n\n- Fewer back-patch merge conflicts. The decls section of long functions is a\n classic conflict point.\n- A mid-func decl demonstrates that its var is unused in the first half of the\n func.\n- We write Perl in the mixed decls style, without problems.\n\nFor me personally, the \"inconsistency\" concern is negligible. We allowed \"for\n(int i = 0\", and that inconsistency has been invisible to me.\n\n\n",
"msg_date": "Wed, 7 Feb 2024 16:55:41 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
},
{
"msg_contents": "On Wed, Feb 7, 2024 at 7:55 PM Noah Misch <[email protected]> wrote:\n> > So my suggestion is for people to respond with -1, -0.5, +-0, +0.5, or\n> > +1 to indicate support against/for the change.\n>\n> I'm +1 for the change, for these reasons:\n>\n> - Fewer back-patch merge conflicts. The decls section of long functions is a\n> classic conflict point.\n> - A mid-func decl demonstrates that its var is unused in the first half of the\n> func.\n> - We write Perl in the mixed decls style, without problems.\n>\n> For me personally, the \"inconsistency\" concern is negligible. We allowed \"for\n> (int i = 0\", and that inconsistency has been invisible to me.\n\nThis thread was interesting as an opinion poll, but it seems clear\nthat the consensus is still against the proposed change, so I've\nmarked the CommitFest entry rejected.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 10:36:39 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove -Wdeclaration-after-statement?"
}
] |
[
{
"msg_contents": "Hi,\n\nSQLsmith found a failing Assertion in joininfo.c on master. I can\nreproduce it on an assertion-enabled build like this:\n\ncreate table t1(a int primary key);\ncreate table t2(a int);\n\nselect * from t2 right join\n (t1 as t1a inner join t1 as t1b on t1a.a = t1b.a)\n on t1a.a is not null and exists (select);\n\n-- TRAP: failed Assert(\"list_member_ptr(rel->joininfo, restrictinfo)\"), File: \"joininfo.c\", Line: 144, PID: 777839\n\nregards,\nAndreas\n\n\n",
"msg_date": "Wed, 27 Dec 2023 12:54:22 +0100",
"msg_from": "Andreas Seltenreich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Failed assertion in joininfo.c, remove_join_clause_from_rels"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 1:54 PM Andreas Seltenreich <[email protected]> wrote:\n> SQLsmith found a failing Assertion in joininfo.c on master. I can\n> reproduce it on an assertion-enabled build like this:\n>\n> create table t1(a int primary key);\n> create table t2(a int);\n>\n> select * from t2 right join\n> (t1 as t1a inner join t1 as t1b on t1a.a = t1b.a)\n> on t1a.a is not null and exists (select);\n>\n> -- TRAP: failed Assert(\"list_member_ptr(rel->joininfo, restrictinfo)\"), File: \"joininfo.c\", Line: 144, PID: 777839\n\nThank you for pointing this out. I'm investigating.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 27 Dec 2023 14:00:09 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failed assertion in joininfo.c, remove_join_clause_from_rels"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 8:00 PM Alexander Korotkov <[email protected]>\nwrote:\n\n> On Wed, Dec 27, 2023 at 1:54 PM Andreas Seltenreich <[email protected]>\n> wrote:\n> > SQLsmith found a failing Assertion in joininfo.c on master. I can\n> > reproduce it on an assertion-enabled build like this:\n> >\n> > create table t1(a int primary key);\n> > create table t2(a int);\n> >\n> > select * from t2 right join\n> > (t1 as t1a inner join t1 as t1b on t1a.a = t1b.a)\n> > on t1a.a is not null and exists (select);\n> >\n> > -- TRAP: failed Assert(\"list_member_ptr(rel->joininfo, restrictinfo)\"),\n> File: \"joininfo.c\", Line: 144, PID: 777839\n>\n> Thank you for pointing this out. I'm investigating.\n\n\nThis is the same issue with [1] and has been just fixed by e0477837ce.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAMbWs4_wJthNtYBL%2BSsebpgF-5L2r5zFFk6xYbS0A78GKOTFHw%40mail.gmail.com\n\nThanks\nRichard\n\nOn Wed, Dec 27, 2023 at 8:00 PM Alexander Korotkov <[email protected]> wrote:On Wed, Dec 27, 2023 at 1:54 PM Andreas Seltenreich <[email protected]> wrote:\n> SQLsmith found a failing Assertion in joininfo.c on master. I can\n> reproduce it on an assertion-enabled build like this:\n>\n> create table t1(a int primary key);\n> create table t2(a int);\n>\n> select * from t2 right join\n> (t1 as t1a inner join t1 as t1b on t1a.a = t1b.a)\n> on t1a.a is not null and exists (select);\n>\n> -- TRAP: failed Assert(\"list_member_ptr(rel->joininfo, restrictinfo)\"), File: \"joininfo.c\", Line: 144, PID: 777839\n\nThank you for pointing this out. I'm investigating.This is the same issue with [1] and has been just fixed by e0477837ce.[1] https://www.postgresql.org/message-id/flat/CAMbWs4_wJthNtYBL%2BSsebpgF-5L2r5zFFk6xYbS0A78GKOTFHw%40mail.gmail.comThanksRichard",
"msg_date": "Wed, 27 Dec 2023 20:03:57 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failed assertion in joininfo.c, remove_join_clause_from_rels"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 2:04 PM Richard Guo <[email protected]> wrote:\n> On Wed, Dec 27, 2023 at 8:00 PM Alexander Korotkov <[email protected]> wrote:\n>>\n>> On Wed, Dec 27, 2023 at 1:54 PM Andreas Seltenreich <[email protected]> wrote:\n>> > SQLsmith found a failing Assertion in joininfo.c on master. I can\n>> > reproduce it on an assertion-enabled build like this:\n>> >\n>> > create table t1(a int primary key);\n>> > create table t2(a int);\n>> >\n>> > select * from t2 right join\n>> > (t1 as t1a inner join t1 as t1b on t1a.a = t1b.a)\n>> > on t1a.a is not null and exists (select);\n>> >\n>> > -- TRAP: failed Assert(\"list_member_ptr(rel->joininfo, restrictinfo)\"), File: \"joininfo.c\", Line: 144, PID: 777839\n>>\n>> Thank you for pointing this out. I'm investigating.\n>\n>\n> This is the same issue with [1] and has been just fixed by e0477837ce.\n>\n> [1] https://www.postgresql.org/message-id/flat/CAMbWs4_wJthNtYBL%2BSsebpgF-5L2r5zFFk6xYbS0A78GKOTFHw%40mail.gmail.com\n\nI just came to the same conclusion. Thank you, Richard.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 27 Dec 2023 14:21:08 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failed assertion in joininfo.c, remove_join_clause_from_rels"
}
] |
[
{
"msg_contents": "We had this:\n\n< 2023-12-25 04:06:20.062 MST telsasoft >ERROR: could not open file \"pg_tblspc/16395/PG_16_202307071/16384/121010871\": Input/output error\n< 2023-12-25 04:06:20.062 MST telsasoft >STATEMENT: commit\n< 2023-12-25 04:06:20.062 MST telsasoft >WARNING: AbortTransaction while in COMMIT state\n< 2023-12-25 04:06:20.062 MST telsasoft >PANIC: cannot abort transaction 2737414167, it was already committed\n< 2023-12-25 04:06:20.473 MST >LOG: server process (PID 14678) was terminated by signal 6: Aborted\n\nThe application is a daily cronjob which would've just done:\n\nbegin;\nlo_unlink(); -- the client-side function called from pygresql;\nDELETE FROM tbl WHERE col=%s;\ncommit;\n\nThe table being removed would've been a transient (but not \"temporary\")\ntable created ~1 day prior.\n\nIt's possible that the filesystem had an IO error, but I can't find any\nevidence of that. Postgres is running entirely on zfs, which says:\n\nscan: scrub repaired 0B in 00:07:03 with 0 errors on Mon Dec 25 04:49:07 2023\nerrors: No known data errors\n\nMy main question is why an IO error would cause the DB to abort, rather\nthan raising an ERROR.\n\nThis is pg16 compiled at efa8f6064, runing under centos7. ZFS is 2.2.2,\nbut the pool hasn't been upgraded to use the features new since 2.1.\n\n(gdb) bt\n#0 0x00007fc961089387 in raise () from /lib64/libc.so.6\n#1 0x00007fc96108aa78 in abort () from /lib64/libc.so.6\n#2 0x00000000009438b7 in errfinish (filename=filename@entry=0xac8e20 \"xact.c\", lineno=lineno@entry=1742, funcname=funcname@entry=0x9a6600 <__func__.32495> \"RecordTransactionAbort\") at elog.c:604\n#3 0x000000000054d6ab in RecordTransactionAbort (isSubXact=isSubXact@entry=false) at xact.c:1741\n#4 0x000000000054d7bd in AbortTransaction () at xact.c:2814\n#5 0x000000000054e015 in AbortCurrentTransaction () at xact.c:3415\n#6 0x0000000000804e4e in PostgresMain (dbname=0x12ea840 \"ts\", username=0x12ea828 \"telsasoft\") at postgres.c:4354\n#7 0x000000000077bdd6 in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4465\n#8 BackendStartup (port=0x12e44c0) at postmaster.c:4193\n#9 ServerLoop () at postmaster.c:1783\n#10 0x000000000077ce9a in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x12ad280) at postmaster.c:1467\n#11 0x00000000004ba8b8 in main (argc=3, argv=0x12ad280) at main.c:198\n\n#3 0x000000000054d6ab in RecordTransactionAbort (isSubXact=isSubXact@entry=false) at xact.c:1741\n xid = 2737414167\n rels = 0x94f549 <hash_seq_init+73>\n ndroppedstats = 0\n droppedstats = 0x0\n\n#4 0x000000000054d7bd in AbortTransaction () at xact.c:2814\n is_parallel_worker = false\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 27 Dec 2023 09:02:25 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "cannot abort transaction 2737414167, it was already committed"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 4:02 AM Justin Pryzby <[email protected]> wrote:\n> My main question is why an IO error would cause the DB to abort, rather\n> than raising an ERROR.\n\nIn CommitTransaction() there is a stretch of code beginning s->state =\nTRANS_COMMIT and ending s->state = TRANS_DEFAULT, from which we call\nout to various subsystems' AtEOXact_XXX() functions. There is no way\nto roll back in that state, so anything that throws ERROR from those\nroutines is going to get something much like $SUBJECT. Hmm, we'd know\nwhich exact code path got that EIO from your smoldering core if we'd\nput an explicit critical section there (if we're going to PANIC\nanyway, it might as well not be from a different stack after\nlongjmp()...).\n\nI guess the large object usage isn't directly relevant (that module's\nEOXact stuff seems to be finished before TRANS_COMMIT, but I don't\nknow that code well). Everything later is supposed to be about\nclosing/releasing/cleaning up, and for example smgrDoPendingDeletes()\nreaches code with this relevant comment:\n\n * Note: smgr_unlink must treat deletion failure as a WARNING, not an\n * ERROR, because we've already decided to commit or abort the current\n * xact.\n\nWe don't really have a general ban on ereporting on system call\nfailure, though. We've just singled unlink() out. Only a few lines\nabove that we call DropRelationsAllBuffers(rels, nrels), and that\ncalls smgrnblocks(), and that might need to need to re-open() the\nrelation file to do lseek(SEEK_END), because PostgreSQL itself has no\ntracking of relation size. Hard to say but my best guess is that's\nwhere you might have got your EIO, assuming you dropped the relation\nin this transaction?\n\n> This is pg16 compiled at efa8f6064, runing under centos7. ZFS is 2.2.2,\n> but the pool hasn't been upgraded to use the features new since 2.1.\n\nI've been following recent ZFS stuff from a safe distance as a user.\nAFAIK the extremely hard to hit bug fixed in that very recent release\ndidn't technically require the interesting new feature (namely block\ncloning, though I think that helped people find the root cause after a\nphase of false blame?). Anyway, it had for symptom some bogus zero\nbytes on read, not a spurious EIO.\n\n\n",
"msg_date": "Thu, 28 Dec 2023 11:33:16 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cannot abort transaction 2737414167, it was already committed"
},
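A minimal, self-contained C sketch of the lseek(SEEK_END) probe described above (an illustration written for this summary, not the actual md.c code): the block count of a relation fork is derived from the size of its file, so an open() or lseek() failure at that point surfaces only while the backend is already cleaning up after deciding to commit. The path and the 8192-byte default block size are taken from the report above.

/*
 * Simplified sketch: compute a relation file's block count from its size,
 * the way the md.c storage manager conceptually does via lseek(SEEK_END).
 * Error handling is reduced to returning -1.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLCKSZ 8192             /* default PostgreSQL block size */

static long
file_nblocks(const char *path)
{
    int         fd;
    off_t       end;

    fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;              /* e.g. EIO from a failing device */
    end = lseek(fd, 0, SEEK_END);
    close(fd);
    if (end < 0)
        return -1;
    return (long) (end / BLCKSZ);
}

int
main(void)
{
    const char *path = "pg_tblspc/16395/PG_16_202307071/16384/121010871";

    printf("%s: %ld blocks\n", path, file_nblocks(path));
    return 0;
}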
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> In CommitTransaction() there is a stretch of code beginning s->state =\n> TRANS_COMMIT and ending s->state = TRANS_DEFAULT, from which we call\n> out to various subsystems' AtEOXact_XXX() functions. There is no way\n> to roll back in that state, so anything that throws ERROR from those\n> routines is going to get something much like $SUBJECT. Hmm, we'd know\n> which exact code path got that EIO from your smoldering core if we'd\n> put an explicit critical section there (if we're going to PANIC\n> anyway, it might as well not be from a different stack after\n> longjmp()...).\n\n+1, there's basically no hope of debugging this sort of problem\nas things stand.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Dec 2023 17:42:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cannot abort transaction 2737414167, it was already committed"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 11:33:16AM +1300, Thomas Munro wrote:\n> I guess the large object usage isn't directly relevant (that module's\n> EOXact stuff seems to be finished before TRANS_COMMIT, but I don't\n> know that code well). Everything later is supposed to be about\n> closing/releasing/cleaning up, and for example smgrDoPendingDeletes()\n> reaches code with this relevant comment:\n> \n> * Note: smgr_unlink must treat deletion failure as a WARNING, not an\n> * ERROR, because we've already decided to commit or abort the current\n> * xact.\n> \n> We don't really have a general ban on ereporting on system call\n> failure, though. We've just singled unlink() out. Only a few lines\n> above that we call DropRelationsAllBuffers(rels, nrels), and that\n> calls smgrnblocks(), and that might need to need to re-open() the\n> relation file to do lseek(SEEK_END), because PostgreSQL itself has no\n> tracking of relation size. Hard to say but my best guess is that's\n> where you might have got your EIO, assuming you dropped the relation\n> in this transaction?\n\nYeah. In fact I was confused - this was not lo_unlink().\nThis uses normal tables, so would've done:\n\n\"begin;\"\n\"DROP TABLE IF EXISTS %s\", tablename\n\"DELETE FROM cached_objects WHERE cache_name=%s\", tablename\n\"commit;\"\n\n> > This is pg16 compiled at efa8f6064, runing under centos7. ZFS is 2.2.2,\n> > but the pool hasn't been upgraded to use the features new since 2.1.\n> \n> I've been following recent ZFS stuff from a safe distance as a user.\n> AFAIK the extremely hard to hit bug fixed in that very recent release\n> didn't technically require the interesting new feature (namely block\n> cloning, though I think that helped people find the root cause after a\n> phase of false blame?). Anyway, it had for symptom some bogus zero\n> bytes on read, not a spurious EIO.\n\nThe ZFS bug had to do with bogus bytes which may-or-may-not-be-zero, as\nI understand. The understanding is that the bug was pre-existing but\nbecame more easy to hit in 2.2, and is fixed in 2.2.2 and 2.1.14.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 27 Dec 2023 16:55:34 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cannot abort transaction 2737414167, it was already committed"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 09:02:25AM -0600, Justin Pryzby wrote:\n> We had this:\n> \n> < 2023-12-25 04:06:20.062 MST telsasoft >ERROR: could not open file \"pg_tblspc/16395/PG_16_202307071/16384/121010871\": Input/output error\n> < 2023-12-25 04:06:20.062 MST telsasoft >STATEMENT: commit\n> < 2023-12-25 04:06:20.062 MST telsasoft >WARNING: AbortTransaction while in COMMIT state\n> < 2023-12-25 04:06:20.062 MST telsasoft >PANIC: cannot abort transaction 2737414167, it was already committed\n> < 2023-12-25 04:06:20.473 MST >LOG: server process (PID 14678) was terminated by signal 6: Aborted\n\n> It's possible that the filesystem had an IO error, but I can't find any\n> evidence of that. Postgres is running entirely on zfs, which says:\n\nFYI: the VM which hit this error also just hit:\n\nlog_time | 2024-01-07 05:19:11.611-07\nerror_severity | ERROR\nmessage | could not open file \"pg_tblspc/16395/PG_16_202307071/16384/123270438_vm\": Input/output error\nquery | commit\nlocation | mdopenfork, md.c:663\n\nSince I haven't seen this anywhere else, that's good evidence it's an\nissue with the backing storage (even though the FS/kernel aren't nice\nenough to say so).\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 7 Jan 2024 07:49:48 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cannot abort transaction 2737414167, it was already committed"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 11:42 AM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > In CommitTransaction() there is a stretch of code beginning s->state =\n> > TRANS_COMMIT and ending s->state = TRANS_DEFAULT, from which we call\n> > out to various subsystems' AtEOXact_XXX() functions. There is no way\n> > to roll back in that state, so anything that throws ERROR from those\n> > routines is going to get something much like $SUBJECT. Hmm, we'd know\n> > which exact code path got that EIO from your smoldering core if we'd\n> > put an explicit critical section there (if we're going to PANIC\n> > anyway, it might as well not be from a different stack after\n> > longjmp()...).\n>\n> +1, there's basically no hope of debugging this sort of problem\n> as things stand.\n\nI was reminded of this thread by Justin's other file system snafu thread.\n\nNaively defining a critical section to match the extent of the\nTRANS_COMMIT state doesn't work, as a bunch of code under there uses\npalloc(). That reminds me of the nearby RelationTruncate() thread,\nand there is possibly even some overlap, plus more in this case...\nugh.\n\nHmm, AtEOXact_RelationMap() is one of those steps, but lives just\noutside the crypto-critical-section created by TRANS_COMMIT, though\nhas its own normal CS for logging. I wonder, given that \"updating the\nmap file is effectively commit of the relocation\", why wouldn't it\nhave a variant of the problem solved by DELAY_CHKPT_START for normal\ncommit records, under diabolical scheduling? It's a stretch, but: You\nlog XLOG_RELMAP_UPDATE, a concurrent checkpoint runs with REDO after\nthat record, you crash before/during durable_rename(), and then you\nperform crash recovery. Now your catalog is still using the old\nrelfilenode on the primary, but any replica following along replays\nXLOG_RELMAP_UPDATE and is using the new relfilenode, frozen in time,\nfor queries, while replaying changes to the old relfilenode. Right?\n\n\n",
"msg_date": "Thu, 9 May 2024 17:19:47 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cannot abort transaction 2737414167, it was already committed"
},
{
"msg_contents": "On Thu, May 09, 2024 at 05:19:47PM +1200, Thomas Munro wrote:\n> On Thu, Dec 28, 2023 at 11:42 AM Tom Lane <[email protected]> wrote:\n> > Thomas Munro <[email protected]> writes:\n> > > In CommitTransaction() there is a stretch of code beginning s->state =\n> > > TRANS_COMMIT and ending s->state = TRANS_DEFAULT, from which we call\n> > > out to various subsystems' AtEOXact_XXX() functions. There is no way\n> > > to roll back in that state, so anything that throws ERROR from those\n> > > routines is going to get something much like $SUBJECT. Hmm, we'd know\n> > > which exact code path got that EIO from your smoldering core if we'd\n> > > put an explicit critical section there (if we're going to PANIC\n> > > anyway, it might as well not be from a different stack after\n> > > longjmp()...).\n> >\n> > +1, there's basically no hope of debugging this sort of problem\n> > as things stand.\n> \n> I was reminded of this thread by Justin's other file system snafu thread.\n> \n> Naively defining a critical section to match the extent of the\n> TRANS_COMMIT state doesn't work, as a bunch of code under there uses\n> palloc(). That reminds me of the nearby RelationTruncate() thread,\n> and there is possibly even some overlap, plus more in this case...\n> ugh.\n> \n> Hmm, AtEOXact_RelationMap() is one of those steps, but lives just\n> outside the crypto-critical-section created by TRANS_COMMIT, though\n> has its own normal CS for logging. I wonder, given that \"updating the\n> map file is effectively commit of the relocation\", why wouldn't it\n> have a variant of the problem solved by DELAY_CHKPT_START for normal\n> commit records, under diabolical scheduling? It's a stretch, but: You\n> log XLOG_RELMAP_UPDATE, a concurrent checkpoint runs with REDO after\n> that record, you crash before/during durable_rename(), and then you\n> perform crash recovery.\n\nSee the CheckPointRelationMap() header comment for how relmapper behaves like\nDELAY_CHKPT_START without using that flag. I think its mechanism suffices.\n\n\n",
"msg_date": "Wed, 3 Jul 2024 10:17:49 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cannot abort transaction 2737414167, it was already committed"
}
] |
[
{
"msg_contents": "hi.\nhttps://git.postgresql.org/cgit/postgresql.git/tree/src/test/regress/expected/strings.out#n928\n\nSELECT regexp_substr('abcabcabc', 'a.c');\nSELECT regexp_substr('abcabcabc', 'a.c', 2);\nSELECT regexp_substr('abcabcabc', 'a.c', 1, 3);\nSELECT regexp_substr('abcabcabc', 'a.c', 1, 4) IS NULL AS t;\nSELECT regexp_substr('abcabcabc', 'A.C', 1, 2, 'i');\n\nthey all return 'abc', there are 3 'abc ' in string 'abcabcabc'\nexcept IS NULL query.\nmaybe we can change regexp_substr first argument from \"abcabcabc\" to\n\"abcaXcaYc\".\nso the result would be more easier to understand.\n\n\n",
"msg_date": "Thu, 28 Dec 2023 00:13:14 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "change regexp_substr first argument make tests more easier to\n understand."
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 12:13 AM jian he <[email protected]> wrote:\n>\n> hi.\n> https://git.postgresql.org/cgit/postgresql.git/tree/src/test/regress/expected/strings.out#n928\n>\n> SELECT regexp_substr('abcabcabc', 'a.c');\n> SELECT regexp_substr('abcabcabc', 'a.c', 2);\n> SELECT regexp_substr('abcabcabc', 'a.c', 1, 3);\n> SELECT regexp_substr('abcabcabc', 'a.c', 1, 4) IS NULL AS t;\n> SELECT regexp_substr('abcabcabc', 'A.C', 1, 2, 'i');\n>\n> they all return 'abc', there are 3 'abc ' in string 'abcabcabc'\n> except IS NULL query.\n> maybe we can change regexp_substr first argument from \"abcabcabc\" to\n> \"abcaXcaYc\".\n> so the result would be more easier to understand.\n\nanyway here is the minor patch to change string from \"abcabcabc\" to\n\"abcaXcaYc\".",
"msg_date": "Thu, 28 Dec 2023 15:17:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: change regexp_substr first argument make tests more easier to\n understand."
},
{
"msg_contents": "Hi,\n\nIf I understand correctly, the problem is that it's not clear which of\nthe 'abc' substrings is matched/returned by the function, right?\n\nI wonder if this is a problem only for understanding the test, or if it\nmakes the tests a bit weaker. I mean, what if the function returns the\nwrong substring? How would we know?\n\nAlso, if we tweak this, shouldn't we tweak also the regext_instr() calls\na bit earlier in the test script?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jul 2024 23:49:16 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: change regexp_substr first argument make tests more easier to\n understand."
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 5:49 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> Hi,\n>\n> If I understand correctly, the problem is that it's not clear which of\n> the 'abc' substrings is matched/returned by the function, right?\n>\n> I wonder if this is a problem only for understanding the test, or if it\n> makes the tests a bit weaker. I mean, what if the function returns the\n> wrong substring? How would we know?\n>\n\nthis is for understanding the test.\npersonally, sometimes, I feel the documentation is too dry, hard to follow.\nso i can based on regress tests better understand the documentation.\nthat was my intention for the changes.\n\n\nwe have more sophisticated regex test at\nhttps://git.postgresql.org/cgit/postgresql.git/tree/src/test/modules/test_regex\n\n> Also, if we tweak this, shouldn't we tweak also the regext_instr() calls\n> a bit earlier in the test script?\n>\nsure.\nplease check attached.",
"msg_date": "Fri, 26 Jul 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: change regexp_substr first argument make tests more easier to\n understand."
},
{
"msg_contents": "Hi everybody\n\nCurrent tests with regexp_instr() and regexp_substr() with string \n'abcabcabc' are really unreadable and you would spend time to understand \nthat happens in these tests and if they are really correct. I'd better \nchange them into \"abcdefghi\" just like in query\n\n SELECT regexp_substr('abcdefghi', 'd.q') IS NULL AS t;\n\nRegards\n\nIlia Evdokimov,\nTantor Labs LLC.\n\n\n\n\n\n",
"msg_date": "Tue, 13 Aug 2024 12:40:00 +0300",
"msg_from": "Ilia Evdokimov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: change regexp_substr first argument make tests more easier to\n understand."
},
{
"msg_contents": "Ilia Evdokimov <[email protected]> writes:\n> Current tests with regexp_instr() and regexp_substr() with string \n> 'abcabcabc' are really unreadable and you would spend time to understand \n> that happens in these tests and if they are really correct. I'd better \n> change them into \"abcdefghi\" just like in query\n\n> SELECT regexp_substr('abcdefghi', 'd.q') IS NULL AS t;\n\nOn looking more closely at these test cases, I think the point of them\nis exactly to show the behavior of the functions with multiple copies\nof the target substring. Thus, what Jian is proposing breaks the\ntests: it's no longer perfectly clear whether the result is because\nthe function did what we expect, or because the pattern failed to\nmatch anywhere else. (Sure, \"a.c\" *should* match \"aXc\", but if it\ndidn't, you wouldn't discover that from this test.) What Ilia\nproposes would break them worse.\n\nI think we should just reject this patch, or at least reject the\nparts of it that change existing test cases. I have no opinion\nabout whether the new test cases add anything useful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Sep 2024 13:45:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: change regexp_substr first argument make tests more easier to\n understand."
}
] |
[
{
"msg_contents": "Hi,\n\nwe had a conversation with a customer about security compliance a while\nago and one thing they were concerned about was avoiding brute-force\nattemps for remote password guessing. This is should not be a big\nconcern if reasonably secure passwords are used and increasing SCRAM\niteration count can also help, but generally auth_delay is recommended\nfor this if there are concerns.\n\nThis patch adds exponential backoff so that one can choose a small\ninitial value which gets doubled for each failed authentication attempt\nuntil a maximum wait time (which is 10s by default, but can be disabled\nif so desired).\n\nCurrently, this patch tracks remote hosts but not users, the idea being\nthat a remote attacker likely tries several users from a particular\nhost, but this could in theory be extended to users if there are\nconcerns.\n\nThe patch is partly based on an earlier, more ambitious attempt at\nextending auth_delay by 成之焕 from a year ago:\nhttps://postgr.es/m/AHwAxACqIwIVOEhs5YejpqoG.1.1668569845751.Hmail.zhcheng@ceresdata.com\n\n\nMichael",
"msg_date": "Wed, 27 Dec 2023 17:19:54 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Exponential backoff for auth_delay"
},
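A minimal standalone sketch of the doubling scheme described above; the names and default values here are illustrative and not necessarily those used in the attached patch. The delay starts at the configured base value, doubles on every consecutive authentication failure from the same host, and is clamped to a maximum.

/*
 * Illustrative sketch of per-host exponential backoff (values in
 * milliseconds); not taken from the patch itself.
 */
#include <stdio.h>

static double auth_delay_milliseconds = 1000.0;         /* base delay */
static double auth_delay_max_milliseconds = 10000.0;    /* upper bound */

typedef struct HostDelay
{
    const char *remote_host;
    double      sleep_time;     /* current delay, 0 = no failure yet */
} HostDelay;

/* Return the delay to apply after one more failed authentication. */
static double
next_delay(HostDelay *h)
{
    if (h->sleep_time <= 0)
        h->sleep_time = auth_delay_milliseconds;
    else
        h->sleep_time *= 2;

    if (h->sleep_time > auth_delay_max_milliseconds)
        h->sleep_time = auth_delay_max_milliseconds;

    return h->sleep_time;
}

int
main(void)
{
    HostDelay   h = {"192.0.2.10", 0.0};

    for (int i = 1; i <= 6; i++)
        printf("failure %d: sleep %.0f ms\n", i, next_delay(&h));
    return 0;
}

With these example settings the successive delays are 1000, 2000, 4000, 8000, 10000 and 10000 ms; a successful authentication would reset sleep_time to zero.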
{
"msg_contents": "Hi,\n\nOn Wed, Dec 27, 2023 at 05:19:54PM +0100, Michael Banck wrote:\n> This patch adds exponential backoff so that one can choose a small\n> initial value which gets doubled for each failed authentication attempt\n> until a maximum wait time (which is 10s by default, but can be disabled\n> if so desired).\n\nHere is a new version, hopefully fixing warnings in the documentation\nbuild, per cfbot.\n\n\nMichael",
"msg_date": "Thu, 4 Jan 2024 08:30:36 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "At 2024-01-04 08:30:36 +0100, [email protected] wrote:\n>\n> +typedef struct AuthConnRecord\n> +{\n> +\tchar\t\tremote_host[NI_MAXHOST];\n> +\tbool\t\tused;\n> +\tdouble\t\tsleep_time;\t\t/* in milliseconds */\n> +} AuthConnRecord;\n\nDo we really need a \"used\" field here? You could avoid it by setting\nremote_host[0] = '\\0' in cleanup_conn_record.\n\n> static void\n> auth_delay_checks(Port *port, int status)\n> {\n> +\tdouble\t\tdelay;\n\nI would just initialise this to auth_delay_milliseconds here, instead of\nthe harder-to-notice \"else\" below.\n\n> @@ -43,8 +69,150 @@ auth_delay_checks(Port *port, int status)\n> \t */\n> \tif (status != STATUS_OK)\n> \t{\n> -\t\tpg_usleep(1000L * auth_delay_milliseconds);\n> +\t\tif (auth_delay_exp_backoff)\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * Exponential backoff per remote host.\n> +\t\t\t */\n> +\t\t\tdelay = record_failed_conn_auth(port);\n> +\t\t\tif (auth_delay_max_seconds > 0)\n> +\t\t\t\tdelay = Min(delay, 1000L * auth_delay_max_seconds);\n> +\t\t}\n\nI would make this comment more specific, maybe something like \"Delay by\n2^n seconds after each authentication failure from a particular host,\nwhere n is the number of consecutive authentication failures\".\n\nIt's slightly discordant to see the new delay being returned by a\nfunction named \"record_<something>\" (rather than \"calculate_delay\" or\nsimilar). Maybe a name like \"backoff_after_failed_auth\" would be better?\nOr \"increase_delay_on_auth_fail\"?\n\n> +static double\n> +record_failed_conn_auth(Port *port)\n> +{\n> +\tAuthConnRecord *acr = NULL;\n> +\tint\t\t\tj = -1;\n> +\n> +\tacr = find_conn_record(port->remote_host, &j);\n> +\n> +\tif (!acr)\n> +\t{\n> +\t\tif (j == -1)\n> +\n> +\t\t\t/*\n> +\t\t\t * No free space, MAX_CONN_RECORDS reached. Wait as long as the\n> +\t\t\t * largest delay for any remote host.\n> +\t\t\t */\n> +\t\t\treturn find_conn_max_delay();\n\nIn this extraordinary situation (where there are lots of hosts with lots\nof authentication failures), why not delay by auth_delay_max_seconds\nstraightaway, especially when the default is only 10s? 
I don't see much\npoint in coordinating the delay between fifty known hosts and an unknown\nnumber of others.\n\n> +\t\telog(DEBUG1, \"new connection: %s, index: %d\", acr->remote_host, j);\n\nI think this should be removed, but if you want to leave it in, the\nmessage should be more specific about what this is actually about, and\nwhere the message is from, so as to not confuse debug-log readers.\n\n(The same applies to the other debug messages.)\n\n> +static AuthConnRecord *\n> +find_conn_record(char *remote_host, int *free_index)\n> +{\n> +\tint\t\t\ti;\n> +\n> +\t*free_index = -1;\n> +\tfor (i = 0; i < MAX_CONN_RECORDS; i++)\n> +\t{\n> +\t\tif (!acr_array[i].used)\n> +\t\t{\n> +\t\t\tif (*free_index == -1)\n> +\t\t\t\t/* record unused element */\n> +\t\t\t\t*free_index = i;\n> +\t\t\tcontinue;\n> +\t\t}\n> +\t\tif (strcmp(acr_array[i].remote_host, remote_host) == 0)\n> +\t\t\treturn &acr_array[i];\n> +\t}\n> +\n> +\treturn NULL;\n> +}\n\nIt's not a big deal, but to be honest, I would much prefer to (get rid\nof used, as suggested earlier, and) have separate find_acr_for_host()\nand find_free_acr() functions.\n\n> +static void\n> +record_conn_failure(AuthConnRecord *acr)\n> +{\n> +\tif (acr->sleep_time == 0)\n> +\t\tacr->sleep_time = (double) auth_delay_milliseconds;\n> +\telse\n> +\t\tacr->sleep_time *= 2;\n> +}\n\nI would just roll this function into record_failed_conn_auth (or\nwhatever it's named), but if you want to keep it a separate function, it\nshould at least have a name that's sufficiently different from\nrecord_failed_conn_auth.\n\nIn terms of the logic, it would have been slightly clearer to store the\nnumber of failures and calculate the delay, but it's easier to double\nthe sleep time that way you've written it. I think it's fine.\n\nIt's worth noting that there is no time-based reset of the delay with\nthis feature, which I think is something that people might expect to go\nhand-in-hand with exponential backoff. I think that's probably OK too.\n\n> +static void\n> +auth_delay_shmem_startup(void)\n> +{\n> +\tSize\t\trequired;\n> +\tbool\t\tfound;\n> +\n> +\tif (shmem_startup_next)\n> +\t\tshmem_startup_next();\n> +\n> +\trequired = sizeof(AuthConnRecord) * MAX_CONN_RECORDS;\n> +\tacr_array = ShmemInitStruct(\"Array of AuthConnRecord\", required, &found);\n> +\tif (found)\n> +\t\t/* this should not happen ? */\n> +\t\telog(DEBUG1, \"variable acr_array already exists\");\n> +\t/* all fileds are set to 0 */\n> +\tmemset(acr_array, 0, required);\n> }\n\nI think you can remove the elog and just do the memset if (!found). Also\nif you're changing it anyway, I'd suggest something like \"total_size\"\ninstead of \"required\".\n\n> +\tDefineCustomBoolVariable(\"auth_delay.exp_backoff\",\n> +\t\t\t\t\t\t\t \"Exponential backoff for failed connections, per remote host\",\n> +\t\t\t\t\t\t\t NULL,\n> +\t\t\t\t\t\t\t &auth_delay_exp_backoff,\n> +\t\t\t\t\t\t\t false,\n> +\t\t\t\t\t\t\t PGC_SIGHUP,\n> +\t\t\t\t\t\t\t 0,\n> +\t\t\t\t\t\t\t NULL,\n> +\t\t\t\t\t\t\t NULL,\n> +\t\t\t\t\t\t\t NULL);\n\nMaybe \"Double the delay after each authentication failure from a\nparticular host\". 
(Note: authentication failed, not connection.)\n\nI would also mildly prefer to spell out \"exponential_backoff\" (but leave\nauth_delay_exp_backoff as-is).\n\n> +\tDefineCustomIntVariable(\"auth_delay.max_seconds\",\n> +\t\t\t\t\t\t\t\"Maximum seconds to wait when login fails during exponential backoff\",\n> +\t\t\t\t\t\t\tNULL,\n> +\t\t\t\t\t\t\t&auth_delay_max_seconds,\n> +\t\t\t\t\t\t\t10,\n> +\t\t\t\t\t\t\t0, INT_MAX,\n> +\t\t\t\t\t\t\tPGC_SIGHUP,\n> +\t\t\t\t\t\t\tGUC_UNIT_S,\n> +\t\t\t\t\t\t\tNULL, NULL, NULL);\n> +\n\nMaybe just \"Maximum delay when exponential backoff is enabled\".\n\n(Parameter indentation doesn't match the earlier block.)\n\nI'm not able to make up my mind if I think 10s is a good default or not.\nIn practice, it means that after the first three consecutive failures,\nwe'll delay by 10s for every subsequent failure. That sounds OK. But is\nis much more useful than, for example, delaying the first three failures\nby auth_delay_milliseconds and then jumping straight to max_seconds?\n\nI can't really imagine wanting to increase max_seconds to, say, 128 and\nkeep a bunch of backends sleeping while someone's trying to brute-force\na password. And with a reasonably short max_seconds, I'm not sure if\nhaving the backoff be _exponential_ is particularly important.\n\nOr maybe because this is a contrib module, we don't have to think about\nit to that extent?\n\n> diff --git a/doc/src/sgml/auth-delay.sgml b/doc/src/sgml/auth-delay.sgml\n> index 0571f2a99d..2ca9528011 100644\n> --- a/doc/src/sgml/auth-delay.sgml\n> +++ b/doc/src/sgml/auth-delay.sgml\n> @@ -16,6 +16,17 @@\n> connection slots.\n> </para>\n> \n> + <para>\n> + It is optionally possible to let <filename>auth_delay</filename> wait longer\n> + for each successive authentication failure from a particular remote host, if\n> + the configuration parameter <varname>auth_delay.exp_backoff</varname> is\n> + active. Once an authentication succeeded from a remote host, the\n> + authentication delay is reset to the value of\n> + <varname>auth_delay.milliseconds</varname> for this host. The parameter\n> + <varname>auth_delay.max_seconds</varname> sets an upper bound for the delay\n> + in this case.\n> + </para>\n\nHow about something like this…\n\n If you enable exponential_backoff, auth_delay will double the delay\n after each consecutive authentication failure from a particular\n host, up to the given max_seconds (default: 10s). If the host\n authenticates successfully, the delay is reset.\n\n> + <varlistentry>\n> + <term>\n> + <varname>auth_delay.max_seconds</varname> (<type>integer</type>)\n> + <indexterm>\n> + <primary><varname>auth_delay.max_seconds</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + How many seconds to wait at most if exponential backoff is active.\n> + Setting this parameter to 0 disables it. The default is 10 seconds.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nI suggest \"The maximum delay, in seconds, when exponential backoff is\nenabled.\"\n\n-- Abhijit\n\n\n",
"msg_date": "Tue, 16 Jan 2024 12:00:49 +0530",
"msg_from": "Abhijit Menon-Sen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
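For illustration, the helper split suggested in the review above could look roughly like this; the sketch was written for this summary, is not code from any posted version of the patch, and assumes an empty remote_host string marks a free slot.

/*
 * Sketch of separate lookup helpers, with remote_host[0] == '\0' as the
 * "free slot" marker instead of a separate "used" flag.
 */
#include <string.h>

#define MAX_CONN_RECORDS 50
#define NI_MAXHOST 1025

typedef struct AuthConnRecord
{
    char        remote_host[NI_MAXHOST];
    double      sleep_time;     /* in milliseconds */
} AuthConnRecord;

static AuthConnRecord acr_array[MAX_CONN_RECORDS];

/* Return the record for an already-seen host, or NULL. */
static AuthConnRecord *
find_acr_for_host(const char *remote_host)
{
    for (int i = 0; i < MAX_CONN_RECORDS; i++)
    {
        if (acr_array[i].remote_host[0] != '\0' &&
            strcmp(acr_array[i].remote_host, remote_host) == 0)
            return &acr_array[i];
    }
    return NULL;
}

/* Return an unused slot, or NULL if the table is full. */
static AuthConnRecord *
find_free_acr(void)
{
    for (int i = 0; i < MAX_CONN_RECORDS; i++)
    {
        if (acr_array[i].remote_host[0] == '\0')
            return &acr_array[i];
    }
    return NULL;
}

int
main(void)
{
    AuthConnRecord *acr = find_acr_for_host("192.0.2.10");

    if (acr == NULL)
        acr = find_free_acr();
    if (acr != NULL)
        strncpy(acr->remote_host, "192.0.2.10", NI_MAXHOST - 1);
    return 0;
}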
{
"msg_contents": "Hi,\n\nmany thanks for the review!\n\nI went through your comments (a lot of which pertained to the original\nlarger patch I took code from), attached is a reworked version 2.\n\nOther changes:\n\n1. Ignore STATUS_EOF, this led to auth_delay being applied twice (maybe\ndue to the gss/kerberos auth psql is trying by default? Is that legit\nand should this change be reverted?) - i.e. handle STATUS_OK and\nSTATUS_ERROR explicitly.\n\n2. Guard ShmemInitStruct() with LWLockAcquire(AddinShmemInitLock,\nLW_EXCLUSIVE) / LWLockRelease(AddinShmemInitLock), as is done in\npg_prewarm and pg_stat_statements as well.\n\n3. Added an additional paragraph discussing the value of\nauth_delay.milliseconds when auth_delay.exponential_backoff is on, see\nbelow.\n\nI wonder whether maybe auth_delay.max_seconds should either be renamed\nto auth_delay.exponential_backoff_max_seconds (but then it is rather\nlong) in order to make it clearer it only applies in that context or\nalternatively just apply to auth_delay.milliseconds as well (though that\nwould be somewhat weird).\n\nFurther comments to your comments:\n\nOn Tue, Jan 16, 2024 at 12:00:49PM +0530, Abhijit Menon-Sen wrote:\n> At 2024-01-04 08:30:36 +0100, [email protected] wrote:\n> >\n> > +typedef struct AuthConnRecord\n> > +{\n> > +\tchar\t\tremote_host[NI_MAXHOST];\n> > +\tbool\t\tused;\n> > +\tdouble\t\tsleep_time;\t\t/* in milliseconds */\n> > +} AuthConnRecord;\n> \n> Do we really need a \"used\" field here? You could avoid it by setting\n> remote_host[0] = '\\0' in cleanup_conn_record.\n\nOk, removed.\n\n> > static void\n> > auth_delay_checks(Port *port, int status)\n> > {\n> > +\tdouble\t\tdelay;\n> \n> I would just initialise this to auth_delay_milliseconds here, instead of\n> the harder-to-notice \"else\" below.\n \nDone.\n\n> > @@ -43,8 +69,150 @@ auth_delay_checks(Port *port, int status)\n> > \t */\n> > \tif (status != STATUS_OK)\n> > \t{\n> > -\t\tpg_usleep(1000L * auth_delay_milliseconds);\n> > +\t\tif (auth_delay_exp_backoff)\n> > +\t\t{\n> > +\t\t\t/*\n> > +\t\t\t * Exponential backoff per remote host.\n> > +\t\t\t */\n> > +\t\t\tdelay = record_failed_conn_auth(port);\n> > +\t\t\tif (auth_delay_max_seconds > 0)\n> > +\t\t\t\tdelay = Min(delay, 1000L * auth_delay_max_seconds);\n> > +\t\t}\n> \n> I would make this comment more specific, maybe something like \"Delay by\n> 2^n seconds after each authentication failure from a particular host,\n> where n is the number of consecutive authentication failures\".\n\nDone.\n \n> It's slightly discordant to see the new delay being returned by a\n> function named \"record_<something>\" (rather than \"calculate_delay\" or\n> similar). Maybe a name like \"backoff_after_failed_auth\" would be better?\n> Or \"increase_delay_on_auth_fail\"?\n\nI called it increase_delay_after_failed_conn_auth() now.\n \n> > +static double\n> > +record_failed_conn_auth(Port *port)\n> > +{\n> > +\tAuthConnRecord *acr = NULL;\n> > +\tint\t\t\tj = -1;\n> > +\n> > +\tacr = find_conn_record(port->remote_host, &j);\n> > +\n> > +\tif (!acr)\n> > +\t{\n> > +\t\tif (j == -1)\n> > +\n> > +\t\t\t/*\n> > +\t\t\t * No free space, MAX_CONN_RECORDS reached. Wait as long as the\n> > +\t\t\t * largest delay for any remote host.\n> > +\t\t\t */\n> > +\t\t\treturn find_conn_max_delay();\n> \n> In this extraordinary situation (where there are lots of hosts with lots\n> of authentication failures), why not delay by auth_delay_max_seconds\n> straightaway, especially when the default is only 10s? 
I don't see much\n> point in coordinating the delay between fifty known hosts and an unknown\n> number of others.\n\nI was a bit worried about legitimate users suffering here if (for some\nreason) a lot of different hosts try to guess passwords, but only once\nor twice or something. But I have changed it now as you suggested as\nthat makes it simpler and I guess the problem I mentioned above is\nrather contrived.\n\n> > +\t\telog(DEBUG1, \"new connection: %s, index: %d\", acr->remote_host, j);\n> \n> I think this should be removed, but if you want to leave it in, the\n> message should be more specific about what this is actually about, and\n> where the message is from, so as to not confuse debug-log readers.\n\nI left it in but mentioned auth_delay in it now. I wonder though whether\nthis might be a useful message to have at some more standard level like\nINFO?\n \n> (The same applies to the other debug messages.)\n\nThose are all gone.\n \n> > +static AuthConnRecord *\n> > +find_conn_record(char *remote_host, int *free_index)\n> > +{\n> > +\tint\t\t\ti;\n> > +\n> > +\t*free_index = -1;\n> > +\tfor (i = 0; i < MAX_CONN_RECORDS; i++)\n> > +\t{\n> > +\t\tif (!acr_array[i].used)\n> > +\t\t{\n> > +\t\t\tif (*free_index == -1)\n> > +\t\t\t\t/* record unused element */\n> > +\t\t\t\t*free_index = i;\n> > +\t\t\tcontinue;\n> > +\t\t}\n> > +\t\tif (strcmp(acr_array[i].remote_host, remote_host) == 0)\n> > +\t\t\treturn &acr_array[i];\n> > +\t}\n> > +\n> > +\treturn NULL;\n> > +}\n> \n> It's not a big deal, but to be honest, I would much prefer to (get rid\n> of used, as suggested earlier, and) have separate find_acr_for_host()\n> and find_free_acr() functions.\n\nDone.\n \n> > +static void\n> > +record_conn_failure(AuthConnRecord *acr)\n> > +{\n> > +\tif (acr->sleep_time == 0)\n> > +\t\tacr->sleep_time = (double) auth_delay_milliseconds;\n> > +\telse\n> > +\t\tacr->sleep_time *= 2;\n> > +}\n> \n> I would just roll this function into record_failed_conn_auth (or\n> whatever it's named), \n\nDone.\n\n> In terms of the logic, it would have been slightly clearer to store the\n> number of failures and calculate the delay, but it's easier to double\n> the sleep time that way you've written it. I think it's fine.\n\nI kept it as-is for now.\n \n> It's worth noting that there is no time-based reset of the delay with\n> this feature, which I think is something that people might expect to go\n> hand-in-hand with exponential backoff. I think that's probably OK too.\n\nYou mean something like \"after 5 minutes, reset the delay to 0 again\"? I\nagree that this would probably be useful, but would also make the change\nmore complex.\n\n> > +static void\n> > +auth_delay_shmem_startup(void)\n> > +{\n> > +\tSize\t\trequired;\n> > +\tbool\t\tfound;\n> > +\n> > +\tif (shmem_startup_next)\n> > +\t\tshmem_startup_next();\n> > +\n> > +\trequired = sizeof(AuthConnRecord) * MAX_CONN_RECORDS;\n> > +\tacr_array = ShmemInitStruct(\"Array of AuthConnRecord\", required, &found);\n> > +\tif (found)\n> > +\t\t/* this should not happen ? */\n> > +\t\telog(DEBUG1, \"variable acr_array already exists\");\n> > +\t/* all fileds are set to 0 */\n> > +\tmemset(acr_array, 0, required);\n> > }\n> \n> I think you can remove the elog and just do the memset if (!found). 
Also\n> if you're changing it anyway, I'd suggest something like \"total_size\"\n> instead of \"required\".\n\nDone.\n \n> > +\tDefineCustomBoolVariable(\"auth_delay.exp_backoff\",\n> > +\t\t\t\t\t\t\t \"Exponential backoff for failed connections, per remote host\",\n> > +\t\t\t\t\t\t\t NULL,\n> > +\t\t\t\t\t\t\t &auth_delay_exp_backoff,\n> > +\t\t\t\t\t\t\t false,\n> > +\t\t\t\t\t\t\t PGC_SIGHUP,\n> > +\t\t\t\t\t\t\t 0,\n> > +\t\t\t\t\t\t\t NULL,\n> > +\t\t\t\t\t\t\t NULL,\n> > +\t\t\t\t\t\t\t NULL);\n> \n> Maybe \"Double the delay after each authentication failure from a\n> particular host\". (Note: authentication failed, not connection.)\n\nDone.\n \n> I would also mildly prefer to spell out \"exponential_backoff\" (but leave\n> auth_delay_exp_backoff as-is).\n\nDone.\n\n> > +\tDefineCustomIntVariable(\"auth_delay.max_seconds\",\n> > +\t\t\t\t\t\t\t\"Maximum seconds to wait when login fails during exponential backoff\",\n> > +\t\t\t\t\t\t\tNULL,\n> > +\t\t\t\t\t\t\t&auth_delay_max_seconds,\n> > +\t\t\t\t\t\t\t10,\n> > +\t\t\t\t\t\t\t0, INT_MAX,\n> > +\t\t\t\t\t\t\tPGC_SIGHUP,\n> > +\t\t\t\t\t\t\tGUC_UNIT_S,\n> > +\t\t\t\t\t\t\tNULL, NULL, NULL);\n> > +\n> \n> Maybe just \"Maximum delay when exponential backoff is enabled\".\n\nDone.\n \n> (Parameter indentation doesn't match the earlier block.)\n\nI noticed that as well, but pgindent keeps changing it back to this, not\nsure why...\n \n> I'm not able to make up my mind if I think 10s is a good default or not.\n> In practice, it means that after the first three consecutive failures,\n> we'll delay by 10s for every subsequent failure. That sounds OK. But is\n> is much more useful than, for example, delaying the first three failures\n> by auth_delay_milliseconds and then jumping straight to max_seconds?\n\nWhat I had in mind is that admins would lower auth_delay.milliseconds to\nsomething like 100 or 125 when exponential_backoff is on, so that the\nfirst few (possibley honest) auth failures do not get an annoying 1\nseconds penalty, but later ones then wil. In that case, 10 seconds is\nprobably ok cause you'd need more than a handful of auth failures.\n\nI added a paragraph to the documentation to this end.\n \n> I can't really imagine wanting to increase max_seconds to, say, 128 and\n> keep a bunch of backends sleeping while someone's trying to brute-force\n> a password. And with a reasonably short max_seconds, I'm not sure if\n> having the backoff be _exponential_ is particularly important.\n> \n> Or maybe because this is a contrib module, we don't have to think about\n> it to that extent?\n\nWell, not sure. I think something like 10 seconds should be fine for\nmost brute-force attacks in practise, and it is configurable (and turned\noff by default).\n \n> > diff --git a/doc/src/sgml/auth-delay.sgml b/doc/src/sgml/auth-delay.sgml\n> > index 0571f2a99d..2ca9528011 100644\n> > --- a/doc/src/sgml/auth-delay.sgml\n> > +++ b/doc/src/sgml/auth-delay.sgml\n> > @@ -16,6 +16,17 @@\n> > connection slots.\n> > </para>\n> > \n> > + <para>\n> > + It is optionally possible to let <filename>auth_delay</filename> wait longer\n> > + for each successive authentication failure from a particular remote host, if\n> > + the configuration parameter <varname>auth_delay.exp_backoff</varname> is\n> > + active. Once an authentication succeeded from a remote host, the\n> > + authentication delay is reset to the value of\n> > + <varname>auth_delay.milliseconds</varname> for this host. 
The parameter\n> > + <varname>auth_delay.max_seconds</varname> sets an upper bound for the delay\n> > + in this case.\n> > + </para>\n> \n> How about something like this…\n> \n> If you enable exponential_backoff, auth_delay will double the delay\n> after each consecutive authentication failure from a particular\n> host, up to the given max_seconds (default: 10s). If the host\n> authenticates successfully, the delay is reset.\n\nDone, mostly.\n \n> > + <varlistentry>\n> > + <term>\n> > + <varname>auth_delay.max_seconds</varname> (<type>integer</type>)\n> > + <indexterm>\n> > + <primary><varname>auth_delay.max_seconds</varname> configuration parameter</primary>\n> > + </indexterm>\n> > + </term>\n> > + <listitem>\n> > + <para>\n> > + How many seconds to wait at most if exponential backoff is active.\n> > + Setting this parameter to 0 disables it. The default is 10 seconds.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> \n> I suggest \"The maximum delay, in seconds, when exponential backoff is\n> enabled.\"\n\nDone.\n\n\nMichael",
"msg_date": "Fri, 19 Jan 2024 15:08:36 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "On Fri, Jan 19, 2024 at 03:08:36PM +0100, Michael Banck wrote:\n> I went through your comments (a lot of which pertained to the original\n> larger patch I took code from), attached is a reworked version 2.\n\nOops, we are supposed to be at version 3, attached.\n\n\nMichael",
"msg_date": "Fri, 19 Jan 2024 15:16:24 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "At 2024-01-19 15:08:36 +0100, [email protected] wrote:\n>\n> I wonder whether maybe auth_delay.max_seconds should either be renamed\n> to auth_delay.exponential_backoff_max_seconds (but then it is rather\n> long) in order to make it clearer it only applies in that context or\n> alternatively just apply to auth_delay.milliseconds as well (though\n> that would be somewhat weird).\n\nI think it's OK as-is. The description/docs are pretty clear.\n\n> I wonder though whether this might be a useful message to have at some\n> more standard level like INFO?\n\nI don't have a strong opinion about this, but I suspect anyone who is\nannoyed enough by repeated authentication failures to use auth_delay\nwill also be happy to have less noise in the logs about it.\n\n> You mean something like \"after 5 minutes, reset the delay to 0 again\"?\n> I agree that this would probably be useful, but would also make the\n> change more complex.\n\nYes, that's the kind of thing I meant.\n\nI agree that it would make this patch more complex, and I don't think\nit's necessary to implement. However, since it's a feature that seems to\ngo hand-in-hand with exponential backoff in general, it _may_ be good to\nmention in the docs that the sleep time for a host is reset only by\nsuccessful authentication, not by any timeout. Not sure.\n\n> What I had in mind is that admins would lower auth_delay.milliseconds to\n> something like 100 or 125 when exponential_backoff is on\n\nAh, that makes a lot of sense. Thanks for explaining.\n\nYour new v3 patch looks fine to me. I'm marking it as ready for\ncommitter.\n\n-- Abhijit\n\n\n",
"msg_date": "Fri, 19 Jan 2024 20:11:16 +0530",
"msg_from": "Abhijit Menon-Sen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "Hi,\n\nThanks for the patch. I took a closer look at v3, so let me share some\nreview comments. Please push back if you happen to disagree with some of\nit, some of this is going to be more a matter of personal preference.\n\n\n1) I think it's a bit weird to have two options specifying amount of\ntime, but one is in seconds and one in milliseconds. Won't that be\nunnecessarily confusing? Could we do both in milliseconds?\n\n\n2) The C code defines the GUC as auth_delay.exponential_backoff, but the\nSGML docs say <varname>auth_delay.exp_backoff</varname>.\n\n\n3) Do we actually need the exponential_backoff GUC? Wouldn't it be\nsufficient to have the maximum value, and if it's -1 treat that as no\nbackoff?\n\n\n4) I think the SGML docs should more clearly explain that the delay is\ninitialized to auth_delay.milliseconds, and reset to this value after\nsuccessful authentication. The wording kinda implies that, but it's not\nquite clear I think.\n\n\n4) I've been really confused by the change from\n\n if (status != STATUS_OK)\n\n to\n\n if (status == STATUS_ERROR)\n\nin auth_delay_checks, until I realized those two codes are not the only\nones, and we intentionally ignore STATUS_EOF. I think it'd be good to\nmention that in a comment, it's not quite obvious (I only realized it\nbecause the e-mail mentioned it).\n\n\n5) I kinda like the custom that functions in a contrib module start with\na clear common prefix, say auth_delay_ in this case. Matter of personal\npreference, ofc.\n\n\n6) I'm a bit skeptical about some acr_array details. Firstly, why would\n50 entries be enough? Seems like a pretty low and arbitrary number ...\nAlso, what happens if someone attempts to authenticate, fails to do so,\nand then disconnects and never tries again? Or just changes IP? Seems\nlike the entry will remain in the array forever, no?\n\nSeems like a way to cause a \"limited\" DoS - do auth failure from 50\ndifferent hosts, to fill the table, and now everyone has to wait the\nmaximum amount of time (if they fail to authenticate).\n\nI think it'd be good to consider:\n\n- Making the acr_array a hash table, and larger than 50 entries (I\nwonder if that should be dynamic / customizable by GUC?).\n\n- Make sure the entries are eventually expired, based on time (for\nexample after 2*max_delay?).\n\n- It would be a good idea to log something when we get into the \"full\ntable\" and start delaying everyone by max_delay_seconds. (How would\nanyone know that's what's happening, seems rather confusing.)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 21 Feb 2024 22:26:11 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
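As a purely illustrative aside on the hash-table suggestion above: a real shared-memory implementation would presumably build on PostgreSQL's own hash facilities, but the idea can be sketched with a toy open-addressing table keyed on the remote host string. Everything below is made up for illustration and is not from the patch.

/*
 * Toy open-addressing table keyed on the host name.  A slot with an empty
 * remote_host is free; a lookup either finds the existing record or claims
 * the first free slot along the probe sequence.
 */
#include <string.h>

#define ACR_TABLE_SIZE 256      /* illustrative size, larger than 50 */
#define NI_MAXHOST 1025

typedef struct AuthConnRecord
{
    char        remote_host[NI_MAXHOST];
    double      sleep_time;     /* in milliseconds */
} AuthConnRecord;

static AuthConnRecord acr_table[ACR_TABLE_SIZE];

static unsigned int
host_hash(const char *host)
{
    unsigned int h = 5381;

    while (*host)
        h = h * 33 + (unsigned char) *host++;
    return h % ACR_TABLE_SIZE;
}

/* Find the record for a host, or claim an empty slot for it. */
static AuthConnRecord *
acr_lookup(const char *host)
{
    unsigned int start = host_hash(host);

    for (int probe = 0; probe < ACR_TABLE_SIZE; probe++)
    {
        AuthConnRecord *acr = &acr_table[(start + probe) % ACR_TABLE_SIZE];

        if (acr->remote_host[0] == '\0')
        {
            strncpy(acr->remote_host, host, NI_MAXHOST - 1);
            return acr;
        }
        if (strcmp(acr->remote_host, host) == 0)
            return acr;
    }
    return NULL;                /* table completely full */
}

int
main(void)
{
    return acr_lookup("192.0.2.10") == NULL;
}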
{
"msg_contents": "Hi,\n\nOn Wed, Feb 21, 2024 at 10:26:11PM +0100, Tomas Vondra wrote:\n> Thanks for the patch. I took a closer look at v3, so let me share some\n> review comments. Please push back if you happen to disagree with some of\n> it, some of this is going to be more a matter of personal preference.\n\nThanks! As my patch was based on a previous patch, some of decisions\nwere carry-overs I am not overly attached to.\n \n> 1) I think it's a bit weird to have two options specifying amount of\n> time, but one is in seconds and one in milliseconds. Won't that be\n> unnecessarily confusing? Could we do both in milliseconds?\n\nAlright, I changed that.\n \nSee below for a discussion about the GUCs in general.\n \n> 2) The C code defines the GUC as auth_delay.exponential_backoff, but the\n> SGML docs say <varname>auth_delay.exp_backoff</varname>.\n\nRight, an oversight from the last version where the GUC name got changed\nbut I forgot to change the documentation, fixed.\n \n> 3) Do we actually need the exponential_backoff GUC? Wouldn't it be\n> sufficient to have the maximum value, and if it's -1 treat that as no\n> backoff?\n \nThat is a good question, I guess that makes sense.\n\nThe next question then is: what should the default for (now)\nauth_delay.max_milliseconds be in this case, -1? Or do we say that as\nthe default for auth_delay.milliseconds is 0 anyway, why would somebody\nnot want exponential backoff when they switch it on and keep it at the\ncurrent 10s/10000ms)?\n\nI have not changed that for now, pending further input.\n \n> 4) I think the SGML docs should more clearly explain that the delay is\n> initialized to auth_delay.milliseconds, and reset to this value after\n> successful authentication. The wording kinda implies that, but it's not\n> quite clear I think.\n\nOk, I added some text to that end. I also added a not that\nauth_delay.max_milliseconds will mean that the delay doubling never\nstops.\n \n> 4) I've been really confused by the change from\n> \n> if (status != STATUS_OK)\n> to\n> if (status == STATUS_ERROR)\n> \n> in auth_delay_checks, until I realized those two codes are not the only\n> ones, and we intentionally ignore STATUS_EOF. I think it'd be good to\n> mention that in a comment, it's not quite obvious (I only realized it\n> because the e-mail mentioned it).\n\nYeah I agree, I tried to explain that now.\n \n> 5) I kinda like the custom that functions in a contrib module start with\n> a clear common prefix, say auth_delay_ in this case. Matter of personal\n> preference, ofc.\n\nOk, I changed the functions to have an auth_delay_ prefix throughout..\n \n> 6) I'm a bit skeptical about some acr_array details. Firstly, why would\n> 50 entries be enough? Seems like a pretty low and arbitrary number ...\n> Also, what happens if someone attempts to authenticate, fails to do so,\n> and then disconnects and never tries again? Or just changes IP? Seems\n> like the entry will remain in the array forever, no?\n\nYeah, that is how v3 of this patch worked. 
I have changed that now, see\nbelow.\n\n> Seems like a way to cause a \"limited\" DoS - do auth failure from 50\n> different hosts, to fill the table, and now everyone has to wait the\n> maximum amount of time (if they fail to authenticate).\n\nRight, though the problem would only exist on authentication failures,\nso it is really rather limited.\n \n> I think it'd be good to consider:\n> \n> - Making the acr_array a hash table, and larger than 50 entries (I\n> wonder if that should be dynamic / customizable by GUC?).\n\nI would say a GUC should be overkill for this as this would mostly be an\nimplementation detail.\n\nMore generally, I think now that entries are expired (see below), this\nshould be less of a concern, so I have not changed this to a hash table\nfor now but doubled MAX_CONN_RECORDS to 100 entries.\n \n> - Make sure the entries are eventually expired, based on time (for\n> example after 2*max_delay?).\n\nI went with 5*max_milliseconds - the AuthConnRecord struct now has a\nlast_failed_auth timestamp member; if we increase the delay for a host,\nwe check if any other host expired in the meantime and remove it if so.\n \n> - It would be a good idea to log something when we get into the \"full\n> table\" and start delaying everyone by max_delay_seconds. (How would\n> anyone know that's what's happening, seems rather confusing.)\n\nRight, I added a log line for that. However, I made it LOG instead of\nWARNING as I don't think the client would ever see it, would he?\n\nAttached is v4 with the above changes.\n\n\nCheers,\n\nMichael",
"msg_date": "Mon, 4 Mar 2024 20:42:55 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "On Mon, Mar 4, 2024 at 2:43 PM Michael Banck <[email protected]> wrote:\n> > 3) Do we actually need the exponential_backoff GUC? Wouldn't it be\n> > sufficient to have the maximum value, and if it's -1 treat that as no\n> > backoff?\n>\n> That is a good question, I guess that makes sense.\n>\n> The next question then is: what should the default for (now)\n> auth_delay.max_milliseconds be in this case, -1? Or do we say that as\n> the default for auth_delay.milliseconds is 0 anyway, why would somebody\n> not want exponential backoff when they switch it on and keep it at the\n> current 10s/10000ms)?\n>\n> I have not changed that for now, pending further input.\n\nI agree that two GUCs here seems to be one more than necessary, but I\nwonder whether we couldn't just say 0 means no exponential backoff and\nany other value is the maximum time. The idea that 0 means unlimited\ndoesn't seem useful in practice. There's always going to be some\nlimit, at least by the number of bits we have in the data type that\nwe're using to do the calculation. But that limit is basically never\nthe right answer. I mean, I think 2^31 milliseconds is roughly 25\ndays, but it seems unlikely to me that delays measured in days\nhelpfully more secure than delays measured in minutes, and they don't\nseem very convenient for users either, and do you really want a failed\nconnection to linger for days before failing? That seems like you're\nDOSing yourself. If somebody wants to configure a very large value\nexplicitly, cool, they can do as they like, but we don't need to\ncomplicate the interface to make it easier for them to do so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 15:50:07 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "Hi,\n\nOn Mon, Mar 04, 2024 at 03:50:07PM -0500, Robert Haas wrote:\n> I agree that two GUCs here seems to be one more than necessary, but I\n> wonder whether we couldn't just say 0 means no exponential backoff and\n> any other value is the maximum time. \n\nAlright, I have changed it so that auth_delay.milliseconds and\nauth_delay.max_milliseconds are the only GUCs, their default being 0. If\nthe latter is 0, the former's value is always taken. If the latter is\nnon-zero and larger than the former, exponential backoff is applied with\nthe latter's value as maximum delay.\n\nIf the latter is smaller than the former then auth_delay just sets the\ndelay to the latter, I don't think this is problem or confusing, or\nshould this be considered a misconfiguration?\n\n> The idea that 0 means unlimited doesn't seem useful in practice. \n\nYeah, that was more how it was coded than a real policy decision, so\nlet's do away with it.\n\nV5 attached.\n\n\nMichael",
"msg_date": "Mon, 4 Mar 2024 22:58:03 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
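For readers skimming the thread, the v5 rule described in the message above boils down to roughly the following sketch. This is only an illustration of the intended behaviour, not the patch's actual code; the function name and shape are assumptions.

static int
auth_delay_next_ms(int current_ms, int base_ms, int max_ms)
{
    int     next;

    /* base_ms is auth_delay.milliseconds, max_ms is auth_delay.max_milliseconds,
     * current_ms is the delay applied to this host's previous failure (0 if none) */
    if (max_ms <= 0)
        return base_ms;         /* backoff not configured: fixed delay */

    if (current_ms <= 0)
        next = base_ms;         /* first failure seen for this host */
    else if (current_ms >= max_ms / 2)
        next = max_ms;          /* doubling would reach or exceed the cap */
    else
        next = current_ms * 2;  /* double the previous delay */

    return (next > max_ms) ? max_ms : next;     /* also covers max_ms < base_ms */
}

So with milliseconds=10000 and max_milliseconds=60000, successive failures from one host would be delayed roughly 10s, 20s, 40s, 60s, 60s, and so on.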
{
"msg_contents": "On Mon, Mar 4, 2024 at 4:58 PM Michael Banck <[email protected]> wrote:\n> Alright, I have changed it so that auth_delay.milliseconds and\n> auth_delay.max_milliseconds are the only GUCs, their default being 0. If\n> the latter is 0, the former's value is always taken. If the latter is\n> non-zero and larger than the former, exponential backoff is applied with\n> the latter's value as maximum delay.\n>\n> If the latter is smaller than the former then auth_delay just sets the\n> delay to the latter, I don't think this is problem or confusing, or\n> should this be considered a misconfiguration?\n\nSeems fine to me. We may need to think about what the documentation\nshould say about this, if anything.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Mar 2024 08:14:20 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "On Mon, Mar 04, 2024 at 08:42:55PM +0100, Michael Banck wrote:\n> On Wed, Feb 21, 2024 at 10:26:11PM +0100, Tomas Vondra wrote:\n>> I think it'd be good to consider:\n>> \n>> - Making the acr_array a hash table, and larger than 50 entries (I\n>> wonder if that should be dynamic / customizable by GUC?).\n> \n> I would say a GUC should be overkill for this as this would mostly be an\n> implementation detail.\n> \n> More generally, I think now that entries are expired (see below), this\n> should be less of a concern, so I have not changed this to a hash table\n> for now but doubled MAX_CONN_RECORDS to 100 entries.\n\nI don't have a strong opinion about making this configurable, but I do\nthink we should consider making this a hash table so that we can set\nMAX_CONN_RECORDS much higher.\n\nAlso, since these records are stored in shared memory, don't we need to\nlock them when searching/updating?\n\n> +static void\n> +auth_delay_init_state(void *ptr)\n> +{\n> +\tSize\t\tshm_size;\n> +\tAuthConnRecord *array = (AuthConnRecord *) ptr;\n> +\n> +\tshm_size = sizeof(AuthConnRecord) * MAX_CONN_RECORDS;\n> +\n> +\tmemset(array, 0, shm_size);\n> +}\n> +\n> +static void\n> +auth_delay_shmem_startup(void)\n> +{\n> +\tbool\t\tfound;\n> +\tSize\t\tshm_size;\n> +\n> +\tif (shmem_startup_next)\n> +\t\tshmem_startup_next();\n> +\n> +\tshm_size = sizeof(AuthConnRecord) * MAX_CONN_RECORDS;\n> +\tacr_array = GetNamedDSMSegment(\"auth_delay\", shm_size, auth_delay_init_state, &found);\n> +}\n\nGreat to see the DSM registry getting some use. This example makes me\nwonder whether the size of the segment should be passed to the\ninit_callback.\n\n> /*\n> * Module Load Callback\n> */\n> void\n> _PG_init(void)\n> {\n> +\tif (!process_shared_preload_libraries_in_progress)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> +\t\t\t\t errmsg(\"auth_delay must be loaded via shared_preload_libraries\")));\n> +\n\nThis change seems like a good idea independent of this feature.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 5 Mar 2024 15:50:44 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
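To make the hash-table suggestion above a bit more concrete, here is one possible shape using the classic shared-memory hash API (ShmemInitHash/hash_search) rather than the DSM registry the posted patch uses. The struct, sizes and names are illustrative assumptions, and locking is only hinted at in comments.

#include "postgres.h"

#include "storage/shmem.h"
#include "utils/hsearch.h"
#include "utils/timestamp.h"

#define ACR_KEYLEN       64     /* assumed maximum host string length */
#define ACR_MAX_RECORDS  4096   /* could instead be derived from max_connections */

typedef struct AuthConnEntry
{
    char        remote_host[ACR_KEYLEN];    /* hash key: must come first */
    int         cur_delay_ms;               /* current delay for this host */
    TimestampTz last_failed_auth;           /* for expiring stale entries */
} AuthConnEntry;

static HTAB *acr_hash = NULL;

/*
 * Called from the shmem_startup_hook, after requesting space in the
 * shmem_request_hook (not shown).
 */
static void
auth_delay_init_hash(void)
{
    HASHCTL     ctl;

    memset(&ctl, 0, sizeof(ctl));
    ctl.keysize = ACR_KEYLEN;
    ctl.entrysize = sizeof(AuthConnEntry);

    acr_hash = ShmemInitHash("auth_delay connection records",
                             ACR_MAX_RECORDS, ACR_MAX_RECORDS,
                             &ctl, HASH_ELEM | HASH_STRINGS);
}

/*
 * Look up (or create) the record for a host.  The caller is assumed to
 * hold an LWLock protecting acr_hash in the appropriate mode.
 */
static AuthConnEntry *
auth_delay_find_or_create(const char *remote_host)
{
    AuthConnEntry *entry;
    bool        found;

    entry = (AuthConnEntry *) hash_search(acr_hash, remote_host,
                                          HASH_ENTER, &found);
    if (!found)
    {
        /* dynahash only fills in the key; initialize the rest ourselves */
        entry->cur_delay_ms = 0;
        entry->last_failed_auth = GetCurrentTimestamp();
    }
    return entry;
}

With HASH_STRINGS the key is the NUL-terminated host string stored in the first keysize bytes of each entry, so lookups and inserts collapse into a single hash_search() call instead of the linear scans the current patch hard-codes.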
{
"msg_contents": "On Tue, Mar 5, 2024 at 1:51 PM Nathan Bossart <[email protected]> wrote:\n> I don't have a strong opinion about making this configurable, but I do\n> think we should consider making this a hash table so that we can set\n> MAX_CONN_RECORDS much higher.\n\nI'm curious why? It seems like the higher you make MAX_CONN_RECORDS,\nthe easier it is to put off the brute-force protection. (My assumption\nis that anyone mounting a serious attack is not going to be doing this\nfrom their own computer; they'll be doing it from many devices they\ndon't own -- a botnet, or a series of proxies, or something.)\n\n--\n\nDrive-by microreview -- auth_delay_cleanup_conn_record() has\n\n> + port->remote_host[0] = '\\0';\n\nwhich doesn't seem right. I assume acr->remote_host was meant?\n\n--Jacob\n\n\n",
"msg_date": "Tue, 5 Mar 2024 17:14:46 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "On Tue, Mar 05, 2024 at 05:14:46PM -0800, Jacob Champion wrote:\n> On Tue, Mar 5, 2024 at 1:51 PM Nathan Bossart <[email protected]> wrote:\n>> I don't have a strong opinion about making this configurable, but I do\n>> think we should consider making this a hash table so that we can set\n>> MAX_CONN_RECORDS much higher.\n> \n> I'm curious why? It seems like the higher you make MAX_CONN_RECORDS,\n> the easier it is to put off the brute-force protection. (My assumption\n> is that anyone mounting a serious attack is not going to be doing this\n> from their own computer; they'll be doing it from many devices they\n> don't own -- a botnet, or a series of proxies, or something.)\n\nAssuming you are referring to the case where we run out of free slots in\nacr_array, I'm not sure I see that as desirable behavior. Once you run out\nof slots, all failed authentication attempts are now subject to the maximum\ndelay, which is arguably a denial-of-service scenario, albeit not a\nparticularly worrisome one.\n\nI also think the linear search approach artifically constrains the value of\nMAX_CONN_RECORDS, so even if a user wanted to bump it up substantially for\ntheir own build, they'd potentially begin noticing the effects of the O(n)\nbehavior. AFAICT this is really the only reason this value is set so low\nat the moment, as I don't think the memory usage or code complexity of a\nhash table with thousands of slots would be too bad. In fact, it might\neven be simpler to use hash_search() instead of hard-coding linear searches\nin multiple places.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 6 Mar 2024 10:10:45 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 8:10 AM Nathan Bossart <[email protected]> wrote:\n> Assuming you are referring to the case where we run out of free slots in\n> acr_array, I'm not sure I see that as desirable behavior. Once you run out\n> of slots, all failed authentication attempts are now subject to the maximum\n> delay, which is arguably a denial-of-service scenario, albeit not a\n> particularly worrisome one.\n\nMaybe I've misunderstood the attack vector, but I thought the point of\nthe feature was to deny service when the server is under attack. If we\ndon't deny service, what does the feature do?\n\nAnd I may have introduced a red herring in talking about the number of\nhosts, because an attacker operating from a single host is under no\nobligation to actually wait for the authentication delay. Once we hit\nsome short timeout, we can safely assume the password is wrong,\nabandon the request, and open up a new connection. It seems like the\nthing limiting our attack is the number of connection slots, not\nMAX_CONN_RECORDS. Am I missing something crucial?\n\n--Jacob\n\n\n",
"msg_date": "Wed, 6 Mar 2024 10:24:01 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "\n\nOn 3/6/24 19:24, Jacob Champion wrote:\n> On Wed, Mar 6, 2024 at 8:10 AM Nathan Bossart <[email protected]> wrote:\n>> Assuming you are referring to the case where we run out of free slots in\n>> acr_array, I'm not sure I see that as desirable behavior. Once you run out\n>> of slots, all failed authentication attempts are now subject to the maximum\n>> delay, which is arguably a denial-of-service scenario, albeit not a\n>> particularly worrisome one.\n> \n> Maybe I've misunderstood the attack vector, but I thought the point of\n> the feature was to deny service when the server is under attack. If we\n> don't deny service, what does the feature do?\n> \n> And I may have introduced a red herring in talking about the number of\n> hosts, because an attacker operating from a single host is under no\n> obligation to actually wait for the authentication delay. Once we hit\n> some short timeout, we can safely assume the password is wrong,\n> abandon the request, and open up a new connection. It seems like the\n> thing limiting our attack is the number of connection slots, not\n> MAX_CONN_RECORDS. Am I missing something crucial?\n> \n\nDoesn't this mean this approach (as implemented) doesn't actually work\nas a protection against this sort of DoS?\n\nIf I'm an attacker, and I can just keep opening new connections for each\nauth request, am I even subject to any auth delay?\n\nISTM the problem lies in the fact that we apply the delay only *after*\nthe failed auth attempt. Which makes sense, because until now we didn't\nhave any state with information for new connections. But with the new\nacr_array, we could check that first, and do the delay before trying to\nathenticate, no?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 6 Mar 2024 21:34:37 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 06, 2024 at 09:34:37PM +0100, Tomas Vondra wrote:\n> On 3/6/24 19:24, Jacob Champion wrote:\n> > On Wed, Mar 6, 2024 at 8:10 AM Nathan Bossart <[email protected]> wrote:\n> >> Assuming you are referring to the case where we run out of free slots in\n> >> acr_array, I'm not sure I see that as desirable behavior. Once you run out\n> >> of slots, all failed authentication attempts are now subject to the maximum\n> >> delay, which is arguably a denial-of-service scenario, albeit not a\n> >> particularly worrisome one.\n> > \n> > Maybe I've misunderstood the attack vector, but I thought the point of\n> > the feature was to deny service when the server is under attack. If we\n> > don't deny service, what does the feature do?\n\nI think there are two attack vectors under discussion:\n\n1. Somebody tries to brute-force a password. The original auth_delay\ndelays auth for a bit everytime authentication fails. If you configure\nthe delay to be very small, maybe it does not bother the attacker too\nmuch. If you configure it to be long enough, legitimate users might be\nannoyed when typoing their password. The suggested feature tries to help\nhere by initially delaying authentication just a bit and then gradually\nincreasing the delay.\n\n2. Somebody tries to denial-of-service a server by exhausting all\n(remaining) available connections with failed authentication requests\nthat are being delayed. For this attack, the suggested feature is\nhurting more than doing good as it potentially keeps a failed\nauthentication attempt's connection hanging around longer than before\nand makes it easier to denial-of-service a server in this way.\n\nIn order to at least make case 2 not worse for exponential backoff, we\ncould maybe disable it (and just wait for auth_delay.milliseconds) once\nMAX_CONN_RECORDS is full. In addition, maybe MAX_CONN_RECORDS should be\nsome fraction of max_connections, like 25%?\n\n> > And I may have introduced a red herring in talking about the number of\n> > hosts, because an attacker operating from a single host is under no\n> > obligation to actually wait for the authentication delay. Once we hit\n> > some short timeout, we can safely assume the password is wrong,\n> > abandon the request, and open up a new connection.\n\nThat is a valid point.\n\nMaybe this could be averted if we either outright deny even a successful\nauthentication request if the host it originates from has a\nmax_milliseconds delay on file (i.e. has been trying to brute-force the\npassword for a while) or at least delay a successful authentication\nrequest for some delay, if the host it originates on has a\nmax_milliseconds delay on file (assuming it will close the connection\nbeforehand as it thinks the password guess was wrong)?\n\n> > It seems like the thing limiting our attack is the number of\n> > connection slots, not MAX_CONN_RECORDS. Am I missing something\n> > crucial?\n> \n> Doesn't this mean this approach (as implemented) doesn't actually work\n> as a protection against this sort of DoS?\n> \n> If I'm an attacker, and I can just keep opening new connections for each\n> auth request, am I even subject to any auth delay?\n\nYeah, but see above.\n\n> ISTM the problem lies in the fact that we apply the delay only *after*\n> the failed auth attempt. Which makes sense, because until now we didn't\n> have any state with information for new connections. 
But with the new\n> acr_array, we could check that first, and do the delay before trying to\n> athenticate, no?\n\nI don't think we have a hook for that yet, do we?\n\n\nMichael\n\n\n",
"msg_date": "Wed, 6 Mar 2024 23:45:27 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 12:34 PM Tomas Vondra\n<[email protected]> wrote:\n> Doesn't this mean this approach (as implemented) doesn't actually work\n> as a protection against this sort of DoS?\n>\n> If I'm an attacker, and I can just keep opening new connections for each\n> auth request, am I even subject to any auth delay?\n\ns/DoS/brute-force/, but yeah, that's basically the question at the\nheart of my comment. _If_ the point of auth_delay is to tie up\nconnection slots in order to degrade service during an attack, then I\nthink this addition works, in the sense that it makes the self-imposed\nDoS more draconian the more failures occur.\n\nBut I don't know if that's actually the intended goal. For one, I'm\nnot convinced that the host tracking part buys you anything more in\npractice, under that model. And if people are trying to *avoid* the\nDoS somehow, then I just really don't understand the feature.\n\n> ISTM the problem lies in the fact that we apply the delay only *after*\n> the failed auth attempt. Which makes sense, because until now we didn't\n> have any state with information for new connections. But with the new\n> acr_array, we could check that first, and do the delay before trying to\n> athenticate, no?\n\nYeah, I think one key point is to apply the delay to both successful\nand failed connections. That probably opens up a bunch more questions,\nthough? It seems like a big change from the previous behavior. An\nattacker can still front-load a bunch of connections in parallel. And\nthe end state of the working feature is probably still slot exhaustion\nduring an attack, so...\n\nI looked around a bit at other designs. [1] is HTTP-focused, but it\ntalks about some design tradeoffs. I wonder if flipping the sense of\nthe host tracking [2], to keep track of well-behaved clients and let\nthem through the service degradation that we're applying to the\nmasses, might be more robust. But I don't know how to let those\nclients punch through slot exhaustion without a lot more work.\n\n--Jacob\n\n[1] https://owasp.org/www-community/controls/Blocking_Brute_Force_Attacks\n[2] https://owasp.org/www-community/Slow_Down_Online_Guessing_Attacks_with_Device_Cookies\n\n\n",
"msg_date": "Wed, 6 Mar 2024 14:46:07 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 2:45 PM Michael Banck <[email protected]> wrote:\n> In order to at least make case 2 not worse for exponential backoff, we\n> could maybe disable it (and just wait for auth_delay.milliseconds) once\n> MAX_CONN_RECORDS is full. In addition, maybe MAX_CONN_RECORDS should be\n> some fraction of max_connections, like 25%?\n\n(Our mails crossed; hopefully I've addressed the other points.)\n\nI think solutions for case 1 and case 2 are necessarily at odds under\nthe current design, if auth_delay relies on slot exhaustion to do its\nwork effectively. Weakening that on purpose doesn't make much sense to\nme; if a DBA is uncomfortable with the DoS implications then I'd argue\nthey need a different solution. (Which we could theoretically\nimplement, but it's not my intention to sign you up for that. :D )\n\n--Jacob\n\n\n",
"msg_date": "Wed, 6 Mar 2024 14:58:50 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 2:15 PM Jacob Champion\n<[email protected]> wrote:\n> I think solutions for case 1 and case 2 are necessarily at odds under\n> the current design, if auth_delay relies on slot exhaustion to do its\n> work effectively. Weakening that on purpose doesn't make much sense to\n> me; if a DBA is uncomfortable with the DoS implications then I'd argue\n> they need a different solution. (Which we could theoretically\n> implement, but it's not my intention to sign you up for that. :D )\n\nThe thread got quiet, and I'm nervous that I squashed it unintentionally. :/\n\nIs there consensus on whether the backoff is useful, even without the\nhost tracking? (Or, alternatively, is the host tracking helpful in a\nway I'm not seeing?) Failing those, is there a way forward that could\nmake it useful in the future?\n\n--Jacob\n\n\n",
"msg_date": "Wed, 20 Mar 2024 14:21:29 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "> On 20 Mar 2024, at 22:21, Jacob Champion <[email protected]> wrote:\n> \n> On Wed, Mar 20, 2024 at 2:15 PM Jacob Champion\n> <[email protected]> wrote:\n>> I think solutions for case 1 and case 2 are necessarily at odds under\n>> the current design, if auth_delay relies on slot exhaustion to do its\n>> work effectively. Weakening that on purpose doesn't make much sense to\n>> me; if a DBA is uncomfortable with the DoS implications then I'd argue\n>> they need a different solution. (Which we could theoretically\n>> implement, but it's not my intention to sign you up for that. :D )\n> \n> The thread got quiet, and I'm nervous that I squashed it unintentionally. :/\n> \n> Is there consensus on whether the backoff is useful, even without the\n> host tracking? (Or, alternatively, is the host tracking helpful in a\n> way I'm not seeing?) Failing those, is there a way forward that could\n> make it useful in the future?\n\nI actually wrote more or less the same patch with rudimentary attacker\nfingerprinting, and after some off-list discussion decided to abandon it for\nthe reasons discussed in this thread. It's unlikely to protect against the\nattackers we wan't to protect the cluster against since they won't wait for the\ndelay anyways.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 20 Mar 2024 23:22:12 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 20, 2024 at 11:22:12PM +0100, Daniel Gustafsson wrote:\n> > On 20 Mar 2024, at 22:21, Jacob Champion <[email protected]> wrote:\n> > \n> > On Wed, Mar 20, 2024 at 2:15 PM Jacob Champion\n> > <[email protected]> wrote:\n> >> I think solutions for case 1 and case 2 are necessarily at odds under\n> >> the current design, if auth_delay relies on slot exhaustion to do its\n> >> work effectively. Weakening that on purpose doesn't make much sense to\n> >> me; if a DBA is uncomfortable with the DoS implications then I'd argue\n> >> they need a different solution. (Which we could theoretically\n> >> implement, but it's not my intention to sign you up for that. :D )\n> > \n> > The thread got quiet, and I'm nervous that I squashed it unintentionally. :/\n> > \n> > Is there consensus on whether the backoff is useful, even without the\n> > host tracking? (Or, alternatively, is the host tracking helpful in a\n> > way I'm not seeing?) Failing those, is there a way forward that could\n> > make it useful in the future?\n> \n> I actually wrote more or less the same patch with rudimentary attacker\n> fingerprinting, and after some off-list discussion decided to abandon it for\n> the reasons discussed in this thread. It's unlikely to protect against the\n> attackers we wan't to protect the cluster against since they won't wait for the\n> delay anyways.\n\nI have marked the patch \"Returned with Feedback\" now. Maybe I will get\nback to this for v18, but it was clearly not ready for v17.\n\n\nMichael\n\n\n",
"msg_date": "Sun, 7 Apr 2024 09:43:39 +0200",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Exponential backoff for auth_delay"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWe use postgres_fdw to connect two databases. Both DBs have an extension \ninstalled which provides a custom string data type. Our extension is \nknown to the FDW as we created the foreign server with our extension \nlisted in the \"extensions\" option.\n\nThe filter clause of the query SELECT * FROM test WHERE col = 'foo' OR \ncol = 'bar' is pushed down to the remote, while the filter clause of the \nsemantically equivalent query SELECT * FROM test WHERE col IN ('foo', \n'bar') is not.\n\nI traced this down to getExtensionOfObject() called from \nlookup_shippable(). getExtensionOfObject() doesn't recurse but only \nchecks first level dependencies and only checks for extension \ndependencies. However, the IN operator takes an array of our custom data \ntype as argument (type is typically prefixed with _ in pg_type). This \narray type is only dependent on our extension via the custom data type \nin two steps which postgres_fdw doesn't see. Therefore, postgres_fdw \ndoesn't allow for push-down of the IN.\n\nThoughts?\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Wed, 27 Dec 2023 17:43:37 +0100",
"msg_from": "David Geier <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres_fdw fails to see that array type belongs to extension"
},
{
"msg_contents": "David Geier <[email protected]> writes:\n> The filter clause of the query SELECT * FROM test WHERE col = 'foo' OR \n> col = 'bar' is pushed down to the remote, while the filter clause of the \n> semantically equivalent query SELECT * FROM test WHERE col IN ('foo', \n> 'bar') is not.\n\n> I traced this down to getExtensionOfObject() called from \n> lookup_shippable(). getExtensionOfObject() doesn't recurse but only \n> checks first level dependencies and only checks for extension \n> dependencies. However, the IN operator takes an array of our custom data \n> type as argument (type is typically prefixed with _ in pg_type). This \n> array type is only dependent on our extension via the custom data type \n> in two steps which postgres_fdw doesn't see. Therefore, postgres_fdw \n> doesn't allow for push-down of the IN.\n\nHmm. It seems odd that if an extension defines a type, the type is\nlisted as a member of the extension but the array type is not.\nThat makes it look like the array type is an externally-created\nthing that happens to depend on the extension, when it's actually\npart of the extension. I'm surprised we've not run across other\nmisbehaviors traceable to that.\n\nOf course, fixing it like that leads to needing to change the\ncontents of pg_depend, so it wouldn't be back-patchable. But it\nseems like the best way in the long run.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Dec 2023 12:38:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw fails to see that array type belongs to extension"
},
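For anyone who wants to see the dependency shape Tom describes, pg_depend can be inspected directly; the extension and type names below are placeholders:

-- Before the fix: the base type carries an 'e' (extension member) entry,
-- while its implicit array type only carries an 'i' (internal) entry
-- pointing at the element type, with no direct link to the extension.
SELECT pg_describe_object(classid, objid, objsubid)          AS object,
       pg_describe_object(refclassid, refobjid, refobjsubid) AS referenced,
       deptype
FROM   pg_catalog.pg_depend
WHERE  classid = 'pg_catalog.pg_type'::regclass
  AND  objid IN ('mytype'::regtype, 'mytype[]'::regtype);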
{
"msg_contents": "Hi,\n\nOn 12/27/23 18:38, Tom Lane wrote:\n> Hmm. It seems odd that if an extension defines a type, the type is\n> listed as a member of the extension but the array type is not.\n> That makes it look like the array type is an externally-created\n> thing that happens to depend on the extension, when it's actually\n> part of the extension. I'm surprised we've not run across other\n> misbehaviors traceable to that.\nAgreed.\n> Of course, fixing it like that leads to needing to change the\n> contents of pg_depend, so it wouldn't be back-patchable. But it\n> seems like the best way in the long run.\n\nThe attached patch just adds a 2nd dependency between the array type and \nthe extension, using recordDependencyOnCurrentExtension(). It seems like \nthat the other internal dependency on the element type must stay? If \nthat seems reasonable I can add a test to modules/test_extensions. Can \nextensions in contrib use test extension in their own tests? It looks \nlike postgres_fdw doesn't test any of the shippability logic.\n\n-- \nDavid Geier\n(ServiceNow)",
"msg_date": "Mon, 8 Jan 2024 12:21:24 +0100",
"msg_from": "David Geier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw fails to see that array type belongs to extension"
},
{
"msg_contents": "Hi,\n\nI realized that ALTER EXTENSION foo ADD TYPE _bar does pretty much the \nsame via ExecAlterExtensionContentsStmt(). So the code in the patch \nseems fine.\n\nOn 1/8/24 12:21, David Geier wrote:\n> The attached patch just adds a 2nd dependency between the array type \n> and the extension, using recordDependencyOnCurrentExtension(). It \n> seems like that the other internal dependency on the element type must \n> stay? If that seems reasonable I can add a test to \n> modules/test_extensions. Can extensions in contrib use test extension \n> in their own tests? It looks like postgres_fdw doesn't test any of the \n> shippability logic.\n>\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Mon, 15 Jan 2024 14:24:38 +0100",
"msg_from": "David Geier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw fails to see that array type belongs to extension"
},
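As a minimal illustration of the manual workaround mentioned above for already-installed extensions (the extension and type names are placeholders, matching the example in the message):

-- "bar" is the extension's custom type; "_bar" is its implicitly created
-- array type.  Attaching the array type explicitly makes it a member of
-- the extension, so dependency-based checks such as postgres_fdw's
-- shippability test treat it as part of the extension.
ALTER EXTENSION foo ADD TYPE _bar;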
{
"msg_contents": "David Geier <[email protected]> writes:\n> On 12/27/23 18:38, Tom Lane wrote:\n>> Hmm. It seems odd that if an extension defines a type, the type is\n>> listed as a member of the extension but the array type is not.\n>> That makes it look like the array type is an externally-created\n>> thing that happens to depend on the extension, when it's actually\n>> part of the extension. I'm surprised we've not run across other\n>> misbehaviors traceable to that.\n\n> Agreed.\n> The attached patch just adds a 2nd dependency between the array type and \n> the extension, using recordDependencyOnCurrentExtension().\n\nI don't like this patch too much: it fixes the problem in a place far\naway from GenerateTypeDependencies' existing treatment of extension\ndependencies, and relatedly to that, fails to update the comments\nit has falsified. I think we should do it more like the attached.\nThis approach means that relation rowtypes will also be explicitly\nlisted as extension members, which seems like it is fixing the same\nsort of bug as the array case.\n\nI also noted that you'd not run check-world, because there's a test\ncase that changes output. This is good though, because we don't need\nto add any new test to prove it does what we want.\n\nThere's one big remaining to-do item, I think: experimentation with\npg_upgrade proves that a binary upgrade fails to fix the extension\nmembership of arrays/rowtypes. That's because pg_dump needs to\nmanually reconstruct the membership dependencies, and it thinks that\nit doesn't need to do anything for implicit arrays. Normally the\npoint of that is that we want to exactly reconstruct the extension's\ncontents, but I think in this case we'd really like to add the\nadditional pg_depend entries even if they weren't there before.\nOtherwise people wouldn't get the new behavior until they do a\nfull dump/reload.\n\nI can see two ways we could do that:\n\n* add logic to pg_dump\n\n* modify ALTER EXTENSION ADD TYPE so that it automatically recurses\nfrom a base type to its array type (and I guess we'd need to add\nsomething for relation rowtypes and multiranges, too).\n\nI think I like the latter approach because it's like how we\nhandle ownership: pg_dump doesn't emit any commands to explicitly\nchange the ownership of dependent types, either. (But see [1].)\nWe could presumably steal some logic from ALTER TYPE OWNER.\nI've not tried to code that here, though.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/1580383.1705343264%40sss.pgh.pa.us",
"msg_date": "Mon, 15 Jan 2024 14:01:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw fails to see that array type belongs to extension"
},
{
"msg_contents": "I wrote:\n> There's one big remaining to-do item, I think: experimentation with\n> pg_upgrade proves that a binary upgrade fails to fix the extension\n> membership of arrays/rowtypes. That's because pg_dump needs to\n> manually reconstruct the membership dependencies, and it thinks that\n> it doesn't need to do anything for implicit arrays. Normally the\n> point of that is that we want to exactly reconstruct the extension's\n> contents, but I think in this case we'd really like to add the\n> additional pg_depend entries even if they weren't there before.\n> Otherwise people wouldn't get the new behavior until they do a\n> full dump/reload.\n\n> I can see two ways we could do that:\n\n> * add logic to pg_dump\n\n> * modify ALTER EXTENSION ADD TYPE so that it automatically recurses\n> from a base type to its array type (and I guess we'd need to add\n> something for relation rowtypes and multiranges, too).\n\n> I think I like the latter approach because it's like how we\n> handle ownership: pg_dump doesn't emit any commands to explicitly\n> change the ownership of dependent types, either. (But see [1].)\n> We could presumably steal some logic from ALTER TYPE OWNER.\n> I've not tried to code that here, though.\n\nNow that the multirange issue is fixed (3e8235ba4), here's a\nnew version that includes the needed recursion in ALTER EXTENSION.\nI spent some more effort on a proper test case, too.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 14 Feb 2024 14:37:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw fails to see that array type belongs to extension"
},
{
"msg_contents": "I wrote:\n> Now that the multirange issue is fixed (3e8235ba4), here's a\n> new version that includes the needed recursion in ALTER EXTENSION.\n> I spent some more effort on a proper test case, too.\n\nI looked this over again and pushed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Mar 2024 14:54:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw fails to see that array type belongs to extension"
}
] |
[
{
"msg_contents": "Hi.\nsimilar to [1], add function argument names to the following functions:\nregexp_like, regexp_match,regexp_matches,regexp_replace,\nregexp_substr,regexp_split_to_array,regexp_split_to_table,regexp_count\n\nso I call these function in a different notation[2], like:\n\nSELECT regexp_like(string=>'a'||CHR(10)||'d', pattern=>'a.d', flags:='n');\nselect regexp_match(string=>'abc',n pattern=>'(B)(c)', flags=>'i');\nselect regexp_matches(string=>'Programmer', pattern=>'(\\w)(.*?\\1)',\nflags=>'ig');\nSELECT regexp_replace(source=>'A PostgreSQL function',\npattern=>'a|e|i|o|u', replacement=>'X', start=>1, n=>4, flags=>'i');\nSELECT regexp_substr(string=>'1234567890',\npattern=>'(123)(4(56)(78))', start=>1, n=>1, flags=>'i', subexpr=>4);\nSELECT regexp_split_to_array(string=>'thE QUick bROWn FOx jUMPs ovEr\nThe lazy dOG', pattern=>'e', flags=>'i');\n\nSELECT foo, length(foo)\nFROM regexp_split_to_table(string=>'thE QUick bROWn FOx jUMPs ovEr The\nlazy dOG', pattern=>'e',flags=>'i') AS foo;\nSELECT regexp_count(string=>'ABCABCABCABC', pattern=>'Abc', start=>1,\nflags=>'i');\n\nIn [3], except the above mentioned function, there is a \"substring\"\nfunction.\nI want to refactor substring function argument names. it looks like:\n Schema | Name | Result data type | Argument data\ntypes | Type\n------------+-----------+------------------+--------------------------------------------+------\n pg_catalog | substring | bit | bits bit, \"from\" integer\n | func\n pg_catalog | substring | bit | bits bit, \"from\" integer,\n\"for\" integer | func\n pg_catalog | substring | bytea | bytes bytea, \"from\"\ninteger | func\n pg_catalog | substring | bytea | bytes bytea, \"from\"\ninteger, \"for\" integer | func\n pg_catalog | substring | text | string text, \"from\"\ninteger | func\n pg_catalog | substring | text | string text, \"from\"\ninteger, \"for\" integer | func\n pg_catalog | substring | text | string text, pattern text\n | func\n pg_catalog | substring | text | text, text, text\n | func\n(8 rows)\n\nAs you can see, the substring function argument names need an explicit\ndouble quote,\nwhich doesn't look good, so I gave up.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/[email protected]\n[2]\nhttps://www.postgresql.org/docs/current/sql-syntax-calling-funcs.html#SQL-SYNTAX-CALLING-FUNCS\n[3] https://www.postgresql.org/docs/current/functions-matching.html",
"msg_date": "Thu, 28 Dec 2023 00:53:13 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "add function argument names to regex* functions."
},
{
"msg_contents": "On 27.12.23 17:53, jian he wrote:\n> similar to [1], add function argument names to the following functions:\n> regexp_like, regexp_match,regexp_matches,regexp_replace,\n> regexp_substr,regexp_split_to_array,regexp_split_to_table,regexp_count\n\nNote that these functions are a quasi-standard that is shared with other \nSQL implementations. It might be worth looking around if there are \nalready other implementations of this idea.\n\n\n\n",
"msg_date": "Wed, 27 Dec 2023 23:25:05 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 6:25 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 27.12.23 17:53, jian he wrote:\n> > similar to [1], add function argument names to the following functions:\n> > regexp_like, regexp_match,regexp_matches,regexp_replace,\n> > regexp_substr,regexp_split_to_array,regexp_split_to_table,regexp_count\n>\n> Note that these functions are a quasi-standard that is shared with other\n> SQL implementations. It might be worth looking around if there are\n> already other implementations of this idea.\n>\n\nturns out people do like calling functions via explicitly mentioning\nfunction argument names, example: [0]\nThere are no provisions for the argument names.\n\nI looked around the oracle implementation in [1], and the oracle regex\nrelated function argumentation name in [2]\nI use the doc [3] syntax explanation and add the related function names.\n\nCurrent, regex.* function syntax seems fine. but only parameter `N`\nseems a little bit weird.\nIf we change the function's argument name, we also need to change\nfunction syntax explanation in the doc; vise versa.\n\nQUOTE:\nThe regexp_instr function returns the starting or ending position of\nthe N'th match of a POSIX regular expression pattern to a string, or\nzero if there is no such match. It has the syntax regexp_instr(string,\npattern [, start [, N [, endoption [, flags [, subexpr ]]]]]). pattern\nis searched for in string, normally from the beginning of the string,\nbut if the start parameter is provided then beginning from that\ncharacter index. If N is specified then the N'th match of the pattern\nis located, otherwise the first match is located.\nEND OF QUOTE.\n\nmaybe we can change `N` to occurrence. but `occurrence` is kind of verbose.\n\n[0] https://stackoverflow.com/questions/33387348/oracle-named-parameters-in-regular-functions\n[1] https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/REGEXP_SUBSTR.html#GUID-2903904D-455F-4839-A8B2-1731EF4BD099\n[2] https://dbfiddle.uk/h_SBDEKi\n[3] https://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-REGEXP\n\n\n",
"msg_date": "Thu, 28 Dec 2023 11:28:42 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed Dec 27, 2023 at 10:28 PM EST, jian he wrote:\n> On Thu, Dec 28, 2023 at 6:25 AM Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 27.12.23 17:53, jian he wrote:\n> > > similar to [1], add function argument names to the following functions:\n> > > regexp_like, regexp_match,regexp_matches,regexp_replace,\n> > > regexp_substr,regexp_split_to_array,regexp_split_to_table,regexp_count\n> >\n> > Note that these functions are a quasi-standard that is shared with other\n> > SQL implementations. It might be worth looking around if there are\n> > already other implementations of this idea.\n> >\n>\n> turns out people do like calling functions via explicitly mentioning\n> function argument names, example: [0]\n> There are no provisions for the argument names.\n>\n> I looked around the oracle implementation in [1], and the oracle regex\n> related function argumentation name in [2]\n> I use the doc [3] syntax explanation and add the related function names.\n>\n> Current, regex.* function syntax seems fine. but only parameter `N`\n> seems a little bit weird.\n> If we change the function's argument name, we also need to change\n> function syntax explanation in the doc; vise versa.\n>\n> QUOTE:\n> The regexp_instr function returns the starting or ending position of\n> the N'th match of a POSIX regular expression pattern to a string, or\n> zero if there is no such match. It has the syntax regexp_instr(string,\n> pattern [, start [, N [, endoption [, flags [, subexpr ]]]]]). pattern\n> is searched for in string, normally from the beginning of the string,\n> but if the start parameter is provided then beginning from that\n> character index. If N is specified then the N'th match of the pattern\n> is located, otherwise the first match is located.\n> END OF QUOTE.\n>\n> maybe we can change `N` to occurrence. but `occurrence` is kind of verbose.\n>\n> [0] https://stackoverflow.com/questions/33387348/oracle-named-parameters-in-regular-functions\n> [1] https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/REGEXP_SUBSTR.html#GUID-2903904D-455F-4839-A8B2-1731EF4BD099\n> [2] https://dbfiddle.uk/h_SBDEKi\n> [3] https://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-REGEXP\n\nI've been trying to use named arguments more diligently so expanding\nsupport for built-in functions is welcome. The patch applies cleanly and\nworks as advertised.\n\nI agree that the parameter name `n` is not ideal. For example, in\n`regexp_replace` it's easy to misinterpret it as \"make up to n\nreplacements\". This has not been a problem when `n` only lives in the\ndocumentation which explains exactly what it does, but that context is\nnot readily available in code expressing `n => 3`.\n\nAnother possibility is `index`, which is relatively short and not a\nreserved keyword ^1. `position` is not as precise but would avoid the\nconceptual overloading of ordinary indices.\n\n1. https://www.postgresql.org/docs/current/sql-keywords-appendix.html\n\n\n",
"msg_date": "Mon, 01 Jan 2024 13:05:45 -0500",
"msg_from": "\"Dian Fay\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On 28.12.23 04:28, jian he wrote:\n> I looked around the oracle implementation in [1], and the oracle regex\n> related function argumentation name in [2]\n> I use the doc [3] syntax explanation and add the related function names.\n> \n> Current, regex.* function syntax seems fine. but only parameter `N`\n> seems a little bit weird.\n> If we change the function's argument name, we also need to change\n> function syntax explanation in the doc; vise versa.\n\nSo, it looks like Oracle already has defined parameter names for these, \nso we should make ours match.\n\n\n\n",
"msg_date": "Wed, 3 Jan 2024 13:13:03 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "\n\n\n\n\nOn 1/1/24 12:05 PM, Dian Fay wrote:\n\n\nI agree that the parameter name `n` is not ideal. For example, in\n`regexp_replace` it's easy to misinterpret it as \"make up to n\nreplacements\". This has not been a problem when `n` only lives in the\ndocumentation which explains exactly what it does, but that context is\nnot readily available in code expressing `n => 3`.\n\n\n Agreed; IMO it's worth diverging from what Oracle has done here.\n\n\nAnother possibility is `index`, which is relatively short and not a\nreserved keyword ^1. `position` is not as precise but would avoid the\nconceptual overloading of ordinary indices.\n\n\nI'm not a fan of \"index\" since that leaves the question of\n whether it's 0 or 1 based. \"Position\" is a bit better, but I think\n Jian's suggestion of \"occurance\" is best.\n\n--\nJim Nasby, Data Architect, Austin TX\n\n\n",
"msg_date": "Wed, 3 Jan 2024 16:23:37 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "> > Another possibility is `index`, which is relatively short and not a\n> > reserved keyword ^1. `position` is not as precise but would avoid the\n> > conceptual overloading of ordinary indices.\n>\n> I'm not a fan of \"index\" since that leaves the question of\n> whether it's 0 or 1 based. \"Position\" is a bit better, but I think\n> Jian's suggestion of \"occurance\" is best.\n\nWe do have precedent for one-based `index` in Postgres: array types are\n1-indexed by default! \"Occurrence\" removes that ambiguity but it's long\nand easy to misspell (I looked it up after typing it just now and it\n_still_ feels off).\n\nHow's \"instance\"?\n\n\n",
"msg_date": "Wed, 03 Jan 2024 18:05:47 -0500",
"msg_from": "\"Dian Fay\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "\n\n\n\n\nOn 1/3/24 5:05 PM, Dian Fay wrote:\n\n\n\n\nAnother possibility is `index`, which is relatively short and not a\nreserved keyword ^1. `position` is not as precise but would avoid the\nconceptual overloading of ordinary indices.\n\n\n\nI'm not a fan of \"index\" since that leaves the question of\nwhether it's 0 or 1 based. \"Position\" is a bit better, but I think\nJian's suggestion of \"occurance\" is best.\n\n\n\nWe do have precedent for one-based `index` in Postgres: array types are\n1-indexed by default! \"Occurrence\" removes that ambiguity but it's long\nand easy to misspell (I looked it up after typing it just now and it\n_still_ feels off).\n\nHow's \"instance\"?\n\n\nPresumably someone referencing arguments by name would have just\n looked up the names via \\df or whatever, so presumably misspelling\n wouldn't be a big issue. But I think \"instance\" is OK as well.\n\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n",
"msg_date": "Wed, 3 Jan 2024 17:25:59 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Thu, Jan 4, 2024 at 7:26 AM Jim Nasby <[email protected]> wrote:\n>\n> On 1/3/24 5:05 PM, Dian Fay wrote:\n>\n> Another possibility is `index`, which is relatively short and not a\n> reserved keyword ^1. `position` is not as precise but would avoid the\n> conceptual overloading of ordinary indices.\n>\n> I'm not a fan of \"index\" since that leaves the question of\n> whether it's 0 or 1 based. \"Position\" is a bit better, but I think\n> Jian's suggestion of \"occurance\" is best.\n>\n> We do have precedent for one-based `index` in Postgres: array types are\n> 1-indexed by default! \"Occurrence\" removes that ambiguity but it's long\n> and easy to misspell (I looked it up after typing it just now and it\n> _still_ feels off).\n>\n> How's \"instance\"?\n>\n> Presumably someone referencing arguments by name would have just looked up the names via \\df or whatever, so presumably misspelling wouldn't be a big issue. But I think \"instance\" is OK as well.\n>\n> --\n> Jim Nasby, Data Architect, Austin TX\n\nregexp_instr: It has the syntax regexp_instr(string, pattern [, start\n[, N [, endoption [, flags [, subexpr ]]]]])\noracle:\nREGEXP_INSTR (source_char, pattern, [, position [, occurrence [,\nreturn_opt [, match_param [, subexpr ]]]]] )\n\n\"string\" and \"source_char\" are almost the same descriptive, so maybe\nthere is no need to change.\n\"start\" is better than \"position\", imho.\n\"return_opt\" is better than \"endoption\", (maybe we need change, for\nnow I didn't)\n\"flags\" cannot be changed to \"match_param\", given it quite everywhere\nin functions-matching.html.\n\nsimilarly for function regexp_replace, oracle using \"repplace_string\",\nwe use \"replacement\"(mentioned in the doc).\nso I don't think we need to change to \"repplace_string\".\n\nBased on how people google[0], I think `occurrence` is ok, even though\nit's verbose.\nto change from `N` to `occurrence`, we also need to change the doc,\nthat is why this patch is more larger.\n\n\n[0]: https://www.google.com/search?q=regex+nth+match&oq=regex+nth+match&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIGCAEQRRg8MgYIAhBFGDzSAQc2MThqMGo5qAIAsAIA&sourceid=chrome&ie=UTF-8",
"msg_date": "Thu, 4 Jan 2024 15:03:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
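To make the proposed rename concrete, a named-notation call with the patch applied might look like the following. This assumes the patch's proposed parameter names (string, pattern, start, occurrence, flags) and will not work on an unpatched server.

-- Position of the third, case-insensitive match of 'abc' (returns 7):
SELECT regexp_instr(string => 'abcABCabc', pattern => 'abc',
                    start => 1, occurrence => 3, flags => 'i');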
{
"msg_contents": "On Thu Jan 4, 2024 at 2:03 AM EST, jian he wrote:\n> On Thu, Jan 4, 2024 at 7:26 AM Jim Nasby <[email protected]> wrote:\n> >\n> > On 1/3/24 5:05 PM, Dian Fay wrote:\n> >\n> > Another possibility is `index`, which is relatively short and not a\n> > reserved keyword ^1. `position` is not as precise but would avoid the\n> > conceptual overloading of ordinary indices.\n> >\n> > I'm not a fan of \"index\" since that leaves the question of\n> > whether it's 0 or 1 based. \"Position\" is a bit better, but I think\n> > Jian's suggestion of \"occurance\" is best.\n> >\n> > We do have precedent for one-based `index` in Postgres: array types are\n> > 1-indexed by default! \"Occurrence\" removes that ambiguity but it's long\n> > and easy to misspell (I looked it up after typing it just now and it\n> > _still_ feels off).\n> >\n> > How's \"instance\"?\n> >\n> > Presumably someone referencing arguments by name would have just looked up the names via \\df or whatever, so presumably misspelling wouldn't be a big issue. But I think \"instance\" is OK as well.\n> >\n> > --\n> > Jim Nasby, Data Architect, Austin TX\n>\n> regexp_instr: It has the syntax regexp_instr(string, pattern [, start\n> [, N [, endoption [, flags [, subexpr ]]]]])\n> oracle:\n> REGEXP_INSTR (source_char, pattern, [, position [, occurrence [,\n> return_opt [, match_param [, subexpr ]]]]] )\n>\n> \"string\" and \"source_char\" are almost the same descriptive, so maybe\n> there is no need to change.\n> \"start\" is better than \"position\", imho.\n> \"return_opt\" is better than \"endoption\", (maybe we need change, for\n> now I didn't)\n> \"flags\" cannot be changed to \"match_param\", given it quite everywhere\n> in functions-matching.html.\n>\n> similarly for function regexp_replace, oracle using \"repplace_string\",\n> we use \"replacement\"(mentioned in the doc).\n> so I don't think we need to change to \"repplace_string\".\n>\n> Based on how people google[0], I think `occurrence` is ok, even though\n> it's verbose.\n> to change from `N` to `occurrence`, we also need to change the doc,\n> that is why this patch is more larger.\n>\n>\n> [0]: https://www.google.com/search?q=regex+nth+match&oq=regex+nth+match&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIGCAEQRRg8MgYIAhBFGDzSAQc2MThqMGo5qAIAsAIA&sourceid=chrome&ie=UTF-8\n\nThe `regexp_replace` summary in table 9.10 is mismatched and still\nspecifies the first parameter name as `string` instead of `source`.\nSince all the other functions use `string`, should `regexp_replace` do\nthe same or is this a case where an established \"standard\" diverges?\n\nI noticed the original documentation for some of these functions is\nrather disorganized; summaries explain `occurrence` without explaining\nthe prior `start` parameter, and detailed documentation in 9.7 is\nusually a single paragraph per function running pell-mell through ifs\nand buts without section headings, so entries in table 9.10 have to\nreference the entire section 9.7.3 instead of their specific functions.\nIt's out of scope here, but should I bring this up on pgsql-docs?\n\n\n",
"msg_date": "Sun, 07 Jan 2024 19:44:55 -0500",
"msg_from": "\"Dian Fay\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Mon, Jan 8, 2024 at 8:44 AM Dian Fay <[email protected]> wrote:\n>\n> On Thu Jan 4, 2024 at 2:03 AM EST, jian he wrote:\n> > On Thu, Jan 4, 2024 at 7:26 AM Jim Nasby <[email protected]> wrote:\n> > >\n> > > On 1/3/24 5:05 PM, Dian Fay wrote:\n> > >\n> > > Another possibility is `index`, which is relatively short and not a\n> > > reserved keyword ^1. `position` is not as precise but would avoid the\n> > > conceptual overloading of ordinary indices.\n> > >\n> > > I'm not a fan of \"index\" since that leaves the question of\n> > > whether it's 0 or 1 based. \"Position\" is a bit better, but I think\n> > > Jian's suggestion of \"occurance\" is best.\n> > >\n> > > We do have precedent for one-based `index` in Postgres: array types are\n> > > 1-indexed by default! \"Occurrence\" removes that ambiguity but it's long\n> > > and easy to misspell (I looked it up after typing it just now and it\n> > > _still_ feels off).\n> > >\n> > > How's \"instance\"?\n> > >\n> > > Presumably someone referencing arguments by name would have just looked up the names via \\df or whatever, so presumably misspelling wouldn't be a big issue. But I think \"instance\" is OK as well.\n> > >\n> > > --\n> > > Jim Nasby, Data Architect, Austin TX\n> >\n> > regexp_instr: It has the syntax regexp_instr(string, pattern [, start\n> > [, N [, endoption [, flags [, subexpr ]]]]])\n> > oracle:\n> > REGEXP_INSTR (source_char, pattern, [, position [, occurrence [,\n> > return_opt [, match_param [, subexpr ]]]]] )\n> >\n> > \"string\" and \"source_char\" are almost the same descriptive, so maybe\n> > there is no need to change.\n> > \"start\" is better than \"position\", imho.\n> > \"return_opt\" is better than \"endoption\", (maybe we need change, for\n> > now I didn't)\n> > \"flags\" cannot be changed to \"match_param\", given it quite everywhere\n> > in functions-matching.html.\n> >\n> > similarly for function regexp_replace, oracle using \"repplace_string\",\n> > we use \"replacement\"(mentioned in the doc).\n> > so I don't think we need to change to \"repplace_string\".\n> >\n> > Based on how people google[0], I think `occurrence` is ok, even though\n> > it's verbose.\n> > to change from `N` to `occurrence`, we also need to change the doc,\n> > that is why this patch is more larger.\n> >\n> >\n> > [0]: https://www.google.com/search?q=regex+nth+match&oq=regex+nth+match&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIGCAEQRRg8MgYIAhBFGDzSAQc2MThqMGo5qAIAsAIA&sourceid=chrome&ie=UTF-8\n>\n> The `regexp_replace` summary in table 9.10 is mismatched and still\n> specifies the first parameter name as `string` instead of `source`.\n> Since all the other functions use `string`, should `regexp_replace` do\n> the same or is this a case where an established \"standard\" diverges?\n>\n\ngot it. Thanks for pointing it out.\n\nin functions-matching.html\nif I change <replaceable>source</replaceable> to\n<replaceable>string</replaceable> then\nthere are no markup \"string\" and markup \"string\", it's kind of\nslightly confusing.\n\nSo does the following refactored description of regexp_replace make sense:\n\n The <replaceable>string</replaceable> is returned unchanged if\n there is no match to the <replaceable>pattern</replaceable>. If there is a\n match, the <replaceable>string</replaceable> is returned with the\n <replaceable>replacement</replaceable> string substituted for the matching\n substring. 
The <replaceable>replacement</replaceable> string can contain\n <literal>\\</literal><replaceable>n</replaceable>, where\n<replaceable>n</replaceable> is 1\n through 9, to indicate that the source substring matching the\n <replaceable>n</replaceable>'th parenthesized subexpression of\nthe pattern should be\n inserted, and it can contain <literal>\\&</literal> to indicate that the\n substring matching the entire pattern should be inserted. Write\n <literal>\\\\</literal> if you need to put a literal backslash in\nthe replacement\n text.\n\n> I noticed the original documentation for some of these functions is\n> rather disorganized; summaries explain `occurrence` without explaining\n> the prior `start` parameter, and detailed documentation in 9.7 is\n> usually a single paragraph per function running pell-mell through ifs\n> and buts without section headings, so entries in table 9.10 have to\n> reference the entire section 9.7.3 instead of their specific functions.\n> It's out of scope here, but should I bring this up on pgsql-docs?\n\nI got it.\nin Table 9.10. Other String Functions and Operators, if we can\nreference the specific function would be great.\nAs for now, in the browser, you need to use Ctrl+F to find the\ndetailed explanation in 9.7.3.\nyou can just bring your suggested or patch to [email protected].\n\n\n",
"msg_date": "Mon, 8 Jan 2024 22:26:34 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Mon Jan 8, 2024 at 9:26 AM EST, jian he wrote:\n> On Mon, Jan 8, 2024 at 8:44 AM Dian Fay <[email protected]> wrote:\n> > The `regexp_replace` summary in table 9.10 is mismatched and still\n> > specifies the first parameter name as `string` instead of `source`.\n> > Since all the other functions use `string`, should `regexp_replace` do\n> > the same or is this a case where an established \"standard\" diverges?\n>\n> got it. Thanks for pointing it out.\n>\n> in functions-matching.html\n> if I change <replaceable>source</replaceable> to\n> <replaceable>string</replaceable> then\n> there are no markup \"string\" and markup \"string\", it's kind of\n> slightly confusing.\n>\n> So does the following refactored description of regexp_replace make sense:\n>\n> The <replaceable>string</replaceable> is returned unchanged if\n> there is no match to the <replaceable>pattern</replaceable>. If there is a\n> match, the <replaceable>string</replaceable> is returned with the\n> <replaceable>replacement</replaceable> string substituted for the matching\n> substring. The <replaceable>replacement</replaceable> string can contain\n> <literal>\\</literal><replaceable>n</replaceable>, where\n> <replaceable>n</replaceable> is 1\n> through 9, to indicate that the source substring matching the\n> <replaceable>n</replaceable>'th parenthesized subexpression of\n> the pattern should be\n> inserted, and it can contain <literal>\\&</literal> to indicate that the\n> substring matching the entire pattern should be inserted. Write\n> <literal>\\\\</literal> if you need to put a literal backslash in\n> the replacement\n> text.\n\nThat change makes sense to me! I'll see about the section refactoring\nafter this lands.\n\n\n",
"msg_date": "Mon, 08 Jan 2024 19:51:58 -0500",
"msg_from": "\"Dian Fay\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Tue, Jan 9, 2024 at 8:52 AM Dian Fay <[email protected]> wrote:\n>\n> On Mon Jan 8, 2024 at 9:26 AM EST, jian he wrote:\n> > On Mon, Jan 8, 2024 at 8:44 AM Dian Fay <[email protected]> wrote:\n> > > The `regexp_replace` summary in table 9.10 is mismatched and still\n> > > specifies the first parameter name as `string` instead of `source`.\n> > > Since all the other functions use `string`, should `regexp_replace` do\n> > > the same or is this a case where an established \"standard\" diverges?\n> >\n> > got it. Thanks for pointing it out.\n> >\n> > in functions-matching.html\n> > if I change <replaceable>source</replaceable> to\n> > <replaceable>string</replaceable> then\n> > there are no markup \"string\" and markup \"string\", it's kind of\n> > slightly confusing.\n> >\n> > So does the following refactored description of regexp_replace make sense:\n> >\n> > The <replaceable>string</replaceable> is returned unchanged if\n> > there is no match to the <replaceable>pattern</replaceable>. If there is a\n> > match, the <replaceable>string</replaceable> is returned with the\n> > <replaceable>replacement</replaceable> string substituted for the matching\n> > substring. The <replaceable>replacement</replaceable> string can contain\n> > <literal>\\</literal><replaceable>n</replaceable>, where\n> > <replaceable>n</replaceable> is 1\n> > through 9, to indicate that the source substring matching the\n> > <replaceable>n</replaceable>'th parenthesized subexpression of\n> > the pattern should be\n> > inserted, and it can contain <literal>\\&</literal> to indicate that the\n> > substring matching the entire pattern should be inserted. Write\n> > <literal>\\\\</literal> if you need to put a literal backslash in\n> > the replacement\n> > text.\n>\n> That change makes sense to me! I'll see about the section refactoring\n> after this lands.\n\nI put the changes into the new patch.",
"msg_date": "Wed, 10 Jan 2024 22:18:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed Jan 10, 2024 at 9:18 AM EST, jian he wrote:\n> On Tue, Jan 9, 2024 at 8:52 AM Dian Fay <[email protected]> wrote:\n> >\n> > On Mon Jan 8, 2024 at 9:26 AM EST, jian he wrote:\n> > > On Mon, Jan 8, 2024 at 8:44 AM Dian Fay <[email protected]> wrote:\n> > > > The `regexp_replace` summary in table 9.10 is mismatched and still\n> > > > specifies the first parameter name as `string` instead of `source`.\n> > > > Since all the other functions use `string`, should `regexp_replace` do\n> > > > the same or is this a case where an established \"standard\" diverges?\n> > >\n> > > got it. Thanks for pointing it out.\n> > >\n> > > in functions-matching.html\n> > > if I change <replaceable>source</replaceable> to\n> > > <replaceable>string</replaceable> then\n> > > there are no markup \"string\" and markup \"string\", it's kind of\n> > > slightly confusing.\n> > >\n> > > So does the following refactored description of regexp_replace make sense:\n> > >\n> > > The <replaceable>string</replaceable> is returned unchanged if\n> > > there is no match to the <replaceable>pattern</replaceable>. If there is a\n> > > match, the <replaceable>string</replaceable> is returned with the\n> > > <replaceable>replacement</replaceable> string substituted for the matching\n> > > substring. The <replaceable>replacement</replaceable> string can contain\n> > > <literal>\\</literal><replaceable>n</replaceable>, where\n> > > <replaceable>n</replaceable> is 1\n> > > through 9, to indicate that the source substring matching the\n> > > <replaceable>n</replaceable>'th parenthesized subexpression of\n> > > the pattern should be\n> > > inserted, and it can contain <literal>\\&</literal> to indicate that the\n> > > substring matching the entire pattern should be inserted. Write\n> > > <literal>\\\\</literal> if you need to put a literal backslash in\n> > > the replacement\n> > > text.\n> >\n> > That change makes sense to me! I'll see about the section refactoring\n> > after this lands.\n>\n> I put the changes into the new patch.\n\nSorry, I missed one minor issue with v2. The replacement on lines\n6027-6028 of func.sgml (originally \"`n` rows if there are `n` matches\")\nis not needed and could be more confusing since the `n` represents a\nnumber, not an argument to `regexp_matches`. I've built v3 and gone over\neverything else one more time and it looks good.\n\n\n",
"msg_date": "Wed, 10 Jan 2024 22:57:33 -0500",
"msg_from": "\"Dian Fay\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On 10.01.24 15:18, jian he wrote:\n> I put the changes into the new patch.\n\nReading back through the discussion, I wasn't quite able to interpret \nthe resolution regarding Oracle compatibility. From the patch, it looks \nlike you chose not to adopt the parameter names from Oracle. Was that \nyour intention?\n\n\n\n",
"msg_date": "Thu, 18 Jan 2024 09:17:11 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 4:17 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 10.01.24 15:18, jian he wrote:\n> > I put the changes into the new patch.\n>\n> Reading back through the discussion, I wasn't quite able to interpret\n> the resolution regarding Oracle compatibility. From the patch, it looks\n> like you chose not to adopt the parameter names from Oracle. Was that\n> your intention?\n>\n\nper committee message:\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=6424337073589476303b10f6d7cc74f501b8d9d7\nEven if the names are all the same, our function is still not the same\nas oracle.\n\nThere is a documentation bug.\nIn [0], Table 9.25. Regular Expression Functions Equivalencies\nregexp_replace function definition: regexp_replace(string, pattern, replacement)\n\nIn one of the <tip> section below, regexp_replace explains as\n<<<<<\nThe regexp_replace function provides substitution of new text for\nsubstrings that match POSIX regular expression patterns. It has the\nsyntax regexp_replace(source, pattern, replacement [, start [, N ]] [,\nflags ]). (Notice that N cannot be specified unless start is, but\nflags can be given in any case.)\n<<<<<\nSo I changed the first argument of regexp_replace to \"string\". So\naccordingly, the doc needs to change also, which I did.\n\nanother regex* function argument changes: from \"N\" to \"occurences\", example:\n + If <replaceable>occurrence</replaceable> is specified\n + then the <replaceable>occurrence</replaceable>'th match of the pattern\n + is located,\n\nbut [2] says\nSpeaking of the \"occurrence'th\noccurrence\" is just silly, not to mention long and easy to misspell.\"\n\nsummary:\nadding function-named notation is my intention.\nTo make regex.* functions named-notation works, we need to add\nproargnames to src/include/catalog/pg_proc.dat.\nadd proargnames also require changing the doc.\nnaming proargnames is a matter of taste now, So I only change 'N' to\n'occurrence'.\n\n[0] https://www.postgresql.org/docs/current/functions-matching.html\n[1] https://www.postgresql.org/message-id/flat/fc160ee0-c843-b024-29bb-97b5da61971f%40darold.net\n\n\n",
"msg_date": "Sat, 20 Jan 2024 10:55:41 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Sat, Jan 20, 2024 at 10:55 AM jian he <[email protected]> wrote:\n>\n>\n> another regex* function argument changes: from \"N\" to \"occurences\", example:\n> + If <replaceable>occurrence</replaceable> is specified\n> + then the <replaceable>occurrence</replaceable>'th match of the pattern\n> + is located,\n>\n> but [2] says\n> Speaking of the \"occurrence'th\n> occurrence\" is just silly, not to mention long and easy to misspell.\"\n>\n\nsorry.\n[2], The reference link is\nhttps://www.postgresql.org/message-id/1567465.1627860115%40sss.pgh.pa.us\n\nmy previous post will link to the whole thread.\n\n\n",
"msg_date": "Sat, 20 Jan 2024 11:03:02 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "jian he <[email protected]> writes:\n> On Thu, Jan 18, 2024 at 4:17 PM Peter Eisentraut <[email protected]> wrote:\n>> Reading back through the discussion, I wasn't quite able to interpret\n>> the resolution regarding Oracle compatibility. From the patch, it looks\n>> like you chose not to adopt the parameter names from Oracle. Was that\n>> your intention?\n\n> per committee message:\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=6424337073589476303b10f6d7cc74f501b8d9d7\n> Even if the names are all the same, our function is still not the same\n> as oracle.\n\nThe fact that there's minor discrepancies in the regex languages\ndoesn't seem to me to have a lot of bearing on whether we should\nfollow Oracle's choices of parameter names.\n\nHowever, if we do follow Oracle, it seems like we should do that\nconsistently, which this patch doesn't. For instance, per [1]\nOracle calls the arguments of regex_substr\n\n\tsource_char,\n\tpattern,\n\tposition,\n\toccurrence,\n\tmatch_param,\n\tsubexpr\n\nwhile we have\n\n\tstring,\n\tpattern,\n\tstart,\n\tN,\n\tflags,\n\tsubexpr\n\nThe patch proposes to replace \"N\" with \"occurrence\" but not touch\nthe other discrepancies, which seems to me to be a pretty poor\nchoice. \"occurrence\" is very long and difficult to spell correctly,\nand if you're not following Oracle slavishly, exactly what is the\nargument in its favor? I quite agree that Oracle's other choices\naren't improvements over ours, but neither is that one.\n\nOn the whole my inclination would be to stick to the names we have\nin the documentation. There might be an argument for changing \"N\"\nto something lower-case so you don't have to quote it; but if we do,\nI'd go for, say, \"count\".\n\n\t\t\tregards, tom lane\n\n[1] https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/REGEXP_SUBSTR.html#GUID-2903904D-455F-4839-A8B2-1731EF4BD099\n\n\n",
"msg_date": "Tue, 02 Apr 2024 16:45:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 4:45 AM Tom Lane <[email protected]> wrote:\n>\n> jian he <[email protected]> writes:\n> > On Thu, Jan 18, 2024 at 4:17 PM Peter Eisentraut <[email protected]> wrote:\n> >> Reading back through the discussion, I wasn't quite able to interpret\n> >> the resolution regarding Oracle compatibility. From the patch, it looks\n> >> like you chose not to adopt the parameter names from Oracle. Was that\n> >> your intention?\n>\n> > per committee message:\n> > https://git.postgresql.org/cgit/postgresql.git/commit/?id=6424337073589476303b10f6d7cc74f501b8d9d7\n> > Even if the names are all the same, our function is still not the same\n> > as oracle.\n>\n> The fact that there's minor discrepancies in the regex languages\n> doesn't seem to me to have a lot of bearing on whether we should\n> follow Oracle's choices of parameter names.\n>\n> However, if we do follow Oracle, it seems like we should do that\n> consistently, which this patch doesn't. For instance, per [1]\n> Oracle calls the arguments of regex_substr\n>\n> source_char,\n> pattern,\n> position,\n> occurrence,\n> match_param,\n> subexpr\n>\n> while we have\n>\n> string,\n> pattern,\n> start,\n> N,\n> flags,\n> subexpr\n>\n> The patch proposes to replace \"N\" with \"occurrence\" but not touch\n> the other discrepancies, which seems to me to be a pretty poor\n> choice. \"occurrence\" is very long and difficult to spell correctly,\n> and if you're not following Oracle slavishly, exactly what is the\n> argument in its favor? I quite agree that Oracle's other choices\n> aren't improvements over ours, but neither is that one.\n>\n> On the whole my inclination would be to stick to the names we have\n> in the documentation. There might be an argument for changing \"N\"\n> to something lower-case so you don't have to quote it; but if we do,\n> I'd go for, say, \"count\".\n>\n\nwe have\n---------------------------------------------------------------\nThe replacement string can contain \\n, where n is 1 through 9, to\nindicate that the source substring matching the n'th parenthesized\nsubexpression of the pattern should be inserted, and it can contain \\&\nto indicate that the substring matching the entire pattern should be\ninserted.\n----------------------------------------------------------------------------\nin the regexp_replace explanation section.\nchanging \"N\" to lower-case would be misleading for regexp_replace?\nso I choose \"count\".\n\nBy the way, I think the above is so hard to comprehend.\nI can only find related test in src/test/regress/sql/strings.sql are:\nSELECT regexp_replace('1112223333', E'(\\\\d{3})(\\\\d{3})(\\\\d{4})',\nE'(\\\\1) \\\\2-\\\\3');\nSELECT regexp_replace('foobarrbazz', E'(.)\\\\1', E'X\\\\&Y', 'g');\nSELECT regexp_replace('foobarrbazz', E'(.)\\\\1', E'X\\\\\\\\Y', 'g');\n\nbut these tests seem not friendly.\nmaybe we should have some simple examples to demonstrate the above paragraph.",
"msg_date": "Thu, 4 Apr 2024 21:54:53 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 9:55 AM jian he <[email protected]> wrote:\n> in the regexp_replace explanation section.\n> changing \"N\" to lower-case would be misleading for regexp_replace?\n> so I choose \"count\".\n\nI don't see why that would be confusing for regexp_replace\nspecifically, but I think N => count is a reasonable change to make.\nHowever, I don't think this quite works:\n\n+ then the <replaceable>count</replaceable>'th match of the pattern\n\nAn English speaker is more likely to understand what is meant by\n\"N'th\" than what is meant by \"count'th\". Even if they can guess, it's\nkinda strange-looking. I think it needs to be rephrased somehow, but\nI'm not sure exactly how.\n\n> By the way, I think the above is so hard to comprehend.\n> I can only find related test in src/test/regress/sql/strings.sql are:\n> SELECT regexp_replace('1112223333', E'(\\\\d{3})(\\\\d{3})(\\\\d{4})',\n> E'(\\\\1) \\\\2-\\\\3');\n> SELECT regexp_replace('foobarrbazz', E'(.)\\\\1', E'X\\\\&Y', 'g');\n> SELECT regexp_replace('foobarrbazz', E'(.)\\\\1', E'X\\\\\\\\Y', 'g');\n>\n> but these tests seem not friendly.\n> maybe we should have some simple examples to demonstrate the above paragraph.\n\nExamples in the regression tests aren't meant as tests, not examples\nfor users to copy. If we want examples, those belong in the\ndocumentation. However, I see that regexp_replace already has some\nexamples at https://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-REGEXP\nso I'm not sure exactly what you think should be added.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 14:46:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, May 15, 2024 at 2:46 PM Robert Haas <[email protected]> wrote:\n> Examples in the regression tests aren't meant as tests, not examples\n> for users to copy. If we want examples, those belong in the\n> documentation. However, I see that regexp_replace already has some\n> examples at https://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-REGEXP\n> so I'm not sure exactly what you think should be added.\n\nWoops. I should have said: Examples in the regression tests *are*\nmeant as tests...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 14:50:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, May 15, 2024 at 11:46 AM Robert Haas <[email protected]> wrote:\n\n> On Thu, Apr 4, 2024 at 9:55 AM jian he <[email protected]>\n> wrote:\n> > in the regexp_replace explanation section.\n> > changing \"N\" to lower-case would be misleading for regexp_replace?\n> > so I choose \"count\".\n>\n> I don't see why that would be confusing for regexp_replace\n> specifically, but I think N => count is a reasonable change to make.\n> However, I don't think this quite works:\n>\n> + then the <replaceable>count</replaceable>'th match of the pattern\n>\n> An English speaker is more likely to understand what is meant by\n> \"N'th\" than what is meant by \"count'th\". Even if they can guess, it's\n> kinda strange-looking. I think it needs to be rephrased somehow, but\n> I'm not sure exactly how.\n>\n>\nI think this confusion goes to show that replacing N with count doesn't\nwork.\n\n\"replace_at\" comes to mind as a better name.\n\nBy default, only the first match of the pattern is replaced. If replace_at\nis specified and greater than zero, then the first \"replace_at - 1\" matches\nare skipped before making a single replacement (i.e., the g flag is ignored\nwhen replace_at is specified.)\n\nDavid J.\n\nOn Wed, May 15, 2024 at 11:46 AM Robert Haas <[email protected]> wrote:On Thu, Apr 4, 2024 at 9:55 AM jian he <[email protected]> wrote:\n> in the regexp_replace explanation section.\n> changing \"N\" to lower-case would be misleading for regexp_replace?\n> so I choose \"count\".\n\nI don't see why that would be confusing for regexp_replace\nspecifically, but I think N => count is a reasonable change to make.\nHowever, I don't think this quite works:\n\n+ then the <replaceable>count</replaceable>'th match of the pattern\n\nAn English speaker is more likely to understand what is meant by\n\"N'th\" than what is meant by \"count'th\". Even if they can guess, it's\nkinda strange-looking. I think it needs to be rephrased somehow, but\nI'm not sure exactly how.I think this confusion goes to show that replacing N with count doesn't work.\"replace_at\" comes to mind as a better name.By default, only the first match of the pattern is replaced. If replace_at is specified and greater than zero, then the first \"replace_at - 1\" matches are skipped before making a single replacement (i.e., the g flag is ignored when replace_at is specified.)David J.",
"msg_date": "Wed, 15 May 2024 12:01:18 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, May 15, 2024 at 3:01 PM David G. Johnston\n<[email protected]> wrote:\n> I think this confusion goes to show that replacing N with count doesn't work.\n>\n> \"replace_at\" comes to mind as a better name.\n\nI do not agree with that at all. It shows that a literal\nsearch-and-replace changing N to count does not work, but it does not\nshow that count is a bad name for the concept, and I don't think it\nis. I believe that if I were reading the documentation, count would be\nclearer to me than N, N would probably still be clear enough, and\nreplace_at wouldn't be clear at all. I'd expect replace_at to be a\ncharacter position or something, not an occurrence count.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 15:07:15 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Apr 4, 2024 at 9:55 AM jian he <[email protected]> wrote:\n>> changing \"N\" to lower-case would be misleading for regexp_replace?\n>> so I choose \"count\".\n\n> I don't see why that would be confusing for regexp_replace\n> specifically, but I think N => count is a reasonable change to make.\n> However, I don't think this quite works:\n> + then the <replaceable>count</replaceable>'th match of the pattern\n\nI think the origin of the problem here is not wanting to use \"N\"\nas the actual name of the parameter, because then users would have\nto double-quote it to write \"regexp_replace(..., \"N\" => 42, ...)\".\n\nHowever ... is that really so awful? It's still fewer keystrokes\nthan \"count\". It's certainly a potential gotcha for users who've\nnot internalized when they need double quotes, but I think we\ncould largely address that problem just by making sure to provide\na documentation example that shows use of \"N\".\n\n> An English speaker is more likely to understand what is meant by\n> \"N'th\" than what is meant by \"count'th\".\n\n+1 ... none of the proposals make that bit read more clearly\nthan it does now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2024 15:10:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On 05/15/24 15:07, Robert Haas wrote:\n> is. I believe that if I were reading the documentation, count would be\n> clearer to me than N, N would probably still be clear enough, and\n> replace_at wouldn't be clear at all. I'd expect replace_at to be a\n> character position or something, not an occurrence count.\n\nYou've said the magic word. In the analogous (but XQuery-based)\nISO standard regex functions, the argument that does that is identified\nwith the keyword OCCURRENCE.\n\nWhat would be wrong with that, for consistency's sake?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 15 May 2024 15:22:59 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, May 15, 2024 at 12:07 PM Robert Haas <[email protected]> wrote:\n\n> On Wed, May 15, 2024 at 3:01 PM David G. Johnston\n> <[email protected]> wrote:\n> > I think this confusion goes to show that replacing N with count doesn't\n> work.\n> >\n> > \"replace_at\" comes to mind as a better name.\n>\n> I do not agree with that at all. It shows that a literal\n> search-and-replace changing N to count does not work, but it does not\n> show that count is a bad name for the concept, and I don't think it\n> is. I believe that if I were reading the documentation, count would be\n> clearer to me than N, N would probably still be clear enough, and\n> replace_at wouldn't be clear at all. I'd expect replace_at to be a\n> character position or something, not an occurrence count.\n>\n>\nThe function replaces matches, not random characters. And if you are\nreading the documentation I find it implausible that the wording I\nsuggested would cause one to think in terms of characters instead of\nmatches.\n\nIf I choose not to read the documentation \"count\" seems like it behaves as\na qualified \"g\". I don't want all matches replaced, I want the first\n\"count\" matches only replaced.\n\n\"occurrence\" probably is the best choice but I agree the spelling issues\nare a big negative.\n\ncount - how many things there are. This isn't a count. I'd rather stick\nwith N, at least it actually has the desired meaning as a pointer to an\nitem in a list.\n\nN - The label provides zero context as to what the number you place there\nis going to be used for. Labels ideally do more work than this especially\nif someone takes the time to spell them out. Otherwise why use \"pattern\"\ninstead of \"p\".\n\nDavid J.\n\nOn Wed, May 15, 2024 at 12:07 PM Robert Haas <[email protected]> wrote:On Wed, May 15, 2024 at 3:01 PM David G. Johnston\n<[email protected]> wrote:\n> I think this confusion goes to show that replacing N with count doesn't work.\n>\n> \"replace_at\" comes to mind as a better name.\n\nI do not agree with that at all. It shows that a literal\nsearch-and-replace changing N to count does not work, but it does not\nshow that count is a bad name for the concept, and I don't think it\nis. I believe that if I were reading the documentation, count would be\nclearer to me than N, N would probably still be clear enough, and\nreplace_at wouldn't be clear at all. I'd expect replace_at to be a\ncharacter position or something, not an occurrence count.The function replaces matches, not random characters. And if you are reading the documentation I find it implausible that the wording I suggested would cause one to think in terms of characters instead of matches.If I choose not to read the documentation \"count\" seems like it behaves as a qualified \"g\". I don't want all matches replaced, I want the first \"count\" matches only replaced.\"occurrence\" probably is the best choice but I agree the spelling issues are a big negative.count - how many things there are. This isn't a count. I'd rather stick with N, at least it actually has the desired meaning as a pointer to an item in a list.N - The label provides zero context as to what the number you place there is going to be used for. Labels ideally do more work than this especially if someone takes the time to spell them out. Otherwise why use \"pattern\" instead of \"p\".David J.",
"msg_date": "Wed, 15 May 2024 12:24:22 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, May 15, 2024 at 3:23 PM Chapman Flack <[email protected]> wrote:\n> What would be wrong with that, for consistency's sake?\n\nIt was proposed and rejected upthread, but that's not to say that I\nnecessarily endorse the reasons given.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 15:31:56 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, May 15, 2024 at 3:25 PM David G. Johnston\n<[email protected]> wrote:\n> The function replaces matches, not random characters. And if you are reading the documentation I find it implausible that the wording I suggested would cause one to think in terms of characters instead of matches.\n\nI mean I just told you what my reaction to it was. If you find that\nreaction \"implausible\" then I guess you think I was lying when I said\nthat?\n\n> N - The label provides zero context as to what the number you place there is going to be used for. Labels ideally do more work than this especially if someone takes the time to spell them out. Otherwise why use \"pattern\" instead of \"p\".\n\nI feel like you're attacking a straw man here. I never said that N was\nmy first choice; in fact, I said the opposite. But I do think that if\nthe documentation says, as it does, that the function is\nregexp_replace(source, pattern, replacement, start, N, flags), a\nreader who has some idea what a function called regexp_replace might\ndo will probably be able to guess what N is. It's probably also true\nthat if we changed \"pattern\" to \"p\" they would still be able to guess\nthat too, because there's nothing other than a pattern that you'd\nexpect to pass to a regexp-replacement function that starts with p,\nbut it would still be worse than what we have now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 15:52:42 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, May 15, 2024 at 12:52 PM Robert Haas <[email protected]> wrote:\n\n> On Wed, May 15, 2024 at 3:25 PM David G. Johnston\n> <[email protected]> wrote:\n> > The function replaces matches, not random characters. And if you are\n> reading the documentation I find it implausible that the wording I\n> suggested would cause one to think in terms of characters instead of\n> matches.\n>\n> I mean I just told you what my reaction to it was. If you find that\n> reaction \"implausible\" then I guess you think I was lying when I said\n> that?\n>\n>\nYou just broke my brain when you say that you read:\n\nBy default, only the first match of the pattern is replaced. If replace_at\nis specified and greater than zero, then the first \"replace_at - 1\" matches\nare skipped before making a single replacement (i.e., the g flag is ignored\nwhen replace_at is specified.)\n\nAnd then say:\n\nI'd expect replace_at to be a character position or something, not an\noccurrence count.\n\nI guess it isn't a claim you are lying, rather I simply don't follow your\nmental model of all this and in my mental model behind the proposal I don't\nbelieve the typical reader will become confused on that point. I guess\nthat means I don't find you to be the typical reader, at least so far as\nthis specific topic goes. But hey, maybe I'm the one in the minority. In\neither case we disagree and that was my main point.\n\nDavid J.\n\nOn Wed, May 15, 2024 at 12:52 PM Robert Haas <[email protected]> wrote:On Wed, May 15, 2024 at 3:25 PM David G. Johnston\n<[email protected]> wrote:\n> The function replaces matches, not random characters. And if you are reading the documentation I find it implausible that the wording I suggested would cause one to think in terms of characters instead of matches.\n\nI mean I just told you what my reaction to it was. If you find that\nreaction \"implausible\" then I guess you think I was lying when I said\nthat?You just broke my brain when you say that you read:By default, only the first match of the pattern is replaced. If replace_at is specified and greater than zero, then the first \"replace_at - 1\" matches are skipped before making a single replacement (i.e., the g flag is ignored when replace_at is specified.)And then say:I'd expect replace_at to be a character position or something, not an occurrence count.I guess it isn't a claim you are lying, rather I simply don't follow your mental model of all this and in my mental model behind the proposal I don't believe the typical reader will become confused on that point. I guess that means I don't find you to be the typical reader, at least so far as this specific topic goes. But hey, maybe I'm the one in the minority. In either case we disagree and that was my main point.David J.",
"msg_date": "Wed, 15 May 2024 13:12:57 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, May 15, 2024 at 12:07 PM Robert Haas <[email protected]> wrote:\n\n> On Wed, May 15, 2024 at 3:01 PM David G. Johnston\n> <[email protected]> wrote:\n> > I think this confusion goes to show that replacing N with count doesn't\n> work.\n> >\n> > \"replace_at\" comes to mind as a better name.\n> I'd expect replace_at to be a\n> character position or something, not an occurrence count.\n>\n>\nI'll amend the name to: \"replace_match\"\n\nI do now see that since the immediately preceding parameter, \"start\", deals\nwith characters instead of matches that making it clear this parameter\ndeals in matches in the name work. The singular 'match' has all the same\nbenefits as 'at' plus this point of clarity.\n\n\nDavid J.\n\nOn Wed, May 15, 2024 at 12:07 PM Robert Haas <[email protected]> wrote:On Wed, May 15, 2024 at 3:01 PM David G. Johnston\n<[email protected]> wrote:\n> I think this confusion goes to show that replacing N with count doesn't work.\n>\n> \"replace_at\" comes to mind as a better name.I'd expect replace_at to be a\ncharacter position or something, not an occurrence count.I'll amend the name to: \"replace_match\"I do now see that since the immediately preceding parameter, \"start\", deals with characters instead of matches that making it clear this parameter deals in matches in the name work. The singular 'match' has all the same benefits as 'at' plus this point of clarity.David J.",
"msg_date": "Wed, 15 May 2024 13:17:57 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, May 15, 2024 at 4:13 PM David G. Johnston\n<[email protected]> wrote:\n>\n> You just broke my brain when you say that you read:\n>\n> By default, only the first match of the pattern is replaced. If replace_at is specified and greater than zero, then the first \"replace_at - 1\" matches are skipped before making a single replacement (i.e., the g flag is ignored when replace_at is specified.)\n>\n> And then say:\n>\n> I'd expect replace_at to be a character position or something, not an occurrence count.\n\nAh. What I meant was: if I just saw the parameter name, and not the\ndocumentation, I believe that I would not correctly understand what it\ndid. I would have had to read the docs. Whereas I'm pretty sure at\nsome point years ago, I looked up these functions and I saw \"N\", and I\ndid understand what that did without needing it explained. If I had\nseen \"count\" or \"occurrence\" I think I would have understood that\nwithout further explanation, too.\n\nSo my point was: to me, N is more self-documenting than replace_at,\nand less self-documenting than count or occurrence.\n\nIf your mileage varies on that point, so be it!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 16:18:53 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, May 15, 2024 at 1:19 PM Robert Haas <[email protected]> wrote:\n\n>\n> So my point was: to me, N is more self-documenting than replace_at,\n> and less self-documenting than count or occurrence.\n>\n> If your mileage varies on that point, so be it!\n>\n>\nMaybe just \"match\" instead of \"replace_match\".\n\nReading this it strikes me that any of these parameter names can and\nprobably should be read as having \"replace\" in front of them:\n\nreplace N\nreplace count\nreplace occurrence\nreplace match\n\nSaying replace becomes redundant:\nreplace replace at\nreplace replace match\n\nDavid J.\n\nOn Wed, May 15, 2024 at 1:19 PM Robert Haas <[email protected]> wrote:\nSo my point was: to me, N is more self-documenting than replace_at,\nand less self-documenting than count or occurrence.\n\nIf your mileage varies on that point, so be it!Maybe just \"match\" instead of \"replace_match\".Reading this it strikes me that any of these parameter names can and probably should be read as having \"replace\" in front of them:replace Nreplace countreplace occurrencereplace matchSaying replace becomes redundant:replace replace atreplace replace matchDavid J.",
"msg_date": "Wed, 15 May 2024 13:24:40 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On 05/15/24 15:31, Robert Haas wrote:\n> On Wed, May 15, 2024 at 3:23 PM Chapman Flack <[email protected]> wrote:\n>> What would be wrong with [occurrence], for consistency's sake?\n> \n> It was proposed and rejected upthread, but that's not to say that I\n> necessarily endorse the reasons given.\n\nApologies for not having read far enough up the thread before replying.\n\nHaving done so now, I guess I'd just offer one small point: the upthread\ndiscussion did mention that 'occurrence' was used by Oracle, and asked\n\"if you're not following Oracle slavishly, exactly what is the argument\nin its favor?\".\n\nNothing else upthread seems to have mentioned that OCCURRENCE is the\nexact keyword used in ISO SQL for the analogous argument in analogous\nfunctions. Maybe that won't have any effect on the outcome either, but\nit does seem worth getting into the thread.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 15 May 2024 16:28:35 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Wed, May 15, 2024 at 4:25 PM David G. Johnston\n<[email protected]> wrote:\n> Maybe just \"match\" instead of \"replace_match\".\n\nWell, this is just turning into a bikeshedding exercise at this point.\nWe can generate names for this parameter all day long, but a bunch of\nnames none of which gets more than one vote is not really helping\nanything.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 16:48:12 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Thu, May 16, 2024 at 3:10 AM Tom Lane <[email protected]> wrote:\n>\n> Robert Haas <[email protected]> writes:\n> > On Thu, Apr 4, 2024 at 9:55 AM jian he <[email protected]> wrote:\n> >> changing \"N\" to lower-case would be misleading for regexp_replace?\n> >> so I choose \"count\".\n>\n> > I don't see why that would be confusing for regexp_replace\n> > specifically, but I think N => count is a reasonable change to make.\n> > However, I don't think this quite works:\n> > + then the <replaceable>count</replaceable>'th match of the pattern\n>\n> I think the origin of the problem here is not wanting to use \"N\"\n> as the actual name of the parameter, because then users would have\n> to double-quote it to write \"regexp_replace(..., \"N\" => 42, ...)\".\n>\n> However ... is that really so awful? It's still fewer keystrokes\n> than \"count\". It's certainly a potential gotcha for users who've\n> not internalized when they need double quotes, but I think we\n> could largely address that problem just by making sure to provide\n> a documentation example that shows use of \"N\".\n\ndone it this way. patch attached.\n\nlast example from\n\nregexp_replace('A PostgreSQL function', 'a|e|i|o|u', 'X', 1, 3, 'i')\n A PostgrXSQL function\n\nchange to\n\nregexp_replace(string=>'A PostgreSQL function', pattern=>'a|e|i|o|u',\nreplacement=>'X',start=>1, \"N\"=>3, flags=>'i');\n A PostgrXSQL function\n\nbut I am not 100% sure\n <lineannotation>A PostgrXSQL\nfunction</lineannotation>\nis in the right position.\n\n\nalso address Chapman Flack point:\ncorrect me if i am wrong, but i don't think the ISO standard mandates\nfunction argument names.\nSo we can choose the best function argument name for our purpose?",
"msg_date": "Mon, 15 Jul 2024 20:02:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On 07/15/24 08:02, jian he wrote:\n> also address Chapman Flack point:\n> correct me if i am wrong, but i don't think the ISO standard mandates\n> function argument names.\n> So we can choose the best function argument name for our purpose?\n\nAh, I may have mistaken which functions the patch meant to apply to.\n\nThese being the non-ISO regexp_* functions using POSIX expressions,\nthe ISO standard indeed says nothing about them.\n\nIn the ISO standard *_regex \"functions\", there are not really \"function\nargument names\" mandated, because, like so many things in ISO SQL, they\nhave their own special syntax instead of being generic function calls:\n\nTRANSLATE_REGEX('a|e|i|o|u' FLAG 'i' IN 'A PostgreSQL function'\n WITH 'X' FROM 1 OCCURRENCE 3);\n\nAny choice to use similar argument names in the regexp_* functions would\nbe a matter of consistency with the analogous ISO functions, not anything\nmandated.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 15 Jul 2024 10:46:06 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On 07/15/24 10:46, Chapman Flack wrote:\n> Ah, I may have mistaken which functions the patch meant to apply to.\n> ...\n> Any choice to use similar argument names in the regexp_* functions would\n> be a matter of consistency with the analogous ISO functions, not anything\n> mandated.\n\nOr, looking back, I might have realized these were the non-ISO regexp_*\nfunctions, but seen there was bikeshedding happening over the best name\nto use for the occurrence argument, and merely suggested ISO's choice\nOCCURRENCE for the analogous ISO functions, as a possible bikeshed\naccelerator.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 15 Jul 2024 10:54:55 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "jian he <[email protected]> writes:\n> [ v5-0001-add-regex-functions-argument-names-to-pg_proc.patch ]\n\nI'm not sure whether we've bikeshedded this to death yet, but\npersonally I'm content with the naming choices here (which basically\nare those already shown in table 9.10). However, while looking\nat the patch I noticed a couple of issues, one small, the other\na bit bigger.\n\nThe small issue is that table 9.10 offers this syntax diagram\nfor regexp_replace:\n\nregexp_replace ( string text, pattern text, replacement text [, start integer ] [, flags text ] ) → text\n\nThis implies that it's valid to write\n\n\tregexp_replace (string, pattern, replacement, start, flags)\n\nbut it is not: we have no function matching that signature. I'm not\nin a hurry to add one, either, for fear of ambiguity against the other\nregexp_replace signature. I think this needs to be broken into two\nsyntax diagrams:\n\nregexp_replace ( string text, pattern text, replacement text [, start integer ] ) → text\nregexp_replace ( string text, pattern text, replacement text [, flags text ] ) → text\n\nThe larger issue is that contrib/citext offers versions of some of\nthese functions that are meant to be drop-in replacements using\ncitext input. Hence, we need to add the same parameter names to\nthose functions, or they'll fail to replace some calls.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2024 17:48:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 5:48 AM Tom Lane <[email protected]> wrote:\n>\n> jian he <[email protected]> writes:\n> > [ v5-0001-add-regex-functions-argument-names-to-pg_proc.patch ]\n>\n> I'm not sure whether we've bikeshedded this to death yet, but\n> personally I'm content with the naming choices here (which basically\n> are those already shown in table 9.10). However, while looking\n> at the patch I noticed a couple of issues, one small, the other\n> a bit bigger.\n>\n> The small issue is that table 9.10 offers this syntax diagram\n> for regexp_replace:\n>\n> regexp_replace ( string text, pattern text, replacement text [, start integer ] [, flags text ] ) → text\n>\n> This implies that it's valid to write\n>\n> regexp_replace (string, pattern, replacement, start, flags)\n>\n> but it is not: we have no function matching that signature. I'm not\n> in a hurry to add one, either, for fear of ambiguity against the other\n> regexp_replace signature. I think this needs to be broken into two\n> syntax diagrams:\n>\n> regexp_replace ( string text, pattern text, replacement text [, start integer ] ) → text\n> regexp_replace ( string text, pattern text, replacement text [, flags text ] ) → text\n\n\nWe can list them separately.\nregexp_replace(string, pattern, replacement [, start])\nregexp_replace(string, pattern, replacement [, flags])\nregexp_replace(string, pattern, replacement , start , N [, flags ]).\n\nif both optional is not there then they are the same, list 2 potential\nidentical functions separately seems wrong?\nso i choose 2 bracket with a vertical bar:\n\nregexp_replace(string, pattern, replacement [[, start] | [, flags]]).\n\nmaybe less readable.\n\n\n> The larger issue is that contrib/citext offers versions of some of\n> these functions that are meant to be drop-in replacements using\n> citext input. Hence, we need to add the same parameter names to\n> those functions, or they'll fail to replace some calls.\n>\n\nI first wanted to use alterfunction solve this, then found out it cannot,\nlater I found out CREATE OR REPLACE FUNCTION saved us.\n\ncitext module, these functions:\nregexp_match()\nregexp_matches()\nregexp_replace()\nregexp_split_to_array()\nregexp_split_to_table()\nwere created in contrib/citext/citext--1.4.sql, we can add the CREATE\nOR REPLACE FUNCTION to 1.4.sql.\nbut to avoid unintended consequences I just add these to the newly\ncreated file citext--1.6--1.7.sql,\nto make a version bump.",
"msg_date": "Fri, 19 Jul 2024 13:51:39 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "jian he <[email protected]> writes:\n> On Fri, Jul 19, 2024 at 5:48 AM Tom Lane <[email protected]> wrote:\n>> The larger issue is that contrib/citext offers versions of some of\n>> these functions that are meant to be drop-in replacements using\n>> citext input. Hence, we need to add the same parameter names to\n>> those functions, or they'll fail to replace some calls.\n\n> citext module, these functions:\n> regexp_match()\n> regexp_matches()\n> regexp_replace()\n> regexp_split_to_array()\n> regexp_split_to_table()\n> were created in contrib/citext/citext--1.4.sql, we can add the CREATE\n> OR REPLACE FUNCTION to 1.4.sql.\n> but to avoid unintended consequences I just add these to the newly\n> created file citext--1.6--1.7.sql,\n> to make a version bump.\n\nYes. You *have to* do it like that, the shortcut is not an option,\nbecause without an extension update script there is no way to\nupgrade an existing installation to the new definition. Basically,\nonce we ship a given release of an extension, that script is frozen\nin amber.\n\nI haven't heard any further bikeshedding on the argument names,\nso I'll move forward with committing this soon.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2024 14:51:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On 15.07.24 16:52, Chapman Flack wrote:\n> On 07/15/24 10:46, Chapman Flack wrote:\n>> Ah, I may have mistaken which functions the patch meant to apply to.\n>> ...\n>> Any choice to use similar argument names in the regexp_* functions would\n>> be a matter of consistency with the analogous ISO functions, not anything\n>> mandated.\n> \n> Or, looking back, I might have realized these were the non-ISO regexp_*\n> functions, but seen there was bikeshedding happening over the best name\n> to use for the occurrence argument, and merely suggested ISO's choice\n> OCCURRENCE for the analogous ISO functions, as a possible bikeshed\n> accelerator.\n\nThese functions were copied from Oracle, so one argument was to use the \nnames from Oracle as-is.\n\n\n\n",
"msg_date": "Thu, 25 Jul 2024 15:42:29 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "I wrote:\n> I haven't heard any further bikeshedding on the argument names,\n> so I'll move forward with committing this soon.\n\nPushed, after a little further fooling with the documentation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jul 2024 14:53:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 5:48 AM Tom Lane <[email protected]> wrote:\n>\n> The small issue is that table 9.10 offers this syntax diagram\n> for regexp_replace:\n>\n> regexp_replace ( string text, pattern text, replacement text [, start integer ] [, flags text ] ) → text\n>\n> This implies that it's valid to write\n>\n> regexp_replace (string, pattern, replacement, start, flags)\n>\n> but it is not: we have no function matching that signature. I'm not\n> in a hurry to add one, either, for fear of ambiguity against the other\n> regexp_replace signature. I think this needs to be broken into two\n> syntax diagrams:\n>\n> regexp_replace ( string text, pattern text, replacement text [, start integer ] ) → text\n> regexp_replace ( string text, pattern text, replacement text [, flags text ] ) → text\n>\n\nthis problem is still there, after commit\n580f8727ca93b7b9a2ce49746b9cdbcb0a2b4a7e.\n\n<<\nIt has the syntax regexp_replace(string, pattern, replacement [, start\n[, N ]] [, flags ]). (Notice that N cannot be specified unless start\nis, but flags can be given in any case.)\n<<\ndoc, the above part still needs change?\n\nsee my posts:\nhttps://postgr.es/m/CACJufxE5p4KhGyBUwCZCxhxdU%2BzJBXy2deX4u85SL%2Bkew4F7Cw%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 26 Jul 2024 21:45:58 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "jian he <[email protected]> writes:\n> On Fri, Jul 19, 2024 at 5:48 AM Tom Lane <[email protected]> wrote:\n>> but it is not: we have no function matching that signature. I'm not\n>> in a hurry to add one, either, for fear of ambiguity against the other\n>> regexp_replace signature. I think this needs to be broken into two\n>> syntax diagrams:\n\n> this problem is still there, after commit\n> 580f8727ca93b7b9a2ce49746b9cdbcb0a2b4a7e.\n\nNo, I believe I fixed it: the table now offers\n\nregexp_replace ( string text, pattern text, replacement text [, flags text ] ) → text\n\nregexp_replace ( string text, pattern text, replacement text, start integer [, N integer [, flags text ] ] ) → text\n\nThat's different from either of the solutions discussed in this\nthread, but simpler.\n\n> <<\n> It has the syntax regexp_replace(string, pattern, replacement [, start\n> [, N ]] [, flags ]). (Notice that N cannot be specified unless start\n> is, but flags can be given in any case.)\n> <<\n> doc, the above part still needs change?\n\nAFAICS, that one is correct, so I left it alone. (I didn't try to\nmerge the table's two entries into one like that, though.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Jul 2024 10:17:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 10:17 PM Tom Lane <[email protected]> wrote:\n>\n> > <<\n> > It has the syntax regexp_replace(string, pattern, replacement [, start\n> > [, N ]] [, flags ]). (Notice that N cannot be specified unless start\n> > is, but flags can be given in any case.)\n> > <<\n> > doc, the above part still needs change?\n>\n> AFAICS, that one is correct, so I left it alone. (I didn't try to\n> merge the table's two entries into one like that, though.)\n>\n\nfunctions-string.html output is correct.\n\nbut in functions-matching.html\n\nregexp_replace(string, pattern, replacement [, start [, N ]] [, flags ]).\n\ncan represent\n\nregexp_replace(string, pattern, replacement , start, flags ) ?\n\nbut we don't have \"regexp_replace(string, pattern, replacement ,\nstart, flags )\"\n\n\n",
"msg_date": "Fri, 26 Jul 2024 22:30:55 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "jian he <[email protected]> writes:\n> On Fri, Jul 26, 2024 at 10:17 PM Tom Lane <[email protected]> wrote:\n>> AFAICS, that one is correct, so I left it alone. (I didn't try to\n>> merge the table's two entries into one like that, though.)\n\n> regexp_replace(string, pattern, replacement [, start [, N ]] [, flags ]).\n\n> can represent\n\n> regexp_replace(string, pattern, replacement , start, flags ) ?\n\nHmm, yeah, you're right. I didn't want to write two separate\nsynopses there, but maybe there's no choice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Jul 2024 10:39:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 10:40 PM Tom Lane <[email protected]> wrote:\n>\n> jian he <[email protected]> writes:\n> > On Fri, Jul 26, 2024 at 10:17 PM Tom Lane <[email protected]> wrote:\n> >> AFAICS, that one is correct, so I left it alone. (I didn't try to\n> >> merge the table's two entries into one like that, though.)\n>\n> > regexp_replace(string, pattern, replacement [, start [, N ]] [, flags ]).\n>\n> > can represent\n>\n> > regexp_replace(string, pattern, replacement , start, flags ) ?\n>\n> Hmm, yeah, you're right. I didn't want to write two separate\n> synopses there, but maybe there's no choice.\n>\n\nwe can get rid of:\n (Notice that <replaceable>N</replaceable> cannot be specified\n unless <replaceable>start</replaceable> is,\n but <replaceable>flags</replaceable> can be given in any case.)\n\nNow the output is\nIt has the syntax regexp_replace(string, pattern, replacement [, flags\n]) and regexp_replace(string, pattern, replacement, start [, N [,\nflags ]]).\n\n\nI also decorated \"[]\" with \"<optional>\".",
"msg_date": "Sat, 27 Jul 2024 08:56:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add function argument names to regex* functions."
},
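To make the two synopses concrete, here is a short sketch of the two call forms. The sample string and the results in the comments are illustrative only; the start/N variant needs a server release new enough to have it, and the named-notation call additionally assumes a build that includes the argument names added by this patch:

SELECT regexp_replace('A1B2C3', '[0-9]', 'X', 'g');    -- flags form, all matches: AXBXCX
SELECT regexp_replace('A1B2C3', '[0-9]', 'X', 4);      -- start form, first match at or after position 4: A1BXC3
SELECT regexp_replace('A1B2C3', '[0-9]', 'X', 1, 3);   -- start plus N, third match: A1B2CX

SELECT regexp_replace(string => 'A1B2C3', pattern => '[0-9]',
                      replacement => 'X', flags => 'g');   -- named notation: AXBXCX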
{
"msg_contents": "jian he <[email protected]> writes:\n> On Fri, Jul 26, 2024 at 10:40 PM Tom Lane <[email protected]> wrote:\n>> Hmm, yeah, you're right. I didn't want to write two separate\n>> synopses there, but maybe there's no choice.\n\n> Now the output is\n> It has the syntax regexp_replace(string, pattern, replacement [, flags\n> ]) and regexp_replace(string, pattern, replacement, start [, N [,\n> flags ]]).\n\nPushed, thanks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2024 15:40:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add function argument names to regex* functions."
}
] |
[
{
"msg_contents": "Hi folks,\n\nI was playing around with the `typos` tool\n(https://github.com/crate-ci/typos), and thought I'd run it on the\nposgres repo for fun. After a bit of tweaking to get rid of most false\npositives (see separately attached .typos.toml file), it came up with a\nuseful set of suggestions, some of which I applied verbatim, others\nwhich needed a bit more rewording.\n\nAttached is a series of patches. The first one are what I consider\nobvious, unambiguous fixes to code comments. The subsequent ones are\nfixes for actual code (variable, function, type names) and docs, one\npatch per class of typo. As far as I can tell, none of the code changes\n(except the ECPG one, see below) affect anything exported, so this\nshould not cause any compatibility issues for extensions.\n\nThe ECPG change affects the generated C code, but from my reading of the\ncallers in descriptor.c and ecpg.trailer, any code that would have\ncaused it to encounter the affected enum value would fail to compile, so\neither the case is not possible, or nobody actually uses whatever syntax\nis affected (I don't know enough about ECPG to tell without spending far\ntoo much time digging in the code).\n\n- ilmari",
"msg_date": "Wed, 27 Dec 2023 21:51:26 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Assorted typo fixes"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 3:21 AM Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n>\n> Hi folks,\n>\n> I was playing around with the `typos` tool\n> (https://github.com/crate-ci/typos), and thought I'd run it on the\n> posgres repo for fun. After a bit of tweaking to get rid of most false\n> positives (see separately attached .typos.toml file), it came up with a\n> useful set of suggestions, some of which I applied verbatim, others\n> which needed a bit more rewording.\n>\n> Attached is a series of patches. The first one are what I consider\n> obvious, unambiguous fixes to code comments. The subsequent ones are\n> fixes for actual code (variable, function, type names) and docs, one\n> patch per class of typo. As far as I can tell, none of the code changes\n> (except the ECPG one, see below) affect anything exported, so this\n> should not cause any compatibility issues for extensions.\n>\n> The ECPG change affects the generated C code, but from my reading of the\n> callers in descriptor.c and ecpg.trailer, any code that would have\n> caused it to encounter the affected enum value would fail to compile, so\n> either the case is not possible, or nobody actually uses whatever syntax\n> is affected (I don't know enough about ECPG to tell without spending far\n> too much time digging in the code).\n\nI was reviewing the Patch and came across a minor issue that the Patch\ndoes not apply on the current Head. Please provide the updated version\nof the patch. Also, I found one typo:\n0008-ecpg-fix-typo-in-get_dtype-return-value-for-ECPGd_co.patch\nAll the other enum values return a string mathing the enum label, but\nthis has had a trailing r since the function was added in commit\n339a5bbfb17ecd171ebe076c5bf016c4e66e2c0a\n\n Here 'mathing' should be 'matching'.\n\nThanks and Regards,\nShubham Khanna.\n\n\n",
"msg_date": "Mon, 1 Jan 2024 14:33:45 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assorted typo fixes"
},
{
"msg_contents": "Shubham Khanna <[email protected]> writes:\n\n> I was reviewing the Patch and came across a minor issue that the Patch\n> does not apply on the current Head. Please provide the updated version\n> of the patch.\n\nThanks for the heads-up. Commit 5ccb3bb13dcbedc30d015fc06d306d5106701e16\nremoved one of the instances of \"data struture\" fixed by the patch.\n\nRebased patch set attached. I also squashed the check_decls.m4 change\ninto the main comment typos commit.\n\n> Also, I found one typo:\n> 0008-ecpg-fix-typo-in-get_dtype-return-value-for-ECPGd_co.patch All\n> the other enum values return a string mathing the enum label, but this\n> has had a trailing r since the function was added in commit\n> 339a5bbfb17ecd171ebe076c5bf016c4e66e2c0a\n>\n> Here 'mathing' should be 'matching'.\n\nThanks. I've fixed the commit message (and elaborated it a bit more why\nI think it's a valid and safe fix).\n\n> Thanks and Regards,\n> Shubham Khanna.\n\n- ilmari",
"msg_date": "Mon, 01 Jan 2024 23:05:08 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Assorted typo fixes"
},
{
"msg_contents": "On Tue, Jan 2, 2024 at 4:35 AM Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n>\n> Shubham Khanna <[email protected]> writes:\n>\n> > I was reviewing the Patch and came across a minor issue that the Patch\n> > does not apply on the current Head. Please provide the updated version\n> > of the patch.\n>\n> Thanks for the heads-up. Commit 5ccb3bb13dcbedc30d015fc06d306d5106701e16\n> removed one of the instances of \"data struture\" fixed by the patch.\n>\n> Rebased patch set attached. I also squashed the check_decls.m4 change\n> into the main comment typos commit.\n>\n> > Also, I found one typo:\n> > 0008-ecpg-fix-typo-in-get_dtype-return-value-for-ECPGd_co.patch All\n> > the other enum values return a string mathing the enum label, but this\n> > has had a trailing r since the function was added in commit\n> > 339a5bbfb17ecd171ebe076c5bf016c4e66e2c0a\n> >\n> > Here 'mathing' should be 'matching'.\n>\n> Thanks. I've fixed the commit message (and elaborated it a bit more why\n> I think it's a valid and safe fix).\n\nI have reviewed the Rebased version of the Patch and it looks fine to me.\n\nThanks and Regards,\nShubham Khanna.",
"msg_date": "Tue, 2 Jan 2024 15:51:22 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assorted typo fixes"
},
{
"msg_contents": "On Mon, Jan 1, 2024 at 6:05 PM Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n> Thanks. I've fixed the commit message (and elaborated it a bit more why\n> I think it's a valid and safe fix).\n\nRegarding 0001:\n\n- AIUI, check_decls.m4 is copied from an upstream project, so I don't\nthink we should tinker with it.\n- I'm not convinced by encrypter->encryptor\n- I'm not convinced by multidimension-aware->multidimensional-aware\n- I'm not convinced by cachable->cacheable\n- You corrected restorting to restarting, but I'm wondering if Andres\nintended restoring?\n\nCommitted the rest of 0001.\n\n0002-0005 look OK to me, so I committed those as well.\n\nWith regard to 0006, we typically use indexes rather than indices as\nthe plural of \"index\", although exceptions exist.\n\nI haven't looked at the rest.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 12:34:20 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assorted typo fixes"
},
{
"msg_contents": "On Tue, Jan 02, 2024 at 12:34:20PM -0500, Robert Haas wrote:\n> 0002-0005 look OK to me, so I committed those as well.\n\nCool, thanks.\n\n> With regard to 0006, we typically use indexes rather than indices as\n> the plural of \"index\", although exceptions exist.\n> \n> I haven't looked at the rest.\n\nI got that on a local branch, then got drifted away. I have grouped\n0007~0009 together (0007 was on me), and applied them on HEAD.\n\n0010 is indeed the \"correct\" plural form for vertex I've known but\n\"vertexes\" is not wrong either. Perhaps that's worth changing on\nconsistency grounds?\n\n 8)</literal>, must be identical. It doesn't matter which representation\n you choose to be the canonical one, so long as two equivalent values with\n- different formattings are always mapped to the same value with the same\n+ different formatting are always mapped to the same value with the same\n formatting.\n\nI am not sure about this one in 0011 though.. It also feels like this\ncould be reworded completely.\n--\nMichael",
"msg_date": "Wed, 3 Jan 2024 14:34:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assorted typo fixes"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> 0010 is indeed the \"correct\" plural form for vertex I've known but\n> \"vertexes\" is not wrong either. Perhaps that's worth changing on\n> consistency grounds?\n\nYeah. A quick grep shows that we have 16 uses of \"vertices\" and\nonly this one of \"vertexes\". It's not really wrong, but +1 for\nmaking it match the others.\n\n> 8)</literal>, must be identical. It doesn't matter which representation\n> you choose to be the canonical one, so long as two equivalent values with\n> - different formattings are always mapped to the same value with the same\n> + different formatting are always mapped to the same value with the same\n> formatting.\n\n> I am not sure about this one in 0011 though.. It also feels like this\n> could be reworded completely.\n\nI'd leave this alone, it's not wrong either. If you want to propose\na complete rewording, do so; but that's not \"misspelling\" territory.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Jan 2024 00:56:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assorted typo fixes"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n\n> On Mon, Jan 1, 2024 at 6:05 PM Dagfinn Ilmari Mannsåker\n> <[email protected]> wrote:\n>> Thanks. I've fixed the commit message (and elaborated it a bit more why\n>> I think it's a valid and safe fix).\n>\n> Regarding 0001:\n>\n> - AIUI, check_decls.m4 is copied from an upstream project, so I don't\n> think we should tinker with it.\n\nIt contains modified versions of a few macros from Autoconf's\ngeneral.m4¹, specifically _AC_UNDECLARED_WARNING (since renamed to\n_AC_UNDECLARED_BUILTIN upstream) and _AC_CHECK_DECL_BODY. That has\nsince been updated² to spell François' name correctly, so I think we\nshould follow suit (and maybe also check if our override is even still\nnecessary).\n\n[1]: http://git.savannah.gnu.org/gitweb/?p=autoconf.git;a=history;f=lib/autoconf/general.m4;hb=HEAD\n[2]: http://git.savannah.gnu.org/gitweb/?p=autoconf.git;a=commitdiff;h=8a228e9d58363ad3ebdb89a05bd77568d1d863b7\n\n> - I'm not convinced by encrypter->encryptor\n> - I'm not convinced by multidimension-aware->multidimensional-aware\n\nI don't feel particularly strongy about these.\n\n> - I'm not convinced by cachable->cacheable\n\nIf nothing else, consistency. There are 13 occurrences of \"cacheable\"\nand only three of \"cachable\" in the tree.\n\n> - You corrected restorting to restarting, but I'm wondering if Andres\n> intended restoring?\n\nYeah, re-reading the sentence that's clearly meant to be \"restoring\".\n\n> Committed the rest of 0001.\n>\n> 0002-0005 look OK to me, so I committed those as well.\n\nThanks!\n\n> With regard to 0006, we typically use indexes rather than indices as\n> the plural of \"index\", although exceptions exist.\n\nWe (mostly) use indexes when referring to database indexes (as in btree,\ngist, etc.), but indices when referring to offsets in arrays, which is\nwhat this variable is about.\n\n- ilmari\n\n\n",
"msg_date": "Wed, 03 Jan 2024 16:45:56 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Assorted typo fixes"
},
{
"msg_contents": "On Wed, Jan 03, 2024 at 12:56:58AM -0500, Tom Lane wrote:\n> Yeah. A quick grep shows that we have 16 uses of \"vertices\" and\n> only this one of \"vertexes\". It's not really wrong, but +1 for\n> making it match the others.\n\nApplied this one as 793ecff7df80 on HEAD.\n\n> I'd leave this alone, it's not wrong either. If you want to propose\n> a complete rewording, do so; but that's not \"misspelling\" territory.\n\nI don't have a better idea in my mind now, so I'm OK to leave that be.\n--\nMichael",
"msg_date": "Fri, 5 Jan 2024 20:13:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assorted typo fixes"
}
] |
[
{
"msg_contents": "Hi,\n\nElsewhere [1] I required a way to estimate the time corresponding to a\nparticular LSN in the past. I devised the attached LSNTimeline, a data\nstructure mapping LSNs <-> timestamps with decreasing precision for\nolder time, LSN pairs. This can be used to locate and translate a\nparticular time to LSN or vice versa using linear interpolation.\n\nI've added an instance of the LSNTimeline to PgStat_WalStats and insert\nnew values to it in background writer's main loop. This patch set also\nintroduces some new pageinspect functions exposing LSN <-> time\ntranslations.\n\nOutside of being useful to users wondering about the last modification\ntime of a particular block in a relation, the LSNTimeline can be put to\nuse in other Postgres sub-systems to govern behavior based on resource\nconsumption -- using the LSN consumption rate as a proxy.\n\nAs mentioned in [1], the LSNTimeline is a prerequisite for my\nimplementation of a new freeze heuristic which seeks to freeze only\npages which will remain unmodified for a certain amount of wall clock\ntime. But one can imagine other uses for such translation capabilities.\n\nThe pageinspect additions need a bit more work. I didn't bump the\npageinspect version (didn't add the new functions to a new pageinspect\nversion file). I also didn't exercise the new pageinspect functions in a\ntest. I was unsure how to write a test which would be guaranteed not to\nflake. Because the background writer updates the timeline, it seemed a\nremote possibility that the time or LSN returned by the functions would\nbe 0 and as such, I'm not sure even a test that SELECT time/lsn > 0\nwould always pass.\n\nI also noticed the pageinspect functions don't have XML id attributes\nfor link discoverability. I planned to add that in a separate commit.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_b3tpbdRPUPh1Q5h35gXhY%3DspH2ssNsEsJ9sDfw6%3DPEAg%40mail.gmail.com",
"msg_date": "Wed, 27 Dec 2023 17:16:17 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add LSN <-> time conversion functionality"
},
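The linear interpolation itself is easy to picture with stock SQL. The query below is only a sketch of the arithmetic, not code from the patch; the two (LSN, timestamp) points and the target LSN are made up:

SELECT t1 + (t2 - t1) * (pg_wal_lsn_diff(target, l1) / pg_wal_lsn_diff(l2, l1))::float8
         AS estimated_time
FROM (VALUES ('0/1000000'::pg_lsn, '2024-01-01 00:00'::timestamptz,
              '0/3000000'::pg_lsn, '2024-01-01 00:20'::timestamptz,
              '0/2000000'::pg_lsn)) AS v(l1, t1, l2, t2, target);
-- the target LSN sits halfway between l1 and l2, so this yields 2024-01-01 00:10

Estimating an LSN for a given time is the same interpolation with the axes swapped.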
{
"msg_contents": "On Wed, Dec 27, 2023 at 5:16 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> Elsewhere [1] I required a way to estimate the time corresponding to a\n> particular LSN in the past. I devised the attached LSNTimeline, a data\n> structure mapping LSNs <-> timestamps with decreasing precision for\n> older time, LSN pairs. This can be used to locate and translate a\n> particular time to LSN or vice versa using linear interpolation.\n\nAttached is a new version which fixes one overflow danger I noticed in\nthe original patch set.\n\nI have also been doing some thinking about the LSNTimeline data\nstructure. Its array elements are combined before all elements have\nbeen used. This sacrifices precision earlier than required. I tried\nsome alternative structures that would use the whole array. There are\na lot of options, though. Currently each element fits twice as many\nmembers as the preceding element. To use the whole array, we'd have to\nchange the behavior from filling each element to its max capacity to\nsomething that filled elements only partially. I'm not sure what the\nbest distribution would be.\n\n> I've added an instance of the LSNTimeline to PgStat_WalStats and insert\n> new values to it in background writer's main loop. This patch set also\n> introduces some new pageinspect functions exposing LSN <-> time\n> translations.\n\nI was thinking that maybe it is silly to have the functions allowing\nfor translation between LSN and time in the pageinspect extension --\nsince they are not specifically related to pages (pages are just an\nobject that has an accessible LSN). I was thinking perhaps we add them\nas system information functions. However, the closest related\nfunctions I can think of are those to get the current LSN (like\npg_current_wal_lsn ()). And those are listed as system administration\nfunctions under backup control [1]. I don't think the LSN <-> time\nfunctionality fits under backup control.\n\nIf I did put them in one of the system information function sections\n[2], which one would work best?\n\n- Melanie\n\n[1] https://www.postgresql.org/docs/devel/functions-admin.html#FUNCTIONS-ADMIN-BACKUP\n[2] https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO",
"msg_date": "Tue, 30 Jan 2024 14:07:44 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "Hi,\n\nI took a look at this today, to try to understand the purpose and how it\nworks. Let me share some initial thoughts and questions I have. Some of\nthis may be wrong/missing the point, so apologies for that.\n\nThe goal seems worthwhile in general - the way I understand it, the\npatch aims to provide tracking of WAL \"velocity\", i.e. how much WAL was\ngenerated over time. Which we now don't have, as we only maintain simple\ncumulative stats/counters. And then uses it to estimate timestamp for a\ngiven LSN, and vice versa, because that's what the pruning patch needs.\n\nWhen I first read this, I immediately started wondering if this might\nuse the commit timestamp stuff we already have. Because for each commit\nwe already store the LSN and commit timestamp, right? But I'm not sure\nthat would be a good match - the commit_ts serves a very special purpose\nof mapping XID => (LSN, timestamp), I don't see how to make it work for\n(LSN=>timestmap) and (timestamp=>LSN) very easily.\n\n\nAs for the inner workings of the patch, my understanding is this:\n\n- \"LSNTimeline\" consists of \"LSNTime\" entries representing (LSN,ts)\npoints, but those points are really \"buckets\" that grow larger and\nlarger for older periods of time.\n\n- The entries are being added from bgwriter, i.e. on each loop we add\nthe current (LSN, timestamp) into the timeline.\n\n- We then estimate LSN/timestamp using the data stored in LSNTimeline\n(either LSN => timestamp, or the opposite direction).\n\n\nSome comments in arbitrary order:\n\n- AFAIK each entry represent an interval of time, and the next (older)\ninterval is twice as long, right? So the first interval is 1 second,\nthen 2 seconds, 4 seconds, 8 seconds, ...\n\n- But I don't understand how the LSNTimeline entries are \"aging\" and get\nless accurate, while the \"current\" bucket is short. lsntime_insert()\nseems to simply move to the next entry, but doesn't that mean we insert\nthe entries into larger and larger buckets?\n\n- The comments never really spell what amount of time the entries cover\n/ how granular it is. My understanding is it's simply measured in number\nof entries added, which is assumed to be constant and drive by\nbgwriter_delay, right? Which is 200ms by default. Which seems fine, but\nisn't the hibernation (HIBERNATE_FACTOR) going to mess with it?\n\nIs there some case where bgwriter would just loop without sleeping,\nfilling the timeline much faster? (I can't think of any, but ...)\n\n- The LSNTimeline comment claims an array of size 64 is large enough to\nnot need to care about filling it, but maybe it should briefly explain\nwhy we can never fill it (I guess 2^64 is just too many).\n\n- I don't quite understand why 0005 adds the functions to pageinspect.\nThis has nothing to do with pages, right?\n\n- Not sure why we need 0001. Just so that the \"estimate\" functions in\n0002 have a convenient \"start\" point? Surely we could look at the\ncurrent LSNTimeline data and use the oldest value, or (if there's no\ndata) use the current timestamp/LSN?\n\n- I wonder what happens if we lose the data - we know that if people\nreset statistics for whatever reason (or just lose them because of a\ncrash, or because they're on a replica), bad things happen to\nautovacuum. What's the (expected) impact on pruning?\n\n- What about a SRF function that outputs the whole LSNTimeline? Would be\nuseful for debugging / development, I think. 
(Just a suggestion).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 16 Feb 2024 21:41:16 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "Thanks so much for reviewing!\n\nOn Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> When I first read this, I immediately started wondering if this might\n> use the commit timestamp stuff we already have. Because for each commit\n> we already store the LSN and commit timestamp, right? But I'm not sure\n> that would be a good match - the commit_ts serves a very special purpose\n> of mapping XID => (LSN, timestamp), I don't see how to make it work for\n> (LSN=>timestmap) and (timestamp=>LSN) very easily.\n\nI took a look at the code in commit_ts.c, and I really don't see a way\nof reusing any of this commit<->timestamp infrastructure for\ntimestamp<->LSN mappings.\n\n> As for the inner workings of the patch, my understanding is this:\n>\n> - \"LSNTimeline\" consists of \"LSNTime\" entries representing (LSN,ts)\n> points, but those points are really \"buckets\" that grow larger and\n> larger for older periods of time.\n\nYes, they are buckets in the sense that they represent multiple values\nbut each contains a single LSNTime value which is the minimum of all\nthe LSNTimes we \"merged\" into that single array element. In order to\nrepresent a range of time, you need to use two array elements. The\nlinear interpolation from time <-> LSN is all done with two elements.\n\n> - AFAIK each entry represent an interval of time, and the next (older)\n> interval is twice as long, right? So the first interval is 1 second,\n> then 2 seconds, 4 seconds, 8 seconds, ...\n>\n> - But I don't understand how the LSNTimeline entries are \"aging\" and get\n> less accurate, while the \"current\" bucket is short. lsntime_insert()\n> seems to simply move to the next entry, but doesn't that mean we insert\n> the entries into larger and larger buckets?\n\nBecause the earlier array elements can represent fewer logical members\nthan later ones and because elements are merged into the next element\nwhen space runs out, later array elements will contain older data and\nmore of it, so those \"ranges\" will be larger. But, after thinking\nabout it and also reading your feedback, I realized my algorithm was\nsilly because it starts merging logical members before it has even\nused the whole array.\n\nThe attached v3 has a new algorithm. Now, LSNTimes are added from the\nend of the array forward until all array elements have at least one\nlogical member (array length == volume). Once array length == volume,\nnew LSNTimes will result in merging logical members in existing\nelements. We want to merge older members because those can be less\nprecise. So, the number of logical members per array element will\nalways monotonically increase starting from the beginning of the array\n(which contains the most recent data) and going to the end. We want to\nuse all the available space in the array. That means that each LSNTime\ninsertion will always result in a single merge. We want the timeline\nto be inclusive of the oldest data, so merging means taking the\nsmaller value of two LSNTime values. I had to pick a rule for choosing\nwhich elements to merge. So, I choose the merge target as the oldest\nelement whose logical membership is < 2x its predecessor. I merge the\nmerge target's predecessor into the merge target. Then I move all of\nthe intervening elements down 1. Then I insert the new LSNTime at\nindex 0.\n\n> - The comments never really spell what amount of time the entries cover\n> / how granular it is. 
My understanding is it's simply measured in number\n> of entries added, which is assumed to be constant and drive by\n> bgwriter_delay, right? Which is 200ms by default. Which seems fine, but\n> isn't the hibernation (HIBERNATE_FACTOR) going to mess with it?\n>\n> Is there some case where bgwriter would just loop without sleeping,\n> filling the timeline much faster? (I can't think of any, but ...)\n\nbgwriter will wake up when there are buffers to flush, which is likely\ncorrelated with there being new LSNs. So, actually it seems like it\nmight work well to rely on only filling the timeline when there are\nthings for bgwriter to do.\n\n> - The LSNTimeline comment claims an array of size 64 is large enough to\n> not need to care about filling it, but maybe it should briefly explain\n> why we can never fill it (I guess 2^64 is just too many).\n\nThe new structure fits a different number of members. I have yet to\ncalculate that number, but it should be explained in the comments once\nI do.\n\nFor example, if we made an LSNTimeline with volume 4, once every\nelement had one LSNTime and we needed to start merging, the following\nis how many logical members each element would have after each of four\nmerges:\n1111\n1112\n1122\n1114\n1124\nSo, if we store the number of members as an unsigned 64-bit int and we\nhave an LSNTimeline with volume 64, what is the maximum number of\nmembers can we store if we hold all of the invariants described in my\nalgorithm above (we only merge when required, every element holds < 2x\nthe number of logical members as its predecessor, we do exactly one\nmerge every insertion [when required], membership must monotonically\nincrease [choose the oldest element meeting the criteria when deciding\nwhat to merge])?\n\n> - I don't quite understand why 0005 adds the functions to pageinspect.\n> This has nothing to do with pages, right?\n\nYou're right. I just couldn't think of a good place to put the\nfunctions. In version 3, I just put the SQL functions in pgstat_wal.c\nand made them generally available (i.e. not in a contrib module). I\nhaven't added docs back yet. But perhaps a section near the docs\ndescribing pg_xact_commit_timestamp() [1]? I wasn't sure if I should\nput the SQL function source code in pgstatfuncs.c -- I kind of prefer\nit in pgstat_wal.c but there are no other SQL functions there.\n\n> - Not sure why we need 0001. Just so that the \"estimate\" functions in\n> 0002 have a convenient \"start\" point? Surely we could look at the\n> current LSNTimeline data and use the oldest value, or (if there's no\n> data) use the current timestamp/LSN?\n\nWhen there are 0 or 1 entries in the timeline you'll get an answer\nthat could be very off if you just return the current timestamp or\nLSN. I guess that is okay?\n\n> - I wonder what happens if we lose the data - we know that if people\n> reset statistics for whatever reason (or just lose them because of a\n> crash, or because they're on a replica), bad things happen to\n> autovacuum. What's the (expected) impact on pruning?\n\nThis is an important question. Because stats aren't crashsafe, we\ncould return very inaccurate numbers for some time/LSN values if we\ncrash. I don't actually know what we could do about that. When I use\nthe LSNTimeline for the freeze heuristic it is less of an issue\nbecause the freeze heuristic has a fallback strategy when there aren't\nenough stats to do its calculations. But other users of this\nLSNTimeline will simply get highly inaccurate results (I think?). 
Is\nthere anything we could do about this? It seems bad.\n\nAndres had brought up something at some point about, what if the\ndatabase is simply turned off for awhile and then turned back on. Even\nif you cleanly shut down, will there be \"gaps\" in the timeline? I\nthink that could be okay, but it is something to think about.\n\n> - What about a SRF function that outputs the whole LSNTimeline? Would be\n> useful for debugging / development, I think. (Just a suggestion).\n\nGood idea! I've added this. Though, maybe there was a simpler way to\nimplement than I did.\n\nJust a note, all of my comments could use a lot of work, but I want to\nget consensus on the algorithm before I make sure and write about it\nin a perfect way.\n\n- Melanie\n\n[1] https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-COMMIT-TIMESTAMP",
"msg_date": "Wed, 21 Feb 2024 21:45:24 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "> On 22 Feb 2024, at 03:45, Melanie Plageman <[email protected]> wrote:\n> On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra\n> <[email protected]> wrote:\n\n>> - Not sure why we need 0001. Just so that the \"estimate\" functions in\n>> 0002 have a convenient \"start\" point? Surely we could look at the\n>> current LSNTimeline data and use the oldest value, or (if there's no\n>> data) use the current timestamp/LSN?\n> \n> When there are 0 or 1 entries in the timeline you'll get an answer\n> that could be very off if you just return the current timestamp or\n> LSN. I guess that is okay?\n\nI don't think that's a huge problem at such a young \"lsn-age\", but I might be\nmissing something.\n\n>> - I wonder what happens if we lose the data - we know that if people\n>> reset statistics for whatever reason (or just lose them because of a\n>> crash, or because they're on a replica), bad things happen to\n>> autovacuum. What's the (expected) impact on pruning?\n> \n> This is an important question. Because stats aren't crashsafe, we\n> could return very inaccurate numbers for some time/LSN values if we\n> crash. I don't actually know what we could do about that. When I use\n> the LSNTimeline for the freeze heuristic it is less of an issue\n> because the freeze heuristic has a fallback strategy when there aren't\n> enough stats to do its calculations. But other users of this\n> LSNTimeline will simply get highly inaccurate results (I think?). Is\n> there anything we could do about this? It seems bad.\n\nA complication with this over stats is that we can't recompute this in case of\na crash/corruption issue. The simple solution is to consider this unlogged\ndata and start fresh at every unclean shutdown, but I have a feeling that won't\nbe good enough for basing heuristics on?\n\n> Andres had brought up something at some point about, what if the\n> database is simply turned off for awhile and then turned back on. Even\n> if you cleanly shut down, will there be \"gaps\" in the timeline? I\n> think that could be okay, but it is something to think about.\n\nThe gaps would represent reality, so there is nothing wrong per se with gaps,\nbut if they inflate the interval of a bucket which in turns impact the\nprecision of the results then that can be a problem.\n\n> Just a note, all of my comments could use a lot of work, but I want to\n> get consensus on the algorithm before I make sure and write about it\n> in a perfect way.\n\nI'm not sure \"a lot of work\" is accurate, they seem pretty much there to me,\nbut I think that an illustration of running through the algorithm in an\nascii-art array would be helpful.\n\n\nReading through this I think such a function has merits, not only for your\nusecase but other heuristic based work and quite possibly systems debugging.\n\nWhile the bucketing algorithm is a clever algorithm for degrading precision for\nolder entries without discarding them, I do fear that we'll risk ending up with\nanswers like \"somewhere between in the past and even further in the past\".\nI've been playing around with various compression algorithms for packing the\nbuckets such that we can retain precision for longer. 
Since you were aiming to\nwork on other patches leading up to the freeze, let's pick this up again once\nthings calm down.\n\nWhen compiling I got this warning for lsntime_merge_target:\n\npgstat_wal.c:242:1: warning: non-void function does not return a value in all control paths [-Wreturn-type]\n}\n^\n1 warning generated.\n\nThe issue seems to be that the can't-really-happen path is protected with an\nAssertion, which is a no-op for production builds. I think we should handle\nthe error rather than rely on testing catching it (since if it does happen even\nthough it can't, it's going to be when it's for sure not tested). Returning an\ninvalid array subscript like -1 and testing for it in lsntime_insert, with an\nelog(WARNING..), seems enough.\n\n\n- last_snapshot_lsn <= GetLastImportantRecPtr())\n+ last_snapshot_lsn <= current_lsn)\nI think we should delay extracting the LSN with GetLastImportantRecPtr until we\nknow that we need it, to avoid taking locks in this codepath unless needed.\n\nI've attached a diff with the above suggestions which applies on op of your\npatchset.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 18 Mar 2024 15:02:51 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "\n\nOn 2/22/24 03:45, Melanie Plageman wrote:\n> Thanks so much for reviewing!\n> \n> On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> When I first read this, I immediately started wondering if this might\n>> use the commit timestamp stuff we already have. Because for each commit\n>> we already store the LSN and commit timestamp, right? But I'm not sure\n>> that would be a good match - the commit_ts serves a very special purpose\n>> of mapping XID => (LSN, timestamp), I don't see how to make it work for\n>> (LSN=>timestmap) and (timestamp=>LSN) very easily.\n> \n> I took a look at the code in commit_ts.c, and I really don't see a way\n> of reusing any of this commit<->timestamp infrastructure for\n> timestamp<->LSN mappings.\n> \n>> As for the inner workings of the patch, my understanding is this:\n>>\n>> - \"LSNTimeline\" consists of \"LSNTime\" entries representing (LSN,ts)\n>> points, but those points are really \"buckets\" that grow larger and\n>> larger for older periods of time.\n> \n> Yes, they are buckets in the sense that they represent multiple values\n> but each contains a single LSNTime value which is the minimum of all\n> the LSNTimes we \"merged\" into that single array element. In order to\n> represent a range of time, you need to use two array elements. The\n> linear interpolation from time <-> LSN is all done with two elements.\n> \n>> - AFAIK each entry represent an interval of time, and the next (older)\n>> interval is twice as long, right? So the first interval is 1 second,\n>> then 2 seconds, 4 seconds, 8 seconds, ...\n>>\n>> - But I don't understand how the LSNTimeline entries are \"aging\" and get\n>> less accurate, while the \"current\" bucket is short. lsntime_insert()\n>> seems to simply move to the next entry, but doesn't that mean we insert\n>> the entries into larger and larger buckets?\n> \n> Because the earlier array elements can represent fewer logical members\n> than later ones and because elements are merged into the next element\n> when space runs out, later array elements will contain older data and\n> more of it, so those \"ranges\" will be larger. But, after thinking\n> about it and also reading your feedback, I realized my algorithm was\n> silly because it starts merging logical members before it has even\n> used the whole array.\n> \n> The attached v3 has a new algorithm. Now, LSNTimes are added from the\n> end of the array forward until all array elements have at least one\n> logical member (array length == volume). Once array length == volume,\n> new LSNTimes will result in merging logical members in existing\n> elements. We want to merge older members because those can be less\n> precise. So, the number of logical members per array element will\n> always monotonically increase starting from the beginning of the array\n> (which contains the most recent data) and going to the end. We want to\n> use all the available space in the array. That means that each LSNTime\n> insertion will always result in a single merge. We want the timeline\n> to be inclusive of the oldest data, so merging means taking the\n> smaller value of two LSNTime values. I had to pick a rule for choosing\n> which elements to merge. So, I choose the merge target as the oldest\n> element whose logical membership is < 2x its predecessor. I merge the\n> merge target's predecessor into the merge target. Then I move all of\n> the intervening elements down 1. 
Then I insert the new LSNTime at\n> index 0.\n> \n\nI can't help but think about t-digest [1], which also merges data into\nvariable-sized buckets (called centroids, which is a pair of values,\njust like LSNTime). But the merging is driven by something called \"scale\nfunction\" which I found like a pretty nice approach to this, and it\nyields some guarantees regarding accuracy. I wonder if we could do\nsomething similar here ...\n\nThe t-digest is a way to approximate quantiles, and the default scale\nfunction is optimized for best accuracy on the extremes (close to 0.0\nand 1.0), but it's possible to use scale functions that optimize only\nfor accuracy close to 1.0.\n\nThis may be misguided, but I see similarity between quantiles and what\nLSNTimeline does - timestamps are ordered, and quantiles close to 0.0\nare \"old timestamps\" while quantiles close to 1.0 are \"now\".\n\nAnd t-digest also defines a pretty efficient algorithm to merge data in\na way that gradually combines older \"buckets\" into larger ones.\n\n>> - The comments never really spell what amount of time the entries cover\n>> / how granular it is. My understanding is it's simply measured in number\n>> of entries added, which is assumed to be constant and drive by\n>> bgwriter_delay, right? Which is 200ms by default. Which seems fine, but\n>> isn't the hibernation (HIBERNATE_FACTOR) going to mess with it?\n>>\n>> Is there some case where bgwriter would just loop without sleeping,\n>> filling the timeline much faster? (I can't think of any, but ...)\n> \n> bgwriter will wake up when there are buffers to flush, which is likely\n> correlated with there being new LSNs. So, actually it seems like it\n> might work well to rely on only filling the timeline when there are\n> things for bgwriter to do.\n> \n>> - The LSNTimeline comment claims an array of size 64 is large enough to\n>> not need to care about filling it, but maybe it should briefly explain\n>> why we can never fill it (I guess 2^64 is just too many).\n> \n> The new structure fits a different number of members. I have yet to\n> calculate that number, but it should be explained in the comments once\n> I do.\n> \n> For example, if we made an LSNTimeline with volume 4, once every\n> element had one LSNTime and we needed to start merging, the following\n> is how many logical members each element would have after each of four\n> merges:\n> 1111\n> 1112\n> 1122\n> 1114\n> 1124\n> So, if we store the number of members as an unsigned 64-bit int and we\n> have an LSNTimeline with volume 64, what is the maximum number of\n> members can we store if we hold all of the invariants described in my\n> algorithm above (we only merge when required, every element holds < 2x\n> the number of logical members as its predecessor, we do exactly one\n> merge every insertion [when required], membership must monotonically\n> increase [choose the oldest element meeting the criteria when deciding\n> what to merge])?\n> \n\nI guess that should be enough for (2^64-1) logical members, because it's\na sequence 1, 2, 4, 8, ..., 2^63. Seems enough.\n\nBut now that I think about it, does it make sense to do the merging\nbased on the number of logical members? Shouldn't this really be driven\nby the \"length\" of the time interval the member covers?\n\n>> - I don't quite understand why 0005 adds the functions to pageinspect.\n>> This has nothing to do with pages, right?\n> \n> You're right. I just couldn't think of a good place to put the\n> functions. 
In version 3, I just put the SQL functions in pgstat_wal.c\n> and made them generally available (i.e. not in a contrib module). I\n> haven't added docs back yet. But perhaps a section near the docs\n> describing pg_xact_commit_timestamp() [1]? I wasn't sure if I should\n> put the SQL function source code in pgstatfuncs.c -- I kind of prefer\n> it in pgstat_wal.c but there are no other SQL functions there.\n> \n\nOK, pgstat_wal seems like a good place.\n\n>> - Not sure why we need 0001. Just so that the \"estimate\" functions in\n>> 0002 have a convenient \"start\" point? Surely we could look at the\n>> current LSNTimeline data and use the oldest value, or (if there's no\n>> data) use the current timestamp/LSN?\n> \n> When there are 0 or 1 entries in the timeline you'll get an answer\n> that could be very off if you just return the current timestamp or\n> LSN. I guess that is okay?\n> \n>> - I wonder what happens if we lose the data - we know that if people\n>> reset statistics for whatever reason (or just lose them because of a\n>> crash, or because they're on a replica), bad things happen to\n>> autovacuum. What's the (expected) impact on pruning?\n> \n> This is an important question. Because stats aren't crashsafe, we\n> could return very inaccurate numbers for some time/LSN values if we\n> crash. I don't actually know what we could do about that. When I use\n> the LSNTimeline for the freeze heuristic it is less of an issue\n> because the freeze heuristic has a fallback strategy when there aren't\n> enough stats to do its calculations. But other users of this\n> LSNTimeline will simply get highly inaccurate results (I think?). Is\n> there anything we could do about this? It seems bad.\n> \n> Andres had brought up something at some point about, what if the\n> database is simply turned off for awhile and then turned back on. Even\n> if you cleanly shut down, will there be \"gaps\" in the timeline? I\n> think that could be okay, but it is something to think about.\n> \n>> - What about a SRF function that outputs the whole LSNTimeline? Would be\n>> useful for debugging / development, I think. (Just a suggestion).\n> \n> Good idea! I've added this. Though, maybe there was a simpler way to\n> implement than I did.\n> \n\nThanks. I'll take a look.\n\n> Just a note, all of my comments could use a lot of work, but I want to\n> get consensus on the algorithm before I make sure and write about it\n> in a perfect way.\n> \n\nMakes sense, as long as the comments are sufficiently clear. It's hard\nto reach consensus on something not explained clearly enough.\n\n\nregards\n\n\n[1]\nhttps://github.com/tdunning/t-digest/blob/main/docs/t-digest-paper/histo.pdf\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 18 Mar 2024 18:29:57 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
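For the record, the arithmetic behind that claim, assuming the doubling invariant holds exactly:

\sum_{k=0}^{63} 2^k = 2^{64} - 1 \approx 1.8 \times 10^{19}

logical members across the 64 elements. At one insertion per 200ms bgwriter round that is on the order of a hundred billion years of uptime, so running out of capacity is not a practical concern.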
{
"msg_contents": "\nOn 3/18/24 15:02, Daniel Gustafsson wrote:\n>> On 22 Feb 2024, at 03:45, Melanie Plageman <[email protected]> wrote:\n>> On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra\n>> <[email protected]> wrote:\n> \n>>> - Not sure why we need 0001. Just so that the \"estimate\" functions in\n>>> 0002 have a convenient \"start\" point? Surely we could look at the\n>>> current LSNTimeline data and use the oldest value, or (if there's no\n>>> data) use the current timestamp/LSN?\n>>\n>> When there are 0 or 1 entries in the timeline you'll get an answer\n>> that could be very off if you just return the current timestamp or\n>> LSN. I guess that is okay?\n> \n> I don't think that's a huge problem at such a young \"lsn-age\", but I might be\n> missing something.\n> \n>>> - I wonder what happens if we lose the data - we know that if people\n>>> reset statistics for whatever reason (or just lose them because of a\n>>> crash, or because they're on a replica), bad things happen to\n>>> autovacuum. What's the (expected) impact on pruning?\n>>\n>> This is an important question. Because stats aren't crashsafe, we\n>> could return very inaccurate numbers for some time/LSN values if we\n>> crash. I don't actually know what we could do about that. When I use\n>> the LSNTimeline for the freeze heuristic it is less of an issue\n>> because the freeze heuristic has a fallback strategy when there aren't\n>> enough stats to do its calculations. But other users of this\n>> LSNTimeline will simply get highly inaccurate results (I think?). Is\n>> there anything we could do about this? It seems bad.\n> \n\nDo we have something to calculate a sufficiently good \"average\" to use\nas a default, if we don't have a better value? For example, we know the\ntimestamp of the last checkpoint, and we know the LSN, right? Maybe if\nwe're sufficiently far from the checkpoint, we could use that.\n\nOr maybe checkpoint_timeout / max_wal_size would be enough to calculate\nsome default value?\n\nI wonder how long it takes until LSNTimeline gives us sufficiently good\ndata for all LSNs we need. That is, if we lose this, how long it takes\nuntil we get enough data to do good decisions?\n\nWhy don't we simply WAL-log this in some trivial way? It's pretty small,\nso if we WAL-log this once in a while (after a merge happens), that\nshould not be a problem.\n\nOr a different idea - if we lost the data, but commit_ts is enabled,\ncan't we \"simply\" walk commit_ts and feed LSN/timestamp into the\ntimeline? I guess we don't want to walk 2B transactions, but even just\nsampling some recent transactions might be enough, no?\n\n> A complication with this over stats is that we can't recompute this in case of\n> a crash/corruption issue. The simple solution is to consider this unlogged\n> data and start fresh at every unclean shutdown, but I have a feeling that won't\n> be good enough for basing heuristics on?\n> \n>> Andres had brought up something at some point about, what if the\n>> database is simply turned off for awhile and then turned back on. Even\n>> if you cleanly shut down, will there be \"gaps\" in the timeline? I\n>> think that could be okay, but it is something to think about.\n> \n> The gaps would represent reality, so there is nothing wrong per se with gaps,\n> but if they inflate the interval of a bucket which in turns impact the\n> precision of the results then that can be a problem.\n> \n\nWell, I think the gaps are a problem in the sense that they disappear\nonce we start merging the buckets. 
But maybe that's fine, if we're only\ninterested in approximate data.\n\n>> Just a note, all of my comments could use a lot of work, but I want to\n>> get consensus on the algorithm before I make sure and write about it\n>> in a perfect way.\n> \n> I'm not sure \"a lot of work\" is accurate, they seem pretty much there to me,\n> but I think that an illustration of running through the algorithm in an\n> ascii-art array would be helpful.\n> \n\n+1\n\n> \n> Reading through this I think such a function has merits, not only for your\n> usecase but other heuristic based work and quite possibly systems debugging.\n> \n> While the bucketing algorithm is a clever algorithm for degrading precision for\n> older entries without discarding them, I do fear that we'll risk ending up with\n> answers like \"somewhere between in the past and even further in the past\".\n> I've been playing around with various compression algorithms for packing the\n> buckets such that we can retain precision for longer. Since you were aiming to\n> work on other patches leading up to the freeze, let's pick this up again once\n> things calm down.\n> \n\nI guess this ambiguity is pretty inherent to a structure that does not\nkeep all the data, and gradually reduces the resolution for old stuff.\nBut my understanding was that's sufficient for the freezing patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 18 Mar 2024 18:46:13 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
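On the "sufficiently good default" idea: the last checkpoint's LSN and timestamp are already visible at the SQL level, so a crude average WAL-consumption rate can be derived even with no timeline data at all. A rough sketch, not taken from the patch (run on a primary, since pg_current_wal_lsn() is unavailable during recovery):

SELECT checkpoint_lsn,
       checkpoint_time,
       pg_wal_lsn_diff(pg_current_wal_lsn(), checkpoint_lsn)
         / GREATEST(extract(epoch FROM now() - checkpoint_time), 1) AS bytes_per_second
FROM pg_control_checkpoint();

That only yields an average since the last checkpoint, but it may be enough of a fallback when the in-memory data has been lost or reset.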
{
"msg_contents": "Hi everyone!\n\nMe, Bharath, and Ilya are on patch review session at the PGConf.dev :) Maybe we got everything wrong, please consider that we are just doing training on reviewing patches.\n\n\n=== Purpose of the patch ===\nCurrently, we have checkpoint_timeout and max_wal size to know when we need a checkpoint. This patch brings a capability to freeze page not only by internal state of the system, but also by wall clock time.\nTo do so we need an infrastructure which will tell when page was modified.\n\nThe patch in this thread is doing exactly this: in-memory information to map LSNs with wall clock time. Mapping is maintained by bacgroundwriter.\n\n=== Questions ===\n1. The patch does not handle server restart. All pages will need freeze after any crash?\n2. Some benchmarks to proof the patch does not have CPU footprint.\n\n=== Nits ===\n\"Timeline\" term is already taken.\nThe patch needs rebase due to some header changes.\nTests fail on Windows.\nThe patch lacks tests.\nSome docs would be nice, but the feature is for developers.\nMapping is protected for multithreaded access by walstats LWlock and might have tuplestore_putvalues() under that lock. That might be a little dangerous, if tuplestore will be on-disk for some reason (should not happen).\n\n\nOverall, the patch is a base for good feature which would help to do freeze right in time. Thanks!\n\n\nBest regards, Bharath, Andrey, Ilya.\n\n",
"msg_date": "Thu, 30 May 2024 10:25:29 -0700",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 1:29 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 2/22/24 03:45, Melanie Plageman wrote:\n> > The attached v3 has a new algorithm. Now, LSNTimes are added from the\n> > end of the array forward until all array elements have at least one\n> > logical member (array length == volume). Once array length == volume,\n> > new LSNTimes will result in merging logical members in existing\n> > elements. We want to merge older members because those can be less\n> > precise. So, the number of logical members per array element will\n> > always monotonically increase starting from the beginning of the array\n> > (which contains the most recent data) and going to the end. We want to\n> > use all the available space in the array. That means that each LSNTime\n> > insertion will always result in a single merge. We want the timeline\n> > to be inclusive of the oldest data, so merging means taking the\n> > smaller value of two LSNTime values. I had to pick a rule for choosing\n> > which elements to merge. So, I choose the merge target as the oldest\n> > element whose logical membership is < 2x its predecessor. I merge the\n> > merge target's predecessor into the merge target. Then I move all of\n> > the intervening elements down 1. Then I insert the new LSNTime at\n> > index 0.\n> >\n>\n> I can't help but think about t-digest [1], which also merges data into\n> variable-sized buckets (called centroids, which is a pair of values,\n> just like LSNTime). But the merging is driven by something called \"scale\n> function\" which I found like a pretty nice approach to this, and it\n> yields some guarantees regarding accuracy. I wonder if we could do\n> something similar here ...\n>\n> The t-digest is a way to approximate quantiles, and the default scale\n> function is optimized for best accuracy on the extremes (close to 0.0\n> and 1.0), but it's possible to use scale functions that optimize only\n> for accuracy close to 1.0.\n>\n> This may be misguided, but I see similarity between quantiles and what\n> LSNTimeline does - timestamps are ordered, and quantiles close to 0.0\n> are \"old timestamps\" while quantiles close to 1.0 are \"now\".\n>\n> And t-digest also defines a pretty efficient algorithm to merge data in\n> a way that gradually combines older \"buckets\" into larger ones.\n\nI started taking a look at this paper and think the t-digest could be\napplicable as a possible alternative data structure to the one I am\nusing to approximate page age for the actual opportunistic freeze\nheuristic -- especially since the centroids are pairs of a mean and a\ncount. I couldn't quite understand how the t-digest is combining those\ncentroids. Since I am not computing quantiles over the LSNTimeStream,\nthough, I think I can probably do something simpler for this part of\nthe project.\n\n> >> - The LSNTimeline comment claims an array of size 64 is large enough to\n> >> not need to care about filling it, but maybe it should briefly explain\n> >> why we can never fill it (I guess 2^64 is just too many).\n-- snip --\n> I guess that should be enough for (2^64-1) logical members, because it's\n> a sequence 1, 2, 4, 8, ..., 2^63. Seems enough.\n>\n> But now that I think about it, does it make sense to do the merging\n> based on the number of logical members? 
Shouldn't this really be driven\n> by the \"length\" of the time interval the member covers?\n\nAfter reading this, I decided to experiment with a few different\nalgorithms in python and plot the unabbreviated LSNTimeStream against\nvarious ways of compressing it. You can see the results if you run the\npython code here [1].\n\nWhat I found is that attempting to calculate the error represented by\ndropping a point and picking the point which would cause the least\nadditional error were it to be dropped produced more accurate results\nthan combining the oldest entries based on logical membership to fit\nsome series.\n\nThis is inspired by what you said about using the length of segments\nto decide which points to merge. In my case, I want to merge segments\nthat have a similar slope -- those which have a point that is\nessentially redundant. I loop through the LSNTimeStream and look for\nthe point that we can drop with the lowest impact on future\ninterpolation accuracy. To do this, I calculate the area of the\ntriangle made by each point on the stream and its adjacent points. The\nidea being that if you drop that point, the triangle is the amount of\nerror you introduce for points being interpolated between the new pair\n(previous adjacencies of the dropped point). This has some issues, but\nit seems more logical than just always combining the oldest points.\n\nIf you run the python simulator code, you'll see that for the\nLSNTimeStream I generated, using this method produces more accurate\nresults than either randomly dropping points or using the \"combine\noldest members\" method.\n\nIt would be nice if we could do something with the accumulated error\n-- so we could use it to modify estimates when interpolating. I don't\nreally know how to keep it though. I thought I would just save the\ncalculated error in one or the other of the adjacent points after\ndropping a point, but then what do we do with the error saved in a\npoint before it is dropped? Add it to the error value in one of the\nadjacent points? If we did, what would that even mean? How would we\nuse it?\n\n- Melanie\n\n[1] https://gist.github.com/melanieplageman/95126993bcb43d4b4042099e9d0ccc11\n\n\n",
"msg_date": "Wed, 26 Jun 2024 21:35:27 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
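To make the triangle test concrete: for each interior point of the stream, the quantity being compared is the area of the triangle formed with its two neighbours, i.e. the error introduced for values later interpolated between those neighbours if the point is dropped. The SQL below is just the cross-product formula evaluated on three invented points (LSN as a byte offset, time as seconds), not anything taken from the patch:

SELECT abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2 AS area
FROM (VALUES (0::numeric,        0::numeric,    -- oldest point (lsn bytes, seconds)
              16000000::numeric, 300::numeric,  -- candidate point to drop
              48000000::numeric, 600::numeric)) -- newest point
     AS p(x1, y1, x2, y2, x3, y3);
-- a point lying exactly on the line between its neighbours gives area 0,
-- i.e. dropping it loses nothing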
{
"msg_contents": "Thanks for the review!\n\nAttached v4 implements the new algorithm/compression described in [1].\n\nWe had discussed off-list possibly using error in some way. So, I'm\ninterested to know what you think about this method I suggested which\ncalculates error. It doesn't save the error so that we could use it\nwhen interpolating for reasons I describe in that mail. If you have\nany ideas on how to use the calculated error or just how to combine\nerror when dropping a point, that would be super helpful.\n\nNote that in this version, I've changed the name from LSNTimeline to\nLSNTimeStream to address some feedback from another reviewer about\nTimeline being already in use in Postgres as a concept.\n\nOn Mon, Mar 18, 2024 at 10:02 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 22 Feb 2024, at 03:45, Melanie Plageman <[email protected]> wrote:\n> > On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >> - I wonder what happens if we lose the data - we know that if people\n> >> reset statistics for whatever reason (or just lose them because of a\n> >> crash, or because they're on a replica), bad things happen to\n> >> autovacuum. What's the (expected) impact on pruning?\n> >\n> > This is an important question. Because stats aren't crashsafe, we\n> > could return very inaccurate numbers for some time/LSN values if we\n> > crash. I don't actually know what we could do about that. When I use\n> > the LSNTimeline for the freeze heuristic it is less of an issue\n> > because the freeze heuristic has a fallback strategy when there aren't\n> > enough stats to do its calculations. But other users of this\n> > LSNTimeline will simply get highly inaccurate results (I think?). Is\n> > there anything we could do about this? It seems bad.\n>\n> A complication with this over stats is that we can't recompute this in case of\n> a crash/corruption issue. The simple solution is to consider this unlogged\n> data and start fresh at every unclean shutdown, but I have a feeling that won't\n> be good enough for basing heuristics on?\n\nYes, I still haven't dealt with this yet. Tomas had a list of\nsuggestions in an upthread email, so I will spend some time thinking\nabout those next.\n\nIt seems like we might be able to come up with some way of calculating\na valid \"default\" value or \"best guess\" which could be used whenever\nthere isn't enough data. Though, if we crash and lose some time stream\ndata, we won't know that that data was lost due to a crash so we\nwouldn't know to use our \"best guess\" to make up for it. So, maybe I\nshould try and rebuild the stream using some combination of WAL, clog,\nand commit timestamps? Or perhaps I should do some basic WAL logging\njust for this data structure.\n\n> > Andres had brought up something at some point about, what if the\n> > database is simply turned off for awhile and then turned back on. Even\n> > if you cleanly shut down, will there be \"gaps\" in the timeline? I\n> > think that could be okay, but it is something to think about.\n>\n> The gaps would represent reality, so there is nothing wrong per se with gaps,\n> but if they inflate the interval of a bucket which in turns impact the\n> precision of the results then that can be a problem.\n\nYes, actually I added some hacky code to the quick and dirty python\nsimulator I wrote [2] to test out having a big gap with no updates (if\nthere is no db activity so nothing for bgwriter to do or the db is off\nfor a while). 
And it seemed to basically work fine.\n\n> While the bucketing algorithm is a clever algorithm for degrading precision for\n> older entries without discarding them, I do fear that we'll risk ending up with\n> answers like \"somewhere between in the past and even further in the past\".\n> I've been playing around with various compression algorithms for packing the\n> buckets such that we can retain precision for longer. Since you were aiming to\n> work on other patches leading up to the freeze, let's pick this up again once\n> things calm down.\n\nLet me know what you think about the new algorithm. I wonder if for\npoints older than the second to oldest point, we have the function\nreturn something like \"older than a year ago\" instead of guessing. The\nnew method doesn't focus on compressing old data though.\n\n> When compiling I got this warning for lsntime_merge_target:\n>\n> pgstat_wal.c:242:1: warning: non-void function does not return a value in all control paths [-Wreturn-type]\n> }\n> ^\n> 1 warning generated.\n>\n> The issue seems to be that the can't-really-happen path is protected with an\n> Assertion, which is a no-op for production builds. I think we should handle\n> the error rather than rely on testing catching it (since if it does happen even\n> though it can't, it's going to be when it's for sure not tested). Returning an\n> invalid array subscript like -1 and testing for it in lsntime_insert, with an\n> elog(WARNING..), seems enough.\n>\n>\n> - last_snapshot_lsn <= GetLastImportantRecPtr())\n> + last_snapshot_lsn <= current_lsn)\n> I think we should delay extracting the LSN with GetLastImportantRecPtr until we\n> know that we need it, to avoid taking locks in this codepath unless needed.\n>\n> I've attached a diff with the above suggestions which applies on op of your\n> patchset.\n\nI've implemented these review points in the attached v4.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_YbbZGz-X_pm2zXJA%2B6A22YYpaWhOjmytqFL1yF_FCv6w%40mail.gmail.com\n[2] https://gist.github.com/melanieplageman/7400e81bbbd518fe08b4af55a9b632d4",
"msg_date": "Wed, 26 Jun 2024 22:04:13 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
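To make the error-based compression in v4 easier to follow, here is a minimal standalone sketch of the victim-selection idea: when the fixed-size stream is full, drop the interior point whose removal introduces the least linear-interpolation error. The struct layout, array size, and function names below are invented stand-ins rather than the patch's code, and the error is computed with the standard cross-product triangle formula in double instead of whatever decomposition the patch uses.

#include <math.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;   /* stand-in for the backend typedef */
typedef int64_t  TimestampTz;  /* stand-in for the backend typedef */

#define LSNTIMESTREAM_VOLUME 64

typedef struct LSNTime
{
    TimestampTz time;
    XLogRecPtr  lsn;
} LSNTime;

typedef struct LSNTimeStream
{
    int     length;
    LSNTime data[LSNTIMESTREAM_VOLUME];  /* oldest first */
} LSNTimeStream;

/*
 * Error introduced by dropping data[i]: the area of the triangle formed by
 * the point and its two neighbors, computed in double to avoid overflowing
 * any integer LSN * time product.
 */
static double
drop_error(const LSNTimeStream *s, int i)
{
    double x0 = (double) s->data[i - 1].lsn, y0 = (double) s->data[i - 1].time;
    double x1 = (double) s->data[i].lsn, y1 = (double) s->data[i].time;
    double x2 = (double) s->data[i + 1].lsn, y2 = (double) s->data[i + 1].time;

    return fabs(x0 * (y1 - y2) + x1 * (y2 - y0) + x2 * (y0 - y1)) / 2.0;
}

/*
 * Pick the cheapest interior point to drop.  The two endpoints are always
 * kept, so this assumes length >= 3.
 */
int
stream_victim(const LSNTimeStream *s)
{
    int     victim = 1;
    double  best = drop_error(s, 1);

    for (int i = 2; i < s->length - 1; i++)
    {
        double err = drop_error(s, i);

        if (err < best)
        {
            best = err;
            victim = i;
        }
    }
    return victim;
}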
{
"msg_contents": "Thanks so much Bharath, Andrey, and Ilya for the review!\n\nI've posted a new version here [1] which addresses some of your\nconcerns. I'll comment on those it does not address inline.\n\nOn Thu, May 30, 2024 at 1:26 PM Andrey M. Borodin <[email protected]> wrote:\n>\n> === Questions ===\n> 1. The patch does not handle server restart. All pages will need freeze after any crash?\n\nI haven't fixed this yet. See my email for some thoughts on what I\nshould do here.\n\n> 2. Some benchmarks to proof the patch does not have CPU footprint.\n\nThis is still a todo. Typically when designing a benchmark like this,\nI would want to pick a worst-case workload to see how bad it could be.\nI wonder if just a write heavy workload like pgbench builtin tpcb-like\nwould be sufficient?\n\n> === Nits ===\n> \"Timeline\" term is already taken.\n\nI changed it to LSNTimeStream. What do you think?\n\n> The patch needs rebase due to some header changes.\n\nI did this.\n\n> Tests fail on Windows.\n\nI think this was because of the compiler warnings, but I need to\ndouble-check now.\n\n> The patch lacks tests.\n\nI thought about this a bit. I wonder what kind of tests make sense.\n\nI could\n1) Add tests with the existing stats tests\n(src/test/regress/sql/stats.sql) and just test that bgwriter is in\nfact adding to the time stream.\n\n2) Or should I add some infrastructure to be able to create an\nLSNTimeStream and then insert values to it and do some validations of\nwhat is added? I did a version of this but it is just much more\nannoying with C & SQL than with python (where I tried out my\nalgorithms) [2].\n\n> Some docs would be nice, but the feature is for developers.\n\nI added some docs.\n\n> Mapping is protected for multithreaded access by walstats LWlock and might have tuplestore_putvalues() under that lock. That might be a little dangerous, if tuplestore will be on-disk for some reason (should not happen).\n\nAh, great point! I forgot about the *fetch_stat*() functions. I used\npgstat_fetch_stat_wal() in the latest version so I have a local copy\nthat I can stuff into the tuplestore without any locking. It won't be\nas up-to-date, but I think that is 100% okay for this function.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_a6WSkWPtJCw%3DW%2BP%2Bg-Fw9kfA_t8sMx99dWpMiGHCqJNA%40mail.gmail.com\n[2] https://gist.github.com/melanieplageman/95126993bcb43d4b4042099e9d0ccc11\n\n\n",
"msg_date": "Wed, 26 Jun 2024 22:18:18 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
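On the locking point in this mail, the pattern described (grab a backend-local snapshot of the WAL stats, then fill the tuplestore without holding the stats lock) would look roughly like the following in a set-returning function. The stream member of PgStat_WalStats and the two-column result layout are assumptions made for illustration; pgstat_fetch_stat_wal(), InitMaterializedSRF(), and tuplestore_putvalues() are existing backend APIs. Note also, per the Windows build failure discussed later in the thread, that a built-in declared in pg_proc.dat would not additionally get a PG_FUNCTION_INFO_V1 line.

#include "postgres.h"

#include "funcapi.h"
#include "pgstat.h"
#include "utils/pg_lsn.h"
#include "utils/timestamp.h"
#include "utils/tuplestore.h"

Datum
pg_lsntime_stream(PG_FUNCTION_ARGS)
{
    ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
    PgStat_WalStats *wal_stats;

    InitMaterializedSRF(fcinfo, 0);

    /*
     * pgstat_fetch_stat_wal() returns a backend-local copy of the stats, so
     * nothing below runs while holding the shared stats lock.
     */
    wal_stats = pgstat_fetch_stat_wal();

    /* 'stream' is the hypothetical LSNTimeStream member added by the patch */
    for (int i = 0; i < wal_stats->stream.length; i++)
    {
        Datum   values[2];
        bool    nulls[2] = {false, false};

        values[0] = LSNGetDatum(wal_stats->stream.data[i].lsn);
        values[1] = TimestampTzGetDatum(wal_stats->stream.data[i].time);

        tuplestore_putvalues(rsinfo->setResult, rsinfo->setDesc, values, nulls);
    }

    return (Datum) 0;
}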
{
"msg_contents": "On Wed, Jun 26, 2024 at 10:04 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> I've implemented these review points in the attached v4.\n\nI realized the docs had a compilation error. Attached v5 fixes that as\nwell as three bugs I found while using this patch set more intensely\ntoday.\n\nI see Michael has been working on some crash safety for stats here\n[1]. I wonder if that would be sufficient for the LSNTimeStream. I\nhaven't examined his patch functionality yet, though.\n\nI also had an off-list conversation with Robert where he suggested I\ncould perhaps change the user-facing functions for estimating an\nLSN/time conversion to instead return a floor and a ceiling -- instead\nof linearly interpolating a guess. This would be a way to keep users\nfrom misunderstanding the accuracy of the functions to translate LSN\n<-> time. I'm interested in what others think of this.\n\nI like this idea a lot because it allows me to worry less about how I\ndecide to compress the data and whether or not it will be accurate for\nuse cases different than my own (the opportunistic freezing\nheuristic). If I can provide a floor and a ceiling that are definitely\naccurate, I don't have to worry about misleading people.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/ZnEiqAITL-VgZDoY%40paquier.xyz",
"msg_date": "Fri, 28 Jun 2024 18:09:23 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
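Robert's floor/ceiling suggestion can be pictured as a bounds lookup over the recorded points rather than an interpolation. Everything below (type names, the 64-entry size, the sentinel values standing in for negative and positive infinity) is invented for illustration; the point is only that the function reports the tightest bracket the stream can actually justify.

#include <stdint.h>

typedef uint64_t XLogRecPtr;
typedef int64_t  TimestampTz;

typedef struct LSNTime { TimestampTz time; XLogRecPtr lsn; } LSNTime;
typedef struct LSNTimeStream { int length; LSNTime data[64]; } LSNTimeStream;

#define TIME_MINUS_INFINITY INT64_MIN   /* "no lower bound available" */
#define TIME_PLUS_INFINITY  INT64_MAX   /* "no upper bound available" */

/* data[] is assumed sorted oldest to newest. */
void
stream_time_bounds(const LSNTimeStream *s, XLogRecPtr target,
                   TimestampTz *floor_time, TimestampTz *ceil_time)
{
    *floor_time = TIME_MINUS_INFINITY;
    *ceil_time = TIME_PLUS_INFINITY;

    for (int i = 0; i < s->length; i++)
    {
        if (s->data[i].lsn <= target)
            *floor_time = s->data[i].time;  /* best lower bound so far */

        if (s->data[i].lsn >= target)
        {
            *ceil_time = s->data[i].time;   /* first point at or past target */
            break;
        }
    }
}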
{
"msg_contents": "Hi!\n\nI’m doing another iteration over the patchset.\n\nPgStartLSN = GetXLogInsertRecPtr();\nShould this be kind of RecoveryEndPtr? How is it expected to behave on Standby in HA cluster, which was doing a crash recovery of 1y WALs in a day, then is in startup for a year as a Hot Standby, and then is promoted?\n\nlsn_ts_calculate_error_area() is prone to overflow. Even int64 does not seem capable to accommodate LSN*time. And the function may return negative result, despite claiming area as a result. It’s intended, but a little misleading.\n\ni-- > 0\nIs there a point to do a backward count in the loop?\nConsider dropping not one by one, but half of a stream, LSNTimeStream is ~2Kb of cache and it’s loaded as a whole to the cache..\n\n\n\n> On 27 Jun 2024, at 07:18, Melanie Plageman <[email protected]> wrote:\n> \n>> 2. Some benchmarks to proof the patch does not have CPU footprint.\n> \n> This is still a todo. Typically when designing a benchmark like this,\n> I would want to pick a worst-case workload to see how bad it could be.\n> I wonder if just a write heavy workload like pgbench builtin tpcb-like\n> would be sufficient?\n\nIncreasing background writer activity to maximum and not seeing LSNTimeStream function in `perf top` seems enough to me.\n\n> \n>> === Nits ===\n>> \"Timeline\" term is already taken.\n> \n> I changed it to LSNTimeStream. What do you think?\nSounds good to me.\n\n\n> \n>> Tests fail on Windows.\n> \n> I think this was because of the compiler warnings, but I need to\n> double-check now.\nNope, it really looks more serious.\n[12:31:25.701] FAILED: src/backend/postgres_lib.a.p/utils_activity_pgstat_wal.c.obj \n[12:31:25.701] \"cl\" \"-Isrc\\backend\\postgres_lib.a.p\" \"-Isrc\\include\" \"-I..\\src\\include\" \"-Ic:\\openssl\\1.1\\include\" \"-I..\\src\\include\\port\\win32\" \"-I..\\src\\include\\port\\win32_msvc\" \"/MDd\" \"/FIpostgres_pch.h\" \"/Yupostgres_pch.h\" \"/Fpsrc\\backend\\postgres_lib.a.p\\postgres_pch.pch\" \"/nologo\" \"/showIncludes\" \"/utf-8\" \"/W2\" \"/Od\" \"/Zi\" \"/DWIN32\" \"/DWINDOWS\" \"/D__WINDOWS__\" \"/D__WIN32__\" \"/D_CRT_SECURE_NO_DEPRECATE\" \"/D_CRT_NONSTDC_NO_DEPRECATE\" \"/wd4018\" \"/wd4244\" \"/wd4273\" \"/wd4101\" \"/wd4102\" \"/wd4090\" \"/wd4267\" \"-DBUILDING_DLL\" \"/FS\" \"/FdC:\\cirrus\\build\\src\\backend\\postgres_lib.pdb\" /Fosrc/backend/postgres_lib.a.p/utils_activity_pgstat_wal.c.obj \"/c\" ../src/backend/utils/activity/pgstat_wal.c\n[12:31:25.701] ../src/backend/utils/activity/pgstat_wal.c(433): error C2375: 'pg_estimate_lsn_at_time': redefinition; different linkage\n[12:31:25.701] c:\\cirrus\\build\\src\\include\\utils/fmgrprotos.h(2906): note: see declaration of 'pg_estimate_lsn_at_time'\n[12:31:25.701] ../src/backend/utils/activity/pgstat_wal.c(434): error C2375: 'pg_estimate_time_at_lsn': redefinition; different linkage\n[12:31:25.701] c:\\cirrus\\build\\src\\include\\utils/fmgrprotos.h(2905): note: see declaration of 'pg_estimate_time_at_lsn'\n[12:31:25.701] ../src/backend/utils/activity/pgstat_wal.c(435): error C2375: 'pg_lsntime_stream': redefinition; different linkage\n[12:31:25.858] c:\\cirrus\\build\\src\\include\\utils/fmgrprotos.h(2904): note: see declaration of 'pg_lsntime_stream'\n\n\n> \n>> The patch lacks tests.\n> \n> I thought about this a bit. 
I wonder what kind of tests make sense.\n> \n> I could\n> 1) Add tests with the existing stats tests\n> (src/test/regress/sql/stats.sql) and just test that bgwriter is in\n> fact adding to the time stream.\n> \n> 2) Or should I add some infrastructure to be able to create an\n> LSNTimeStream and then insert values to it and do some validations of\n> what is added? I did a version of this but it is just much more\n> annoying with C & SQL than with python (where I tried out my\n> algorithms) [2].\n\nI think just a test which calls functions and discards the result would greatly increase coverage.\n\n\n> On 29 Jun 2024, at 03:09, Melanie Plageman <[email protected]> wrote:\n> change the user-facing functions for estimating an\n> LSN/time conversion to instead return a floor and a ceiling -- instead\n> of linearly interpolating a guess. This would be a way to keep users\n> from misunderstanding the accuracy of the functions to translate LSN\n> <-> time.\n\nI think this is a good idea. And it covers well “server restart problem”. If API just returns -inf as a boundary, caller can correctly interpret this situation.\n\nThanks! Looking forward to more timely freezing.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 6 Jul 2024 22:36:17 +0500",
"msg_from": "Andrey M. Borodin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "Thanks for the review! v6 attached.\n\nOn Sat, Jul 6, 2024 at 1:36 PM Andrey M. Borodin <[email protected]> wrote:\n>\n> PgStartLSN = GetXLogInsertRecPtr();\n> Should this be kind of RecoveryEndPtr? How is it expected to behave on Standby in HA cluster, which was doing a crash recovery of 1y WALs in a day, then is in startup for a year as a Hot Standby, and then is promoted?\n\nSo, I don't think we will allow use of the LSNTimeStream on a standby,\nsince it is unclear what it would mean on a standby. For example, do\nyou want to know the time the LSN was generated or the time it was\nreplayed? Note that bgwriter won't insert values to the time stream on\na standby (it explicitly checks).\n\nBut, you bring up an issue that I don't quite know what to do about.\nIf the standby doesn't have an LSNTimeStream, then when it is\npromoted, LSN <-> time conversions of LSNs and times before the\npromotion seem impossible. Maybe if the stats file is getting written\nout at checkpoints, we could restore from that previous primary's file\nafter promotion?\n\nThis brings up a second issue, which is that, currently, bgwriter\nwon't insert into the time stream when wal_level is minimal. I'm not\nsure exactly how to resolve it because I am relying on the \"last\nimportant rec pointer\" and the LOG_SNAPSHOT_INTERVAL_MS to throttle\nwhen the bgwriter actually inserts new records into the LSNTimeStream.\nI could come up with a different timeout interval for updating the\ntime stream. Perhaps I should do that?\n\n> lsn_ts_calculate_error_area() is prone to overflow. Even int64 does not seem capable to accommodate LSN*time. And the function may return negative result, despite claiming area as a result. It’s intended, but a little misleading.\n\nAh, great point. I've fixed this.\n\n> i-- > 0\n> Is there a point to do a backward count in the loop?\n> Consider dropping not one by one, but half of a stream, LSNTimeStream is ~2Kb of cache and it’s loaded as a whole to the cache..\n\nYes, the backwards looping was super confusing. It was a relic of my\nold design. Even without your point about cache locality, the code is\nmuch harder to understand with the backwards looping. I've changed the\narray to fill forwards and be accessed with forward loops.\n\n> > On 27 Jun 2024, at 07:18, Melanie Plageman <[email protected]> wrote:\n> >\n> >> 2. Some benchmarks to proof the patch does not have CPU footprint.\n> >\n> > This is still a todo. Typically when designing a benchmark like this,\n> > I would want to pick a worst-case workload to see how bad it could be.\n> > I wonder if just a write heavy workload like pgbench builtin tpcb-like\n> > would be sufficient?\n>\n> Increasing background writer activity to maximum and not seeing LSNTimeStream function in `perf top` seems enough to me.\n\nI've got this on my TODO.\n\n> >> Tests fail on Windows.\n> >\n> > I think this was because of the compiler warnings, but I need to\n> > double-check now.\n> Nope, it really looks more serious.\n> [12:31:25.701] ../src/backend/utils/activity/pgstat_wal.c(433): error C2375: 'pg_estimate_lsn_at_time': redefinition; different linkage\n\nAh, yes. I mistakenly added the functions to pg_proc.dat and also\ncalled PG_FUNCTION_INFO_V1 for the functions. I've fixed it.\n\n> >> The patch lacks tests.\n> >\n> > I thought about this a bit. 
I wonder what kind of tests make sense.\n> >\n> > I could\n> > 1) Add tests with the existing stats tests\n> > (src/test/regress/sql/stats.sql) and just test that bgwriter is in\n> > fact adding to the time stream.\n> >\n> > 2) Or should I add some infrastructure to be able to create an\n> > LSNTimeStream and then insert values to it and do some validations of\n> > what is added? I did a version of this but it is just much more\n> > annoying with C & SQL than with python (where I tried out my\n> > algorithms) [2].\n>\n> I think just a test which calls functions and discards the result would greatly increase coverage.\n\nI've added tests of the two main conversion functions. I didn't add a\ntest of the function which gets the whole stream (pg_lsntime_stream())\nbecause I don't think I can guarantee it won't be empty -- so I'm not\nsure what I could do with it in a test.\n\n> > On 29 Jun 2024, at 03:09, Melanie Plageman <[email protected]> wrote:\n> > change the user-facing functions for estimating an\n> > LSN/time conversion to instead return a floor and a ceiling -- instead\n> > of linearly interpolating a guess. This would be a way to keep users\n> > from misunderstanding the accuracy of the functions to translate LSN\n> > <-> time.\n>\n> I think this is a good idea. And it covers well “server restart problem”. If API just returns -inf as a boundary, caller can correctly interpret this situation.\n\nProviding \"ceiling\" and \"floor\" user functions is still a TODO for me,\nhowever, I think that the patch mostly does handle server restarts.\n\nIn the event of a restart, the cumulative stats system will have\npersisted our time stream, so the LSNTimeStream will just be read back\nin with the rest of the stats. I've added logic to ensure that if the\nPgStartLSN is newer than our oldest LSNTimeStream entry, we use the\noldest entry instead of PgStartLSN when doing conversions LSN <->\ntime.\n\nAs for a crash, stats do not persist crashes, but I think Michael's\npatch will go in to write out the stats file at checkpoints, and then\nthis should be good enough.\n\nIs there anything else you think that is an issue with restarts?\n\n> Thanks! Looking forward to more timely freezing.\n\nThanks! I'll be posting a new version of the opportunistic freezing\npatch that uses the time stream quite soon, so I hope you'll take a\nlook at that as well!\n\n- Melanie",
"msg_date": "Wed, 31 Jul 2024 21:12:15 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "This is a copy of my message for pgsql-hackers mailing list. Unfortunately original message was rejected due to one of recipients addresses is blocked.\n\n> On 1 Aug 2024, at 10:54, Andrey M. Borodin <[email protected]> wrote:\n> \n> \n> \n>> On 1 Aug 2024, at 05:44, Melanie Plageman <[email protected]> wrote:\n>> \n>> Thanks for the review! v6 attached.\n>> \n>> On Sat, Jul 6, 2024 at 1:36 PM Andrey M. Borodin <[email protected]> wrote:\n>>> \n>>> PgStartLSN = GetXLogInsertRecPtr();\n>>> Should this be kind of RecoveryEndPtr? How is it expected to behave on Standby in HA cluster, which was doing a crash recovery of 1y WALs in a day, then is in startup for a year as a Hot Standby, and then is promoted?\n>> \n>> So, I don't think we will allow use of the LSNTimeStream on a standby,\n>> since it is unclear what it would mean on a standby. For example, do\n>> you want to know the time the LSN was generated or the time it was\n>> replayed? Note that bgwriter won't insert values to the time stream on\n>> a standby (it explicitly checks).\n> \n> Yes, I mentioned Standby because PgStartLSN is not what it says it is.\n> \n>> \n>> But, you bring up an issue that I don't quite know what to do about.\n>> If the standby doesn't have an LSNTimeStream, then when it is\n>> promoted, LSN <-> time conversions of LSNs and times before the\n>> promotion seem impossible. Maybe if the stats file is getting written\n>> out at checkpoints, we could restore from that previous primary's file\n>> after promotion?\n> \n> I’m afraid that clocks of a Primary from previous timeline might be not in sync with ours.\n> It’s OK if it causes error, we just need to be prepared when they indicate values from future. Perhaps, by shifting their last point to our “PgStartLSN”.\n> \n>> \n>> This brings up a second issue, which is that, currently, bgwriter\n>> won't insert into the time stream when wal_level is minimal. I'm not\n>> sure exactly how to resolve it because I am relying on the \"last\n>> important rec pointer\" and the LOG_SNAPSHOT_INTERVAL_MS to throttle\n>> when the bgwriter actually inserts new records into the LSNTimeStream.\n>> I could come up with a different timeout interval for updating the\n>> time stream. Perhaps I should do that?\n> \n> IDK. My knowledge of bgwriter is not enough to give a meaningful advise here.\n> \n>> \n>>> lsn_ts_calculate_error_area() is prone to overflow. Even int64 does not seem capable to accommodate LSN*time. And the function may return negative result, despite claiming area as a result. It’s intended, but a little misleading.\n>> \n>> Ah, great point. I've fixed this.\n> \n> Well, not exactly. Result of lsn_ts_calculate_error_area() is still fabs()’ed upon usage. I’d recommend to fabs in function.\n> BTW lsn_ts_calculate_error_area() have no prototype.\n> \n> Also, I’m not a big fan of using IEEE 754 float in this function. This data type have 24 bits of significand bits.\n> Consider that current timestamp has 50 binary digits. Let’s assume realistic LSNs have same 50 bits.\n> Then our rounding error is 2^76 byte*microseconds.\n> Let’s assume we are interested to measure time on a scale of 1GB WAL records.\n> This gives us rounding error of 2^46 microseconds = 2^26 seconds = 64 million seconds = 2 years.\n> Seems like a gross error.\n> \n> If we use IEEE 754 doubles we have 53 significand bytes. 
And rounding error will be on a scale of 128 microseconds per GB, which is acceptable.\n> \n> So I think double is better than float here.\n> \n> Nitpicking, but I’d prefer to sum up (triangle2 + triangle3 + rectangle_part) before subtracting. This can save a bit of precision (smaller figures can have lesser exponent).\n> \n> \n>>>> On 29 Jun 2024, at 03:09, Melanie Plageman <[email protected]> wrote:\n>>>> change the user-facing functions for estimating an\n>>>> LSN/time conversion to instead return a floor and a ceiling -- instead\n>>>> of linearly interpolating a guess. This would be a way to keep users\n>>>> from misunderstanding the accuracy of the functions to translate LSN\n>>>> <-> time.\n>>> \n>>> I think this is a good idea. And it covers well “server restart problem”. If API just returns -inf as a boundary, caller can correctly interpret this situation.\n>> \n>> Providing \"ceiling\" and \"floor\" user functions is still a TODO for me,\n>> however, I think that the patch mostly does handle server restarts.\n>> \n>> In the event of a restart, the cumulative stats system will have\n>> persisted our time stream, so the LSNTimeStream will just be read back\n>> in with the rest of the stats. I've added logic to ensure that if the\n>> PgStartLSN is newer than our oldest LSNTimeStream entry, we use the\n>> oldest entry instead of PgStartLSN when doing conversions LSN <->\n>> time.\n>> \n>> As for a crash, stats do not persist crashes, but I think Michael's\n>> patch will go in to write out the stats file at checkpoints, and then\n>> this should be good enough.\n>> \n>> Is there anything else you think that is an issue with restarts?\n> \n> Nope, looks good to me.\n> \n>> \n>>> Thanks! Looking forward to more timely freezing.\n>> \n>> Thanks! I'll be posting a new version of the opportunistic freezing\n>> patch that uses the time stream quite soon, so I hope you'll take a\n>> look at that as well!\n> \n> Great! Thank you!\n> Besides your TODOs and my nitpicking this patch series looks RfC to me.\n> \n> I have to address some review comments on my patches, then I hope I’ll switch to reviewing opportunistic freezing.\n> \n> \n> Best regards, Andrey Borodin.\n\n\n\n\n",
"msg_date": "Thu, 1 Aug 2024 13:37:05 +0300",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
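A sketch of the numerical advice above: do the geometry in double (53 significand bits, against float's 24), add the two sub-areas together before the single subtraction, and take fabs() inside the function so callers always get a non-negative area. This uses a chord-minus-segments trapezoid decomposition rather than the patch's triangle2/triangle3/rectangle_part split, so treat it as an illustration of the suggestions, not as the patch's lsn_ts_calculate_error_area().

#include <math.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;
typedef int64_t  TimestampTz;

typedef struct LSNTime { TimestampTz time; XLogRecPtr lsn; } LSNTime;

/* Area under the segment from a to b, treating LSN as x and time as y. */
static double
area_under(LSNTime a, LSNTime b)
{
    return ((double) b.lsn - (double) a.lsn) *
           (((double) a.time + (double) b.time) / 2.0);
}

/*
 * Interpolation error introduced by dropping 'mid': the unsigned area
 * between the chord prev->next and the two segments through 'mid'.
 * Assumes prev.lsn <= mid.lsn <= next.lsn.
 */
double
lsn_ts_error_area(LSNTime prev, LSNTime mid, LSNTime next)
{
    double under_chord = area_under(prev, next);
    double under_segments = area_under(prev, mid) + area_under(mid, next);

    return fabs(under_chord - under_segments);
}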
{
"msg_contents": "Attached v7 changes the SQL-callable functions to return ranges of\nLSNs and times covering the target time or LSN instead of linearly\ninterpolating an approximate answer.\n\nI also changed the frequency and conditions under which the background\nwriter updates the global LSNTimeStream. There is now a dedicated\ninterval at which the LSNTimeStream is updated (instead of reusing the\nlog standby snapshot interval).\n\nI also found that it is incorrect to set PgStartLSN to the insert LSN\nin PostmasterMain(). The XLog buffer cache is not guaranteed to be\ninitialized in time. Instead of trying to provide an LSN lower bound\nfor locating times before those recorded on the global LSNTimeStream,\nI simply return a lower bound of InvalidXLogRecPtr. Similarly, I\nprovide a lower bound of -infinity when locating LSNs before those\nrecorded on the global LSNTimeStream.\n\nOn Thu, Aug 1, 2024 at 3:55 AM Andrey M. Borodin <[email protected]> wrote:\n>\n> > On 1 Aug 2024, at 05:44, Melanie Plageman <[email protected]> wrote:\n> >\n> > On Sat, Jul 6, 2024 at 1:36 PM Andrey M. Borodin <[email protected]> wrote:\n> >>\n> >> PgStartLSN = GetXLogInsertRecPtr();\n> >> Should this be kind of RecoveryEndPtr? How is it expected to behave on Standby in HA cluster, which was doing a crash recovery of 1y WALs in a day, then is in startup for a year as a Hot Standby, and then is promoted?\n> >\n> > So, I don't think we will allow use of the LSNTimeStream on a standby,\n> > since it is unclear what it would mean on a standby. For example, do\n> > you want to know the time the LSN was generated or the time it was\n> > replayed? Note that bgwriter won't insert values to the time stream on\n> > a standby (it explicitly checks).\n>\n> Yes, I mentioned Standby because PgStartLSN is not what it says it is.\n\nRight, I've found another way of dealing with this since PgStartLSN\nwas incorrect.\n\n> > But, you bring up an issue that I don't quite know what to do about.\n> > If the standby doesn't have an LSNTimeStream, then when it is\n> > promoted, LSN <-> time conversions of LSNs and times before the\n> > promotion seem impossible. Maybe if the stats file is getting written\n> > out at checkpoints, we could restore from that previous primary's file\n> > after promotion?\n>\n> I’m afraid that clocks of a Primary from previous timeline might be not in sync with ours.\n> It’s OK if it causes error, we just need to be prepared when they indicate values from future. Perhaps, by shifting their last point to our “PgStartLSN”.\n\nRegarding a standby being promoted. I plan to make a version of the\nLSNTimeStream functions which works on a standby by using\ngetRecordTimestamp() and inserts an LSN from the last record replayed\nand the associated timestamp. That should mean the LSNTimeStream on\nthe standby is roughly the same as the one on the primary since those\nrecords were inserted on the primary.\n\nAs for time going backwards in general, I've now made it so that\nvalues are only inserted if the times are monotonically increasing and\nthe LSN is the same or increasing. This should handle time going\nbackwards, either on the primary itself or after a standby is promoted\nif the timeline wasn't a perfect match.\n\n> > This brings up a second issue, which is that, currently, bgwriter\n> > won't insert into the time stream when wal_level is minimal. 
I'm not\n> > sure exactly how to resolve it because I am relying on the \"last\n> > important rec pointer\" and the LOG_SNAPSHOT_INTERVAL_MS to throttle\n> > when the bgwriter actually inserts new records into the LSNTimeStream.\n> > I could come up with a different timeout interval for updating the\n> > time stream. Perhaps I should do that?\n>\n> IDK. My knowledge of bgwriter is not enough to give a meaningful advise here.\n\nSee my note at top of the email.\n\n> >> lsn_ts_calculate_error_area() is prone to overflow. Even int64 does not seem capable to accommodate LSN*time. And the function may return negative result, despite claiming area as a result. It’s intended, but a little misleading.\n> >\n> > Ah, great point. I've fixed this.\n>\n> Well, not exactly. Result of lsn_ts_calculate_error_area() is still fabs()’ed upon usage. I’d recommend to fabs in function.\n> BTW lsn_ts_calculate_error_area() have no prototype.\n>\n> Also, I’m not a big fan of using IEEE 754 float in this function. This data type have 24 bits of significand bits.\n> Consider that current timestamp has 50 binary digits. Let’s assume realistic LSNs have same 50 bits.\n> Then our rounding error is 2^76 byte*microseconds.\n> Let’s assume we are interested to measure time on a scale of 1GB WAL records.\n> This gives us rounding error of 2^46 microseconds = 2^26 seconds = 64 million seconds = 2 years.\n> Seems like a gross error.\n>\n> If we use IEEE 754 doubles we have 53 significand bytes. And rounding error will be on a scale of 128 microseconds per GB, which is acceptable.\n>\n> So I think double is better than float here.\n>\n> Nitpicking, but I’d prefer to sum up (triangle2 + triangle3 + rectangle_part) before subtracting. This can save a bit of precision (smaller figures can have lesser exponent).\n\nOkay, thanks for the detail. See what you think about v7.\n\nSome perf testing of bgwriter updates are still a todo. I was thinking\nthat it might be bad to take an exclusive lock on the WAL stats data\nstructure for the entire time I am inserting a new value to the\nLSNTimeStream. I was thinking maybe I should take a share lock and\ncalculate which element to drop first and then take the exclusive\nlock? Or maybe I should make a separate lock for just the stream\nmember of PgStat_WalStats. Maybe it isn't worth it? I'm not sure.\n\n- Melanie",
"msg_date": "Wed, 7 Aug 2024 12:30:53 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
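The monotonicity guard described in this mail is small enough to sketch directly; the names and layout are stand-ins, not the patch's code.

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;
typedef int64_t  TimestampTz;

typedef struct LSNTime { TimestampTz time; XLogRecPtr lsn; } LSNTime;
typedef struct LSNTimeStream { int length; LSNTime data[64]; } LSNTimeStream;

/*
 * Only append a point if time strictly advances and the LSN does not move
 * backwards, which covers clock regressions on the primary as well as a
 * promoted standby whose clock disagrees with the old primary's.
 */
bool
stream_insert_ok(const LSNTimeStream *s, XLogRecPtr lsn, TimestampTz time)
{
    LSNTime newest;

    if (s->length == 0)
        return true;

    newest = s->data[s->length - 1];

    return time > newest.time && lsn >= newest.lsn;
}

The share-then-exclusive locking idea raised at the end of the mail would sit on top of a check like this: pick the drop victim while holding the lock in shared mode, then retake it exclusively and re-validate before modifying the array, since the stream may have changed in between.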
{
"msg_contents": "Melanie,\n\nAs I mentioned to you off-list, I feel like this needs some sort of\nrecency bias. Certainly vacuum, and really almost any conceivable user\nof this facility, is going to care more about accurate answers for new\ndata than for old data. If there's no recency bias, then I think that\neventually answers for more recent LSNs will start to become less\naccurate, since they've got to share the data structure with more and\nmore time from long ago. I don't think you've done anything about this\nin this version of the patch, but I might be wrong.\n\nOne way to make the standby more accurately mimic the primary would be\nto base entries on the timestamp-LSN data that is already present in\nthe WAL, i.e. {COMMIT|ABORT} [PREPARED] records. If you only added or\nupdated entries on the primary when logging those records, the standby\ncould redo exactly what the primary did. A disadvantage of this\napproach is that if there are no commits for a while then your mapping\ngets out of date, but that might be something we could just tolerate.\nAnother possible solution is to log the changes you make on the\nprimary and have the standby replay those changes. Perhaps I'm wrong\nto advocate for such solutions, but it feels error-prone to have one\nalgorithm for the primary and a different algorithm for the standby.\nYou now basically have two things that can break and you have to debug\nwhat went wrong instead of just one.\n\nIn terms of testing this, I advocate not so much performance testing\nas accuracy testing. So for example if you intentionally change the\nLSN consumption rate during your test, e.g. high LSN consumption rate\nfor a while, then low for while, then high again for a while, and then\ngraph the contents of the final data structure, how well does the data\nstructure model what actually happened? Honestly, my whole concern\nhere is really around the lack of recency bias. If you simply took a\nsample every N seconds until the buffer was full and then repeatedly\nthinned the data by throwing away every other sample from the older\nhalf of the buffer, then it would be self-evident that accuracy for\nthe older data was going to degrade over time, but also that accuracy\nfor new data wasn't going to degrade no matter how long you ran the\nalgorithm, simply because the newest half of the data never gets\nthinned. But because you've chosen to throw away the point that leads\nto the least additional error (on an imaginary request distribution\nthat is just as likely to care about very old things as it is to care\nabout new ones), there's nothing to keep the algorithm from getting\ninto a state where it systematically throws away new data points and\nkeeps old ones.\n\nTo be clear, I'm not saying the algorithm I just mooted is the right\none or that it has no weaknesses; for example, it needlessly throws\naway precision that it doesn't have to lose when the rate of LSN\nconsumption is constant for a long time. I don't think that\nnecessarily matters because the algorithm doesn't need to be as\naccurate as possible; it just needs to be accurate enough to get the\njob done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Aug 2024 13:06:32 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
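The alternative Robert moots (sample on a fixed interval and, when the buffer fills, keep every other sample from the older half only) is easy to state in code. The sketch below just restates that description with invented names; because the newer half is never touched, the recency bias is built in.

#include <stdint.h>

typedef uint64_t XLogRecPtr;
typedef int64_t  TimestampTz;

typedef struct LSNTime { TimestampTz time; XLogRecPtr lsn; } LSNTime;

#define STREAM_VOLUME 64

typedef struct LSNTimeStream
{
    int     length;
    LSNTime data[STREAM_VOLUME];    /* oldest first */
} LSNTimeStream;

static void
thin_older_half(LSNTimeStream *s)
{
    int half = s->length / 2;
    int keep = 0;

    /* Keep every other sample from the older half... */
    for (int i = 0; i < half; i += 2)
        s->data[keep++] = s->data[i];

    /* ...and slide the untouched newer half down behind it. */
    for (int i = half; i < s->length; i++)
        s->data[keep++] = s->data[i];

    s->length = keep;
}

/* Called on a fixed timer, e.g. once every N seconds. */
void
stream_insert(LSNTimeStream *s, XLogRecPtr lsn, TimestampTz time)
{
    if (s->length == STREAM_VOLUME)
        thin_older_half(s);

    s->data[s->length].lsn = lsn;
    s->data[s->length].time = time;
    s->length++;
}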
{
"msg_contents": "On Wed, Aug 7, 2024 at 1:06 PM Robert Haas <[email protected]> wrote:\n>\n> As I mentioned to you off-list, I feel like this needs some sort of\n> recency bias. Certainly vacuum, and really almost any conceivable user\n> of this facility, is going to care more about accurate answers for new\n> data than for old data. If there's no recency bias, then I think that\n> eventually answers for more recent LSNs will start to become less\n> accurate, since they've got to share the data structure with more and\n> more time from long ago. I don't think you've done anything about this\n> in this version of the patch, but I might be wrong.\n\nThat makes sense. This version of the patch set doesn't have a recency\nbias implementation. I plan to work on it but will need to do the\ntesting like you mentioned.\n\n> One way to make the standby more accurately mimic the primary would be\n> to base entries on the timestamp-LSN data that is already present in\n> the WAL, i.e. {COMMIT|ABORT} [PREPARED] records. If you only added or\n> updated entries on the primary when logging those records, the standby\n> could redo exactly what the primary did. A disadvantage of this\n> approach is that if there are no commits for a while then your mapping\n> gets out of date, but that might be something we could just tolerate.\n> Another possible solution is to log the changes you make on the\n> primary and have the standby replay those changes. Perhaps I'm wrong\n> to advocate for such solutions, but it feels error-prone to have one\n> algorithm for the primary and a different algorithm for the standby.\n> You now basically have two things that can break and you have to debug\n> what went wrong instead of just one.\n\nYour point about maintaining two different systems for creating the\ntime stream being error prone makes sense. Honestly logging the\ncontents of the LSNTimeStream seems like it will be the simplest to\nmaintain and understand. I was a bit apprehensive to WAL log one part\nof a single stats structure (since the other stats aren't logged), but\nI think explaining why that's done is easier than explaining separate\nLSNTimeStream creation code for replicas.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 7 Aug 2024 15:39:49 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On 8/7/24 21:39, Melanie Plageman wrote:\n> On Wed, Aug 7, 2024 at 1:06 PM Robert Haas <[email protected]> wrote:\n>>\n>> As I mentioned to you off-list, I feel like this needs some sort of\n>> recency bias. Certainly vacuum, and really almost any conceivable user\n>> of this facility, is going to care more about accurate answers for new\n>> data than for old data. If there's no recency bias, then I think that\n>> eventually answers for more recent LSNs will start to become less\n>> accurate, since they've got to share the data structure with more and\n>> more time from long ago. I don't think you've done anything about this\n>> in this version of the patch, but I might be wrong.\n> \n> That makes sense. This version of the patch set doesn't have a recency\n> bias implementation. I plan to work on it but will need to do the\n> testing like you mentioned.\n> \n\nI agree that it's likely we probably want more accurate results for\nrecent data, so some recency bias makes sense - for example for the\neager vacuuming that's definitely true.\n\nBut this was initially presented as a somewhat universal LSN/timestamp\nmapping, and in that case it might make sense to minimize the average\nerror - which I think is what lsntime_to_drop() currently does, by\ncalculating the \"area\" etc.\n\nMaybe it'd be good to approach this from the opposite direction, say\nwhat \"accuracy guarantees\" we want to provide, and then design the\nstructure / algorithm to ensure that. Otherwise we may end up with an\ninfinite discussion about algorithms with unclear idea which one is the\nbest choice.\n\nAnd I'm sure \"users\" of the LSN/Timestamp mapping may get confused about\nwhat to expect, without reasonably clear guarantees.\n\nFor example, it seems to me a \"good\" accuracy guarantee would be:\n\n Given a LSN, the age of the returned timestamp is less than 10% off\n the actual timestamp. The timestamp precision is in seconds.\n\nThis means that if LSN was written 100 seconds ago, it would be OK to\nget an answer in the 90-110 seconds range. For LSN from 1h ago, the\nacceptable range would be 3600s +/- 360s. And so on. The 10% is just\narbitrary, maybe it should be lower - doesn't matter much.\n\nHow could we do this? We have 1s precision, so we start with buckets for\neach seconds. And we want to allow merging stuff nicely. The smallest\nmerges we could do is 1s -> 2s -> 4s -> 8s -> ... but let's say we do\n1s -> 10s -> 100s -> 1000s instead.\n\nSo we start with 100x one-second buckets\n\n[A_0, A_1, ..., A_99] -> 100 x 1s buckets\n[B_0, B_1, ..., B_99] -> 100 x 10s buckets\n[C_0, C_1, ..., C_99] -> 100 x 100s buckets\n[D_0, D_1, ..., D_99] -> 100 x 1000s buckets\n\nWe start by adding data into A_k buckets. After filling all 100 of them,\nwe grab the oldest 10 buckets, and combine/move them into B_k. And so\non, until B is gets full too. Then we grab the 10 oldest B_k entries,\nand move them into C. and so on. For D the oldest entries would get\ndiscarded, or we could add another layer with each bucket representing\n10k seconds.\n\nA-D is already enough to cover 30h, with A-E it'd be ~300h. Do we need\n(or want) to keep a longer history?\n\nThese arrays are larger than what the current patch does, ofc. That has\n64 x 16B entries, so 1kB. These arrays have ~6kB - but I'm pretty sure\nit could be made more compact, by growing the buckets slower. 
With 10x\nit's just simpler to think about, and also - 6kB seems pretty decent.\n\nNote: I just realized the patch does LOG_STREAM_INTERVAL_MS = 30s, so\nthe 1s accuracy seems like an overkill, and it could be much smaller.\n\n\n>> One way to make the standby more accurately mimic the primary would be\n>> to base entries on the timestamp-LSN data that is already present in\n>> the WAL, i.e. {COMMIT|ABORT} [PREPARED] records. If you only added or\n>> updated entries on the primary when logging those records, the standby\n>> could redo exactly what the primary did. A disadvantage of this\n>> approach is that if there are no commits for a while then your mapping\n>> gets out of date, but that might be something we could just tolerate.\n>> Another possible solution is to log the changes you make on the\n>> primary and have the standby replay those changes. Perhaps I'm wrong\n>> to advocate for such solutions, but it feels error-prone to have one\n>> algorithm for the primary and a different algorithm for the standby.\n>> You now basically have two things that can break and you have to debug\n>> what went wrong instead of just one.\n> \n> Your point about maintaining two different systems for creating the\n> time stream being error prone makes sense. Honestly logging the\n> contents of the LSNTimeStream seems like it will be the simplest to\n> maintain and understand. I was a bit apprehensive to WAL log one part\n> of a single stats structure (since the other stats aren't logged), but\n> I think explaining why that's done is easier than explaining separate\n> LSNTimeStream creation code for replicas.\n> \n\nIsn't this a sign this does not quite fit into pgstats? Even if this\nhappens to deal with unsafe restarts, replica promotions and so on, what\nif the user just does pg_stat_reset? That already causes trouble because\nwe simply forget deleted/updated/inserted tuples. If we also forget data\nused for freezing heuristics, that does not seem great ...\n\nWouldn't it be better to write this into WAL as part of a checkpoint (or\nsomething like that?), and make bgwriter to not only add LSN/timestamp\ninto the stream, but also write it into WAL. It's already waking up, on\nidle systems ~32B written to WAL does not matter, and on busy system\nit's just noise.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Thu, 8 Aug 2024 20:34:40 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
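Tomas's tiered buckets could take roughly the shape sketched below. The mail only fixes the 100-buckets-per-tier layout and the merge-ten-into-one cascade; the names, the choice to store each window's start time plus its last observed LSN, and the zero-initialized struct are assumptions made to keep the example concrete.

#include <stdint.h>
#include <string.h>

#define TIERS        4      /* 1s, 10s, 100s, 1000s buckets */
#define PER_TIER     100
#define MERGE_FACTOR 10

typedef struct TierBucket
{
    int64_t  start_time;    /* first second covered by this bucket */
    uint64_t end_lsn;       /* last LSN observed inside the bucket's window */
} TierBucket;

typedef struct BucketedStream
{
    int        used[TIERS];             /* buckets filled per tier */
    TierBucket tier[TIERS][PER_TIER];   /* tier 0 holds the newest data */
} BucketedStream;

/* Combine the 10 oldest buckets of 'level' into one bucket of 'level + 1'. */
static void
cascade(BucketedStream *bs, int level)
{
    TierBucket merged;

    merged.start_time = bs->tier[level][0].start_time;
    merged.end_lsn = bs->tier[level][MERGE_FACTOR - 1].end_lsn;

    /* Slide the surviving buckets of this tier down. */
    memmove(&bs->tier[level][0], &bs->tier[level][MERGE_FACTOR],
            (bs->used[level] - MERGE_FACTOR) * sizeof(TierBucket));
    bs->used[level] -= MERGE_FACTOR;

    if (level + 1 == TIERS)
        return;                 /* history older than the last tier is discarded */

    if (bs->used[level + 1] == PER_TIER)
        cascade(bs, level + 1); /* make room one tier up first */

    bs->tier[level + 1][bs->used[level + 1]++] = merged;
}

/* Called once per second with the current LSN; bs starts zero-initialized. */
void
bucket_add(BucketedStream *bs, int64_t now_seconds, uint64_t current_lsn)
{
    if (bs->used[0] == PER_TIER)
        cascade(bs, 0);

    bs->tier[0][bs->used[0]].start_time = now_seconds;
    bs->tier[0][bs->used[0]].end_lsn = current_lsn;
    bs->used[0]++;
}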
{
"msg_contents": "On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <[email protected]> wrote:\n> How could we do this? We have 1s precision, so we start with buckets for\n> each seconds. And we want to allow merging stuff nicely. The smallest\n> merges we could do is 1s -> 2s -> 4s -> 8s -> ... but let's say we do\n> 1s -> 10s -> 100s -> 1000s instead.\n>\n> So we start with 100x one-second buckets\n>\n> [A_0, A_1, ..., A_99] -> 100 x 1s buckets\n> [B_0, B_1, ..., B_99] -> 100 x 10s buckets\n> [C_0, C_1, ..., C_99] -> 100 x 100s buckets\n> [D_0, D_1, ..., D_99] -> 100 x 1000s buckets\n>\n> We start by adding data into A_k buckets. After filling all 100 of them,\n> we grab the oldest 10 buckets, and combine/move them into B_k. And so\n> on, until B is gets full too. Then we grab the 10 oldest B_k entries,\n> and move them into C. and so on. For D the oldest entries would get\n> discarded, or we could add another layer with each bucket representing\n> 10k seconds.\n\nYeah, this kind of algorithm makes sense to me, although as you say\nlater, I don't think we need this amount of precision. I also think\nyou're right to point out that this provides certain guaranteed\nbehavior.\n\n> A-D is already enough to cover 30h, with A-E it'd be ~300h. Do we need\n> (or want) to keep a longer history?\n\nI think there is a difference of opinion about this between Melanie\nand I. I feel like we should be designing something that does the\nexact job we need done for the freezing stuff, and if anyone else can\nuse it, that's a bonus. For that, I feel that 300h is more than\nplenty. The goal of the freezing stuff, roughly speaking, is to answer\nthe question \"will this be unfrozen real soon?\". \"Real soon\" could\narguably mean a minute or an hour, but I don't think it makes sense\nfor it to be a week. If we're freezing data now that has a good chance\nof being unfrozen again within 7 days, we should just freeze it\nanyway. The cost of freezing isn't really all that high. If we keep\nfreezing pages that are going to be unfrozen again within seconds or\nminutes, we pay those freezing costs enough times that they become\nmaterial, but I have difficulty imagining that it ever matters if we\nre-freeze the same page every week. It's OK to be wrong as long as we\naren't wrong too often, and I think that being wrong once per page per\nweek isn't too often.\n\nBut I think Melanie was hoping to create something more general, which\non one level is understandable, but on the other hand it's unclear\nwhat the goals are exactly. If we limit our scope to specifically\nVACUUM, we can make reasonable guesses about how much precision we\nneed and for how long. But a hypothetical other client of this\ninfrastructure could need anything at all, which makes it very unclear\nwhat the best design is, IMHO.\n\n> Isn't this a sign this does not quite fit into pgstats? Even if this\n> happens to deal with unsafe restarts, replica promotions and so on, what\n> if the user just does pg_stat_reset? That already causes trouble because\n> we simply forget deleted/updated/inserted tuples. If we also forget data\n> used for freezing heuristics, that does not seem great ...\n\n+1.\n\n> Wouldn't it be better to write this into WAL as part of a checkpoint (or\n> something like that?), and make bgwriter to not only add LSN/timestamp\n> into the stream, but also write it into WAL. 
It's already waking up, on\n> idle systems ~32B written to WAL does not matter, and on busy system\n> it's just noise.\n\nI am not really sure of the best place to put this data. I agree that\npgstat doesn't feel like quite the right place. But I'm not quite sure\nthat putting it into every checkpoint is the right idea either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Aug 2024 14:59:54 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On 8/8/24 20:59, Robert Haas wrote:\n> On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <[email protected]> wrote:\n>> How could we do this? We have 1s precision, so we start with buckets for\n>> each seconds. And we want to allow merging stuff nicely. The smallest\n>> merges we could do is 1s -> 2s -> 4s -> 8s -> ... but let's say we do\n>> 1s -> 10s -> 100s -> 1000s instead.\n>>\n>> So we start with 100x one-second buckets\n>>\n>> [A_0, A_1, ..., A_99] -> 100 x 1s buckets\n>> [B_0, B_1, ..., B_99] -> 100 x 10s buckets\n>> [C_0, C_1, ..., C_99] -> 100 x 100s buckets\n>> [D_0, D_1, ..., D_99] -> 100 x 1000s buckets\n>>\n>> We start by adding data into A_k buckets. After filling all 100 of them,\n>> we grab the oldest 10 buckets, and combine/move them into B_k. And so\n>> on, until B is gets full too. Then we grab the 10 oldest B_k entries,\n>> and move them into C. and so on. For D the oldest entries would get\n>> discarded, or we could add another layer with each bucket representing\n>> 10k seconds.\n> \n> Yeah, this kind of algorithm makes sense to me, although as you say\n> later, I don't think we need this amount of precision. I also think\n> you're right to point out that this provides certain guaranteed\n> behavior.\n> \n>> A-D is already enough to cover 30h, with A-E it'd be ~300h. Do we need\n>> (or want) to keep a longer history?\n> \n> I think there is a difference of opinion about this between Melanie\n> and I. I feel like we should be designing something that does the\n> exact job we need done for the freezing stuff, and if anyone else can\n> use it, that's a bonus. For that, I feel that 300h is more than\n> plenty. The goal of the freezing stuff, roughly speaking, is to answer\n> the question \"will this be unfrozen real soon?\". \"Real soon\" could\n> arguably mean a minute or an hour, but I don't think it makes sense\n> for it to be a week. If we're freezing data now that has a good chance\n> of being unfrozen again within 7 days, we should just freeze it\n> anyway. The cost of freezing isn't really all that high. If we keep\n> freezing pages that are going to be unfrozen again within seconds or\n> minutes, we pay those freezing costs enough times that they become\n> material, but I have difficulty imagining that it ever matters if we\n> re-freeze the same page every week. It's OK to be wrong as long as we\n> aren't wrong too often, and I think that being wrong once per page per\n> week isn't too often.\n> \n> But I think Melanie was hoping to create something more general, which\n> on one level is understandable, but on the other hand it's unclear\n> what the goals are exactly. If we limit our scope to specifically\n> VACUUM, we can make reasonable guesses about how much precision we\n> need and for how long. But a hypothetical other client of this\n> infrastructure could need anything at all, which makes it very unclear\n> what the best design is, IMHO.\n> \n\nI don't have a strong opinion on this. I agree with you it's better to\nhave a good solution for the problem at hand than a poor solution for\nhypothetical use cases. I don't have a clear idea what the other use\ncases would be, which makes it hard to say what precision/history would\nbe necessary. But I also understand the wish to make it useful for a\nwider set of use cases, when possible. 
I'd try to do the same thing.\n\nBut I think a clear description of the precision guarantees helps to\nachieve that (even if the algorithm could be different).\n\nIf the only argument ends up being about how precise it needs to be and\nhow much history we need to cover, I think that's fine because that's\njust a matter of setting a couple config parameters.\n\n>> Isn't this a sign this does not quite fit into pgstats? Even if this\n>> happens to deal with unsafe restarts, replica promotions and so on, what\n>> if the user just does pg_stat_reset? That already causes trouble because\n>> we simply forget deleted/updated/inserted tuples. If we also forget data\n>> used for freezing heuristics, that does not seem great ...\n> \n> +1.\n> \n>> Wouldn't it be better to write this into WAL as part of a checkpoint (or\n>> something like that?), and make bgwriter to not only add LSN/timestamp\n>> into the stream, but also write it into WAL. It's already waking up, on\n>> idle systems ~32B written to WAL does not matter, and on busy system\n>> it's just noise.\n> \n> I am not really sure of the best place to put this data. I agree that\n> pgstat doesn't feel like quite the right place. But I'm not quite sure\n> that putting it into every checkpoint is the right idea either.\n> \n\nIs there a reason not to make this just another SLRU, just like we do\nfor commit_ts? I'm not saying it's perfect, but it's an approach we\nalready use to solve these issues.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Fri, 9 Aug 2024 02:39:24 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On Thu, Aug 8, 2024 at 8:39 PM Tomas Vondra <[email protected]> wrote:\n> Is there a reason not to make this just another SLRU, just like we do\n> for commit_ts? I'm not saying it's perfect, but it's an approach we\n> already use to solve these issues.\n\nAn SLRU is essentially an infinitely large array that grows at one end\nand shrinks at the other -- but this is a fixed-size data structure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Aug 2024 20:53:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <[email protected]> wrote:\n>\n> On 8/7/24 21:39, Melanie Plageman wrote:\n> > On Wed, Aug 7, 2024 at 1:06 PM Robert Haas <[email protected]> wrote:\n> >>\n> >> As I mentioned to you off-list, I feel like this needs some sort of\n> >> recency bias. Certainly vacuum, and really almost any conceivable user\n> >> of this facility, is going to care more about accurate answers for new\n> >> data than for old data. If there's no recency bias, then I think that\n> >> eventually answers for more recent LSNs will start to become less\n> >> accurate, since they've got to share the data structure with more and\n> >> more time from long ago. I don't think you've done anything about this\n> >> in this version of the patch, but I might be wrong.\n> >\n> > That makes sense. This version of the patch set doesn't have a recency\n> > bias implementation. I plan to work on it but will need to do the\n> > testing like you mentioned.\n> >\n>\n> I agree that it's likely we probably want more accurate results for\n> recent data, so some recency bias makes sense - for example for the\n> eager vacuuming that's definitely true.\n>\n> But this was initially presented as a somewhat universal LSN/timestamp\n> mapping, and in that case it might make sense to minimize the average\n> error - which I think is what lsntime_to_drop() currently does, by\n> calculating the \"area\" etc.\n>\n> Maybe it'd be good to approach this from the opposite direction, say\n> what \"accuracy guarantees\" we want to provide, and then design the\n> structure / algorithm to ensure that. Otherwise we may end up with an\n> infinite discussion about algorithms with unclear idea which one is the\n> best choice.\n>\n> And I'm sure \"users\" of the LSN/Timestamp mapping may get confused about\n> what to expect, without reasonably clear guarantees.\n>\n> For example, it seems to me a \"good\" accuracy guarantee would be:\n>\n> Given a LSN, the age of the returned timestamp is less than 10% off\n> the actual timestamp. The timestamp precision is in seconds.\n>\n> This means that if LSN was written 100 seconds ago, it would be OK to\n> get an answer in the 90-110 seconds range. For LSN from 1h ago, the\n> acceptable range would be 3600s +/- 360s. And so on. The 10% is just\n> arbitrary, maybe it should be lower - doesn't matter much.\n\nI changed this patch a bit to only provide ranges with an upper and\nlower bound from the SQL callable functions. While the size of the\nrange provided could be part of our \"accuracy guarantee\", I'm not sure\nif we have to provide that.\n\n> How could we do this? We have 1s precision, so we start with buckets for\n> each seconds. And we want to allow merging stuff nicely. The smallest\n> merges we could do is 1s -> 2s -> 4s -> 8s -> ... but let's say we do\n> 1s -> 10s -> 100s -> 1000s instead.\n>\n> So we start with 100x one-second buckets\n>\n> [A_0, A_1, ..., A_99] -> 100 x 1s buckets\n> [B_0, B_1, ..., B_99] -> 100 x 10s buckets\n> [C_0, C_1, ..., C_99] -> 100 x 100s buckets\n> [D_0, D_1, ..., D_99] -> 100 x 1000s buckets\n>\n> We start by adding data into A_k buckets. After filling all 100 of them,\n> we grab the oldest 10 buckets, and combine/move them into B_k. And so\n> on, until B is gets full too. Then we grab the 10 oldest B_k entries,\n> and move them into C. and so on. 
For D the oldest entries would get\n> discarded, or we could add another layer with each bucket representing\n> 10k seconds.\n\nI originally had an algorithm that stored old values somewhat like\nthis (each element stored 2x logical members of the preceding\nelement). When I was testing algorithms, I abandoned this method\nbecause it was less accurate than the method which calculates the\ninterpolation error \"area\". But, this would be expected -- it would be\nless accurate for older values.\n\nI'm currently considering an algorithm that uses a combination of the\ninterpolation error and the age of the point. I'm thinking of adding\nto or dividing the error of each point by \"now - that point's time (or\nlsn)\". This would lead me to be more likely to drop points that are\nolder.\n\nThis is a bit different than \"combining\" buckets, but it seems like it\nmight allow us to drop unneeded recent points when they are very\nregular.\n\n> Isn't this a sign this does not quite fit into pgstats? Even if this\n> happens to deal with unsafe restarts, replica promotions and so on, what\n> if the user just does pg_stat_reset? That already causes trouble because\n> we simply forget deleted/updated/inserted tuples. If we also forget data\n> used for freezing heuristics, that does not seem great ...\n>\n> Wouldn't it be better to write this into WAL as part of a checkpoint (or\n> something like that?), and make bgwriter to not only add LSN/timestamp\n> into the stream, but also write it into WAL. It's already waking up, on\n> idle systems ~32B written to WAL does not matter, and on busy system\n> it's just noise.\n\nI was imagining adding a new type of WAL record that contains just the\nLSN and time and writing it out in bgwriter. Is that not what you are\nthinking?\n\n- Melanie\n\n\n",
"msg_date": "Thu, 8 Aug 2024 21:02:15 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
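The recency-bias tweak floated here (penalizing a candidate's interpolation error by its age) could be as small as the helper below, applied to each candidate's raw error before choosing the minimum, as in the earlier victim-selection sketch. Dividing rather than adding is only one of the two options mentioned in the mail, and the +1 guard against a zero age is an invented detail.

#include <stdint.h>

typedef int64_t TimestampTz;

/*
 * Scale a candidate point's raw interpolation error by its age so that, for
 * equal raw error, older points are preferred as drop victims.
 */
double
age_weighted_error(double raw_error, TimestampTz now, TimestampTz point_time)
{
    double age = (double) (now - point_time) + 1.0;     /* avoid divide-by-zero */

    return raw_error / age;
}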
{
"msg_contents": "On Thu, Aug 8, 2024 at 3:00 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <[email protected]> wrote:\n> > A-D is already enough to cover 30h, with A-E it'd be ~300h. Do we need\n> > (or want) to keep a longer history?\n>\n> I think there is a difference of opinion about this between Melanie\n> and I. I feel like we should be designing something that does the\n> exact job we need done for the freezing stuff, and if anyone else can\n> use it, that's a bonus. For that, I feel that 300h is more than\n> plenty. The goal of the freezing stuff, roughly speaking, is to answer\n> the question \"will this be unfrozen real soon?\". \"Real soon\" could\n> arguably mean a minute or an hour, but I don't think it makes sense\n> for it to be a week. If we're freezing data now that has a good chance\n> of being unfrozen again within 7 days, we should just freeze it\n> anyway. The cost of freezing isn't really all that high. If we keep\n> freezing pages that are going to be unfrozen again within seconds or\n> minutes, we pay those freezing costs enough times that they become\n> material, but I have difficulty imagining that it ever matters if we\n> re-freeze the same page every week. It's OK to be wrong as long as we\n> aren't wrong too often, and I think that being wrong once per page per\n> week isn't too often.\n>\n> But I think Melanie was hoping to create something more general, which\n> on one level is understandable, but on the other hand it's unclear\n> what the goals are exactly. If we limit our scope to specifically\n> VACUUM, we can make reasonable guesses about how much precision we\n> need and for how long. But a hypothetical other client of this\n> infrastructure could need anything at all, which makes it very unclear\n> what the best design is, IMHO.\n\nI'm fine with creating something that is optimized for use with\nfreezing. I proposed this LSNTimeStream patch as a separate project\nbecause 1) Andres suggested it would be useful for other things 2) it\nwould make the adaptive freezing project smaller if this goes in\nfirst. The adaptive freezing has two different fuzzy bits (this\nLSNTimeStream and then the accumulator which is used to determine if a\npage is older than most pages which were unfrozen too soon). I was\nhoping to find an independent use for one of the fuzzy bits to move it\nforward.\n\nBut, I do think we should optimize the data thinning strategy for\nvacuum's adaptive freezing.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 8 Aug 2024 21:29:15 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On 8/9/24 03:29, Melanie Plageman wrote:\n> On Thu, Aug 8, 2024 at 3:00 PM Robert Haas <[email protected]> wrote:\n>>\n>> On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <[email protected]> wrote:\n>>> A-D is already enough to cover 30h, with A-E it'd be ~300h. Do we need\n>>> (or want) to keep a longer history?\n>>\n>> I think there is a difference of opinion about this between Melanie\n>> and I. I feel like we should be designing something that does the\n>> exact job we need done for the freezing stuff, and if anyone else can\n>> use it, that's a bonus. For that, I feel that 300h is more than\n>> plenty. The goal of the freezing stuff, roughly speaking, is to answer\n>> the question \"will this be unfrozen real soon?\". \"Real soon\" could\n>> arguably mean a minute or an hour, but I don't think it makes sense\n>> for it to be a week. If we're freezing data now that has a good chance\n>> of being unfrozen again within 7 days, we should just freeze it\n>> anyway. The cost of freezing isn't really all that high. If we keep\n>> freezing pages that are going to be unfrozen again within seconds or\n>> minutes, we pay those freezing costs enough times that they become\n>> material, but I have difficulty imagining that it ever matters if we\n>> re-freeze the same page every week. It's OK to be wrong as long as we\n>> aren't wrong too often, and I think that being wrong once per page per\n>> week isn't too often.\n>>\n>> But I think Melanie was hoping to create something more general, which\n>> on one level is understandable, but on the other hand it's unclear\n>> what the goals are exactly. If we limit our scope to specifically\n>> VACUUM, we can make reasonable guesses about how much precision we\n>> need and for how long. But a hypothetical other client of this\n>> infrastructure could need anything at all, which makes it very unclear\n>> what the best design is, IMHO.\n> \n> I'm fine with creating something that is optimized for use with\n> freezing. I proposed this LSNTimeStream patch as a separate project\n> because 1) Andres suggested it would be useful for other things 2) it\n> would make the adaptive freezing project smaller if this goes in\n> first. The adaptive freezing has two different fuzzy bits (this\n> LSNTimeStream and then the accumulator which is used to determine if a\n> page is older than most pages which were unfrozen too soon). I was\n> hoping to find an independent use for one of the fuzzy bits to move it\n> forward.\n> \n> But, I do think we should optimize the data thinning strategy for\n> vacuum's adaptive freezing.\n> \n\n+1 to this\n\nIMHO if Andres thinks this would be useful for something else, it'd be\nnice if he could explain what the other use cases are. Otherwise it's\nnot clear how to make it work for them.\n\nThe one other use case I can think of is monitoring - being able to look\nat WAL throughput over time. Which seems OK, but it also can accept very\nlow resolution in distant past.\n\nFWIW it still makes sense to do this as a separate patch, before the\nmain \"freezing\" one.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Fri, 9 Aug 2024 14:52:49 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "\n\nOn 8/9/24 03:02, Melanie Plageman wrote:\n> On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <[email protected]> wrote:\n>>\n>> On 8/7/24 21:39, Melanie Plageman wrote:\n>>> On Wed, Aug 7, 2024 at 1:06 PM Robert Haas <[email protected]> wrote:\n>>>>\n>>>> As I mentioned to you off-list, I feel like this needs some sort of\n>>>> recency bias. Certainly vacuum, and really almost any conceivable user\n>>>> of this facility, is going to care more about accurate answers for new\n>>>> data than for old data. If there's no recency bias, then I think that\n>>>> eventually answers for more recent LSNs will start to become less\n>>>> accurate, since they've got to share the data structure with more and\n>>>> more time from long ago. I don't think you've done anything about this\n>>>> in this version of the patch, but I might be wrong.\n>>>\n>>> That makes sense. This version of the patch set doesn't have a recency\n>>> bias implementation. I plan to work on it but will need to do the\n>>> testing like you mentioned.\n>>>\n>>\n>> I agree that it's likely we probably want more accurate results for\n>> recent data, so some recency bias makes sense - for example for the\n>> eager vacuuming that's definitely true.\n>>\n>> But this was initially presented as a somewhat universal LSN/timestamp\n>> mapping, and in that case it might make sense to minimize the average\n>> error - which I think is what lsntime_to_drop() currently does, by\n>> calculating the \"area\" etc.\n>>\n>> Maybe it'd be good to approach this from the opposite direction, say\n>> what \"accuracy guarantees\" we want to provide, and then design the\n>> structure / algorithm to ensure that. Otherwise we may end up with an\n>> infinite discussion about algorithms with unclear idea which one is the\n>> best choice.\n>>\n>> And I'm sure \"users\" of the LSN/Timestamp mapping may get confused about\n>> what to expect, without reasonably clear guarantees.\n>>\n>> For example, it seems to me a \"good\" accuracy guarantee would be:\n>>\n>> Given a LSN, the age of the returned timestamp is less than 10% off\n>> the actual timestamp. The timestamp precision is in seconds.\n>>\n>> This means that if LSN was written 100 seconds ago, it would be OK to\n>> get an answer in the 90-110 seconds range. For LSN from 1h ago, the\n>> acceptable range would be 3600s +/- 360s. And so on. The 10% is just\n>> arbitrary, maybe it should be lower - doesn't matter much.\n> \n> I changed this patch a bit to only provide ranges with an upper and\n> lower bound from the SQL callable functions. While the size of the\n> range provided could be part of our \"accuracy guarantee\", I'm not sure\n> if we have to provide that.\n> \n\nI wouldn't object to providing the timestamp range, along with the\nestimate. That seems potentially quite useful for other use cases - it\nprovides a very clear guarantee.\n\nThe thing that concerns me a bit is that maybe it's an implementation\ndetail. I mean, we might choose to rework the structure in a way that\ndoes not track the ranges like this ... Doesn't seem likely, though.\n\n>> How could we do this? We have 1s precision, so we start with buckets for\n>> each seconds. And we want to allow merging stuff nicely. The smallest\n>> merges we could do is 1s -> 2s -> 4s -> 8s -> ... 
but let's say we do\n>> 1s -> 10s -> 100s -> 1000s instead.\n>>\n>> So we start with 100x one-second buckets\n>>\n>> [A_0, A_1, ..., A_99] -> 100 x 1s buckets\n>> [B_0, B_1, ..., B_99] -> 100 x 10s buckets\n>> [C_0, C_1, ..., C_99] -> 100 x 100s buckets\n>> [D_0, D_1, ..., D_99] -> 100 x 1000s buckets\n>>\n>> We start by adding data into A_k buckets. After filling all 100 of them,\n>> we grab the oldest 10 buckets, and combine/move them into B_k. And so\n>> on, until B is gets full too. Then we grab the 10 oldest B_k entries,\n>> and move them into C. and so on. For D the oldest entries would get\n>> discarded, or we could add another layer with each bucket representing\n>> 10k seconds.\n> \n> I originally had an algorithm that stored old values somewhat like\n> this (each element stored 2x logical members of the preceding\n> element). When I was testing algorithms, I abandoned this method\n> because it was less accurate than the method which calculates the\n> interpolation error \"area\". But, this would be expected -- it would be\n> less accurate for older values.\n> \n> I'm currently considering an algorithm that uses a combination of the\n> interpolation error and the age of the point. I'm thinking of adding\n> to or dividing the error of each point by \"now - that point's time (or\n> lsn)\". This would lead me to be more likely to drop points that are\n> older.\n> \n> This is a bit different than \"combining\" buckets, but it seems like it\n> might allow us to drop unneeded recent points when they are very\n> regular.\n> \n\nTBH I'm a bit lost in how the various patch versions merge the data.\nMaybe there is a perfect algorithm, keeping a perfectly accurate\napproximation in the smallest space, but does that really matter? If we\nneeded to keep many instances / very long history, maybe it's matter.\n\nBut we need one instance, and we seem to agree it's enough to have a\ncouple days of history at most. And even the super wasteful struct I\ndescribed above would only need ~8kB for that.\n\nI suggest we do the simplest and most obvious algorithm possible, at\nleast for now. Focusing on this part seems like a distraction from the\nfreezing thing you actually want to do.\n\n>> Isn't this a sign this does not quite fit into pgstats? Even if this\n>> happens to deal with unsafe restarts, replica promotions and so on, what\n>> if the user just does pg_stat_reset? That already causes trouble because\n>> we simply forget deleted/updated/inserted tuples. If we also forget data\n>> used for freezing heuristics, that does not seem great ...\n>>\n>> Wouldn't it be better to write this into WAL as part of a checkpoint (or\n>> something like that?), and make bgwriter to not only add LSN/timestamp\n>> into the stream, but also write it into WAL. It's already waking up, on\n>> idle systems ~32B written to WAL does not matter, and on busy system\n>> it's just noise.\n> \n> I was imagining adding a new type of WAL record that contains just the\n> LSN and time and writing it out in bgwriter. Is that not what you are\n> thinking?\n> \n\nNow sure, I was thinking we would do two things:\n\n1) bgwriter writes the (LSN,timestamp) into WAL, and also updates the\n in-memory struct\n\n2) during checkpoint we flush the in-memory struct to disk, so that we\n have it after restart / crash\n\nI haven't thought about this very much, but I think this would address\nboth the crash/recovery/restart on the primary, and on replicas.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Fri, 9 Aug 2024 15:09:13 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On Thu, Aug 8, 2024 at 9:02 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <[email protected]> wrote:\n> >\n> > Maybe it'd be good to approach this from the opposite direction, say\n> > what \"accuracy guarantees\" we want to provide, and then design the\n> > structure / algorithm to ensure that. Otherwise we may end up with an\n> > infinite discussion about algorithms with unclear idea which one is the\n> > best choice.\n> >\n> > And I'm sure \"users\" of the LSN/Timestamp mapping may get confused about\n> > what to expect, without reasonably clear guarantees.\n> >\n> > For example, it seems to me a \"good\" accuracy guarantee would be:\n> >\n> > Given a LSN, the age of the returned timestamp is less than 10% off\n> > the actual timestamp. The timestamp precision is in seconds.\n> >\n> > This means that if LSN was written 100 seconds ago, it would be OK to\n> > get an answer in the 90-110 seconds range. For LSN from 1h ago, the\n> > acceptable range would be 3600s +/- 360s. And so on. The 10% is just\n> > arbitrary, maybe it should be lower - doesn't matter much.\n>\n> I changed this patch a bit to only provide ranges with an upper and\n> lower bound from the SQL callable functions. While the size of the\n> range provided could be part of our \"accuracy guarantee\", I'm not sure\n> if we have to provide that.\n\nOkay, so as I think about evaluating a few new algorithms, I realize\nthat we do need some sort of criteria. I started listing out what I\nfeel is \"reasonable\" accuracy and plotting it to see if the\nrelationship is linear/exponential/etc. I think it would help to get\ninput on what would be \"reasonable\" accuracy.\n\nI thought that the following might be acceptable:\nThe first column is how old the value I am looking for actually is,\nthe second column is how off I am willing to have the algorithm tell\nme it is (+/-):\n\n1 second, 1 minute\n1 minute, 10 minute\n1 hour, 1 hour\n1 day, 6 hours\n1 week, 12 hours\n1 month, 1 day\n6 months, 1 week\n\nColumn 1 over column 2 produces a line like in the attached pic. I'd\nbe interested in others' opinions of error tolerance.\n\n- Melanie",
"msg_date": "Fri, 9 Aug 2024 09:09:27 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On Fri, Aug 9, 2024 at 9:09 AM Tomas Vondra <[email protected]> wrote:\n>\n>\n>\n> On 8/9/24 03:02, Melanie Plageman wrote:\n> > On Thu, Aug 8, 2024 at 2:34 PM Tomas Vondra <[email protected]> wrote:\n> >> each seconds. And we want to allow merging stuff nicely. The smallest\n> >> merges we could do is 1s -> 2s -> 4s -> 8s -> ... but let's say we do\n> >> 1s -> 10s -> 100s -> 1000s instead.\n> >>\n> >> So we start with 100x one-second buckets\n> >>\n> >> [A_0, A_1, ..., A_99] -> 100 x 1s buckets\n> >> [B_0, B_1, ..., B_99] -> 100 x 10s buckets\n> >> [C_0, C_1, ..., C_99] -> 100 x 100s buckets\n> >> [D_0, D_1, ..., D_99] -> 100 x 1000s buckets\n> >>\n> >> We start by adding data into A_k buckets. After filling all 100 of them,\n> >> we grab the oldest 10 buckets, and combine/move them into B_k. And so\n> >> on, until B is gets full too. Then we grab the 10 oldest B_k entries,\n> >> and move them into C. and so on. For D the oldest entries would get\n> >> discarded, or we could add another layer with each bucket representing\n> >> 10k seconds.\n> >\n> > I originally had an algorithm that stored old values somewhat like\n> > this (each element stored 2x logical members of the preceding\n> > element). When I was testing algorithms, I abandoned this method\n> > because it was less accurate than the method which calculates the\n> > interpolation error \"area\". But, this would be expected -- it would be\n> > less accurate for older values.\n> >\n> > I'm currently considering an algorithm that uses a combination of the\n> > interpolation error and the age of the point. I'm thinking of adding\n> > to or dividing the error of each point by \"now - that point's time (or\n> > lsn)\". This would lead me to be more likely to drop points that are\n> > older.\n> >\n> > This is a bit different than \"combining\" buckets, but it seems like it\n> > might allow us to drop unneeded recent points when they are very\n> > regular.\n> >\n>\n> TBH I'm a bit lost in how the various patch versions merge the data.\n> Maybe there is a perfect algorithm, keeping a perfectly accurate\n> approximation in the smallest space, but does that really matter? If we\n> needed to keep many instances / very long history, maybe it's matter.\n>\n> But we need one instance, and we seem to agree it's enough to have a\n> couple days of history at most. And even the super wasteful struct I\n> described above would only need ~8kB for that.\n>\n> I suggest we do the simplest and most obvious algorithm possible, at\n> least for now. Focusing on this part seems like a distraction from the\n> freezing thing you actually want to do.\n\nThe simplest thing to do would be to pick an arbitrary point in the\npast (say one week) and then throw out all the points (except the very\noldest to avoid extrapolation) from before that cliff. I would like to\nspend time on getting a new version of the freezing patch on the list,\nbut I think Robert had strong feelings about having a complete design\nfirst. I'll switch focus to that for a bit so that perhaps you all can\nsee how I am using the time -> LSN conversion and that could inform\nthe design of the data structure.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 9 Aug 2024 09:15:01 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On Fri, Aug 9, 2024 at 9:15 AM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Fri, Aug 9, 2024 at 9:09 AM Tomas Vondra <[email protected]> wrote:\n> >\n> > I suggest we do the simplest and most obvious algorithm possible, at\n> > least for now. Focusing on this part seems like a distraction from the\n> > freezing thing you actually want to do.\n>\n> The simplest thing to do would be to pick an arbitrary point in the\n> past (say one week) and then throw out all the points (except the very\n> oldest to avoid extrapolation) from before that cliff. I would like to\n> spend time on getting a new version of the freezing patch on the list,\n> but I think Robert had strong feelings about having a complete design\n> first. I'll switch focus to that for a bit so that perhaps you all can\n> see how I am using the time -> LSN conversion and that could inform\n> the design of the data structure.\n\nI realize this thought didn't make much sense since it is a fixed size\ndata structure. We would have to use some other algorithm to get rid\nof data if there are still too many points from within the last week.\n\nIn the adaptive freezing code, I use the time stream to answer a yes\nor no question. I translate a time in the past (now -\ntarget_freeze_duration) to an LSN so that I can determine if a page\nthat is being modified for the first time after having been frozen has\nbeen modified sooner than target_freeze_duration (a GUC value). If it\nis, that page was unfrozen too soon. So, my use case is to produce a\nyes or no answer. It doesn't matter very much how accurate I am if I\nam wrong. I count the page as having been unfrozen too soon or I\ndon't. So, it seems I care about the accuracy of data from now until\nnow - target_freeze_duration + margin of error a lot and data before\nthat not at all. While it is true that if I'm wrong about a page that\nwas older but near the cutoff, that might be better than being wrong\nabout a very recent page, it is still wrong.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 9 Aug 2024 11:48:28 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On 8/9/24 17:48, Melanie Plageman wrote:\n> On Fri, Aug 9, 2024 at 9:15 AM Melanie Plageman\n> <[email protected]> wrote:\n>>\n>> On Fri, Aug 9, 2024 at 9:09 AM Tomas Vondra <[email protected]> wrote:\n>>>\n>>> I suggest we do the simplest and most obvious algorithm possible, at\n>>> least for now. Focusing on this part seems like a distraction from the\n>>> freezing thing you actually want to do.\n>>\n>> The simplest thing to do would be to pick an arbitrary point in the\n>> past (say one week) and then throw out all the points (except the very\n>> oldest to avoid extrapolation) from before that cliff. I would like to\n>> spend time on getting a new version of the freezing patch on the list,\n>> but I think Robert had strong feelings about having a complete design\n>> first. I'll switch focus to that for a bit so that perhaps you all can\n>> see how I am using the time -> LSN conversion and that could inform\n>> the design of the data structure.\n> \n> I realize this thought didn't make much sense since it is a fixed size\n> data structure. We would have to use some other algorithm to get rid\n> of data if there are still too many points from within the last week.\n> \n\nNot sure I understand. Why would the fixed size of the struct mean we\ncan't discard too old data?\n\nI'd imagine we simply reclaim some of the slots and mark them as unused,\n\"move\" the data to make space for recent data, or something like that.\nOr just use something like a cyclic buffer, that wraps around and\noverwrites oldest data.\n\n> In the adaptive freezing code, I use the time stream to answer a yes\n> or no question. I translate a time in the past (now -\n> target_freeze_duration) to an LSN so that I can determine if a page\n> that is being modified for the first time after having been frozen has\n> been modified sooner than target_freeze_duration (a GUC value). If it\n> is, that page was unfrozen too soon. So, my use case is to produce a\n> yes or no answer. It doesn't matter very much how accurate I am if I\n> am wrong. I count the page as having been unfrozen too soon or I\n> don't. So, it seems I care about the accuracy of data from now until\n> now - target_freeze_duration + margin of error a lot and data before\n> that not at all. While it is true that if I'm wrong about a page that\n> was older but near the cutoff, that might be better than being wrong\n> about a very recent page, it is still wrong.\n> \n\nYeah. But isn't that a bit backwards? The decision can be wrong because\nthe estimate was too off, or maybe it was spot on and we still made a\nwrong decision. That's what happens with heuristics.\n\nI think a natural expectation is that the quality of the answers\ncorrelates with the accuracy of the data / estimates. With accurate\nresults (say we keep a perfect history, with no loss of precision for\nolder data) we should be doing the right decision most of the time. If\nnot, it's a lost cause, IMHO. And with lower accuracy it'd get worse,\notherwise why would we need the detailed data.\n\nBut now that I think about it, I'm not entirely sure I understand what\npoint are you making :-(\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Fri, 9 Aug 2024 19:03:02 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On 8/9/24 15:09, Melanie Plageman wrote:\n>\n> ...\n> \n> Okay, so as I think about evaluating a few new algorithms, I realize\n> that we do need some sort of criteria. I started listing out what I\n> feel is \"reasonable\" accuracy and plotting it to see if the\n> relationship is linear/exponential/etc. I think it would help to get\n> input on what would be \"reasonable\" accuracy.\n> \n> I thought that the following might be acceptable:\n> The first column is how old the value I am looking for actually is,\n> the second column is how off I am willing to have the algorithm tell\n> me it is (+/-):\n> \n> 1 second, 1 minute\n> 1 minute, 10 minute\n> 1 hour, 1 hour\n> 1 day, 6 hours\n> 1 week, 12 hours\n> 1 month, 1 day\n> 6 months, 1 week\n> \n\nI think the question is whether we want to make this useful for other\nplaces and/or people, or if it's fine to tailor this specifically for\nthe freezing patch.\n\nIf the latter (specific to the freezing patch), I don't see why would it\nmatter what we think - either it works for the patch, or not.\n\nBut if we want to make it more widely useful, I find it a bit strange\nthe relative accuracy *increases* for older data. I mean, we start with\nrelative error 6000% (60s/1s) and then we get to relative error ~4%\n(1w/24w). Isn't that a bit against the earlier discussion on needing\nbetter accuracy for recent data? Sure, the absolute accuracy is still\nbetter (1m <<< 1w). And if this is good enough for the freezing ...\n\n\n> Column 1 over column 2 produces a line like in the attached pic. I'd\n> be interested in others' opinions of error tolerance.\n> \n> - Melanie\n\nI don't understand what the axes on the chart are :-( Does \"A over B\"\nmean A is x-axis or y-axis?\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Fri, 9 Aug 2024 19:24:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On Fri, Aug 9, 2024 at 1:03 PM Tomas Vondra <[email protected]> wrote:\n>\n> On 8/9/24 17:48, Melanie Plageman wrote:\n> > On Fri, Aug 9, 2024 at 9:15 AM Melanie Plageman\n> > <[email protected]> wrote:\n> >>\n> >> On Fri, Aug 9, 2024 at 9:09 AM Tomas Vondra <[email protected]> wrote:\n> >>>\n> >>> I suggest we do the simplest and most obvious algorithm possible, at\n> >>> least for now. Focusing on this part seems like a distraction from the\n> >>> freezing thing you actually want to do.\n> >>\n> >> The simplest thing to do would be to pick an arbitrary point in the\n> >> past (say one week) and then throw out all the points (except the very\n> >> oldest to avoid extrapolation) from before that cliff. I would like to\n> >> spend time on getting a new version of the freezing patch on the list,\n> >> but I think Robert had strong feelings about having a complete design\n> >> first. I'll switch focus to that for a bit so that perhaps you all can\n> >> see how I am using the time -> LSN conversion and that could inform\n> >> the design of the data structure.\n> >\n> > I realize this thought didn't make much sense since it is a fixed size\n> > data structure. We would have to use some other algorithm to get rid\n> > of data if there are still too many points from within the last week.\n> >\n>\n> Not sure I understand. Why would the fixed size of the struct mean we\n> can't discard too old data?\n\nOh, we can discard old data. I was just saying that all of the data\nmight be newer than the cutoff, in which case we can't only discard\nold data if we want to make room for new data.\n\n> > In the adaptive freezing code, I use the time stream to answer a yes\n> > or no question. I translate a time in the past (now -\n> > target_freeze_duration) to an LSN so that I can determine if a page\n> > that is being modified for the first time after having been frozen has\n> > been modified sooner than target_freeze_duration (a GUC value). If it\n> > is, that page was unfrozen too soon. So, my use case is to produce a\n> > yes or no answer. It doesn't matter very much how accurate I am if I\n> > am wrong. I count the page as having been unfrozen too soon or I\n> > don't. So, it seems I care about the accuracy of data from now until\n> > now - target_freeze_duration + margin of error a lot and data before\n> > that not at all. While it is true that if I'm wrong about a page that\n> > was older but near the cutoff, that might be better than being wrong\n> > about a very recent page, it is still wrong.\n> >\n>\n> Yeah. But isn't that a bit backwards? The decision can be wrong because\n> the estimate was too off, or maybe it was spot on and we still made a\n> wrong decision. That's what happens with heuristics.\n>\n> I think a natural expectation is that the quality of the answers\n> correlates with the accuracy of the data / estimates. With accurate\n> results (say we keep a perfect history, with no loss of precision for\n> older data) we should be doing the right decision most of the time. If\n> not, it's a lost cause, IMHO. And with lower accuracy it'd get worse,\n> otherwise why would we need the detailed data.\n>\n> But now that I think about it, I'm not entirely sure I understand what\n> point are you making :-(\n\nMy only point was that we really don't need to produce *any* estimate\nfor a value from before the cutoff. We just need to estimate if it is\nbefore or after. So, while we need to keep enough data to get that\nanswer right, we don't need very old data at all. 
Which is different\nfrom how I was thinking about the LSNTimeStream feature before.\n\nOn Fri, Aug 9, 2024 at 1:24 PM Tomas Vondra <[email protected]> wrote:\n>\n> On 8/9/24 15:09, Melanie Plageman wrote:\n> >\n> > Okay, so as I think about evaluating a few new algorithms, I realize\n> > that we do need some sort of criteria. I started listing out what I\n> > feel is \"reasonable\" accuracy and plotting it to see if the\n> > relationship is linear/exponential/etc. I think it would help to get\n> > input on what would be \"reasonable\" accuracy.\n> >\n> > I thought that the following might be acceptable:\n> > The first column is how old the value I am looking for actually is,\n> > the second column is how off I am willing to have the algorithm tell\n> > me it is (+/-):\n> >\n> > 1 second, 1 minute\n> > 1 minute, 10 minute\n> > 1 hour, 1 hour\n> > 1 day, 6 hours\n> > 1 week, 12 hours\n> > 1 month, 1 day\n> > 6 months, 1 week\n> >\n>\n> I think the question is whether we want to make this useful for other\n> places and/or people, or if it's fine to tailor this specifically for\n> the freezing patch.\n>\n> If the latter (specific to the freezing patch), I don't see why would it\n> matter what we think - either it works for the patch, or not.\n\nI think the best way forward is to make it useful for the freezing\npatch and then, if it seems like exposing it makes sense, we can do\nthat and properly document what to expect.\n\n> But if we want to make it more widely useful, I find it a bit strange\n> the relative accuracy *increases* for older data. I mean, we start with\n> relative error 6000% (60s/1s) and then we get to relative error ~4%\n> (1w/24w). Isn't that a bit against the earlier discussion on needing\n> better accuracy for recent data? Sure, the absolute accuracy is still\n> better (1m <<< 1w). And if this is good enough for the freezing ...\n\nI was just writing out what seemed intuitively like something I would\nbe willing to tolerate as a user. But you are right that doesn't make\nsense -- for the accuracy to go up for older data. I just think being\nmonths off for any estimate seems bad no matter how old the data is --\nwhich is probably why I felt like 1 week of accuracy for data 6 months\nold seemed like a reasonable tolerance. But, perhaps that isn't\nuseful.\n\nAlso, I realized that my \"calculate the error area\" method strongly\nfavors keeping older data. Once you drop a point, the area between the\ntwo remaining points (on either side of it) will be larger because the\ndistance between them is greater with the dropped point. So there is a\nstrong bias toward dropping newer data. That seems bad.\n\n> > Column 1 over column 2 produces a line like in the attached pic. I'd\n> > be interested in others' opinions of error tolerance.\n>\n> I don't understand what the axes on the chart are :-( Does \"A over B\"\n> mean A is x-axis or y-axis?\n\nYea, that was confusing. A over B is the y axis so you could see the\nratios and x is just their position in the array (so meaningless).\nAttached is A on the x axis and B on the y axis.\n\n- Melanie",
"msg_date": "Fri, 9 Aug 2024 14:13:22 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
},
{
"msg_contents": "On Fri, Aug 9, 2024 at 11:48 AM Melanie Plageman\n<[email protected]> wrote:\n> In the adaptive freezing code, I use the time stream to answer a yes\n> or no question. I translate a time in the past (now -\n> target_freeze_duration) to an LSN so that I can determine if a page\n> that is being modified for the first time after having been frozen has\n> been modified sooner than target_freeze_duration (a GUC value). If it\n> is, that page was unfrozen too soon. So, my use case is to produce a\n> yes or no answer. It doesn't matter very much how accurate I am if I\n> am wrong. I count the page as having been unfrozen too soon or I\n> don't. So, it seems I care about the accuracy of data from now until\n> now - target_freeze_duration + margin of error a lot and data before\n> that not at all. While it is true that if I'm wrong about a page that\n> was older but near the cutoff, that might be better than being wrong\n> about a very recent page, it is still wrong.\n\nI don't really think this is the right way to think about it.\n\nFirst, you'd really like target_freeze_duration to be something that\ncan be changed at runtime, but the data structure that you use for the\nLSN-time mapping has to be sized at startup time and can't change\nthereafter. So I think you should try to design the LSN-time mapping\nstructure so that it is fixed size -- i.e. independent of the value of\ntarget_freeze_duration -- but capable of producing sufficiently\ncorrect answers for all reasonable values of target_freeze_duration.\nThen the user can change the value to whatever they like without a\nrestart, and still get reasonable behavior. Meanwhile, you don't have\nto deal with a variable-size data structure. Woohoo!\n\nSecond, I guess I'm a bit confused about the statement that \"It\ndoesn't matter very much how accurate I am if I am wrong.\" What does\nthat really mean? We're going to look at the LSN of a page that we're\nthinking about freezing and use that to estimate the time since the\npage was last modified and use that to guess whether the page is\nlikely to be modified again soon and then use that to decide whether\nto freeze. Even if we always estimated the time since last\nmodification perfectly, we could still be wrong about what that means\nfor the future. And we won't estimate the last modification time\nperfectly in all cases, because even if we make perfect decisions\nabout which data points to throw away, we're still going to use linear\ninterpolation in between those points, and that can be wrong. And I\nthink it's pretty much impossible to make flawless decisions about\nwhich points to throw away, too.\n\nBut the point is that we just need to be close enough. If\ntarget_freeze_duration=10m and our page age estimates are off by an\naverage of 10s, we will still make the correct decision about whether\nto freeze most of the time, but if they are off by an average of 1m,\nwe'll be wrong more often, and if they're off by an average of 10m,\nwe'll be wrong way more often. When target_freeze_duration=2h, it's\nnot nearly so bad to be off by 10m. The probability that a page will\nbe modified again soon when it hasn't been modified in the last 1h54m\nis probably not that different from the probability when it hasn't\nbeen modified in 2h4m, but the probability of a page being modified\nagain soon when it hasn't been modified in the last 4m could well be\nquite different from when it hasn't been modified in the last 14m. 
So\nit's completely reasonable, IMHO, to set things up so that you have\nhigher accuracy for newer LSNs.\n\nI feel like you're making this a lot harder than it needs to be.\nActually, I think this is a hard problem in terms of where to actually\nstore the data -- as Tomas said, pgstat doesn't seem quite right, and\nit's not clear to me what is actually right. But in terms of actually\nwhat to do with the data structure, some kind of exponential thinning\nof the data seems like the obvious thing to do. Tomas suggested a\nversion of that and I suggested a version of that and you could pick\neither one or do something of your own, but I don't really feel like\nwe need or want an original algorithm here. It seems better to just do\nstuff we know works, and whose characteristics we can easily predict.\nThe only area where I feel like we might want some algorithmic\ninnovation is in terms of eliding redundant measurements when things\naren't really changing.\n\nBut even that seems pretty optional. If you don't do that, and the\nsystem sits there idle for a long time, you will have a needlessly\ninaccurate idea of how old the pages are compared to what you could\nhave had. But also, they will all still look old so you'll still\nfreeze them so you win. The end.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Aug 2024 14:29:52 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add LSN <-> time conversion functionality"
}
] |
[
{
"msg_contents": "Dear Sir/Madam, \n \n \n \n \n \n\n I have used Pdadmin on my Macbook for a few days, then suddenly it can not be opened anymore(after i tried to force to exit it when i tried to turn off the laptop), I have tried many solution to try to fix this issue but none of them worked. Therefore, can you please help me solve it? \n \n \n \n \n \n\n thanks, \n \n\n Anita \n \n \n \n \n \n \n \n \n \n \n-- \n\n \n \n \n \n \nGirl can run the world \n \n\n\n Dear Sir/Madam, \n\n\n\n\n\n I have used Pdadmin on my Macbook for a few days, then suddenly it can not be opened anymore(after i tried to force to exit it when i tried to turn off the laptop), I have tried many solution to try to fix this issue but none of them worked. Therefore, can you please help me solve it? \n\n\n\n\n\n thanks, \n\n\n Anita \n\n\n\n\n\n\n-- \n\n\n\n \n Girl can run the world",
"msg_date": "Thu, 28 Dec 2023 16:12:06 +0800",
"msg_from": "\"Anita\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pdadmin open on Macbook issue"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 04:12:06PM +0800, Anita wrote:\n> Dear Sir/Madam,\n> \n> I have used Pdadmin on my Macbook for a few days, then suddenly it can not be\n> opened anymore(after i tried to force to exit it when i tried to turn off the\n> laptop), I have tried many solution to try to fix this issue but none of them\n> worked. Therefore, can you please help me solve it?\n\nFor pgAdmin support, see:\n\n\thttps://www.pgadmin.org/support/list/\n\thttps://github.com/pgadmin-org/pgadmin4/issues\n\nor email\n\n\[email protected]\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 28 Dec 2023 17:07:40 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pdadmin open on Macbook issue"
}
] |
[
{
"msg_contents": "Hi,\n\nThe function heap_page_prune (src/backend/access/heap/pruneheap.c)\nHas a comment:\n\n\"/*\n * presult->htsv is not initialized here because all ntuple spots in the\n * array will be set either to a valid HTSV_Result value or -1.\n */\n\nIMO, this is a little bogus and does not make sense.\n\nI think it would be more productive to initialize the entire array with -1,\nand avoid flagging, one by one, the items that should be -1.\n\nPatch attached.\n\nbest regards,\nRanier Vilela",
"msg_date": "Thu, 28 Dec 2023 09:57:54 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tidy fill hstv array (src/backend/access/heap/pruneheap.c)"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 7:58 PM Ranier Vilela <[email protected]> wrote:\n> I think it would be more productive to initialize the entire array with -1, and avoid flagging, one by one, the items that should be -1.\n\nThis just moves an operation from one place to the other, while\nobliterating the explanatory comment, so I don't see an advantage.\n\n\n",
"msg_date": "Tue, 9 Jan 2024 16:31:20 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tidy fill hstv array (src/backend/access/heap/pruneheap.c)"
},
{
"msg_contents": "Em ter., 9 de jan. de 2024 às 06:31, John Naylor <[email protected]>\nescreveu:\n\n> On Thu, Dec 28, 2023 at 7:58 PM Ranier Vilela <[email protected]> wrote:\n> > I think it would be more productive to initialize the entire array with\n> -1, and avoid flagging, one by one, the items that should be -1.\n>\n> This just moves an operation from one place to the other, while\n> obliterating the explanatory comment, so I don't see an advantage.\n>\nWell, I think that is precisely the case for using memset.\nThe way initialization is currently done is much slower and harmful to the\nbranch.\nOf course, the gain should be small, but it is fully justified for\nswitching to memset.\nRegarding the comment, once initialization is done via memset, such as\nprstate.marked, it becomes irrelevant and unnecessary.\n\nBest regards,\nRanier Vilela\n\nEm ter., 9 de jan. de 2024 às 06:31, John Naylor <[email protected]> escreveu:On Thu, Dec 28, 2023 at 7:58 PM Ranier Vilela <[email protected]> wrote:\n> I think it would be more productive to initialize the entire array with -1, and avoid flagging, one by one, the items that should be -1.\n\nThis just moves an operation from one place to the other, while\nobliterating the explanatory comment, so I don't see an advantage.Well, I think that is precisely the case for using memset.The way initialization is currently done is much slower and harmful to the branch.Of course, the gain should be small, but it is fully justified for switching to memset.Regarding the comment, once initialization is done via memset, such as prstate.marked, it becomes irrelevant and unnecessary.Best regards,Ranier Vilela",
"msg_date": "Sat, 13 Jan 2024 11:36:11 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tidy fill hstv array (src/backend/access/heap/pruneheap.c)"
},
{
"msg_contents": "On Sat, Jan 13, 2024 at 9:36 PM Ranier Vilela <[email protected]> wrote:\n>\n> Em ter., 9 de jan. de 2024 às 06:31, John Naylor <[email protected]> escreveu:\n\n>> This just moves an operation from one place to the other, while\n>> obliterating the explanatory comment, so I don't see an advantage.\n>\n> Well, I think that is precisely the case for using memset.\n> The way initialization is currently done is much slower and harmful to the branch.\n> Of course, the gain should be small, but it is fully justified for switching to memset.\n\nWe haven't seen any evidence or reasoning for that. Simple\nrules-of-thumb are not enough.\n\n\n",
"msg_date": "Sun, 14 Jan 2024 20:55:28 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tidy fill hstv array (src/backend/access/heap/pruneheap.c)"
},
{
"msg_contents": "\n\n> On 14 Jan 2024, at 18:55, John Naylor <[email protected]> wrote:\n> \n> On Sat, Jan 13, 2024 at 9:36 PM Ranier Vilela <[email protected]> wrote:\n>> \n>> Em ter., 9 de jan. de 2024 às 06:31, John Naylor <[email protected]> escreveu:\n> \n>>> This just moves an operation from one place to the other, while\n>>> obliterating the explanatory comment, so I don't see an advantage.\n>> \n>> Well, I think that is precisely the case for using memset.\n>> The way initialization is currently done is much slower and harmful to the branch.\n>> Of course, the gain should be small, but it is fully justified for switching to memset.\n> \n> We haven't seen any evidence or reasoning for that. Simple\n> rules-of-thumb are not enough.\n> \n\nHi Ranier,\n\nI’ll mark CF entry [0] as “Returned with Feedback”. Feel free to reopen item in this CF or submit to the next, if you want to continue working on this.\n\nI took a glance into the patch, and I would agree that setting field nonzero values with memset() is somewhat unusual. Please provide stronger evidence to do so.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/47/4734/\n\n",
"msg_date": "Mon, 4 Mar 2024 11:18:43 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tidy fill hstv array (src/backend/access/heap/pruneheap.c)"
},
{
"msg_contents": "Em seg., 4 de mar. de 2024 às 03:18, Andrey M. Borodin <[email protected]>\nescreveu:\n\n>\n>\n> > On 14 Jan 2024, at 18:55, John Naylor <[email protected]> wrote:\n> >\n> > On Sat, Jan 13, 2024 at 9:36 PM Ranier Vilela <[email protected]>\n> wrote:\n> >>\n> >> Em ter., 9 de jan. de 2024 às 06:31, John Naylor <\n> [email protected]> escreveu:\n> >\n> >>> This just moves an operation from one place to the other, while\n> >>> obliterating the explanatory comment, so I don't see an advantage.\n> >>\n> >> Well, I think that is precisely the case for using memset.\n> >> The way initialization is currently done is much slower and harmful to\n> the branch.\n> >> Of course, the gain should be small, but it is fully justified for\n> switching to memset.\n> >\n> > We haven't seen any evidence or reasoning for that. Simple\n> > rules-of-thumb are not enough.\n> >\n>\n> Hi Ranier,\n>\n> I’ll mark CF entry [0] as “Returned with Feedback”. Feel free to reopen\n> item in this CF or submit to the next, if you want to continue working on\n> this.\n>\n> I took a glance into the patch, and I would agree that setting field\n> nonzero values with memset() is somewhat unusual. Please provide stronger\n> evidence to do so.\n>\n I counted the calls with non-zero memset in the entire postgres code and\nthey are about 183 calls.\n\nI counted the calls with non-zero memset in the entire postgres code and\nthey are about 183 calls.\n\nAt least 4 calls with -1\n\nFile contrib\\pg_trgm\\trgm_op.c:\nmemset(lastpos, -1, sizeof(int) * len);\n\nFile src\\backend\\executor\\execPartition.c:\nmemset(pd->indexes, -1, sizeof(int) * partdesc->nparts);\n\nFile src\\backend\\partitioning\\partprune.c:\nmemset(subplan_map, -1, nparts * sizeof(int));\nmemset(subpart_map, -1, nparts * sizeof(int));\n\nDoes filling a memory area, one by one, with branches, need strong evidence\nto prove that it is better than filling a memory area, all at once, without\nbranches?\n\nbest regards,\nRanier Vilela\n\nEm seg., 4 de mar. de 2024 às 03:18, Andrey M. Borodin <[email protected]> escreveu:\n\n> On 14 Jan 2024, at 18:55, John Naylor <[email protected]> wrote:\n> \n> On Sat, Jan 13, 2024 at 9:36 PM Ranier Vilela <[email protected]> wrote:\n>> \n>> Em ter., 9 de jan. de 2024 às 06:31, John Naylor <[email protected]> escreveu:\n> \n>>> This just moves an operation from one place to the other, while\n>>> obliterating the explanatory comment, so I don't see an advantage.\n>> \n>> Well, I think that is precisely the case for using memset.\n>> The way initialization is currently done is much slower and harmful to the branch.\n>> Of course, the gain should be small, but it is fully justified for switching to memset.\n> \n> We haven't seen any evidence or reasoning for that. Simple\n> rules-of-thumb are not enough.\n> \n\nHi Ranier,\n\nI’ll mark CF entry [0] as “Returned with Feedback”. Feel free to reopen item in this CF or submit to the next, if you want to continue working on this.\n\nI took a glance into the patch, and I would agree that setting field nonzero values with memset() is somewhat unusual. Please provide stronger evidence to do so. 
I counted the calls with non-zero memset in the entire postgres code and they are about 183 calls.I counted the calls with non-zero memset in the entire postgres code and they are about 183 calls.At least 4 calls with -1File contrib\\pg_trgm\\trgm_op.c:memset(lastpos, -1, sizeof(int) * len);File src\\backend\\executor\\execPartition.c:memset(pd->indexes, -1, sizeof(int) * partdesc->nparts);File src\\backend\\partitioning\\partprune.c:memset(subplan_map, -1, nparts * sizeof(int));memset(subpart_map, -1, nparts * sizeof(int));\nDoes filling a memory area, one by one, with branches, need strong evidence to prove that it is better than filling a memory area, all at once, without branches? best regards,Ranier Vilela",
"msg_date": "Mon, 4 Mar 2024 08:47:11 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tidy fill hstv array (src/backend/access/heap/pruneheap.c)"
},
{
"msg_contents": "\n\n> On 4 Mar 2024, at 16:47, Ranier Vilela <[email protected]> wrote:\n> \n> Does filling a memory area, one by one, with branches, need strong evidence to prove that it is better than filling a memory area, all at once, without branches? \n\nI apologise for being too quick to decide to mark the patch RwF. Wold you mind if restore patch as \"Needs review\" so we can have more feedback?\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 4 Mar 2024 22:19:57 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tidy fill hstv array (src/backend/access/heap/pruneheap.c)"
},
{
"msg_contents": "Hi,\n\nOn 2024-03-04 08:47:11 -0300, Ranier Vilela wrote:\n> Does filling a memory area, one by one, with branches, need strong evidence\n> to prove that it is better than filling a memory area, all at once, without\n> branches?\n\nThat's a bogus comparison:\n\na) the memset version modifies the whole array, rather than just elements that\n correspond to an item - often there will be far fewer items than the\n maximally possible number\n\nb) the memset version initializes array elements that will be set to an actual\n value\n\nc) switching to memset does not elide any branches, as the branch is still\n needed\n\nAnd even without those, it'd still not be obviously better to use an\nahead-of-time memset(), as storing lots of values at once is more likely to be\nbound by memory bandwidth than interleaving different work.\n\nAndres\n\n\n",
"msg_date": "Mon, 4 Mar 2024 10:33:55 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tidy fill hstv array (src/backend/access/heap/pruneheap.c)"
}
] |
[
{
"msg_contents": "Hi,\n\nThe type of field fd_count is defined in winsock.h:\ntypedef unsigned int u_int;\n\nSo in the struct fd_set,\nthe field fd_count is unsigned int.\n\nThe pgwin32_select function has loops that use *int* as indices.\n\nQuestion: in Windows, the socket select function isn't missing some events?\n\nIf not, It would be a good prevention practice, using the correct type,\nright?\n\nPatch attached.\n\nbest regards,\nRanier Vilela",
"msg_date": "Thu, 28 Dec 2023 14:45:40 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Windows sockets (select missing events?)"
},
{
"msg_contents": "Em qui., 28 de dez. de 2023 às 14:45, Ranier Vilela <[email protected]>\nescreveu:\n\n> Hi,\n>\n> The type of field fd_count is defined in winsock.h:\n> typedef unsigned int u_int;\n>\n> So in the struct fd_set,\n> the field fd_count is unsigned int.\n>\n> The pgwin32_select function has loops that use *int* as indices.\n>\n> Question: in Windows, the socket select function isn't missing some events?\n>\n> If not, It would be a good prevention practice, using the correct type,\n> right?\n>\n> Patch attached.\n>\nFix overflow with patch.\n\nbest regards,\nRanier Vilela",
"msg_date": "Sat, 30 Dec 2023 10:40:11 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Windows sockets (select missing events?)"
},
{
"msg_contents": "Hi Ranier,\n\nI doubt it really matters, unless perhaps it's upsetting some static\nanalysis tool? It's bounded by FD_SETSIZE, a small number.\n\nFWIW, I would like to delete pgwin32_select() in PG17. Before PG16\n(commit 7389aad6), the postmaster used it to receive incoming\nconnections, but that was replaced with latch.c code. I think the\nmain other place that uses select() in the backend is in the RADIUS\nauthentication code. That's arguably a bug, and should be replaced\nwith WaitEventSet too. I wrote a patch like that, but haven't got\nback to it yet:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGKxNoVjkMCksnj6z3BwiS3y2v6LN6z7_CisLK%2Brv%2B0V4g%40mail.gmail.com\n\nIf we could get that done, hopefully pgwin32_select() could go? Are\nthere any more callers hiding somewhere?\n\nI'd also like to remove various other socket API wrapper functions in\nthat file. They exist to support blocking mode (our Windows sockets\nare always in non-blocking mode at the OS level, but we simulate\nnon-blocking with our own user space wait loops in those wrappers),\nand we also have some kludges to support connectionless datagrams\n(formerly used for talking to the defunct stats backend, no longer\nneeded AFAICS). But when I started looking into all that, I realised\nthat I'd first need to work on removing a few more straggler places\nthat use blocking mode for short stretches in the backend. That is,\nmake it 100% true that sockets in the backend are always in\nnon-blocking mode. I think that seems like a good goal, because any\nplace still using blocking mode is a place where a backend might hang\nuninterruptibly (on any OS).\n\nA closely related problem is fixing the socket event-loss problems we\nhave on Windows https://commitfest.postgresql.org/46/3523/, which is\nmoving at a glacial pace because I am not a Windows guy. Eventually I\nwant to use long-lived WaitEventSet in lots of places for better\nefficiency on all systems, and that seems to require fixing that\nhistorical bug in our per-socket event tracking. I think the end\npoint of all that is: we'd lose the wrappers that deal with\nfake-blocking-mode and fake-signals, and lose the right to use\nblocking signals at all in the backend, but keep just a few short\nwin32 wrappers that help track events reliably.\n\n\n",
"msg_date": "Sun, 31 Dec 2023 10:53:49 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Windows sockets (select missing events?)"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nOur prod databases are still PG 9.6.24. We have one primary plus 3 stream\nreplications that are all working well for a long time. However, when I\npromoted one standby database to the primary role, we the the below error\nmessage from the PG log:\n=======================\n2023-12-01 06:57:35.541 UTC,,,1553,,6569738f.611,639,,2023-12-01 05:47:59\nUTC,,0,LOG,00000,\"server process (PID 31839) was terminated by signal 11:\nSegmentation fault\",\"Failed process was running: UPDATE xxxx SET\nemployee_id = (9489910) WHERE id = (1162120221)\",,,,,,,,\"\"\n\n\n\nHere is the message from dmesg:\n=======================\n[ 3676.406247] postgres[27789]: segfault at 0 ip 00005618bf79bfe4 sp\n00007ffcd9a75dc8 error 4 in postgres[5618bf3db000+3f7000]\n[ 3676.406265] Code: ff ff 48 83 c2 40 ff d0 e8 19 9c ff ff e8 44 0f c4 ff\n0f 1f 40 00 f3 0f 1e fa e9 27 be cc ff 0f 1f 80 00 00 00 00 f3 0f 1e fa\n<0f> b6 17 89 d1\n 83 e1 03 80 f9 02 74 0f 80 fa 01 74 0a 48 89 f8 c3\n[ 3715.937850] postgres[27928]: segfault at 0 ip 00005618bf79bfe4 sp\n00007ffcd9a75dc8 error 4 in postgres[5618bf3db000+3f7000]\n[ 3715.937858] Code: ff ff 48 83 c2 40 ff d0 e8 19 9c ff ff e8 44 0f c4 ff\n0f 1f 40 00 f3 0f 1e fa e9 27 be cc ff 0f 1f 80 00 00 00 00 f3 0f 1e fa\n<0f> b6 17 89 d1\n 83 e1 03 80 f9 02 74 0f 80 fa 01 74 0a 48 89 f8 c3\n[ 3732.278367] postgres[28212]: segfault at 0 ip 00005618bf79bfe4 sp\n00007ffcd9a75dc8 error 4 in postgres[5618bf3db000+3f7000]\n[ 3732.278384] Code: ff ff 48 83 c2 40 ff d0 e8 19 9c ff ff e8 44 0f c4 ff\n0f 1f 40 00 f3 0f 1e fa e9 27 be cc ff 0f 1f 80 00 00 00 00 f3 0f 1e fa\n<0f> b6 17 89 d1\n 83 e1 03 80 f9 02 74 0f 80 fa 01 74 0a 48 89 f8 c3\n\nError 4 is the error related to unmapping memory. But the database works\nwell for long time as the standby database. After it was promoted to the\nprimary role, no memory parameter change at all.\n\nCould you give us some hint where to fix this issue?\n\nRegards,\n\nKevin\n\nHello hackers,Our prod databases are still PG 9.6.24. We have one primary plus 3 stream replications that are all working well for a long time. 
However, when I promoted one standby database to the primary role, we the the below error message from the PG log:=======================2023-12-01 06:57:35.541 UTC,,,1553,,6569738f.611,639,,2023-12-01 05:47:59 UTC,,0,LOG,00000,\"server process (PID 31839) was terminated by signal 11: Segmentation fault\",\"Failed process was running: UPDATE xxxx SET employee_id = (9489910) WHERE id = (1162120221)\",,,,,,,,\"\"Here is the message from dmesg:=======================[ 3676.406247] postgres[27789]: segfault at 0 ip 00005618bf79bfe4 sp 00007ffcd9a75dc8 error 4 in postgres[5618bf3db000+3f7000][ 3676.406265] Code: ff ff 48 83 c2 40 ff d0 e8 19 9c ff ff e8 44 0f c4 ff 0f 1f 40 00 f3 0f 1e fa e9 27 be cc ff 0f 1f 80 00 00 00 00 f3 0f 1e fa <0f> b6 17 89 d1 83 e1 03 80 f9 02 74 0f 80 fa 01 74 0a 48 89 f8 c3[ 3715.937850] postgres[27928]: segfault at 0 ip 00005618bf79bfe4 sp 00007ffcd9a75dc8 error 4 in postgres[5618bf3db000+3f7000][ 3715.937858] Code: ff ff 48 83 c2 40 ff d0 e8 19 9c ff ff e8 44 0f c4 ff 0f 1f 40 00 f3 0f 1e fa e9 27 be cc ff 0f 1f 80 00 00 00 00 f3 0f 1e fa <0f> b6 17 89 d1 83 e1 03 80 f9 02 74 0f 80 fa 01 74 0a 48 89 f8 c3[ 3732.278367] postgres[28212]: segfault at 0 ip 00005618bf79bfe4 sp 00007ffcd9a75dc8 error 4 in postgres[5618bf3db000+3f7000][ 3732.278384] Code: ff ff 48 83 c2 40 ff d0 e8 19 9c ff ff e8 44 0f c4 ff 0f 1f 40 00 f3 0f 1e fa e9 27 be cc ff 0f 1f 80 00 00 00 00 f3 0f 1e fa <0f> b6 17 89 d1 83 e1 03 80 f9 02 74 0f 80 fa 01 74 0a 48 89 f8 c3Error 4 is the error related to unmapping memory. But the database works well for long time as the standby database. After it was promoted to the primary role, no memory parameter change at all.Could you give us some hint where to fix this issue?Regards,Kevin",
"msg_date": "Thu, 28 Dec 2023 15:09:47 -0500",
"msg_from": "Kevin Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "The segmentation fault of Postgresql 9.6.24"
},
{
"msg_contents": "\nOn 12/28/23 21:09, Kevin Wang wrote:\n> Hello hackers,\n> \n> Our prod databases are still PG 9.6.24. We have one primary plus 3\n> stream replications that are all working well for a long time. \n\nEverything is working well until the day it breaks ...\n\n> However, when I promoted one standby database to the primary role,\n> we the the below error message from the PG log:\n> =======================\n> 2023-12-01 06:57:35.541 UTC,,,1553,,6569738f.611,639,,2023-12-01\n> 05:47:59 UTC,,0,LOG,00000,\"server process (PID 31839) was terminated by\n> signal 11: Segmentation fault\",\"Failed process was running: UPDATE xxxx\n> SET employee_id = (9489910) WHERE id = (1162120221)\",,,,,,,,\"\"\n> \n> \n> \n> Here is the message from dmesg:\n> =======================\n> [ 3676.406247] postgres[27789]: segfault at 0 ip 00005618bf79bfe4 sp\n> 00007ffcd9a75dc8 error 4 in postgres[5618bf3db000+3f7000]\n> [ 3676.406265] Code: ff ff 48 83 c2 40 ff d0 e8 19 9c ff ff e8 44 0f c4\n> ff 0f 1f 40 00 f3 0f 1e fa e9 27 be cc ff 0f 1f 80 00 00 00 00 f3 0f 1e\n> fa <0f> b6 17 89 d1\n> 83 e1 03 80 f9 02 74 0f 80 fa 01 74 0a 48 89 f8 c3\n> [ 3715.937850] postgres[27928]: segfault at 0 ip 00005618bf79bfe4 sp\n> 00007ffcd9a75dc8 error 4 in postgres[5618bf3db000+3f7000]\n> [ 3715.937858] Code: ff ff 48 83 c2 40 ff d0 e8 19 9c ff ff e8 44 0f c4\n> ff 0f 1f 40 00 f3 0f 1e fa e9 27 be cc ff 0f 1f 80 00 00 00 00 f3 0f 1e\n> fa <0f> b6 17 89 d1\n> 83 e1 03 80 f9 02 74 0f 80 fa 01 74 0a 48 89 f8 c3\n> [ 3732.278367] postgres[28212]: segfault at 0 ip 00005618bf79bfe4 sp\n> 00007ffcd9a75dc8 error 4 in postgres[5618bf3db000+3f7000]\n> [ 3732.278384] Code: ff ff 48 83 c2 40 ff d0 e8 19 9c ff ff e8 44 0f c4\n> ff 0f 1f 40 00 f3 0f 1e fa e9 27 be cc ff 0f 1f 80 00 00 00 00 f3 0f 1e\n> fa <0f> b6 17 89 d1\n> 83 e1 03 80 f9 02 74 0f 80 fa 01 74 0a 48 89 f8 c3\n> \n> Error 4 is the error related to unmapping memory. But the database works\n> well for long time as the standby database. After it was promoted to the\n> primary role, no memory parameter change at all.\n> \n\nWhy do you think \"4\" means unmapping memory? 4 is error code for\n\"user-mode access\" (i.e. not invalid memory access from kernel).\n\n> Could you give us some hint where to fix this issue?\n> \n\nThis could be pretty much anything, and without seeing where exactly it\nfails it's impossible to say. I see you apparently hit the issue\nrepeatedly, and tall the information is *exactly* the same - addresses,\ncode, etc. Try decoding the addresses with addr2line, or even better get\na proper backtrace - either from a core file, or using gdb.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 28 Dec 2023 23:20:18 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The segmentation fault of Postgresql 9.6.24"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 11:20:18PM +0100, Tomas Vondra wrote:\n> This could be pretty much anything, and without seeing where exactly it\n> fails it's impossible to say. I see you apparently hit the issue\n> repeatedly, and tall the information is *exactly* the same - addresses,\n> code, etc. Try decoding the addresses with addr2line, or even better get\n> a proper backtrace - either from a core file, or using gdb.\n\nAlso, Postgres 9.6 went out of support on November 11, 2021:\n\n\thttps://www.postgresql.org/support/versioning/\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 28 Dec 2023 17:40:31 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The segmentation fault of Postgresql 9.6.24"
}
] |
[
{
"msg_contents": "This is not the right mailing list for your question. Try the\npgadmin-support [1] mailing list. You may also want to include more details\nin your question, because it's not really possible to tell what's going\nwrong from your description.\n\n[1]: https://www.postgresql.org/list/pgadmin-support/\n\nThis is not the right mailing list for your question. Try the pgadmin-support [1] mailing list. You may also want to include more details in your question, because it's not really possible to tell what's going wrong from your description.[1]: https://www.postgresql.org/list/pgadmin-support/",
"msg_date": "Thu, 28 Dec 2023 21:02:04 -0800",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pdadmin open on Macbook issue"
}
] |
[
{
"msg_contents": "Hi.\nhttps://coverage.postgresql.org/contrib/pg_prewarm/autoprewarm.c.gcov.html\nfunction autoprewarm_start_worker never gets tested, but\nautoprewarm_start_worker listed in our doc\n(https://www.postgresql.org/docs/16/pgprewarm.html)\nMaybe we should test it.\n\nalso this part in function autoprewarm_main not tested.\n 247 : /* Compute the next dump time. */\n 248 0 : next_dump_time =\n 249 0 :\nTimestampTzPlusMilliseconds(last_dump_time,\n 250 :\nautoprewarm_interval * 1000);\n 251 : delay_in_ms =\n 252 0 :\nTimestampDifferenceMilliseconds(GetCurrentTimestamp(),\n 253 :\nnext_dump_time);\n 254 :\n 255 : /* Perform a dump if it's time. */\n 256 0 : if (delay_in_ms <= 0)\n 257 : {\n 258 0 : last_dump_time = GetCurrentTimestamp();\n 259 0 : apw_dump_now(true, false);\n 260 0 : continue;\n 261 : }\n 262 :\n 263 : /* Sleep until the next dump time. */\n 264 0 : (void) WaitLatch(MyLatch,\n 265 : WL_LATCH_SET |\nWL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n 266 : delay_in_ms,\n 267 : WAIT_EVENT_EXTENSION);\n\nI don't fully understand this module, but I kind of get the meaning of\nthe worker_spi module.\n\ni didn't configure shared_preload_libraries.\nI changed from\nworker.bgw_flags = BGWORKER_SHMEM_ACCESS;\nto\nworker.bgw_flags = BGWORKER_SHMEM_ACCESS |\nBGWORKER_BACKEND_DATABASE_CONNECTION;\n\nbut still, the following query will not list the autoprewarm background worker.\nSELECT datname, pid, state, backend_type, wait_event_type, wait_event\nFROM pg_stat_activity;\n\nI thought pg_stat_activity will list all related pid for this cluster.\nbut it seems not in this case.\nSo here, is it a good idea to make the autoprewarm background worker\nlisted in pg_stat_activity?\n\n\n",
"msg_date": "Fri, 29 Dec 2023 15:21:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "autoprewarm main function not tested background worker not listed in\n pg_stat_activity"
}
] |
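A note on the pg_stat_activity question in the thread above: the simplest way to check what the server reports for non-client processes is to filter on backend_type. The query below is only an illustration added for this summary (it is not from the thread or from any patch); whether the autoprewarm worker shows up, and under which backend_type, depends on the server version and on the module actually being loaded via shared_preload_libraries.

-- Illustrative query: list every process pg_stat_activity knows about,
-- other than regular client backends.  A background worker is normally
-- reported with its bgw_type as backend_type.
SELECT pid, backend_type, datname, state, wait_event_type, wait_event
FROM pg_stat_activity
WHERE backend_type <> 'client backend'
ORDER BY backend_type;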
[
{
"msg_contents": "Hi,\n\nCommitfest 2024-01 is starting in 3 days!\nPlease register the patches which have not yet registered. Also if\nsomeone has some pending patch that is not yet submitted, please\nsubmit and register for 2024-01 Commitfest.\nI will be having a look at the commitfest entries, correcting the\nstatus if any of them need to be corrected and update the authors.\nKindly keep the patch updated so that the reviewers/committers can\nreview the patches and get it committed.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 29 Dec 2023 14:41:55 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest 2024-01 starting in 3 days!"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 02:41:55PM +0530, vignesh C wrote:\n> Commitfest 2024-01 is starting in 3 days!\n> Please register the patches which have not yet registered. Also if\n> someone has some pending patch that is not yet submitted, please\n> submit and register for 2024-01 Commitfest.\n> I will be having a look at the commitfest entries, correcting the\n> status if any of them need to be corrected and update the authors.\n> Kindly keep the patch updated so that the reviewers/committers can\n> review the patches and get it committed.\n\nPlease be careful that it should be possible to register patches until\nthe 31st of December 23:59 AoE, as of:\nhttps://www.timeanddate.com/time/zones/aoe\n\nThe status of the CF should be switched after this time, not before.\n--\nMichael",
"msg_date": "Sun, 31 Dec 2023 09:49:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2024-01 starting in 3 days!"
},
{
"msg_contents": "On Sun, 31 Dec 2023 at 06:19, Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Dec 29, 2023 at 02:41:55PM +0530, vignesh C wrote:\n> > Commitfest 2024-01 is starting in 3 days!\n> > Please register the patches which have not yet registered. Also if\n> > someone has some pending patch that is not yet submitted, please\n> > submit and register for 2024-01 Commitfest.\n> > I will be having a look at the commitfest entries, correcting the\n> > status if any of them need to be corrected and update the authors.\n> > Kindly keep the patch updated so that the reviewers/committers can\n> > review the patches and get it committed.\n>\n> Please be careful that it should be possible to register patches until\n> the 31st of December 23:59 AoE, as of:\n> https://www.timeanddate.com/time/zones/aoe\n>\n> The status of the CF should be switched after this time, not before.\n\nThanks, I will take care of this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 1 Jan 2024 08:49:59 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2024-01 starting in 3 days!"
}
] |
[
{
"msg_contents": "Hi!\n\nAs were discussed in [0] our overall goal is to make Postgres 64 bit XIDs.\nIt's obvious, that such a big patch set\ncouldn't possible to commit \"at once\". SLUR patch set [1] was committed a\nshort while ago as a first significant\nstep in this direction.\n\nThis thread is a next step in this enterprise. My objective here is to\ncommit some changes, which were mandatory,\nas far as I understand, for any type of 64 XIDs implementation. And I'm\nsure there will be points for discussion here.\n\nMy original intention was to make PGPROC->xmin, PGPROC->xid and\nPROC_HDR->xids 64bit. But in reality,\nit turned out to be much more difficult than I expected. On the one hand,\nthe patch became too big and on the other\nhand, it's heavily relayed on epoch and XID \"adjustment\" to FXID. Therefore,\nfor now, I decided to limit myself to\nmore atomic and independent changes. However, as I said above, these\nchanges are required for any implementation\nof 64bit XIDs.\n\nSo, PFA patches to make switch PGPROC->xid and XLogRecord->xl_xid to\nFullTransactionId.\n\nAs always, any opinions are very welcome!\n\n[0]\nhttps://www.postgresql.org/message-id/CACG=ezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe=pyyjVWA@mail.gmail.com\n[1]\nhttps://www.postgresql.org/message-id/flat/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq%2BvfkmTF5Q%40mail.gmail.com\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Fri, 29 Dec 2023 15:48:45 +0300",
"msg_from": "Maxim Orlov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Next step towards 64bit XIDs: Switch to FullTransactionId for\n PGPROC->xid and XLogRecord->xl_xid"
},
{
"msg_contents": "On Fri, 29 Dec 2023, 13:49 Maxim Orlov, <[email protected]> wrote:\n\n> Hi!\n>\n> As were discussed in [0] our overall goal is to make Postgres 64 bit\n> XIDs. It's obvious, that such a big patch set\n> couldn't possible to commit \"at once\". SLUR patch set [1] was committed a\n> short while ago as a first significant\n> step in this direction.\n>\n> This thread is a next step in this enterprise. My objective here is to\n> commit some changes, which were mandatory,\n> as far as I understand, for any type of 64 XIDs implementation. And I'm\n> sure there will be points for discussion here.\n>\n> My original intention was to make PGPROC->xmin, PGPROC->xid and\n> PROC_HDR->xids 64bit. But in reality,\n> it turned out to be much more difficult than I expected. On the one hand,\n> the patch became too big and on the other\n> hand, it's heavily relayed on epoch and XID \"adjustment\" to FXID. Therefore,\n> for now, I decided to limit myself to\n> more atomic and independent changes. However, as I said above, these\n> changes are required for any implementation\n> of 64bit XIDs.\n>\n> So, PFA patches to make switch PGPROC->xid\n>\n\nI think this could be fine, but ...\n\nand XLogRecord->xl_xid to FullTransactionId.\n>\n\nI don't think this is an actionable change, as this wastes 4 more bytes (or\n8 with alignment) in nearly all WAL records that don't use the\nHEAP/HEAP2/XLOG rmgrs, which would then be up to 10 (if not 14, when\n64but-aligned) bytes per record. Unless something like [0] gets committed\nthis will add a significant write overhead to all operations, even if they\nare not doing anything that needs an XID.\n\nAlso, I don't think we need to support transactions that stay alive and\nchange things for longer than 2^31 concurrently created transactions, so we\ncould well add a fxid to each WAL segment header (and checkpoint record?)\nand calculate the fxid of each record as a relative fxid off of that.\n\n\nKind regards\n\nMatthias van de Meent\n\n[0] https://commitfest.postgresql.org/46/4386/\n\nOn Fri, 29 Dec 2023, 13:49 Maxim Orlov, <[email protected]> wrote:Hi!As were discussed in [0] our overall goal is to make Postgres 64 bit XIDs. It's obvious, that such a big patch set couldn't possible to commit \"at once\". SLUR patch set [1] was committed a short while ago as a first significant step in this direction.This thread is a next step in this enterprise. My objective here is to commit some changes, which were mandatory,as far as I understand, for any type of 64 XIDs implementation. And I'm sure there will be points for discussion here.My original intention was to make PGPROC->xmin, PGPROC->xid and PROC_HDR->xids 64bit. But in reality, it turned out to be much more difficult than I expected. On the one hand, the patch became too big and on the otherhand, it's heavily relayed on epoch and XID \"adjustment\" to FXID. Therefore, for now, I decided to limit myself to more atomic and independent changes. However, as I said above, these changes are required for any implementationof 64bit XIDs.So, PFA patches to make switch PGPROC->xid I think this could be fine, but ...and XLogRecord->xl_xid to FullTransactionId.I don't think this is an actionable change, as this wastes 4 more bytes (or 8 with alignment) in nearly all WAL records that don't use the HEAP/HEAP2/XLOG rmgrs, which would then be up to 10 (if not 14, when 64but-aligned) bytes per record. 
Unless something like [0] gets committed this will add a significant write overhead to all operations, even if they are not doing anything that needs an XID.Also, I don't think we need to support transactions that stay alive and change things for longer than 2^31 concurrently created transactions, so we could well add a fxid to each WAL segment header (and checkpoint record?) and calculate the fxid of each record as a relative fxid off of that.Kind regardsMatthias van de Meent[0] https://commitfest.postgresql.org/46/4386/",
"msg_date": "Fri, 29 Dec 2023 14:36:19 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Next step towards 64bit XIDs: Switch to FullTransactionId for\n PGPROC->xid and XLogRecord->xl_xid"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 02:36:19PM +0100, Matthias van de Meent wrote:\n> I don't think this is an actionable change, as this wastes 4 more bytes (or\n> 8 with alignment) in nearly all WAL records that don't use the\n> HEAP/HEAP2/XLOG rmgrs, which would then be up to 10 (if not 14, when\n> 64but-aligned) bytes per record. Unless something like [0] gets committed\n> this will add a significant write overhead to all operations, even if they\n> are not doing anything that needs an XID.\n\nI was eyeing at the patches before reading your comment, and saw this\nacross the two patches:\n\n@@ -176,7 +176,7 @@ struct PGPROC\n \tLatch\t\tprocLatch;\t\t/* generic latch for process */\n \n \n-\tTransactionId xid;\t\t\t/* id of top-level transaction currently being\n+\tFullTransactionId xid;\t\t/* id of top-level transaction currently being\n[...]\n typedef struct XLogRecord\n {\n \tuint32\t\txl_tot_len;\t\t/* total len of entire record */\n-\tTransactionId xl_xid;\t\t/* xact id */\n+\tpg_crc32c\txl_crc;\t\t\t/* CRC for this record */\n+\tFullTransactionId xl_xid;\t/* xact id */\n\nAnd FWIW, echoing with Matthias, making these generic structures\narbitrary larger is a non-starter. We should try to make them\nshorter.\n--\nMichael",
"msg_date": "Sun, 31 Dec 2023 09:44:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Next step towards 64bit XIDs: Switch to FullTransactionId for\n PGPROC->xid and XLogRecord->xl_xid"
},
{
"msg_contents": "On Fri, 29 Dec 2023 at 16:36, Matthias van de Meent <\[email protected]> wrote:\n\n>\n> I don't think this is an actionable change, as this wastes 4 more bytes\n> (or 8 with alignment) in nearly all WAL records that don't use the\n> HEAP/HEAP2/XLOG rmgrs, which would then be up to 10 (if not 14, when\n> 64but-aligned) bytes per record. Unless something like [0] gets committed\n> this will add a significant write overhead to all operations, even if they\n> are not doing anything that needs an XID.\n>\n> Also, I don't think we need to support transactions that stay alive and\n> change things for longer than 2^31 concurrently created transactions, so we\n> could well add a fxid to each WAL segment header (and checkpoint record?)\n> and calculate the fxid of each record as a relative fxid off of that.\n>\n> Thank you for your reply. Yes, this is a good idea. Actually, this is\nexactly what I was trying to do at first.\nBut in this case, we have to add more locks on TransamVariables->nextXid,\nand I've abandoned the idea.\nMaybe, I should look in this direction.\n\nOn Sun, 31 Dec 2023 at 03:44, Michael Paquier <[email protected]> wrote:\n\n> And FWIW, echoing with Matthias, making these generic structures\n> arbitrary larger is a non-starter. We should try to make them\n> shorter.\n>\n\nYeah, obviously, this is patch make WAL bigger. I'll try to look into the\nidea of fxid calculation, as mentioned above.\nIt might in part be a \"chicken and the egg\" situation. It is very hard to\nsplit overall 64xid patch into smaller pieces\nwith each part been a meaningful and beneficial for current 32xids Postgres.\n\nAnyway, thanks for reply, appreciate it.\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Fri, 29 Dec 2023 at 16:36, Matthias van de Meent <[email protected]> wrote:I don't think this is an actionable change, as this wastes 4 more bytes (or 8 with alignment) in nearly all WAL records that don't use the HEAP/HEAP2/XLOG rmgrs, which would then be up to 10 (if not 14, when 64but-aligned) bytes per record. Unless something like [0] gets committed this will add a significant write overhead to all operations, even if they are not doing anything that needs an XID.Also, I don't think we need to support transactions that stay alive and change things for longer than 2^31 concurrently created transactions, so we could well add a fxid to each WAL segment header (and checkpoint record?) and calculate the fxid of each record as a relative fxid off of that.Thank you for your reply. Yes, this is a good idea. Actually, this is exactly what I was trying to do at first. But in this case, we have to add more locks on TransamVariables->nextXid, and I've abandoned the idea. Maybe, I should look in this direction.On Sun, 31 Dec 2023 at 03:44, Michael Paquier <[email protected]> wrote:\nAnd FWIW, echoing with Matthias, making these generic structures\narbitrary larger is a non-starter. We should try to make them\nshorter.\nYeah, obviously, this is patch make WAL bigger. I'll try to look into the idea of fxid calculation, as mentioned above.It might in part be a \"chicken and the egg\" situation. It is very hard to split overall 64xid patch into smaller pieceswith each part been a meaningful and beneficial for current 32xids Postgres.Anyway, thanks for reply, appreciate it.-- Best regards,Maxim Orlov.",
"msg_date": "Mon, 1 Jan 2024 09:14:35 +0300",
"msg_from": "Maxim Orlov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Next step towards 64bit XIDs: Switch to FullTransactionId for\n PGPROC->xid and XLogRecord->xl_xid"
},
{
"msg_contents": "On Mon, Jan 1, 2024 at 1:15 AM Maxim Orlov <[email protected]> wrote:\n> Yeah, obviously, this is patch make WAL bigger. I'll try to look into the idea of fxid calculation, as mentioned above.\n> It might in part be a \"chicken and the egg\" situation. It is very hard to split overall 64xid patch into smaller pieces\n> with each part been a meaningful and beneficial for current 32xids Postgres.\n\nI agree, but I think that's what we need to try to do. I mean, we\ndon't want to commit patches that individually make things *worse* on\nthe theory that when we do that enough times the result will be\noverall better. And even if the individual patches seem to be entirely\nneutral, that still doesn't offer great hope that the final result\nwill be an improvement.\n\nPersonally, I'm not convinced that \"use 64-bit XIDs everywhere\" is a\ndesirable goal. The space consumption issue here is just the tip of\nthe iceberg, and really exists in many places where we store XIDs. We\ndon't want to make tuple headers wider any more than we want to bloat\nthe WAL. We don't want to make the ProcArray bigger, either, not\nbecause of memory consumption but because of the speed of memory\naccess. What we really want to do is fix the practical problems that\n32-bit XIDs cause. AFAIK there are basically three such problems:\n\n1. SLRUs that are indexed by XID can wrap around, or try to, leading\nto confusion and bugs in the code and maybe errors when something\nfills up.\n\n2. If a table's relfrozenxid become older than ~2B, we have to stop\nXID assignment until the problem is cured.\n\n3. If a running transaction's XID age exceeds ~2B, we can't start new\ntransactions until the problem is cured.\n\nThere's already a patch for (1), I believe. I think we can cure that\nby indexing SLRUs by 64-bit integers without changing how XIDs are\nstored outside of SLRUs. Various people have attempted (2), for\nexample with an XID-per-page approach, but nothing's been committed\nyet. We can't attempt to cure (3) until (1) or (2) are fixed, and I\nthink that would require using 64-bit XIDs in the ProcArray, but note\nthat (3) is less serious: (1) and (2) constrain the size of the XID\nrange that can legally appear on disk, whereas (3) only constraints\nthe size of the XID range that can legally occur in memory. That makes\na big difference, because reducing the size of the XID range that can\nlegally appear on disk requires vacuuming all tables with older\nrelfrozenxids which in the worst case takes time proportional to the\ndisk utilization of the instance, whereas reducing the size of the XID\nrange that can legally appear in memory just requires ending\ntransactions (including possibly prepared transactions) and removing\nreplication slots, which can be done in O(1) time.\n\nMaybe this analysis I've just given isn't quite right, but my point is\nthat we should try to think hard about where in the system 32-bit XIDs\nsuck and for what reason, and use that as a guide to what to change\nfirst.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 14:58:39 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Next step towards 64bit XIDs: Switch to FullTransactionId for\n PGPROC->xid and XLogRecord->xl_xid"
},
{
"msg_contents": "\n\n\n\n\nOn 1/2/24 1:58 PM, Robert Haas wrote:\n\n\nMaybe this analysis I've just given isn't quite right, but my point is\nthat we should try to think hard about where in the system 32-bit XIDs\nsuck and for what reason, and use that as a guide to what to change\nfirst.\n\nVery much this. The biggest reason 2B XIDs are such an issue is\n because it's incredibly expensive to move the window forward,\n which is governed by on-disk stuff.\n\n\n\n",
"msg_date": "Tue, 2 Jan 2024 17:09:21 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Next step towards 64bit XIDs: Switch to FullTransactionId for\n PGPROC->xid and XLogRecord->xl_xid"
},
{
"msg_contents": "Hi, Maxim, and Happy New Year!\n\nOn Mon, 1 Jan 2024 at 10:15, Maxim Orlov <[email protected]> wrote:\n\n> On Fri, 29 Dec 2023 at 16:36, Matthias van de Meent <\n> [email protected]> wrote:\n>\n>>\n>> I don't think this is an actionable change, as this wastes 4 more bytes\n>> (or 8 with alignment) in nearly all WAL records that don't use the\n>> HEAP/HEAP2/XLOG rmgrs, which would then be up to 10 (if not 14, when\n>> 64but-aligned) bytes per record. Unless something like [0] gets committed\n>> this will add a significant write overhead to all operations, even if they\n>> are not doing anything that needs an XID.\n>>\n>> Also, I don't think we need to support transactions that stay alive and\n>> change things for longer than 2^31 concurrently created transactions, so we\n>> could well add a fxid to each WAL segment header (and checkpoint record?)\n>> and calculate the fxid of each record as a relative fxid off of that.\n>>\n>> Thank you for your reply. Yes, this is a good idea. Actually, this is\n> exactly what I was trying to do at first.\n> But in this case, we have to add more locks on TransamVariables->nextXid,\n> and I've abandoned the idea.\n> Maybe, I should look in this direction.\n>\n> On Sun, 31 Dec 2023 at 03:44, Michael Paquier <[email protected]> wrote:\n>\n>> And FWIW, echoing with Matthias, making these generic structures\n>> arbitrary larger is a non-starter. We should try to make them\n>> shorter.\n>>\n>\n> Yeah, obviously, this is patch make WAL bigger. I'll try to look into the\n> idea of fxid calculation, as mentioned above.\n> It might in part be a \"chicken and the egg\" situation. It is very hard to\n> split overall 64xid patch into smaller pieces\n> with each part been a meaningful and beneficial for current 32xids\n> Postgres.\n>\n\nThe discussion quoted by you as OP [0] shows that there is a wide range of\nopinions on whether we need 64-bit XIDs and what is the better way to\nimplement it. We can also continue discussing implementation details in\nthat thread [0].\n\nI agree that the step towards something big should not make things worse\njust now. Does increasing the WAL header for several bytes make performance\nchange detectable at all? Maybe it's worth simulating a workload with a\nlarge amount of WAL records and seeing how it works. I think it's regarding\nthis patchset only and could remove or substantiate the main questions\nabout the current patchset.\n\n[0]\nhttps://www.postgresql.org/message-id/CACG=ezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe=pyyjVWA@mail.gmail.com\n\nKind regards,\nPavel Borisov.\n\nHi, Maxim, and Happy New Year!On Mon, 1 Jan 2024 at 10:15, Maxim Orlov <[email protected]> wrote:On Fri, 29 Dec 2023 at 16:36, Matthias van de Meent <[email protected]> wrote:I don't think this is an actionable change, as this wastes 4 more bytes (or 8 with alignment) in nearly all WAL records that don't use the HEAP/HEAP2/XLOG rmgrs, which would then be up to 10 (if not 14, when 64but-aligned) bytes per record. Unless something like [0] gets committed this will add a significant write overhead to all operations, even if they are not doing anything that needs an XID.Also, I don't think we need to support transactions that stay alive and change things for longer than 2^31 concurrently created transactions, so we could well add a fxid to each WAL segment header (and checkpoint record?) and calculate the fxid of each record as a relative fxid off of that.Thank you for your reply. Yes, this is a good idea. 
Actually, this is exactly what I was trying to do at first. But in this case, we have to add more locks on TransamVariables->nextXid, and I've abandoned the idea. Maybe, I should look in this direction.On Sun, 31 Dec 2023 at 03:44, Michael Paquier <[email protected]> wrote:\nAnd FWIW, echoing with Matthias, making these generic structures\narbitrary larger is a non-starter. We should try to make them\nshorter.\nYeah, obviously, this is patch make WAL bigger. I'll try to look into the idea of fxid calculation, as mentioned above.It might in part be a \"chicken and the egg\" situation. It is very hard to split overall 64xid patch into smaller pieceswith each part been a meaningful and beneficial for current 32xids Postgres.The discussion quoted by you as OP [0] shows that there is a wide range of opinions on whether we need 64-bit XIDs and what is the better way to implement it. We can also continue discussing implementation details in that thread [0].I agree that the step towards something big should not make things worse just now. Does increasing the WAL header for several bytes make performance change detectable at all? Maybe it's worth simulating a workload with a large amount of WAL records and seeing how it works. I think it's regarding this patchset only and could remove or substantiate the main questions about the current patchset.[0] https://www.postgresql.org/message-id/CACG=ezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe=pyyjVWA@mail.gmail.comKind regards,Pavel Borisov.",
"msg_date": "Wed, 3 Jan 2024 14:33:27 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Next step towards 64bit XIDs: Switch to FullTransactionId for\n PGPROC->xid and XLogRecord->xl_xid"
}
] |
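A side note for readers of the thread above: the 64-bit FullTransactionId being discussed is already visible at the SQL level as the xid8 type, which packs the 32-bit epoch into the high half and the ordinary 32-bit XID into the low half. The query below is purely illustrative (it is unrelated to the patches themselves) and assumes a server of version 13 or later, where pg_current_xact_id() exists; note that calling it assigns an XID to the current transaction.

-- Illustrative only: split a 64-bit transaction id (xid8) into the 32-bit
-- epoch (high half) and the 32-bit xid (low half) -- the latter being the
-- value that PGPROC->xid and XLogRecord->xl_xid carry today.
SELECT full_xid,
       div(full_xid::text::numeric, 4294967296)::int8 AS epoch,
       mod(full_xid::text::numeric, 4294967296)::int8 AS xid32
FROM pg_current_xact_id() AS full_xid;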
[
{
"msg_contents": "Hi!\n\nRecently, one of our customers came to us with the question: why do reindex\nutility does not support multiple jobs for indices (-i opt)?\nAnd, of course, it is because we cannot control the concurrent processing\nof multiple indexes on the same relation. This was\ndiscussed somewhere in [0], I believe. So, customer have to make a shell\nscript to do his business and so on.\n\nBut. This seems to be not that complicated to split indices by parent\ntables and do reindex in multiple jobs? Or I miss something?\nPFA patch implementing this.\n\nAs always, any opinions are very welcome!\n\n[0]\nhttps://www.postgresql.org/message-id/flat/CAOBaU_YrnH_Jqo46NhaJ7uRBiWWEcS40VNRQxgFbqYo9kApUsg%40mail.gmail.com\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Fri, 29 Dec 2023 16:15:35 +0300",
"msg_from": "Maxim Orlov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add Index-level REINDEX with multiple jobs"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 04:15:35PM +0300, Maxim Orlov wrote:\n> Recently, one of our customers came to us with the question: why do reindex\n> utility does not support multiple jobs for indices (-i opt)?\n> And, of course, it is because we cannot control the concurrent processing\n> of multiple indexes on the same relation. This was\n> discussed somewhere in [0], I believe. So, customer have to make a shell\n> script to do his business and so on.\n\nYep, that should be the correct thread. As far as I recall, one major\nreason was code simplicity because dealing with parallel jobs at table\nlevel is a no-brainer on the client side (see 0003): we know that\nrelations with physical storage will never interact with each other.\n\n> But. This seems to be not that complicated to split indices by parent\n> tables and do reindex in multiple jobs? Or I miss something?\n> PFA patch implementing this.\n\n+ appendPQExpBufferStr(&catalog_query,\n+ \"WITH idx as (\\n\"\n+ \" SELECT c.relname, ns.nspname\\n\"\n+ \" FROM pg_catalog.pg_class c,\\n\"\n+ \" pg_catalog.pg_namespace ns\\n\"\n+ \" WHERE c.relnamespace OPERATOR(pg_catalog.=) ns.oid AND\\n\"\n+ \" c.oid OPERATOR(pg_catalog.=) ANY(ARRAY['\\n\");\n\nThe problem may be actually trickier than that, no? Could there be\nother factors to take into account for their classification, like\ntheir sizes (typically, we'd want to process the biggest one first, I\nguess)?\n--\nMichael",
"msg_date": "Tue, 6 Feb 2024 15:21:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add Index-level REINDEX with multiple jobs"
},
{
"msg_contents": "\n> On 6 Feb 2024, at 11:21, Michael Paquier <[email protected]> wrote:\n> \n> The problem may be actually trickier than that, no? Could there be\n> other factors to take into account for their classification, like\n> their sizes (typically, we'd want to process the biggest one first, I\n> guess)?\n\nMaxim, what do you think about it?\n\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Mon, 4 Mar 2024 11:26:21 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add Index-level REINDEX with multiple jobs"
},
{
"msg_contents": "Andrey M. Borodin писал(а) 2024-03-04 09:26:\n>> On 6 Feb 2024, at 11:21, Michael Paquier <[email protected]> wrote:\n>> \n>> The problem may be actually trickier than that, no? Could there be\n>> other factors to take into account for their classification, like\n>> their sizes (typically, we'd want to process the biggest one first, I\n>> guess)?\n> \n> Maxim, what do you think about it?\n> \n> \n> Best regards, Andrey Borodin.\n\nI agree that, as is the case with tables in REINDEX SCHEME, indices \nshould be sorted accordingly to their size.\n\nAttaching the second version of patch, with indices being sorted by \nsize.\n\nBest regards,\nSvetlana Derevyanko",
"msg_date": "Mon, 11 Mar 2024 11:38:40 +0300",
"msg_from": "Svetlana Derevyanko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add Index-level REINDEX with multiple jobs"
},
{
"msg_contents": "On Tue, 6 Feb 2024 at 09:22, Michael Paquier <[email protected]> wrote:\n\n>\n> The problem may be actually trickier than that, no? Could there be\n> other factors to take into account for their classification, like\n> their sizes (typically, we'd want to process the biggest one first, I\n> guess)?\n\n\nSorry for a late reply. Thanks for an explanation. This is sounds\nreasonable to me.\nSvetlana had addressed this in the patch v2.\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Tue, 6 Feb 2024 at 09:22, Michael Paquier <[email protected]> wrote:\nThe problem may be actually trickier than that, no? Could there be\nother factors to take into account for their classification, like\ntheir sizes (typically, we'd want to process the biggest one first, I\nguess)? Sorry for a late reply. Thanks for an explanation. This is sounds reasonable to me.Svetlana had addressed this in the patch v2.-- Best regards,Maxim Orlov.",
"msg_date": "Mon, 11 Mar 2024 16:43:49 +0300",
"msg_from": "Maxim Orlov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add Index-level REINDEX with multiple jobs"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 3:44 PM Maxim Orlov <[email protected]> wrote:\n> On Tue, 6 Feb 2024 at 09:22, Michael Paquier <[email protected]> wrote:\n>> The problem may be actually trickier than that, no? Could there be\n>> other factors to take into account for their classification, like\n>> their sizes (typically, we'd want to process the biggest one first, I\n>> guess)?\n>\n>\n> Sorry for a late reply. Thanks for an explanation. This is sounds reasonable to me.\n> Svetlana had addressed this in the patch v2.\n\nI think this patch is a nice improvement. But it doesn't seem to be\nimplemented in the right way. There is no guarantee that\nget_parallel_object_list() will return tables in the same order as\nindexes. Especially when there is \"ORDER BY idx.relpages\". Also,\nsort_indices_by_tables() has quadratic complexity (probably OK since\ninput list shouldn't be too lengthy) and a bit awkward.\n\nI've revised the patchset. Now appropriate ordering is made in SQL\nquery. The original list of indexes is modified to match the list of\ntables. The tables are ordered by the size of its greatest index,\nwithin table indexes are ordered by size.\n\nI'm going to further revise this patch, mostly comments and the commit message.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 20 Mar 2024 19:19:02 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add Index-level REINDEX with multiple jobs"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 7:19 PM Alexander Korotkov <[email protected]> wrote:\n> On Mon, Mar 11, 2024 at 3:44 PM Maxim Orlov <[email protected]> wrote:\n> > On Tue, 6 Feb 2024 at 09:22, Michael Paquier <[email protected]> wrote:\n> >> The problem may be actually trickier than that, no? Could there be\n> >> other factors to take into account for their classification, like\n> >> their sizes (typically, we'd want to process the biggest one first, I\n> >> guess)?\n> >\n> >\n> > Sorry for a late reply. Thanks for an explanation. This is sounds reasonable to me.\n> > Svetlana had addressed this in the patch v2.\n>\n> I think this patch is a nice improvement. But it doesn't seem to be\n> implemented in the right way. There is no guarantee that\n> get_parallel_object_list() will return tables in the same order as\n> indexes. Especially when there is \"ORDER BY idx.relpages\". Also,\n> sort_indices_by_tables() has quadratic complexity (probably OK since\n> input list shouldn't be too lengthy) and a bit awkward.\n>\n> I've revised the patchset. Now appropriate ordering is made in SQL\n> query. The original list of indexes is modified to match the list of\n> tables. The tables are ordered by the size of its greatest index,\n> within table indexes are ordered by size.\n>\n> I'm going to further revise this patch, mostly comments and the commit message.\n\nHere goes the revised patch. I'm going to push this if there are no objections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Fri, 22 Mar 2024 17:45:05 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add Index-level REINDEX with multiple jobs"
},
{
"msg_contents": "Alexander Korotkov <[email protected]> writes:\n> Here goes the revised patch. I'm going to push this if there are no objections.\n\nQuite a lot of the buildfarm is complaining about this:\n\nreindexdb.c: In function 'reindex_one_database':\nreindexdb.c:434:54: error: 'indices_tables_cell' may be used uninitialized in this function [-Werror=maybe-uninitialized]\n 434 | strcmp(prev_index_table_name, indices_tables_cell->val) == 0)\n | ~~~~~~~~~~~~~~~~~~~^~~~~\n\nI noticed it first on mamba, which is set up with -Werror, but a\nscrape of the buildfarm logs shows many other animals reporting this\nas a warning. I think they are right. Even granting that the\ncompiler realizes that \"parallel && process_type == REINDEX_INDEX\" is\nenough to reach the one place where indices_tables_cell is\ninitialized, that's not really enough, because that place is\n\n if (indices_tables_list)\n indices_tables_cell = indices_tables_list->head;\n\nSo I believe this code will crash if get_parallel_object_list returns\nan empty list. Initializing indices_tables_cell to NULL in its\ndeclaration would stop the compiler warning, but if I'm right it\nwill do nothing to prevent that crash. This needs a bit more effort.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Mar 2024 22:06:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add Index-level REINDEX with multiple jobs"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 10:07 AM Tom Lane <[email protected]> wrote:\n\n> Alexander Korotkov <[email protected]> writes:\n> > Here goes the revised patch. I'm going to push this if there are no\n> objections.\n>\n> Quite a lot of the buildfarm is complaining about this:\n>\n> reindexdb.c: In function 'reindex_one_database':\n> reindexdb.c:434:54: error: 'indices_tables_cell' may be used uninitialized\n> in this function [-Werror=maybe-uninitialized]\n> 434 | strcmp(prev_index_table_name, indices_tables_cell->val) == 0)\n> | ~~~~~~~~~~~~~~~~~~~^~~~~\n>\n> I noticed it first on mamba, which is set up with -Werror, but a\n> scrape of the buildfarm logs shows many other animals reporting this\n> as a warning.\n\n\nI noticed the similar warning on cfbot:\nhttps://cirrus-ci.com/task/6298504306360320?logs=gcc_warning#L448\n\nreindexdb.c: In function ‘reindex_one_database’:\nreindexdb.c:437:24: error: ‘indices_tables_cell’ may be used uninitialized\nin this function [-Werror=maybe-uninitialized]\n 437 | indices_tables_cell = indices_tables_cell->next;\n | ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAlthough it's complaining on line 437 not 434, I think they are the same\nissue.\n\n\n> I think they are right. Even granting that the\n> compiler realizes that \"parallel && process_type == REINDEX_INDEX\" is\n> enough to reach the one place where indices_tables_cell is\n> initialized, that's not really enough, because that place is\n>\n> if (indices_tables_list)\n> indices_tables_cell = indices_tables_list->head;\n>\n> So I believe this code will crash if get_parallel_object_list returns\n> an empty list. Initializing indices_tables_cell to NULL in its\n> declaration would stop the compiler warning, but if I'm right it\n> will do nothing to prevent that crash. This needs a bit more effort.\n\n\nAgreed. And the comment of get_parallel_object_list() says that it may\nindeed return NULL.\n\nBTW, on line 373, it checks 'process_list' and bails out if this list is\nNULL. But it seems to me that 'process_list' cannot be NULL in this\ncase, because it's initialized to be 'user_list' and we have asserted\nthat user_list is not NULL on line 360. I wonder if we should check\nindices_tables_list instead of process_list on line 373.\n\nThanks\nRichard\n\nOn Mon, Mar 25, 2024 at 10:07 AM Tom Lane <[email protected]> wrote:Alexander Korotkov <[email protected]> writes:\n> Here goes the revised patch. I'm going to push this if there are no objections.\n\nQuite a lot of the buildfarm is complaining about this:\n\nreindexdb.c: In function 'reindex_one_database':\nreindexdb.c:434:54: error: 'indices_tables_cell' may be used uninitialized in this function [-Werror=maybe-uninitialized]\n 434 | strcmp(prev_index_table_name, indices_tables_cell->val) == 0)\n | ~~~~~~~~~~~~~~~~~~~^~~~~\n\nI noticed it first on mamba, which is set up with -Werror, but a\nscrape of the buildfarm logs shows many other animals reporting this\nas a warning.I noticed the similar warning on cfbot:https://cirrus-ci.com/task/6298504306360320?logs=gcc_warning#L448reindexdb.c: In function ‘reindex_one_database’:reindexdb.c:437:24: error: ‘indices_tables_cell’ may be used uninitialized in this function [-Werror=maybe-uninitialized] 437 | indices_tables_cell = indices_tables_cell->next; | ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~Although it's complaining on line 437 not 434, I think they are the sameissue. I think they are right. 
Even granting that the\ncompiler realizes that \"parallel && process_type == REINDEX_INDEX\" is\nenough to reach the one place where indices_tables_cell is\ninitialized, that's not really enough, because that place is\n\n if (indices_tables_list)\n indices_tables_cell = indices_tables_list->head;\n\nSo I believe this code will crash if get_parallel_object_list returns\nan empty list. Initializing indices_tables_cell to NULL in its\ndeclaration would stop the compiler warning, but if I'm right it\nwill do nothing to prevent that crash. This needs a bit more effort.Agreed. And the comment of get_parallel_object_list() says that it mayindeed return NULL.BTW, on line 373, it checks 'process_list' and bails out if this list isNULL. But it seems to me that 'process_list' cannot be NULL in thiscase, because it's initialized to be 'user_list' and we have assertedthat user_list is not NULL on line 360. I wonder if we should checkindices_tables_list instead of process_list on line 373.ThanksRichard",
"msg_date": "Mon, 25 Mar 2024 10:47:10 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add Index-level REINDEX with multiple jobs"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 4:47 AM Richard Guo <[email protected]> wrote:\n> On Mon, Mar 25, 2024 at 10:07 AM Tom Lane <[email protected]> wrote:\n>>\n>> Alexander Korotkov <[email protected]> writes:\n>> > Here goes the revised patch. I'm going to push this if there are no objections.\n>>\n>> Quite a lot of the buildfarm is complaining about this:\n>>\n>> reindexdb.c: In function 'reindex_one_database':\n>> reindexdb.c:434:54: error: 'indices_tables_cell' may be used uninitialized in this function [-Werror=maybe-uninitialized]\n>> 434 | strcmp(prev_index_table_name, indices_tables_cell->val) == 0)\n>> | ~~~~~~~~~~~~~~~~~~~^~~~~\n>>\n>> I noticed it first on mamba, which is set up with -Werror, but a\n>> scrape of the buildfarm logs shows many other animals reporting this\n>> as a warning.\n>\n>\n> I noticed the similar warning on cfbot:\n> https://cirrus-ci.com/task/6298504306360320?logs=gcc_warning#L448\n>\n> reindexdb.c: In function ‘reindex_one_database’:\n> reindexdb.c:437:24: error: ‘indices_tables_cell’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> 437 | indices_tables_cell = indices_tables_cell->next;\n> | ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~\n>\n> Although it's complaining on line 437 not 434, I think they are the same\n> issue.\n>\n>>\n>> I think they are right. Even granting that the\n>> compiler realizes that \"parallel && process_type == REINDEX_INDEX\" is\n>> enough to reach the one place where indices_tables_cell is\n>> initialized, that's not really enough, because that place is\n>>\n>> if (indices_tables_list)\n>> indices_tables_cell = indices_tables_list->head;\n>>\n>> So I believe this code will crash if get_parallel_object_list returns\n>> an empty list. Initializing indices_tables_cell to NULL in its\n>> declaration would stop the compiler warning, but if I'm right it\n>> will do nothing to prevent that crash. This needs a bit more effort.\n>\n>\n> Agreed. And the comment of get_parallel_object_list() says that it may\n> indeed return NULL.\n>\n> BTW, on line 373, it checks 'process_list' and bails out if this list is\n> NULL. But it seems to me that 'process_list' cannot be NULL in this\n> case, because it's initialized to be 'user_list' and we have asserted\n> that user_list is not NULL on line 360. I wonder if we should check\n> indices_tables_list instead of process_list on line 373.\n\nThank you. I'm going to deal with this in the next few hours.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 25 Mar 2024 09:06:29 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add Index-level REINDEX with multiple jobs"
}
] |
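To make the ordering agreed on in the thread above concrete: each index passed with -i is paired with its parent table, tables are processed in descending order of their largest index, and within a table indexes are processed from largest to smallest. The query below is a hand-written sketch of that catalog-level grouping, added here for illustration only; it is not the query used in the committed patch, and the index names in the IN list are placeholders for whatever was passed via -i.

-- Sketch: group target indexes by parent table, ordering tables by the size
-- of their largest index and, within each table, indexes by size.
SELECT ns.nspname AS schema_name,
       t.relname  AS table_name,
       i.relname  AS index_name,
       i.relpages AS index_pages
FROM pg_catalog.pg_index x
JOIN pg_catalog.pg_class i ON i.oid = x.indexrelid
JOIN pg_catalog.pg_class t ON t.oid = x.indrelid
JOIN pg_catalog.pg_namespace ns ON ns.oid = t.relnamespace
WHERE i.relname IN ('some_index_a', 'some_index_b')  -- hypothetical names
ORDER BY max(i.relpages) OVER (PARTITION BY t.oid) DESC,
         t.oid,
         i.relpages DESC;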
[
{
"msg_contents": "Hi,\n\nThis idea first came from remove_useless_groupby_columns does not need to record constraint dependencie[0] which points out that\nunique index whose columns all have NOT NULL constraints could also take the work with primary key when removing useless GROUP BY columns.\nI study it and implement the idea.\n\nEx:\n\ncreate temp table t2 (a int, b int, c int not null, primary key (a, b), unique(c));\n\n explain (costs off) select * from t2 group by a,b,c;\n QUERY PLAN\n ----------------------\n HashAggregate\n Group Key: c\n -> Seq Scan on t2\n\nThe plan drop column a, b as I did a little more.\n\nFor the query, as t2 has primary key (a, b), before this patch, we could drop column c because {a, b} are PK.\nAnd we have an unique index(c) with NOT NULL constraint now, we could drop column {a, b}, just keep {c}.\n\nWhile we have multiple choices, group by a, b (c is removed by PK) and group by c (a, b are removed by unique not null index)\nAnd I implement it to choose the one with less columns so that we can drop as more columns as possible.\nI think it’s good for planner to save some cost like Sort less columns.\nThere may be better one for some reason like: try to keep PK for planner?\nI’m not sure about that and it seems not worth much complex.\n\nThe NOT NULL constraint may also be computed from primary keys, ex:\ncreate temp table t2 (a int, b int, c int not null, primary key (a, b), unique(a));\nPrimary key(a, b) ensure a is NOT NULL and we have a unique index(a), but it will introduce more complex to check if a unique index could be used.\nI also doubt it worths doing that..\nSo my patch make it easy: check unique index’s columns, it’s a valid candidate if all of that have NOT NULL constraint.\nAnd we choose a best one who has the least column numbers in get_min_unique_not_null_attnos(), as the reason: less columns mean that more group by columns could be removed.\n\ncreate temp table t3 (a int, b int, c int not null, d int not null, primary key (a, b), unique(c, d));\n-- Test primary key beats unique not null index.\nexplain (costs off) select * from t3 group by a,b,c,d;\n QUERY PLAN\n----------------------\n HashAggregate\n Group Key: a, b\n -> Seq Scan on t3\n(3 rows)\n\ncreate temp table t4 (a int, b int not null, c int not null, d int not null, primary key (a, b), unique(b, c), unique(d));\n-- Test unique not null index with less columns wins.\nexplain (costs off) select * from t4 group by a,b,c,d;\n QUERY PLAN\n----------------------\n HashAggregate\n Group Key: d\n -> Seq Scan on t4\n(3 rows)\n\nThe unique Indices could have overlaps with primary keys and indices themselves.\n\ncreate temp table t5 (a int not null, b int not null, c int not null, d int not null, unique (a, b), unique(b, c), unique(a, c, d));\n-- Test unique not null indices have overlap.\nexplain (costs off) select * from t5 group by a,b,c,d;\n QUERY PLAN\n----------------------\n HashAggregate\n Group Key: a, b\n -> Seq Scan on t5\n(3 rows)\n\n\n\n\n\n[0]https://www.postgresql.org/message-id/flat/CAApHDvrdYa%3DVhOoMe4ZZjZ-G4ALnD-xuAeUNCRTL%2BPYMVN8OnQ%40mail.gmail.com\n\n\nZhang Mingli\nwww.hashdata.xyz",
"msg_date": "Fri, 29 Dec 2023 23:04:34 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove useless GROUP BY columns considering unique index"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 11:05 PM Zhang Mingli <[email protected]> wrote:\n>\n> Hi,\n>\n> This idea first came from remove_useless_groupby_columns does not need to record constraint dependencie[0] which points out that\n> unique index whose columns all have NOT NULL constraints could also take the work with primary key when removing useless GROUP BY columns.\n> I study it and implement the idea.\n>\n> [0]https://www.postgresql.org/message-id/flat/CAApHDvrdYa%3DVhOoMe4ZZjZ-G4ALnD-xuAeUNCRTL%2BPYMVN8OnQ%40mail.gmail.com\n>\n\ncosmetic issues:\n+--\n+-- Test removal of redundant GROUP BY columns using unique not null index.\n+-- materialized view\n+--\n+create temp table t1 (a int, b int, c int, primary key (a, b), unique(c));\n+create temp table t2 (a int, b int, c int not null, primary key (a,\nb), unique(c));\n+create temp table t3 (a int, b int, c int not null, d int not null,\nprimary key (a, b), unique(c, d));\n+create temp table t4 (a int, b int not null, c int not null, d int\nnot null, primary key (a, b), unique(b, c), unique(d));\n+create temp table t5 (a int not null, b int not null, c int not null,\nd int not null, unique (a, b), unique(b, c), unique(a, c, d));\n+\n+-- Test unique index without not null constraint should not be used.\n+explain (costs off) select * from t1 group by a,b,c;\n+\n+-- Test unique not null index beats primary key.\n+explain (costs off) select * from t2 group by a,b,c;\n+\n+-- Test primary key beats unique not null index.\n+explain (costs off) select * from t3 group by a,b,c,d;\n+\n+-- Test unique not null index with less columns wins.\n+explain (costs off) select * from t4 group by a,b,c,d;\n+\n+-- Test unique not null indices have overlap.\n+explain (costs off) select * from t5 group by a,b,c,d;\n+\n+drop table t1;\n+drop table t2;\n+drop table t3;\n+drop table t4;\n+\n\nline `materailzed view` is unnecessary?\nyou forgot to drop table t5.\n\ncreate temp table p_t2 (\n a int not null,\n b int not null,\n c int not null,\n d int,\n unique(a), unique(a,b),unique(a,b,c)\n) partition by list(a);\ncreate temp table p_t2_1 partition of p_t2 for values in(1);\ncreate temp table p_t2_2 partition of p_t2 for values in(2);\nexplain (costs off, verbose) select * from p_t2 group by a,b,c,d;\nyour patch output:\n QUERY PLAN\n--------------------------------------------------------------\n HashAggregate\n Output: p_t2.a, p_t2.b, p_t2.c, p_t2.d\n Group Key: p_t2.a\n -> Append\n -> Seq Scan on pg_temp.p_t2_1\n Output: p_t2_1.a, p_t2_1.b, p_t2_1.c, p_t2_1.d\n -> Seq Scan on pg_temp.p_t2_2\n Output: p_t2_2.a, p_t2_2.b, p_t2_2.c, p_t2_2.d\n(8 rows)\n\nso it seems to work with partitioned tables, maybe you should add some\ntest cases with partition tables.\n\n\n- * XXX This is only used to create derived tables, so NO INHERIT constraints\n- * are always skipped.\n+ * XXX When used to create derived tables, set skip_no_inherit tp true,\n+ * so that NO INHERIT constraints will be skipped.\n */\n List *\n-RelationGetNotNullConstraints(Oid relid, bool cooked)\n+RelationGetNotNullConstraints(Oid relid, bool cooked, bool skip_no_inherit)\n {\n List *notnulls = NIL;\n Relation constrRel;\n@@ -815,7 +815,7 @@ RelationGetNotNullConstraints(Oid relid, bool cooked)\n\n if (conForm->contype != CONSTRAINT_NOTNULL)\n continue;\n- if (conForm->connoinherit)\n+ if (skip_no_inherit && conForm->connoinherit)\n continue;\n\nI don't think you need to refactor RelationGetNotNullConstraints.\nhowever i found it hard to explain it, (which one is parent, which one\nis children is 
very confusing for me).\nso i use the following script to test it:\nDROP TABLE ATACC1, ATACC2;\nCREATE TABLE ATACC1 (a int);\nCREATE TABLE ATACC2 (b int,c int, unique(c)) INHERITS (ATACC1);\nALTER TABLE ATACC1 ADD NOT NULL a;\n-- ALTER TABLE ATACC1 ADD NOT NULL a NO INHERIT;\nALTER TABLE ATACC2 ADD unique(a);\nexplain (costs off, verbose) select * from ATACC2 group by a,b,c;\n-------------------------\n\ncreate table t_0_3 (a int not null, b int not null, c int not null, d\nint not null, unique (a, b), unique(b, c) DEFERRABLE, unique(d));\nexplain (costs off, verbose) select * from t_0_3 group by a,b,c,d;\n QUERY PLAN\n--------------------------------\n HashAggregate\n Output: a, b, c, d\n Group Key: t_0_3.a, t_0_3.b\n -> Seq Scan on public.t_0_3\n Output: a, b, c, d\n(5 rows)\n\nthe above part seems not right, it should be `Group Key: t_0_3.d`\nso i changed to:\n\n/* Skip constraints that are not UNIQUE */\n+ if (con->contype != CONSTRAINT_UNIQUE)\n+ continue;\n+\n+ /* Skip unique constraints that are condeferred */\n+ if (con->condeferrable && con->condeferred)\n+ continue;\n\nI guess you probably have noticed, in the function\nremove_useless_groupby_columns,\nget_primary_key_attnos followed by get_min_unique_not_null_attnos.\nAlso, get_min_unique_not_null_attnos main code is very similar to\nget_primary_key_attnos.\n\nThey both do `pg_constraint = table_open(ConstraintRelationId,\nAccessShareLock);`\nyou might need to explain why we must open pg_constraint twice.\neither it's cheap to do it or another reason.\n\nIf scanning twice is expensive, we may need to refactor get_primary_key_attnos.\nget_primary_key_attnos only called in check_functional_grouping,\nremove_useless_groupby_columns.\n\nI added the patch to commitfest:\nhttps://commitfest.postgresql.org/46/4751/\n\n\n",
"msg_date": "Sun, 31 Dec 2023 22:37:46 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless GROUP BY columns considering unique index"
},
{
"msg_contents": "On Sat, 30 Dec 2023 at 04:05, Zhang Mingli <[email protected]> wrote:\n> So my patch make it easy: check unique index’s columns, it’s a valid candidate if all of that have NOT NULL constraint.\n> And we choose a best one who has the least column numbers in get_min_unique_not_null_attnos(), as the reason: less columns mean that more group by columns could be removed.\n\nThis patch no longer applies. We no longer catalogue NOT NULL\nconstraints, which this patch is coded to rely upon. (Likely it could\njust look at pg_attribute.attnotnull instead)\n\nAside from that, I'm not sure about the logic to prefer removing\nfunctionally dependent columns using the constraint with the least\ncolumns. If you had the PRIMARY KEY on two int columns and a unique\nindex on a 1MB varlena column, I think using the primary key would be\na better choice to remove functionally dependent columns of. Maybe\nit's ok to remove any functionally dependant columns on the primary\nkey first and try removing functionally dependent columns on unique\nconstraints and a 2nd step (or maybe only if none can be removed using\nthe PK?)\n\nAlso, why constraints rather than unique indexes? You'd need to check\nfor expression indexes and likely ignore those due to no ability to\ndetect NOT NULL for expressions.\n\nAlso, looking at the patch to how you've coded\nget_min_unique_not_null_attnos(), it looks like you're very optimistic\nabout that being a constraint that has columns mentioned in the GROUP\nBY clause. It looks like it won't work if the UNIQUE constraint with\nthe least columns gets no mention in the GROUP BY clause. That'll\ncause performance regressions from what we have today when the primary\nkey is mentioned and columns can be removed using that but a UNIQUE\nconstraint exists which has no corresponding GROUP BY columns and\nfewer unique columns than the PK. That's not good and it'll need to be\nfixed.\n\nSet to waiting on author.\n\nDavid\n\n\n",
"msg_date": "Thu, 12 Sep 2024 13:43:42 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless GROUP BY columns considering unique index"
},
{
"msg_contents": "On 12.09.24 03:43, David Rowley wrote:\n> On Sat, 30 Dec 2023 at 04:05, Zhang Mingli <[email protected]> wrote:\n>> So my patch make it easy: check unique index’s columns, it’s a valid candidate if all of that have NOT NULL constraint.\n>> And we choose a best one who has the least column numbers in get_min_unique_not_null_attnos(), as the reason: less columns mean that more group by columns could be removed.\n> \n> This patch no longer applies. We no longer catalogue NOT NULL\n> constraints, which this patch is coded to rely upon.\n\nWork is ongoing to revive the patch that catalogs not-null constraints: \n<https://commitfest.postgresql.org/49/5224/>. This patch should \nprobably wait for that patch at the moment.\n\n> (Likely it could just look at pg_attribute.attnotnull instead)\n\nThat won't work because you can't record dependencies on that. (This is \none of the reasons for cataloging not-null constraints as real constraints.)\n\n\n\n",
"msg_date": "Wed, 18 Sep 2024 09:28:30 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless GROUP BY columns considering unique index"
},
{
"msg_contents": "On Wed, 18 Sept 2024 at 19:28, Peter Eisentraut <[email protected]> wrote:\n>\n> On 12.09.24 03:43, David Rowley wrote:\n> > (Likely it could just look at pg_attribute.attnotnull instead)\n>\n> That won't work because you can't record dependencies on that. (This is\n> one of the reasons for cataloging not-null constraints as real constraints.)\n\nI'm not seeing any need to record constraint dependencies for this\noptimisation. It would be different for detecting functional\ndependencies in a view using a unique constraint+not null constraints\nfor ungrouped columns, but that's not what this is. This is just a\nplanner optimisation. The plan can be invalidated by a relcache\ninvalidation, which will happen if someone does ALTER TABLE DROP NOT\nNULL.\n\nFor reference, see 5b736e9cf.\n\nDavid\n\n\n",
"msg_date": "Wed, 18 Sep 2024 19:50:15 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless GROUP BY columns considering unique index"
},
{
"msg_contents": "Hi, all\n\n\nI haven't paid attention to this topic in a long time, thanks all for the advices, I will study them then update.\n\nThanks again.\n\n\nZhang Mingli\nwww.hashdata.xyz\nOn Sep 18, 2024 at 15:50 +0800, David Rowley <[email protected]>, wrote:\n> On Wed, 18 Sept 2024 at 19:28, Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 12.09.24 03:43, David Rowley wrote:\n> > > (Likely it could just look at pg_attribute.attnotnull instead)\n> >\n> > That won't work because you can't record dependencies on that. (This is\n> > one of the reasons for cataloging not-null constraints as real constraints.)\n>\n> I'm not seeing any need to record constraint dependencies for this\n> optimisation. It would be different for detecting functional\n> dependencies in a view using a unique constraint+not null constraints\n> for ungrouped columns, but that's not what this is. This is just a\n> planner optimisation. The plan can be invalidated by a relcache\n> invalidation, which will happen if someone does ALTER TABLE DROP NOT\n> NULL.\n>\n> For reference, see 5b736e9cf.\n>\n> David\n\n\n\n\n\n\n\nHi, all\n\n\n\n\nI haven't paid attention to this topic in a long time, thanks all for the advices, I will study them then update.\n\nThanks again.\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\nOn Sep 18, 2024 at 15:50 +0800, David Rowley <[email protected]>, wrote:\nOn Wed, 18 Sept 2024 at 19:28, Peter Eisentraut <[email protected]> wrote:\n\nOn 12.09.24 03:43, David Rowley wrote:\n(Likely it could just look at pg_attribute.attnotnull instead)\n\nThat won't work because you can't record dependencies on that. (This is\none of the reasons for cataloging not-null constraints as real constraints.)\n\nI'm not seeing any need to record constraint dependencies for this\noptimisation. It would be different for detecting functional\ndependencies in a view using a unique constraint+not null constraints\nfor ungrouped columns, but that's not what this is. This is just a\nplanner optimisation. The plan can be invalidated by a relcache\ninvalidation, which will happen if someone does ALTER TABLE DROP NOT\nNULL.\n\nFor reference, see 5b736e9cf.\n\nDavid",
"msg_date": "Mon, 23 Sep 2024 23:13:57 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless GROUP BY columns considering unique index"
}
] |
[
{
"msg_contents": "Currently the only way to set GUCs from a client is by using SET\ncommands or set them in the StartupMessage. I think it would be very\nuseful to be able to change settings using a message at the protocol\nlevel. For the following reasons:\n\n1. Protocol messages are much easier to inspect for connection poolers\nthan queries\n2. It paves the way for GUCs that can only be set using a protocol\nmessage (and not using SET).\n3. Being able to change GUCs while in an aborted transaction.\n4. Have an easy way to use the value contained in a ParameterStatus\nmessage as the value for a GUC\n\nI attached a patch that adds a new protocol message to set a GUC\nvalue. There's definitely still some more work that needs to be done\n(docs for new libpq APIs, protocol version bump, working protocol\nversion negotiation). But the core functionality is working. At this\npoint I'd mainly like some feedback on the general idea.\n\nThe sections below go into more detail on each of the reasons I mentioned above:\n\n(1) PgBouncer does not inspect query strings, to avoid having to\nwrite/maintain a SQL parser (even a partial one). But that means that\nclients cannot configure any behaviour of PgBouncer for their session.\nA few examples of useful things to configure would be:\na. allow changing between session and transaction pooling on the same\nconnection.\nb. intercepting changes to search_path, and routing different schemas\nto different machines (useful for Citus its schema based sharding).\nc. intercepting changing of pgbouncer.sharding_key, and route to\ndifferent machines based on this value.\n\n(2) There are currently multiple threads ongoing that propose very\nsimilar protocol changes for very different purposes. Essentially all\nof them boil down to sending a protocol message to the server to\nchange some other protocol behaviour. And the reason why they cannot\nuse GUCs, is because the driver and/or connection pooler need to know\nwhat the setting is and be able to choose it without a user running\nsome SQL suddenly changing the value. The threads I'm talking about\nare: Choosing specific types that use binary format for encoding [1].\nChanging what GUCs are reported to the client using ParameterStatus\n(a.k.a configurable GUC_REPORT) [2]. Changing the compression method\nthat is used to compress messages[3].\n\nAnother benefit could be to allow a connection pooler to configure\ncertain settings to not be changeable with SQL. For instance if a\npooler could ensure that a client couldn't later change\nsession_authorization, it could use session_authorization to set the\nuser and then multiplex client connections from different users over\nthe same connection to the database.\n\n(3) For psql it's useful to be able to control what messages it gets a\nParameterStatus for, even when the transaction is in aborted state.\nBecause that way it could decide what parameters status updates to\nrequest based on the prompt it needs to display. And the prompt can be\nchanged even during an aborted transaction.\n\n(4) PgBouncer uses the value contained in the ParameterStatus message\nto correctly set some GUCs back to their expected value. But to do\nthis you currently need to use a SET query, which in turn requires\nquoting the value using SQL quoting . This wouldn't be so bad, except\nthat GUC_LIST_QUOTE exists. Parameters with GUC_LIST_QUOTE have each\nitem in the list returned **double** quoted, and then those double\nquoted items separated by commas. 
But to be able to correctly set\nthem, they need to be given each separately **single** quoted and\nseparated by commas. Doing that would require a lot of parsing logic\nto replace double quotes with single quotes correctly. For now\npgbouncer only handles the empty string case correctly, for the\nsituations where the difference between double and single quotes\nmatters[4].\n\n[1]: https://www.postgresql.org/message-id/flat/CA%2BTgmoZyAh%2BhdN8zvHeN40n9vTstw8K1KjuWdgDuAMMbFAZqHg%40mail.gmail.com#e3a603bbc091e796148a2d660a4a1c1f\n[2]: https://www.postgresql.org/message-id/flat/CAFj8pRBFU-WzzQhNrwRHn67N0Ug8a9-0-9BOo69PPtcHiBDQMA@mail.gmail.com\n[3]: https://www.postgresql.org/message-id/flat/AB607155-8FED-4C8C-B702-205B33884CBB%40yandex-team.ru#961c695d190cdccb3975a157b22ce9d8\n[4]: https://github.com/pgbouncer/pgbouncer/blob/fb468025d61e1ffdc6dbc819558f45414e0a176e/src/varcache.c#L172-L183\n\nP.S. I included authors and some reviewers of the threads I mentioned\nfor 2 in the CC. Since this patch is meant to be a generic protocol\nchange that could be used by all of them.",
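To make point (4) above concrete, a sketch of the round-trip problem with a GUC_LIST_QUOTE parameter (the schema name is invented; the schema does not even have to exist for SET to accept it):

SET search_path = "my schema", public;
SHOW search_path;
-- reported as (and sent in ParameterStatus as):  "my schema", public
-- replaying that string requires re-quoting each element, e.g.:
SET search_path = 'my schema', 'public';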
"msg_date": "Fri, 29 Dec 2023 18:29:06 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Hi\n\npá 29. 12. 2023 v 18:29 odesílatel Jelte Fennema-Nio <[email protected]> napsal:\n\n> Currently the only way to set GUCs from a client is by using SET\n> commands or set them in the StartupMessage. I think it would be very\n> useful to be able to change settings using a message at the protocol\n> level. For the following reasons:\n>\n> 1. Protocol messages are much easier to inspect for connection poolers\n> than queries\n> 2. It paves the way for GUCs that can only be set using a protocol\n> message (and not using SET).\n> 3. Being able to change GUCs while in an aborted transaction.\n> 4. Have an easy way to use the value contained in a ParameterStatus\n> message as the value for a GUC\n>\n> I attached a patch that adds a new protocol message to set a GUC\n> value. There's definitely still some more work that needs to be done\n> (docs for new libpq APIs, protocol version bump, working protocol\n> version negotiation). But the core functionality is working. At this\n> point I'd mainly like some feedback on the general idea.\n>\n> The sections below go into more detail on each of the reasons I mentioned\n> above:\n>\n> (1) PgBouncer does not inspect query strings, to avoid having to\n> write/maintain a SQL parser (even a partial one). But that means that\n> clients cannot configure any behaviour of PgBouncer for their session.\n> A few examples of useful things to configure would be:\n> a. allow changing between session and transaction pooling on the same\n> connection.\n> b. intercepting changes to search_path, and routing different schemas\n> to different machines (useful for Citus its schema based sharding).\n> c. intercepting changing of pgbouncer.sharding_key, and route to\n> different machines based on this value.\n>\n> (2) There are currently multiple threads ongoing that propose very\n> similar protocol changes for very different purposes. Essentially all\n> of them boil down to sending a protocol message to the server to\n> change some other protocol behaviour. And the reason why they cannot\n> use GUCs, is because the driver and/or connection pooler need to know\n> what the setting is and be able to choose it without a user running\n> some SQL suddenly changing the value. The threads I'm talking about\n> are: Choosing specific types that use binary format for encoding [1].\n> Changing what GUCs are reported to the client using ParameterStatus\n> (a.k.a configurable GUC_REPORT) [2]. Changing the compression method\n> that is used to compress messages[3].\n>\n> Another benefit could be to allow a connection pooler to configure\n> certain settings to not be changeable with SQL. For instance if a\n> pooler could ensure that a client couldn't later change\n> session_authorization, it could use session_authorization to set the\n> user and then multiplex client connections from different users over\n> the same connection to the database.\n>\n> (3) For psql it's useful to be able to control what messages it gets a\n> ParameterStatus for, even when the transaction is in aborted state.\n> Because that way it could decide what parameters status updates to\n> request based on the prompt it needs to display. And the prompt can be\n> changed even during an aborted transaction.\n>\n> (4) PgBouncer uses the value contained in the ParameterStatus message\n> to correctly set some GUCs back to their expected value. But to do\n> this you currently need to use a SET query, which in turn requires\n> quoting the value using SQL quoting . 
This wouldn't be so bad, except\n> that GUC_LIST_QUOTE exists. Parameters with GUC_LIST_QUOTE have each\n> item in the list returned **double** quoted, and then those double\n> quoted items separated by commas. But to be able to correctly set\n> them, they need to be given each separately **single** quoted and\n> separated by commas. Doing that would require a lot of parsing logic\n> to replace double quotes with single quotes correctly. For now\n> pgbouncer only handles the empty string case correctly, for the\n> situations where the difference between double and single quotes\n> matters[4].\n>\n> [1]:\n> https://www.postgresql.org/message-id/flat/CA%2BTgmoZyAh%2BhdN8zvHeN40n9vTstw8K1KjuWdgDuAMMbFAZqHg%40mail.gmail.com#e3a603bbc091e796148a2d660a4a1c1f\n> [2]:\n> https://www.postgresql.org/message-id/flat/CAFj8pRBFU-WzzQhNrwRHn67N0Ug8a9-0-9BOo69PPtcHiBDQMA@mail.gmail.com\n> [3]:\n> https://www.postgresql.org/message-id/flat/AB607155-8FED-4C8C-B702-205B33884CBB%40yandex-team.ru#961c695d190cdccb3975a157b22ce9d8\n> [4]:\n> https://github.com/pgbouncer/pgbouncer/blob/fb468025d61e1ffdc6dbc819558f45414e0a176e/src/varcache.c#L172-L183\n>\n> P.S. I included authors and some reviewers of the threads I mentioned\n> for 2 in the CC. Since this patch is meant to be a generic protocol\n> change that could be used by all of them.\n>\n\n+1\n\nPavel",
"msg_date": "Fri, 29 Dec 2023 19:09:16 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 2023-12-29 at 18:29 +0100, Jelte Fennema-Nio wrote:\n> 2. It paves the way for GUCs that can only be set using a protocol\n> message (and not using SET).\n\nThat sounds useful for GUCs that can interfere with the client, such as\nclient_encoding or the proposed GUC in you referred to at [1].\n\n> There's definitely still some more work that needs to be done\n> (docs for new libpq APIs, protocol version bump, working protocol\n> version negotiation).\n\nThat is my biggest concern right now: what will new clients connecting\nto old servers do?\n\nIf the version is bumped, should we look around for other unrelated\nprotocol changes to make at the same time? Do we want a more generic\nform of negotiation?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 29 Dec 2023 10:32:15 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> Currently the only way to set GUCs from a client is by using SET\n> commands or set them in the StartupMessage. I think it would be very\n> useful to be able to change settings using a message at the protocol\n> level. For the following reasons:\n\n> 1. Protocol messages are much easier to inspect for connection poolers\n> than queries\n\nUnless you somehow forbid clients from setting GUCs in the old way,\nexactly how will that help a pooler?\n\n> 2. It paves the way for GUCs that can only be set using a protocol\n> message (and not using SET).\n\nThis is assuming facts not in evidence. Why would we want such a thing?\n\n> 3. Being able to change GUCs while in an aborted transaction.\n\nI'm really dubious that that's sane. How will it interact with, for\nexample, changes to the same GUC done in the now-failed transaction?\nOr for that matter, changes that happen later in the current\ntransaction? It seems like you can't even think about this unless\nyou deem GUC changes made this way to be non-transactional, which\nseems very wart-y and full of consistency problems.\n\n> 4. Have an easy way to use the value contained in a ParameterStatus\n> message as the value for a GUC\n\nI agree that GUC_LIST_QUOTE is a mess, but \"I'm too lazy to deal\nwith that\" seems like a rather poor argument for instead expending\nthe amount of labor involved in a protocol change.\n\nOn the whole, this feels like you are trying to force some things\ninto the GUC model that should not be there. I do perceive that\nthere are things that could be protocol-level variables, but\ntrying to say they are a kind of GUC seems like it will create\nmore problems than it solves.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Dec 2023 13:38:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 2023-12-29 at 13:38 -0500, Tom Lane wrote:\n> > 2. It paves the way for GUCs that can only be set using a protocol\n> > message (and not using SET).\n> \n> This is assuming facts not in evidence. Why would we want such a\n> thing?\n\nThe problem came up during the binary_formats GUC discussion: it\ndoesn't really make sense to change that with a SQL query, and doing so\ncan cause strange things to happen.\n\nWe already have the issue with client_encoding and binary format COPY,\nso arguably it's not worth trying to solve it. But protocol-only GUCs\nwas one idea that came up.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 29 Dec 2023 11:14:46 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Fri, 2023-12-29 at 13:38 -0500, Tom Lane wrote:\n>> This is assuming facts not in evidence. Why would we want such a\n>> thing?\n\n> The problem came up during the binary_formats GUC discussion: it\n> doesn't really make sense to change that with a SQL query, and doing so\n> can cause strange things to happen.\n> We already have the issue with client_encoding and binary format COPY,\n> so arguably it's not worth trying to solve it. But protocol-only GUCs\n> was one idea that came up.\n\nYeah, there's definitely an issue about what level of the client-side\nsoftware ought to be able to set such parameters. I'm not sure that\n\"only the lowest level\" is the correct answer though. As an example,\nlibpq doesn't especially care what encoding it's dealing with, nor\n(AFAIR) whether COPY data is text or binary. The calling application\nprobably cares, but then we end up needing a bunch of new plumbing to\npass the settings through. That's not really providing a lot of\nvalue-add IMO.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Dec 2023 14:36:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 29 Dec 2023 at 19:32, Jeff Davis <[email protected]> wrote:\n> On Fri, 2023-12-29 at 18:29 +0100, Jelte Fennema-Nio wrote:\n> > There's definitely still some more work that needs to be done\n> > (docs for new libpq APIs, protocol version bump, working protocol\n> > version negotiation).\n>\n> That is my biggest concern right now: what will new clients connecting\n> to old servers do?\n>\n> If the version is bumped, should we look around for other unrelated\n> protocol changes to make at the same time? Do we want a more generic\n> form of negotiation?\n\nThis is not that big of a deal. Since it's only an addition of a new\nmessage type, it's completely backwards compatible with the current\nprotocol version. i.e. as long as a client just doesn't send it when\nthe server reports an older protocol version everything works fine.\nThe protocol already has version negotiation built in and the server\nimplements it in a reasonable way. The only problem is that no-one\nbothered to implement the client side of it in libpq, because it\nwasn't necessary yet since 3.x only had a single minor version.\n\nPatch v20230911-0003[5] from thread [3] implements client side\nhandling in (imho) a sane way. I think it probably still needs some\nsmall tweaks and discussion on if this is the exact user facing API\nthat we want. But there's no big hurdles implementation or protocol\nwise to make the next libpq client backwards compatible with old\nservers. I think it's worth merging something like that patch anyway,\nbecause that's necessary for pretty much any protocol changes we would\nlike to do. After that there's pretty much no downside to bumping\nminor versions of the protocol anymore, so we could even do it every\nrelease if needed. Thus I don't think it's necessary to bulk any\nprotocol changes together.\n\n[3]: https://www.postgresql.org/message-id/flat/AB607155-8FED-4C8C-B702-205B33884CBB%40yandex-team.ru#961c695d190cdccb3975a157b22ce9d8\n[5]: https://www.postgresql.org/message-id/attachment/150192/v20230911-0003-allow-to-connect-to-server-with-major-protocol-versi.patch\n\n\n",
"msg_date": "Sat, 30 Dec 2023 00:07:15 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 29 Dec 2023 at 19:38, Tom Lane <[email protected]> wrote:\n>> Jelte Fennema-Nio <[email protected]> writes:\n> > 1. Protocol messages are much easier to inspect for connection poolers\n> > than queries\n>\n> Unless you somehow forbid clients from setting GUCs in the old way,\n> exactly how will that help a pooler?\n\nA co-operating client could choose to only use this new message type\nto edit GUCs such as search_path or pgbouncer.sharding_key using this\nmethod. That's already a big improvement over the status quo, where\nPgBouncer (and most other poolers) only understands GUC changes by\nobserving the ParameterStatus responses from Postgres. At which point\nit is obviously too late to make any routing decisions, because to get\nthe ParamaterStatus back the pooler already needs to have forwarded\nthe SET command to an actual Postgres server. The same holds for any\nsetting changes that are specific to the pooler and postgres doesn't\neven know about, such as pgbouncer.pool_mode\n\n> > 3. Being able to change GUCs while in an aborted transaction.\n>\n> I'm really dubious that that's sane. How will it interact with, for\n> example, changes to the same GUC done in the now-failed transaction?\n> Or for that matter, changes that happen later in the current\n> transaction? It seems like you can't even think about this unless\n> you deem GUC changes made this way to be non-transactional, which\n> seems very wart-y and full of consistency problems.\n\nI think that's a fair criticism of the current patch. I do think it's\nquite useful for drivers/poolers not to have to worry about their\nchanges to protocol settings being rolled back unexpectedly because\nthey made the change while the client was doing a transaction.\nParticularly because it's easy for poolers to detect when a Sync is\nsent without parsing queries, but not when a BEGIN is sent (PgBouncer\nuses the ReadyForQuery response from the server to detect if a\ntransaction is open or not). But I agree that this behaviour is\nfraught with problems for any non-protocol level settings.\n\nI feel like a reasonable solution would be to make this ParameterSet\nmessage transactional after all, but explicitly define the relevant\nprotocol-only GUCs to be non-transactional.\n\n> I agree that GUC_LIST_QUOTE is a mess, but \"I'm too lazy to deal\n> with that\" seems like a rather poor argument for instead expending\n> the amount of labor involved in a protocol change.\n\nFair enough, honestly this is more of a bonus benefit to me.\n\nOn Fri, 29 Dec 2023 at 20:36, Tom Lane <[email protected]> wrote:\n> Yeah, there's definitely an issue about what level of the client-side\n> software ought to be able to set such parameters. I'm not sure that\n> \"only the lowest level\" is the correct answer though. As an example,\n> libpq doesn't especially care what encoding it's dealing with, nor\n> (AFAIR) whether COPY data is text or binary. The calling application\n> probably cares, but then we end up needing a bunch of new plumbing to\n> pass the settings through. That's not really providing a lot of\n> value-add IMO.\n\nThe value-add that I think it provides is forcing the user to use a\nwell defined interface when requesting behavioral changes to the\nprotocol. A client/pooler probably still wants to allow a user to\nchange these protocol level settings, but it wants to exert some\ncontrol when that happens. 
Clients could do so by exposing a basic\nwrapper around PQparameterSet, and registering that they should parse\nresponses from postgres differently now. And poolers could do this by\nunderstanding the message, and taking a relevant action (such as\nenabling/disabling compression on outgoing messages). By having a well\ndefined interface, clients and poolers know when these protocol\nrelated settings are being changed and can possibly even slightly\nchange the value before sending it to the server (e.g. by adding a few\nextra GUCs to the list of GUCs that should be GUC_REPORTed because\nPgBouncer wants to track them).\n\nAchieving the same without this new ParameterSet message and\nPQparameterSet might seem possible too, but then all clients and\npoolers would need to implement a partial (and probably buggy) SQL\nparser to check if the query that is being sent is \"SET\nbinary_protocol = ...\". And even this would not really be enough,\nsince a function would be able to change the relevant GUC without the\nclient ever sending SET ...\n\nSo, to summarize: Every piece of software in between the user writing\nqueries and postgres sending responses has some expectation and\nrequirements on what stuff looks like at the protocol level. Requiring\nthose intermediaries to look at the application layer (i.e. the actual\nqueries) to understand what to expect at the protocol layer is a\nlayering violation. Thus having a generic way to change protocol level\nconfiguration options at the actual protocol level is needed to ensure\nlayer separation.\n\n\n",
"msg_date": "Sat, 30 Dec 2023 00:31:36 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 29 Dec 2023 at 19:38, Tom Lane <[email protected]> wrote:\n> > 2. It paves the way for GUCs that can only be set using a protocol\n> > message (and not using SET).\n>\n> This is assuming facts not in evidence.\n\nHow about instead of talking about protocol-only GUCs (which are\nindeed currently a theoretical concept), we start considering this\npatch as a way to modify parameters for protocol extensions. Protocol\nextension parameters (starting with _pq_.) can currently only be\nconfigured using the StartupMessage and afterwards there is no way to\nmodify them.\n\nI believe [1], [2] and [3] from my initial email could all use such\nprotocol extension parameters, if those parameters could be changed\nafter the initial startup.\n\n\n",
"msg_date": "Sat, 30 Dec 2023 00:49:40 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "> How about instead of talking about protocol-only GUCs (which are\n> indeed currently a theoretical concept), we start considering this\n> patch as a way to modify parameters for protocol extensions. Protocol\n> extension parameters (starting with _pq_.) can currently only be\n> configured using the StartupMessage and afterwards there is no way to\n> modify them.\n>\n> I believe [1], [2] and [3] from my initial email could all use such\n> protocol extension parameters, if those parameters could be changed\n> after the initial startup.\n\nWhat if we allowed the client to send `ParameterStatus` messages to\nthe server to reconfigure protocol extensions that were requested at\nstartup? Such processing would be independent of the transaction\nlifecycle since protocol-level options aren't related to transactions.\nAny errors in the set value would be handled with an `ErrorResponse`\n(including if an extension was not reconfigurable after connection\nstartup), and success with a new `ReadyForQuery` message. The actual\neffect of an extension change must be delayed until after the\nReadyForQuery has been transmitted, though I don't know if that is\nfeasible or if we would need to instead implicitly assume changes were\nsuccessful and just close the connection on error. We wouldn't need\nto bump the protocol version since a client wouldn't (shouldn't) send\nthese messages unless it successfully negotiated the relevant protocol\nextension(s) at startup.\n\n\n",
"msg_date": "Sat, 30 Dec 2023 09:05:27 -0600",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Sat, 30 Dec 2023 at 16:05, Jacob Burroughs\n<[email protected]> wrote:\n> What if we allowed the client to send `ParameterStatus` messages to\n> the server to reconfigure protocol extensions that were requested at\n> startup?\n\nSending ParameterStatus from the frontend is not possible, because\nSync already claimed the 'S' message type on the frontend side. (and\neven if that weren't the case I think the name ParameterStatus is not\ndescriptive of what the message would do when you'd send it from the\nfrontend.\n\n> Such processing would be independent of the transaction\n> lifecycle since protocol-level options aren't related to transactions.\n> Any errors in the set value would be handled with an `ErrorResponse`\n> (including if an extension was not reconfigurable after connection\n> startup), and success with a new `ReadyForQuery` message.\n\nIf we only consider modifying protocol extension parameters with\nParameterSet, then I think I like the idea of reusing ReadyForQuery\ninstead of introducing ParamaterSetComplete. I like that it implicitly\nmakes clear that you should not send ParameterStatus while you already\nhave an open pipeline for protocol extension parameters. If we go this\napproach, we should probably explicitly disallow sending ParameterSet\nwhile a pipeline.\n\nHowever, if we also allow using ParameterSet to change regular runtime\nparameters, then I think this approach makes less sense. Because then\nyou might want to batch regular runtime parameter together with other\nactions in a pipeline, and use the expected \"ignore everything after\nthe first error\"\n\n> The actual\n> effect of an extension change must be delayed until after the\n> ReadyForQuery has been transmitted, though I don't know if that is\n> feasible or if we would need to instead implicitly assume changes were\n> successful and just close the connection on error.\n\nI'm trying to think about how a client would want to handle a failure\nwhen changing a protocol extension. Is there really an action it can\nreasonably take at that point? I guess it might depend on the exact\nextension, but I do think that in many cases closing the connection is\nthe only reasonable response. Maybe the server should even close the\nconnection with a FATAL error when it receives a ParameterSet for a\nprotocol extension but it fails to apply it.\n\n> We wouldn't need\n> to bump the protocol version since a client wouldn't (shouldn't) send\n> these messages unless it successfully negotiated the relevant protocol\n> extension(s) at startup.\n\nI think I'd still prefer to bump the minor version, even if it's just\nso that we've done it for the first time and we fixed all the libpq\nissues with it. But I also think it makes sense from a versioning\nperspective, if there's new message types that can be sent by the\nclient, which do not correspond to a protocol extension, then I think\nthe only reasonable thing is to update the version number. Otherwise\nyou'd basically need to define the ParameterSet message to be a part\nof every new protocol extension that you would add. That seems more\nconfusing than saying that version 3.1 supports this new ParameterSet\nmessage type.\n\n\n",
"msg_date": "Sat, 30 Dec 2023 18:57:03 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Sat, 30 Dec 2023 at 00:07, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Fri, 29 Dec 2023 at 19:32, Jeff Davis <[email protected]> wrote:\n> > That is my biggest concern right now: what will new clients connecting\n> > to old servers do?\n>\n> This is not that big of a deal. Since it's only an addition of a new\n> message type, it's completely backwards compatible with the current\n> protocol version. i.e. as long as a client just doesn't send it when\n> the server reports an older protocol version everything works fine.\n> The protocol already has version negotiation built in and the server\n> implements it in a reasonable way. The only problem is that no-one\n> bothered to implement the client side of it in libpq, because it\n> wasn't necessary yet since 3.x only had a single minor version.\n>\n> Patch v20230911-0003[5] from thread [3] implements client side\n> handling in (imho) a sane way.\n\nOkay I updated this patchset to start with better handling of the\nNegotiateProtocolVersion packet. The implementation is quite different\nfrom [5] after all, but the core idea is the same. It also allows the\nconnection to continue to work in case of a missing protocol\nextension, which is necessary for the libpq compression patchset[6].\nIn case the protocol extension is a requirement, the client can still\nchoose to disconnect by checking the return value of the newly added\nPQunsupportedProtocolExtensions function.\n\nI also fixed the tests of my final patch, but haven't changed the\nbehaviour of it in any of the suggested ways.\n\n[3]: https://www.postgresql.org/message-id/flat/AB607155-8FED-4C8C-B702-205B33884CBB%40yandex-team.ru#961c695d190cdccb3975a157b22ce9d8",
"msg_date": "Tue, 2 Jan 2024 15:57:37 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 1:38 PM Tom Lane <[email protected]> wrote:\n> Jelte Fennema-Nio <[email protected]> writes:\n> > 1. Protocol messages are much easier to inspect for connection poolers\n> > than queries\n>\n> Unless you somehow forbid clients from setting GUCs in the old way,\n> exactly how will that help a pooler?\n\nI agree that for this to work out we need the things that you can set\nthis way to be able to be set in only this way. But I'm also a huge\nfan of the idea -- if done correctly, it would solve the problem of an\nend client sneaking SET ROLE or SET SESSION AUTHORIZATION past the\npooler, which is a huge issue that we really ought to address.\n\n> > 2. It paves the way for GUCs that can only be set using a protocol\n> > message (and not using SET).\n>\n> This is assuming facts not in evidence. Why would we want such a thing?\n\nSee above.\n\n> > 3. Being able to change GUCs while in an aborted transaction.\n>\n> I'm really dubious that that's sane. How will it interact with, for\n> example, changes to the same GUC done in the now-failed transaction?\n> Or for that matter, changes that happen later in the current\n> transaction? It seems like you can't even think about this unless\n> you deem GUC changes made this way to be non-transactional, which\n> seems very wart-y and full of consistency problems.\n\nI agree with these complaints.\n\n> > 4. Have an easy way to use the value contained in a ParameterStatus\n> > message as the value for a GUC\n>\n> I agree that GUC_LIST_QUOTE is a mess, but \"I'm too lazy to deal\n> with that\" seems like a rather poor argument for instead expending\n> the amount of labor involved in a protocol change.\n\nNot sure about this one. It seems unwarranted to introduce an\naccusation of laziness when someone is finally making the effort to\naddress what is IMV a pretty serious deficiency in our current\nimplementation, but I have no educated opinion about what if anything\nought to be done about GUC_LIST_QUOTE or how that relates to the\npresent effort.\n\n> On the whole, this feels like you are trying to force some things\n> into the GUC model that should not be there. I do perceive that\n> there are things that could be protocol-level variables, but\n> trying to say they are a kind of GUC seems like it will create\n> more problems than it solves.\n\nThis is not a bad thought. If we made the things that were settable\nwith this mechanism distinct from the set of things that are settable\nas GUCs, that might work out better. For example, it completely the\nobjection to (3). But I'm not 100% sure, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 12:51:24 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, 2 Jan 2024 at 18:51, Robert Haas <[email protected]> wrote:\n> > On the whole, this feels like you are trying to force some things\n> > into the GUC model that should not be there. I do perceive that\n> > there are things that could be protocol-level variables, but\n> > trying to say they are a kind of GUC seems like it will create\n> > more problems than it solves.\n>\n> This is not a bad thought. If we made the things that were settable\n> with this mechanism distinct from the set of things that are settable\n> as GUCs, that might work out better. For example, it completely the\n> objection to (3). But I'm not 100% sure, either.\n\nIt seems like the general sentiment in the thread is that protocol\nparameters are different enough from GUCs that they should be handled\nseparately. And I think I agree. Partially because of the\ntransactional reasons mentioned upthread, but also because allowing to\nchange defaults of protocol parameters in postgresql.conf seems like a\nreally bad idea. If the client does not specify the protocol\nparameter, then they almost certainly want whatever value the default\nhas been for ages. Giving a DBA the ability to change that seems like\na recipe to accidentally break many clients.\n\nIt does cause some new questions for this patchset though:\n- Since we currently don't have any protocol parameters. How do I test\nit? Should I add a debug protocol parameter specifically for this\npurpose? Or should my tests just always error at the moment?\n- If I create a debug protocol parameter, what designs should it\ninherit from GUCs? e.g. parsing and input validation sounds useful.\nAnd where should it be stored e.g. protocol_params.c?\n- How do you get the value of a protocol parameter status? Do we\nexpect the client to keep track of what it has sent? Do we always send\na ParameterStatus message whenever it is changed? Or should we add a\nParamaterGet protocol message too?\n\n> > > 4. Have an easy way to use the value contained in a ParameterStatus\n> > > message as the value for a GUC\n> >\n> > I agree that GUC_LIST_QUOTE is a mess, but \"I'm too lazy to deal\n> > with that\" seems like a rather poor argument for instead expending\n> > the amount of labor involved in a protocol change.\n>\n> Not sure about this one. It seems unwarranted to introduce an\n> accusation of laziness when someone is finally making the effort to\n> address what is IMV a pretty serious deficiency in our current\n> implementation, but I have no educated opinion about what if anything\n> ought to be done about GUC_LIST_QUOTE or how that relates to the\n> present effort.\n\nTo clarify, the main thing that I want to do is take the value from\nParameterStatus and somehow, without having to escape it, set that\nvalue for that GUC for a different session. As explained above, I now\nthink that this newly proposed protocol message is a bad fit for this.\nBut I still think that that is not a weird thing to want.\n\nThe current situation is that you get a value in ParameterStatus, but\nbefore it's useful you first need to do some escaping. And to know how\nto escape it requires you to maintain a hardcoded list of GUCs that\nare GUC_LIST_QUOTE (which might change in the next Postgres release).\nI see two options to address this:\n\n1. Add another protocol message that sets GUCs instead of protocol\nparameters (which would behave just like SET, i.e. it would be\ntransactional)\n2. Support preparing \"SET search_path = $1\" and then bind a single\nstring to it. i.e. 
have this PSQL command not fail with a syntax\nerror:\n> SET search_path = $1; \\bind '\"user\", public';\nERROR: 42601: syntax error at or near \"$1\"\nLINE 1: SET search_path = $1;\n\nI'll take a look at implementing option 2, since I have a feeling\nthat's less likely to be controversial.\n\n\n",
"msg_date": "Wed, 3 Jan 2024 00:20:56 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "What if we created a new guc flag `GUC_PROTOCOL_EXTENSION` (or a\nbetter name), used that for any of the GUCs options that should *only*\nbe set via a `ParameterSet` protocol message, and then prevent\nchanging those through SET/RESET/RESET ALL (but I don't see a reason\nto prevent reading them through SHOW). (I would imagine most/all\n`GUC_PROTOCOL_EXTENSION` GUCs would also set `GUC_NOT_IN_SAMPLE`,\n`GUC_DISALLOW_IN_FILE`, and maybe `GUC_NO_SHOW_ALL`.). Looking back at\nuse cases in the original message of this thread, I would imagine at\nleast the \"configurable GUC report\" and \"protocol compression\" would\nlikely want to be flagged with `GUC_PROTOCOL_EXTENSION` since both\nwould seem like things that would require low-level client\ninvolvement/support when they change.\n\nI was already drafting this email before the last one in the thread\ncame through, but I think this proposal addresses a few things from\nit:\n> - Since we currently don't have any protocol parameters. How do I test\n> it? Should I add a debug protocol parameter specifically for this\n> purpose? Or should my tests just always error at the moment?\n`protcocol_managed_params` would serve as the example/test parameter\nin this patch series\n> - If I create a debug protocol parameter, what designs should it\n> inherit from GUCs? e.g. parsing and input validation sounds useful.\n> And where should it be stored e.g. protocol_params.c?\nI'm thinking using flag(s) on GUCs would get useful mechanics without\nneeding to implement an entirely separate param system. It appears\nthere are existing flags that cover almost everything we would want\nexcept for actually marking a GUC as a protocol extension.\n> - How do you get the value of a protocol parameter status? Do we\n> expect the client to keep track of what it has sent? Do we always send\n> a ParameterStatus message whenever it is changed? Or should we add a\n> ParamaterGet protocol message too?\nI would think it would be reasonable for a client to track its own\nParameterSets if it has a reason to care about them, since presumably\nit needs to do something differently after setting them anyways?\nThough I could see an argument for sending `ParameterStatus` if we\nwant that to serve as an \"ack\" so a client could wait until it had\nactive confirmation from a server that a new parameter value was\napplied when necessary.\n\n> To clarify, the main thing that I want to do is take the value from\n> ParameterStatus and somehow, without having to escape it, set that\n> value for that GUC for a different session. As explained above, I now\n> think that this newly proposed protocol message is a bad fit for this.\n> But I still think that that is not a weird thing to want.\n\nNow, for the possibly nutty part of my proposal, what if we added a\nGUC string list named `protcocol_managed_params` that had the\n`GUC_PROTOCOL_EXTENSION` flag set, which would be a list of GUC names\nto treat as if `GUC_PROTOCOL_EXTENSION` is set on them within the\ncontext of the session. If a client wanted to use `ParameterSet` to\nmanage a non-`GUC_PROTOCOL_EXTENSION` managed parameter, it would\nfirst issue a `ParameterSet` to add the parameter to the\n`protcocol_managed_params` list, and then issue a `ParameterSet` to\nactually set the parameter. If for some reason (e.g. some pgbouncer\nuse cases) the client then wanted the parameter to be settable\n\"normally\" it would issue a third `ParameterSet` to remove the\nparameter from `protcocol_managed_params`. 
Regarding transactional\nsemantics, I *think* it would be reasonable to specify that changes\nmade through `ParameterSet` are not transactional and that\n`protcocol_managed_params` itself cannot be changed while a\ntransaction is active. That way within a given transaction a given\nparameter has *only* transactional semantics or has *only*\nnontransactional semantics, which I think avoids the potential\nconsistency problems? I think this would both address the pgbouncer\nuse case where it wants to be able to reflect a ParameterStatus\nmessage back to the server while being agnostic to its contents while\nalso addressing the \"SET ROLE\"/\"SET SESSION AUTHORIZATION\" issue: a\npooler would just add `session_authorization` to the\n`protcocol_managed_params` list and then it would have full control\nover the user by not passing along `ParameterSet` messages setting the\nuser from the client. (I think it would generally be reasonable to\nstill allow setting the role within the restricted\nsession_authorization role, but that would be a pooler decision not a\nPG one.)\n\n\n",
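A sketch of the SQL-visible effect of that proposal, assuming the pooler had added session_authorization to the protocol-managed parameter list described above; the error text is invented:

SET SESSION AUTHORIZATION client_b;
ERROR:  parameter "session_authorization" can only be set with a protocol message
RESET ALL;  -- would likewise skip protocol-managed parameters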
"msg_date": "Tue, 2 Jan 2024 17:43:44 -0600",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Wed, 3 Jan 2024 at 00:43, Jacob Burroughs <[email protected]> wrote:\n> What if we...\n\nGreat suggestions! Attached is a v3 version of the patchset that\nimplements all of this, including documentation.",
"msg_date": "Wed, 3 Jan 2024 16:54:30 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "New patchset attached, where I split up the patches in smaller logical units.\nNote that the first 4 patches in this series are not making any\nprotocol changes. All they do is set up infrastructure in the code\nthat allows us to make protocol changes in the future.\n\nI hope that those 4 should all be fairly uncontroversial, especially\npatch 1 seems a no-brainer to me. Note that this infrastructure would\nbe needed for any patchset that introduces protocol changes.\n\nThe 5th only bumps the version\n\nThe 6th introduces the _pq_.protocol_managed_parms protocol parameter\n\nThe 7th adds the new ParameterSet protocol message",
"msg_date": "Fri, 5 Jan 2024 16:13:48 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 10:14 AM Jelte Fennema-Nio <[email protected]> wrote:\n> New patchset attached, where I split up the patches in smaller logical units.\n> Note that the first 4 patches in this series are not making any\n> protocol changes. All they do is set up infrastructure in the code\n> that allows us to make protocol changes in the future.\n>\n> I hope that those 4 should all be fairly uncontroversial, especially\n> patch 1 seems a no-brainer to me. Note that this infrastructure would\n> be needed for any patchset that introduces protocol changes.\n>\n> The 5th only bumps the version\n>\n> The 6th introduces the _pq_.protocol_managed_parms protocol parameter\n>\n> The 7th adds the new ParameterSet protocol message\n\nApologies in advance if this sounds harsh but ... I don't like this\ndesign. I have two main complaints.\n\nFirst, I don't see a reason to bump the protocol version. The whole\nreason for adding support for protocol options (_pq_.whatever) is so\nthat we wouldn't have to change the protocol version to add new\nmessage types. At some point we may want to make a change that's large\nenough that we have to do that, or a large enough number of small\nchanges that it seems worthwhile, but as long as we can add new\nfeatures without bumping the protocol version, it seems advantageous\nto avoid doing so. It seems easier to reason about and less likely to\nbreak older clients.\n\nSecond, I don't really like the idea of selectively turning GUCs into\nprotocol-managed parameters. I think there are a relatively small\nnumber of things that we'd ever want to manage on a protocol level,\nbut this design would force us to make it work for any GUC whatsoever.\nThat seems like it adds a lot of complexity for not much benefit. If\nsomebody makes a random GUC into a protocol-managed parameter and then\nsomebody updates postgresql.conf and then the config file is reloaded,\nI guess we need to refuse to adopt the new value of that parameter?\nThat doesn't seem like a lot of fun to implement. What about the fact\nthat GUCs are transactional and protocol-managed parameters maybe\nshouldn't be? We can dodge a lot of complexity here, I think, if we\njust put the things into this new mechanism that have a clear reason\nto be there.\n\nTo answer a few questions from upthread (MHO):\n\n> - Since we currently don't have any protocol parameters. How do I test\n> it? Should I add a debug protocol parameter specifically for this\n> purpose? Or should my tests just always error at the moment?\n\nI think we should start by picking one or two protocol-managed\nparameters that we want to add, and then adding those in a way that is\ndistinct from the GUC system. I don't think we should add an abstract\nsystem divorced from any particular application.\n\n> - How do you get the value of a protocol parameter status? Do we\n> expect the client to keep track of what it has sent? Do we always send\n> a ParameterStatus message whenever it is changed? Or should we add a\n> ParamaterGet protocol message too?\n\nI would expect that the client would have full control over these\nvalues and so the configured value would always be the default (which\nshould be non-configurable to avoid ambiguity) unless the client set\nit to something else (in which case the client should know the value).\nSo I would think that we'd only need a message to set parameter\nvalues, not one to get them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 11:28:16 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> Second, I don't really like the idea of selectively turning GUCs into\n> protocol-managed parameters. I think there are a relatively small\n> number of things that we'd ever want to manage on a protocol level,\n> but this design would force us to make it work for any GUC whatsoever.\n\nI'd not been following along for the last few days, but I agree that\nwe don't want to make it apply to any GUC at all.\n\n> I think we should start by picking one or two protocol-managed\n> parameters that we want to add, and then adding those in a way that is\n> distinct from the GUC system. I don't think we should add an abstract\n> system divorced from any particular application.\n\nThere is a lot of infrastructure we'll have to re-invent if\nwe make this completely independent of GUCs, notably:\n* a way to establish the initial/default value\n* a way to display the active value\n\nSo my thought was that this should be implemented as an (unchangeable)\nflag bit for a GUC variable, GUC_PROTOCOL_ONLY or something like that,\nand then we would refuse SQL-based set attempts on that. The behavior\nwould end up being very much like PGC_BACKEND variables, in that we\ncould allow all the existing setting methods to work to establish\na session's initial value; but after that, it can only change within\nthat session via a protocol message from the client. With that\nrule, it's okay for the protocol message to be nontransactional since\nthere's no interaction with transactions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jan 2024 11:53:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 11:53 AM Tom Lane <[email protected]> wrote:\n> There is a lot of infrastructure we'll have to re-invent if\n> we make this completely independent of GUCs, notably:\n> * a way to establish the initial/default value\n> * a way to display the active value\n>\n> So my thought was that this should be implemented as an (unchangeable)\n> flag bit for a GUC variable, GUC_PROTOCOL_ONLY or something like that,\n> and then we would refuse SQL-based set attempts on that. The behavior\n> would end up being very much like PGC_BACKEND variables, in that we\n> could allow all the existing setting methods to work to establish\n> a session's initial value; but after that, it can only change within\n> that session via a protocol message from the client. With that\n> rule, it's okay for the protocol message to be nontransactional since\n> there's no interaction with transactions.\n\nMaybe, but it seems like it might be complicated to make that work\nwith the existing GUC code. GUCs are fundamentally transactional, I\nthink.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:08:39 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 5 Jan 2024 at 18:08, Robert Haas <[email protected]> wrote:\n> On Fri, Jan 5, 2024 at 11:53 AM Tom Lane <[email protected]> wrote:\n> > There is a lot of infrastructure we'll have to re-invent if\n> > we make this completely independent of GUCs, notably:\n> > * a way to establish the initial/default value\n> > * a way to display the active value\n> >\n> > So my thought was that this should be implemented as an (unchangeable)\n> > flag bit for a GUC variable, GUC_PROTOCOL_ONLY or something like that,\n> > and then we would refuse SQL-based set attempts on that. The behavior\n> > would end up being very much like PGC_BACKEND variables, in that we\n> > could allow all the existing setting methods to work to establish\n> > a session's initial value; but after that, it can only change within\n> > that session via a protocol message from the client. With that\n> > rule, it's okay for the protocol message to be nontransactional since\n> > there's no interaction with transactions.\n>\n> Maybe, but it seems like it might be complicated to make that work\n> with the existing GUC code. GUCs are fundamentally transactional, I\n> think.\n\nThey are not fundamentally transactional afaict based on the changes\nthat were needed so far. It makes sense too, because e.g. SIGHUP\nshould change the GUC value if the config changed no matter if the\ncurrent transaction aborts or succeeds.\n\nBased on my experience when writing the current set of patches I think\nthe GUC infrastructure fits quite well with protocol extension\nparameters. When you add a new flag bit it feels very natural\n(whatever you may call this flag GUC_PROTOCOL_ONLY,\nGUC_PROTOCOL_EXTENSION, or something else).\n\n\n",
"msg_date": "Fri, 5 Jan 2024 18:20:46 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 12:20 PM Jelte Fennema-Nio <[email protected]> wrote:\n> They are not fundamentally transactional afaict based on the changes\n> that were needed so far. It makes sense too, because e.g. SIGHUP\n> should change the GUC value if the config changed no matter if the\n> current transaction aborts or succeeds.\n\nWell, AtEOXact_GUC either reverts or puts back changes to GUC values\nthat have happened in that (sub)transaction, depending on whether the\n(sub)transaction committed or aborted. To make that work, there's a\n\"stack\" of GUC values for any given setting. For a non-transactional\nvalue, we wouldn't have all that infrastructure...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:26:18 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
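To illustrate the distinction being debated in the message above, here is a minimal self-contained C model. It is not PostgreSQL source code and every name in it is invented for illustration: it only contrasts a setting whose previous value is restored on abort (the AtEOXact_GUC-style stack described above) with a protocol-managed setting that is applied unconditionally and has no stack to unwind.

#include <stdio.h>
#include <string.h>

/*
 * Toy model, not PostgreSQL source: a "transactional" setting keeps a
 * stack of saved values so an aborted (sub)transaction can restore the
 * old value, while a protocol-managed setting has no stack at all and
 * simply keeps whatever was last set.
 */
#define MAX_NEST 8

typedef struct
{
    char saved[MAX_NEST][64];   /* one saved value per (sub)xact level */
    int  depth;
    char value[64];
} txn_setting;

static void
txn_begin(txn_setting *s)
{
    strcpy(s->saved[s->depth++], s->value);   /* push the current value */
}

static void
txn_abort(txn_setting *s)
{
    strcpy(s->value, s->saved[--s->depth]);   /* pop: undo the change */
}

int
main(void)
{
    txn_setting guc = {.value = "off"};
    char protocol_param[64] = "off";          /* no stack, no undo */

    txn_begin(&guc);
    strcpy(guc.value, "on");                  /* like SET inside a transaction */
    strcpy(protocol_param, "on");             /* like a ParameterSet message */
    txn_abort(&guc);                          /* like ROLLBACK */

    printf("transactional setting after abort: %s\n", guc.value);         /* off */
    printf("protocol-managed setting after abort: %s\n", protocol_param); /* on */
    return 0;
}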
{
"msg_contents": "On Fri, 5 Jan 2024 at 17:28, Robert Haas <[email protected]> wrote:\n> First, I don't see a reason to bump the protocol version. The whole\n> reason for adding support for protocol options (_pq_.whatever) is so\n> that we wouldn't have to change the protocol version to add new\n> message types. At some point we may want to make a change that's large\n> enough that we have to do that, or a large enough number of small\n> changes that it seems worthwhile, but as long as we can add new\n> features without bumping the protocol version, it seems advantageous\n> to avoid doing so. It seems easier to reason about and less likely to\n> break older clients.\n\nWhile I agree that it's not strictly necessary, I also feel that you\nthink the a minor protocol version bump a much bigger deal than it is\n(afaict). The protocol is designed in such a way that bumping the\nminor version can be done without any problems. There is no\npossibility of breaking older clients, because the server will\nsilently downgrade to the version that the client asks for.\n\nI would be okay not doing the actual bump, but I think at least having\nthe infrastructure in libpq to support a future bump would be quite\nuseful (i.e. patch 0002 and 0003).\n\n> > - How do you get the value of a protocol parameter status? Do we\n> > expect the client to keep track of what it has sent? Do we always send\n> > a ParameterStatus message whenever it is changed? Or should we add a\n> > ParamaterGet protocol message too?\n>\n> I would expect that the client would have full control over these\n> values and so the configured value would always be the default (which\n> should be non-configurable to avoid ambiguity) unless the client set\n> it to something else (in which case the client should know the value).\n> So I would think that we'd only need a message to set parameter\n> values, not one to get them.\n\nBased on my short experience writing these patches, even for\ntesting/debugging it's quite useful to be able to do SHOW\n_pq_.some_protocol_parameter\n\nI think that's a major benefit of re-using the GUC system.\n\n\n",
"msg_date": "Fri, 5 Jan 2024 18:31:56 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Jan 5, 2024 at 11:53 AM Tom Lane <[email protected]> wrote:\n>> So my thought was that this should be implemented as an (unchangeable)\n>> flag bit for a GUC variable, GUC_PROTOCOL_ONLY or something like that,\n>> and then we would refuse SQL-based set attempts on that. The behavior\n>> would end up being very much like PGC_BACKEND variables, in that we\n>> could allow all the existing setting methods to work to establish\n>> a session's initial value; but after that, it can only change within\n>> that session via a protocol message from the client. With that\n>> rule, it's okay for the protocol message to be nontransactional since\n>> there's no interaction with transactions.\n\n> Maybe, but it seems like it might be complicated to make that work\n> with the existing GUC code. GUCs are fundamentally transactional, I\n> think.\n\nI think it'd be quite simple. As I said, it's just a small variation\non how some GUCs already work. The only thing that's really\ntransactional is SQL-driven updates, which'd be disallowed for this\nclass of variables.\n\n(After consuming a little more caffeine, I wonder if the class ought\nto be defined by a new PGC_XXX context value, rather than a flag bit.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jan 2024 12:35:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 12:35 PM Tom Lane <[email protected]> wrote:\n> I think it'd be quite simple. As I said, it's just a small variation\n> on how some GUCs already work. The only thing that's really\n> transactional is SQL-driven updates, which'd be disallowed for this\n> class of variables.\n\nWell, I know better than to tell you something is hard if you think\nit's easy. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:41:56 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Okay, attempt number 5 attached. The primary changes with the previous\nversion are:\n\n1. Split up commits a bit differently. I think each commit now stands\non its own and is an incremental improvement that could be committed\nwithout any of the later ones being necessary. Descriptions of why\neach commit is useful can be found in the commit message. Especially\nthe first 3 (and even first 4) seem rather uncontroversial to me.\n\n2. ParameterSet now works for normal backend parameters too. For\nnormal parameters it works just like SET does (so in a transactional\nmanner). For protocol extension parameters it still works too, but\nthere it errors when trying to change protocol extension parameters\nwithin a transaction. Thus (imho) elegantly avoiding confusing\nsituations around rolling back protocol extension parameters.\n\n3. As Tom suggested, it now uses a PGC_PROTOCOL context for the\nprotocol extension GUCs, instead of using a GUC_PROTOCOL_EXTENSION\nflag. This definitely seems like the cleanest way of adding \"protocol\nonly\" parameters to the current GUC system.\n\n4. _pq_.protocol_managed_params its list of GUCs can now only contain\nGUCs that have the PGC_USERSET or PGC_SUSET context.\n\nOn Fri, 5 Jan 2024 at 17:28, Robert Haas <[email protected]> wrote:\n> First, I don't see a reason to bump the protocol version.\n\nIt's still bumping the protocol version. I think this is a necessity\nwith the current changeset, since ParameterSet now applies to plain\nGUCs too and. As I clarified in a previous email, this does **not**\nbreak old clients, since the server silently downgrades when an older\nprotocol version is requested.\n\n> Second, I don't really like the idea of selectively turning GUCs into\n> protocol-managed parameters. I think there are a relatively small\n> number of things that we'd ever want to manage on a protocol level,\n> but this design would force us to make it work for any GUC whatsoever.\n\nIt now limits the possible GUCs that can be changed to PGC_USERSET and\nPGC_SUSET. If desired, we could limit it even further by using an\nallowlist of reasonable GUCs or set a flag on GUCs that can be\n\"upgraded\" . Things that seem at least reasonable to me are \"role\",\n\"session_authorization\", \"client_encoding\".\n\n> If somebody makes a random GUC into a protocol-managed parameter and then\n> somebody updates postgresql.conf and then the config file is reloaded,\n> I guess we need to refuse to adopt the new value of that parameter?\n\nThis was actually really easy to do after changing to PGC_PROTOCOL.\nThis exact behaviour is needed for PGC_BACKEND parameters, so I simply\nused that same small if statement for PGC_PROTOCOL.\n\nIf you still think we shouldn't do this, then the only other option I\ncan think of is to only allow GUCs with the GUC_DISALLOW_IN_FILE flag.\nThis would rule out adding client_encoding in the list though, but\nusing role and session_authorization would still be possible.",
"msg_date": "Mon, 8 Jan 2024 23:51:53 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
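The behaviour described in point 2 of the message above can be sketched in a few lines. The following is a toy model rather than server code, and the type and function names are invented; it only shows the decision rule: plain GUCs change transactionally exactly like SET, while protocol-extension parameters refuse to change inside an explicit transaction block so they never have to be rolled back.

#include <stdbool.h>
#include <stdio.h>

typedef enum
{
    CONTEXT_USERSET,    /* an ordinary user-settable GUC */
    CONTEXT_PROTOCOL    /* a protocol-extension parameter */
} ParamContext;

/* Toy model of how a server could react to a ParameterSet message. */
static bool
handle_parameter_set(ParamContext ctx, bool in_transaction_block,
                     const char *name, const char *value)
{
    if (ctx == CONTEXT_PROTOCOL && in_transaction_block)
    {
        printf("ERROR: cannot change \"%s\" inside a transaction block\n", name);
        return false;
    }
    /* a real server would go through the usual GUC machinery here */
    printf("SET %s = %s%s\n", name, value,
           ctx == CONTEXT_PROTOCOL ? " (non-transactional)" : "");
    return true;
}

int
main(void)
{
    handle_parameter_set(CONTEXT_USERSET, true, "work_mem", "'64MB'");
    handle_parameter_set(CONTEXT_PROTOCOL, true, "_pq_.report_parameters", "role");
    handle_parameter_set(CONTEXT_PROTOCOL, false, "_pq_.report_parameters", "role");
    return 0;
}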
{
"msg_contents": "v6 attached with the following changes:\n\n1. Fixed rebase conflicts with master\n\n2. removed PGC_S_PROTOCOL (but kept PGC_PROTOCOL and PGC_SU_PROTOCOL).\nThis extra source level was not needed. And after some more testing I\nrealized this extra source level even caused problems, since protocol\nmessages could not override values set by SET commands anymore.\n\n3. Added a new patch (0010) with a protocol parameter to configure\nwhich GUCs are GUC_REPORT. This is partially to show that the GUC\ninterface makes sense for protocol parameters, but also because this\nwould be an extremely useful feature for connection poolers. And [2]\nwould be able to use this too.\n\n4. Don't error, but only warn, if a GUC provided to\n_pq_.protocol_managed_params is unknown. It seemed useful to be able\nto specify GUCs in this list that not all Postgres versions support in\nthe StartupMessage, without having to guess what postgres version\nyou're going to connect to.\n\n[2]: https://www.postgresql.org/message-id/flat/CAFj8pRBFU-WzzQhNrwRHn67N0Ug8a9-0-9BOo69PPtcHiBDQMA@mail.gmail.com",
"msg_date": "Wed, 10 Jan 2024 18:26:36 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, Jan 8, 2024 at 5:52 PM Jelte Fennema-Nio <[email protected]> wrote:\n> It's still bumping the protocol version. I think this is a necessity\n> with the current changeset, since ParameterSet now applies to plain\n> GUCs too and. As I clarified in a previous email, this does **not**\n> break old clients, since the server silently downgrades when an older\n> protocol version is requested.\n\nCould you explain why you think that the protocol version bump is necessary?\n\n> > Second, I don't really like the idea of selectively turning GUCs into\n> > protocol-managed parameters. I think there are a relatively small\n> > number of things that we'd ever want to manage on a protocol level,\n> > but this design would force us to make it work for any GUC whatsoever.\n>\n> It now limits the possible GUCs that can be changed to PGC_USERSET and\n> PGC_SUSET. If desired, we could limit it even further by using an\n> allowlist of reasonable GUCs or set a flag on GUCs that can be\n> \"upgraded\" . Things that seem at least reasonable to me are \"role\",\n> \"session_authorization\", \"client_encoding\".\n\nI don't know whether that limit helps anything or not, and you haven't\nreally said why you imposed it. Personally, I think that the login\nrole should be changeable via a protocol message and make it just as\nif we'd logged in using the selected role originally, except that a\nfurther protocol message can change it again (when not in\ntransaction). SET ROLE and SET SESSION AUTHORIZATION would behave in\naccordance with the idea that it was the originally selected role.\nThen, a connected client can't distinguish between being directly\nconnected to the database in a session created for that role and being\nconnected to a pooler which has used this protocol message to create a\nsession that is effectively for that role. With your design, the\nclient can see behavioral differences between those cases.\n\nI agree that client_encoding feels like a protocol parameter rather\nthan a GUC as we know them today. How to get there with reasonable\nimpact on backward compatibility, I'm not sure. I'm still afraid that\ntrying to allow this kind of nail-down for a broad range of GUCs (even\nif not all) is going to be messy. But I'm also plenty willing to\nlisten to contrary opinions. I hear yours, but I wonder what others\nthink? Tom particularly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Jan 2024 16:12:11 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Wed, 10 Jan 2024 at 22:12, Robert Haas <[email protected]> wrote:\n> Could you explain why you think that the protocol version bump is necessary?\n\nPatch 0006 adds a new protocol message to the protocol. Somehow the\nclient needs to be told that the server understands that message.\nUsing the protocol version seems like the best/simplest/cleanest way\nto do that to me.\n\nIn theory we could add a dedicated protocol extension parameter (e.g.\n_pq_.enable_protocol_level_set) that the client would need to set to\ntrue before it would be able to use ParameterSet. But that just sounds\nlike introducing unnecessary complexity to me.\n\nBumping the protocol version carries exactly the same level of risk as\nadding new protocol extension parameters. Both will always allow old\nclients to connect to the newer server. And both also allow a new\nclient to connect to an old server just fine as well, as long as that\nserver has ae65f6066dc3d19a55f4fdcd3b30003c5ad8dbed (which was\nintroduced in PG11.0 and was backpatched to all then supported PG\nversions).\n\n> > It now limits the possible GUCs that can be changed to PGC_USERSET and\n> > PGC_SUSET. If desired, we could limit it even further by using an\n> > allowlist of reasonable GUCs or set a flag on GUCs that can be\n> > \"upgraded\" . Things that seem at least reasonable to me are \"role\",\n> > \"session_authorization\", \"client_encoding\".\n>\n> I don't know whether that limit helps anything or not, and you haven't\n> really said why you imposed it.\n\nThe main reason I did this is to make sure that the required context\ncan only be hardened, not softened. e.g. it would be very bad if\nPGC_POSTMASTER GUCs could suddenly be changed with a protocol message.\nSo it was more meant as fixing a bug, than really reducing the number\nof GUCs this has an impact on significantly.\n\n> I'm still afraid that\n> trying to allow this kind of nail-down for a broad range of GUCs (even\n> if not all) is going to be messy. But I'm also plenty willing to\n> listen to contrary opinions. I hear yours, but I wonder what others\n> think? Tom particularly.\n\nI wouldn't mind heavily reducing the GUCs that can be nailed-down like\nthis. For my usecase (connection pooling) I only really care about it\nbeing possible to nail-down session_authorization.\n\nHonestly, I care more about patch 0010 than patch 0008. Patch 0008\nsimply seemed like the easiest way to demonstrate the ParameterSet\nmessage.\n\n\n",
"msg_date": "Wed, 10 Jan 2024 23:53:50 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Wed, 10 Jan 2024 at 23:53, Jelte Fennema-Nio <[email protected]> wrote:\n> Honestly, I care more about patch 0010 than patch 0008. Patch 0008\n> simply seemed like the easiest way to demonstrate the ParameterSet\n> message.\n\nSo to be clear, if you consider 0008 the most controversial/risky part\nof this patchset (which it sounds like that's the case). I'd be fine\nwith removing that for now. IMHO the first 7 patches would be very\nuseful on their own, because they unblock any other patches that want\nto introduce protocol changes (examples of those are 0008 and 0010).\n\nDo you think that is a good idea? I could fairly easily modify the\ntests in 0009 to remove any things related to 0008.\n\n\n",
"msg_date": "Thu, 11 Jan 2024 17:20:25 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, 11 Jan 2024 at 17:20, Jelte Fennema-Nio <[email protected]> wrote:\n> So to be clear, if you consider 0008 the most controversial/risky part\n> of this patchset (which it sounds like that's the case). I'd be fine\n> with removing that for now.\n\nI haven't removed 0008 yet, since I'd like some feedback first if that\nmakes sense. But I did add two new patches in the middle of the\npatchset (which shift the later patch numbers by 2):\n\n0007: Adds a new \\parameterset meta-command to psql to be able to more\neasily test the new ParameterSet message\n\n0008: Shows warning in psql if the server is not supporting the newest\nprotocol version.",
"msg_date": "Tue, 16 Jan 2024 14:43:23 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 8:43 AM Jelte Fennema-Nio <[email protected]> wrote:\n> I haven't removed 0008 yet, since I'd like some feedback first if that\n> makes sense. But I did add two new patches in the middle of the\n> patchset (which shift the later patch numbers by 2):\n>\n> 0007: Adds a new \\parameterset meta-command to psql to be able to more\n> easily test the new ParameterSet message\n>\n> 0008: Shows warning in psql if the server is not supporting the newest\n> protocol version.\n\nSorry for not getting back to this right away; there are quite a few\nthreads competing for my attention.\n\nI think it's flat-out not viable to bump the protocol version to 3.1\nany time in the next few years. NegotiateProtocolVerison has existed\nsince ae65f6066dc3d19a55f4fdcd3b30003c5ad8dbed (2017-11-21, 11.0,\n10.2, 9.6.7, 9.5.11, 9.4.16, 9.3.21), but libpq didn't handle it until\nbbf9c282ce92272ed7bf6771daf0f9efa209e61b (2022-11-17, 16.0) -- and\neven that handling doesn't really seem like what we want, because it\nlooks like it will reject anything where the protocol version doesn't\nmatch exactly, rather than downgrading. To fix that, I think we need\nsome parts of what you've got in 0002, where we have an earliest and a\nlatest minor protocol version that we'll accept, and the server is\nallowed to downgrade from the latest thing we support, just as long as\nthey don't try to downgrade below the earliest thing we support.\n\nBut I think we would want to have those changes in all supported\nbranches before we could think of requesting 3.1 or higher by default.\nImagine that in v17 we both added server support for protocol version\n3.1 and also adopted your 0001. Then, a v17 libpq would fail to\nconnect to a v16 or earlier PostgreSQL instance. In effect, it would\nbe a complete wire compatibility break. There's no way that such a\npatch is going to be acceptable. If I were to commit a patch from you\nor anyone else that does that, many angry people would show up to yell\nat both of us. So unless I'm misunderstanding the situation, 0001 is\npretty much dead on arrival for now and for quite a while to come.\nThat doesn't necessarily mean that we couldn't *optionally* request\n3.1, e.g. controlled by a connection keyword. I would imagine that the\nuser would write e.g. 'user=rhaas password=banana protocolroles=true'\nand libpq would say \"oh, because the user wanted protocolroles=true I\nneed to request at least 3.1\" -- but if that weren't there, the server\nwould still request only 3.0 and nothing would break.\n\nAlso, I'm pretty doubtful that we want\nPQunsupportedProtocolExtensions(). That seems like allowing the client\nto have too much direct visibility into what's happening at the\nprotocol level. That kind of interface might make sense if we were\ntrying to support unknown protocol extensions from third parties, but\nfor stuff in core, I think there should be specific APIs that relate\nto specific features e.g. you call PQsetWireProtocolRole(char\n*whatever) and it returns success or failure according to whether that\ncapability is available without telling you how that's negotiated on\nthe wire.\n\nSo in summary, I think parts of 0002 are a good idea, but 0001 is not realistic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jan 2024 15:36:06 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
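The "earliest and latest minor protocol version" rule suggested in the message above fits in a few lines. The sketch below is standalone and illustrative only: the packing macros mirror the ones in libpq's pqcomm.h, but the EARLIEST/LATEST cutoffs are assumptions made for the example, not committed behaviour.

#include <stdbool.h>
#include <stdio.h>

#define PG_PROTOCOL(m, n)     (((m) << 16) | (n))
#define PG_PROTOCOL_MAJOR(v)  ((v) >> 16)
#define PG_PROTOCOL_MINOR(v)  ((v) & 0x0000ffff)

#define PG_PROTOCOL_EARLIEST  PG_PROTOCOL(3, 0)   /* oldest version we still speak */
#define PG_PROTOCOL_LATEST    PG_PROTOCOL(3, 1)   /* version we request by default */

/*
 * Client-side downgrade rule: accept whatever the server offers via
 * NegotiateProtocolVersion as long as the major version matches and the
 * minor version falls within [EARLIEST, LATEST].
 */
static bool
negotiate_ok(int server_offered)
{
    if (PG_PROTOCOL_MAJOR(server_offered) != PG_PROTOCOL_MAJOR(PG_PROTOCOL_LATEST))
        return false;
    return PG_PROTOCOL_MINOR(server_offered) >= PG_PROTOCOL_MINOR(PG_PROTOCOL_EARLIEST) &&
           PG_PROTOCOL_MINOR(server_offered) <= PG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST);
}

int
main(void)
{
    printf("server offers 3.0: %s\n", negotiate_ok(PG_PROTOCOL(3, 0)) ? "accept" : "reject");
    printf("server offers 3.1: %s\n", negotiate_ok(PG_PROTOCOL(3, 1)) ? "accept" : "reject");
    printf("server offers 2.0: %s\n", negotiate_ok(PG_PROTOCOL(2, 0)) ? "accept" : "reject");
    return 0;
}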
{
"msg_contents": "On Tue, 16 Jan 2024 at 21:36, Robert Haas <[email protected]> wrote:\n> Sorry for not getting back to this right away; there are quite a few\n> threads competing for my attention.\n\nNo worries, I know it's a busy time.\n\n> But I think we would want to have those changes in all supported\n> branches before we could think of requesting 3.1 or higher by default.\n> Imagine that in v17 we both added server support for protocol version\n> 3.1 and also adopted your 0001. Then, a v17 libpq would fail to\n> connect to a v16 or earlier PostgreSQL instance. In effect, it would\n> be a complete wire compatibility break. There's no way that such a\n> patch is going to be acceptable. If I were to commit a patch from you\n> or anyone else that does that, many angry people would show up to yell\n> at both of us. So unless I'm misunderstanding the situation, 0001 is\n> pretty much dead on arrival for now and for quite a while to come.\n\nIt's understandable you're worried about breaking clients, but afaict\n**you are actually misunderstanding the situation**. I totally agree\nthat we cannot bump the protocol version without also merging 0002\n(that's why the version bump is in patch 0005 not patch 0001). But\n0002 does **not** need to be backported to all supported branches. The\nonly libpq versions that can ever receive a NegotiateVersionProtocol\nare ones that request 3.1, and since none of the pre-17 libpq versions\never request 3.1 there's no need for them to be able to handle\nNegotiateVersionProtocol.\n\nIf you try out the patchset, you will see that you can connect with\npsql16 to postgres17 and psql17 to postgres16. Both without any\nproblems. The only time when you will get problems is if you connect\nto a server from before these versions that you mentioned (2017-11-21,\n11.0, 10.2, 9.6.7, 9.5.11, 9.4.16, 9.3.21)\n\n> Also, I'm pretty doubtful that we want\n> PQunsupportedProtocolExtensions().\n\nI definitely think we should include this API. As a client author (and\neven user), you might want to know what features are supported by the\nserver you are connected to. That way you can avoid calling functions\nthat would otherwise fail. This is also the reason why\nPQprotocolVersion() and PQserverVersion() exist. IMHO\nPQunsupportedProtocolExtensions() is simply a natural addition to\nthose already existing feature-discovery APIs.\n\nI'll move the addition of PQunsupportedProtocolExtensions() to a\nseparate patch though, since I do agree that it's a separate item from\nthe rest of 0002.\n\n> That seems like allowing the client\n> to have too much direct visibility into what's happening at the\n> protocol level. That kind of interface might make sense if we were\n> trying to support unknown protocol extensions from third parties, but\n> for stuff in core, I think there should be specific APIs that relate\n> to specific features e.g. you call PQsetWireProtocolRole(char\n> *whatever) and it returns success or failure according to whether that\n> capability is available without telling you how that's negotiated on\n> the wire.\n\nI think we have a very different idea of what is a desirable API for\nclient authors that use libpq to build their clients. libpq its API is\npretty low level, so I think it makes total sense for client authors\nto know what protocol extension parameters exist. 
It seems totally\nacceptable for me to have them call PQsetParameter directly:\n\nPQsetParameter(\"_pq_.protocol_roles\", \"true\")\nPQsetParameter(\"_pq_.report_parameters\", \"role,search_path\")\n\nOtherwise we need to introduce **two** new functions for every\nprotocol extension that is going to be introduced, a blocking and a\nnon-blocking one (e.g. PQsetWireProtocolRole() and\nPQsendSetWireProtocolRole()). And that seems like unnecessary API\nbloat to me.\n\nTo be clear, I think it would probably make sense for client authors\nto expose functions like that for the users of the client. But I think\nlibpq should not add an API surface that can easily be avoided (e.g.\nthere's also no function to begin a transaction, even though pretty\nmuch every client exposes one).\n\n\n",
"msg_date": "Wed, 17 Jan 2024 03:21:26 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
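For reference, the two discovery functions mentioned in the message above, PQprotocolVersion() and PQserverVersion(), are already part of released libpq. A minimal compilable example of using them looks like the following; the connection string is a placeholder, and the proposed PQunsupportedProtocolExtensionParameters() is deliberately not used here because it only exists in the patchset.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* placeholder conninfo; adjust for your environment */
    PGconn *conn = PQconnectdb("dbname=postgres");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    printf("protocol major version: %d\n", PQprotocolVersion(conn));
    printf("server version: %d\n", PQserverVersion(conn));

    PQfinish(conn);
    return 0;
}

Build with something like "cc discover.c -lpq" (plus whatever include and library paths your installation needs).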
{
"msg_contents": "On Tue, Jan 16, 2024 at 9:21 PM Jelte Fennema-Nio <[email protected]> wrote:\n> It's understandable you're worried about breaking clients, but afaict\n> **you are actually misunderstanding the situation**. I totally agree\n> that we cannot bump the protocol version without also merging 0002\n> (that's why the version bump is in patch 0005 not patch 0001). But\n> 0002 does **not** need to be backported to all supported branches. The\n> only libpq versions that can ever receive a NegotiateVersionProtocol\n> are ones that request 3.1, and since none of the pre-17 libpq versions\n> ever request 3.1 there's no need for them to be able to handle\n> NegotiateVersionProtocol.\n\nOK, yeah, fuzzy thinking on my part.\n\n> I think we have a very different idea of what is a desirable API for\n> client authors that use libpq to build their clients. libpq its API is\n> pretty low level, so I think it makes total sense for client authors\n> to know what protocol extension parameters exist. It seems totally\n> acceptable for me to have them call PQsetParameter directly:\n>\n> PQsetParameter(\"_pq_.protocol_roles\", \"true\")\n> PQsetParameter(\"_pq_.report_parameters\", \"role,search_path\")\n>\n> Otherwise we need to introduce **two** new functions for every\n> protocol extension that is going to be introduced, a blocking and a\n> non-blocking one (e.g. PQsetWireProtocolRole() and\n> PQsendSetWireProtocolRole()). And that seems like unnecessary API\n> bloat to me.\n\nI think it's hard to say for sure what API is going to work well here,\nbecause we just don't have much experience with this. I do agree that\nwe want to avoid API bloat. However, I also think that the reason why\nthe API you're proposing here looks good in this case is because libpq\nitself doesn't really need to do anything differently for these\nparameters. It doesn't actually really change anything about the\nprotocol; it only nails down the server behavior in a way that can't\nbe changed. Another current request is to have a way to have certain\ndata types always be sent in binary format, specified by OID. Do we\nwant that to be written as PQsetParameter(\"always_binary_format\",\n\"123,456,789,101112\") or do we want it to maybe look more like\nPQalwaysBinaryFormat(int count, Oid *stuff)? Or, another example, say\nwe want to set client_to_server_compression_method=lz4. It's very\ndifficult for me to believe that libpq should be doing strcmp()\nagainst the proposed protocol parameter settings and thus realizing\nthat it needs to start compressing ... especially since there might be\ncompression parameters (like level or degree of parallelism) that the\nclient needs and the server doesn't.\n\nAlso, I never intended for _pq_ to become a user-visible namespace\nthat people would have to care about; that was just a convention that\nI adopted to differentiate setting a protocol parameter from setting a\nGUC. I think it's a mistake to make that string something users have\nto worry about directly. It wouldn't begin and end with an underscore\nif it were intended to be user-visible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jan 2024 08:39:18 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Wed, 17 Jan 2024 at 14:39, Robert Haas <[email protected]> wrote:\n> I think it's hard to say for sure what API is going to work well here,\n> because we just don't have much experience with this.\n\nAgreed, but I strongly believe PQunsupportedProtocolExtensions() is\nuseful regardless of the API choice.\n\n> I also think that the reason why\n> the API you're proposing here looks good in this case is because libpq\n> itself doesn't really need to do anything differently for these\n> parameters. It doesn't actually really change anything about the\n> protocol; it only nails down the server behavior in a way that can't\n> be changed. Another current request is to have a way to have certain\n> data types always be sent in binary format, specified by OID. Do we\n> want that to be written as PQsetParameter(\"always_binary_format\",\n> \"123,456,789,101112\") or do we want it to maybe look more like\n> PQalwaysBinaryFormat(int count, Oid *stuff)? Or, another example, say\n> we want to set client_to_server_compression_method=lz4.\n\nI think from libpq's perspective there are two categories of protocol\nextension parameters:\n1. parameters that change behaviour in a way that does not matter to libpq\n2. parameters that change in such a way that libpq needs to change its\nbehaviour too (by parsing or sending messages differently than it\nnormally would).\n\n_pq_.protocol_roles, _pq_.report_parameters, and (afaict) even\n_pq_.always_binary_format would fall into category 1. But\n_pq_.client_to_server_compression_method would fall into category 2,\nbecause libpq should start to compress the messages that it is\nsending.\n\nI think if you look at it like that, then using PQsetParameter for\nparameters in category 1 makes sense. And indeed you'd likely want a\ndedicated API for each parameter in category 2, and probably have\nPQsetParameter error for these parameters. In any case it seems like\nsomething that can be decided on a case by case basis. However, to\nmake this future proof, I think it might be best if PQsetParameter\nwould error for protocol extension parameters that it does not know\nabout.\n\n> Also, I never intended for _pq_ to become a user-visible namespace\n> that people would have to care about\n\nI agree that forcing Postgres users to learn about this prefix is\nprobably unwanted. But requiring client authors to learn about the\nprefix seems acceptable to me.\n\n\n",
"msg_date": "Wed, 17 Jan 2024 16:15:07 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\", but it seems like\nthere were some CFbot test failures last time it was run [1]. Please\nhave a look and post an updated version if necessary.\n\n======\n[1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4736\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 14:31:04 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, 22 Jan 2024 at 04:31, Peter Smith <[email protected]> wrote:\n> Hi, This patch has a CF status of \"Needs Review\", but it seems like\n> there were some CFbot test failures last time it was run [1]. Please\n> have a look and post an updated version if necessary.\n\nAh yes, I had noticed that a while back and fixed the problem in v7 of\nmy patchset but it seems a rebase conflict has caused the cfbot not to\nrun tests on that version at all. Attached is a rebased version\ntogether with following small changes:\n\n- Rename PQunsupportedProtocolExtensions to\nPQunsupportedProtocolExtensionParameters\n\n- Add PQunsupportedProtocolExtensionParameters in its own patch",
"msg_date": "Mon, 22 Jan 2024 10:19:57 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Fixed a rebase conflict.",
"msg_date": "Tue, 20 Feb 2024 15:04:07 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Fixed some conflicts again.\n\nTo summarize the current state of this patchset, it consists of 5\ndifferent parts that could be merged separately:\n1. 0001-0005 are needed for any protocol change, and hopefully\nshouldn't require much discussion\n2. 0006-0009 introduce a new protocol message that can be used to\nupdate GUCs. I think we might want some discussion on the design of\nthe protocol message, but I think the behaviour has all feedback\nincorporated.\n3. 0010 Adds GUC contexts for protocol extensions, in the way that Tom\nLane suggested.\n4. 0011 Adds a way to mark some GUCs as only changeable using protocol\nmessages, there's still some discussion needed on this (I can remove\nthis from the patchset if that makes it easier).\n5. 0012 Adds some additional tests for some of the previous features\n6. 0013 Add a protocol parameter to control which GUCs have\nGUC_REPORT. Quite straightforward imho.",
"msg_date": "Fri, 8 Mar 2024 12:47:15 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Fixed some conflicts again, as well as adding a connection option to\nchoose the requested protocol version (as discussed in[1]). This new\nconnection option is not useful when connecting to any of the\nsupported postgres versions. But it can be useful when connecting to\nPG versions before 9.3. Or when connecting to connection poolers or\nother databases that implement the postgres protocol but do not\nsupport the NegotiateProtocolVersion message.\n\n[1]: https://www.postgresql.org/message-id/flat/CAGECzQRrHn52yEX%2BFc6A9uvVbwRCxjU82KNuBirwFU5HRrNxqA%40mail.gmail.com#835914cbd55c56b36e8e7691cb743a18\n\n\n",
"msg_date": "Wed, 13 Mar 2024 19:18:07 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Wed, 13 Mar 2024 at 19:18, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> Fixed some conflicts again, as well as adding a connection option to\n> choose the requested protocol version\n\nnow with attachments...",
"msg_date": "Wed, 13 Mar 2024 19:27:06 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Mar 8, 2024 at 6:47 AM Jelte Fennema-Nio <[email protected]> wrote:\n> 1. 0001-0005 are needed for any protocol change, and hopefully\n> shouldn't require much discussion\n\nI feel bad arguing about the patches that you think are a slam-dunk,\nbut I find myself disagreeing with the design choices.\n\nRegarding 0001, I considered making this change as part of\nae65f6066dc3d19a55f4fdcd3b30003c5ad8dbed but decided against it,\nbecause it seemed like it was making the assumption that we always\nwanted to initiate new connections with the latest protocol version\nthat we know how to accept, and I wasn't sure that would always be\ntrue. I don't think it would be catastrophic if this got committed or\nanything -- it could always be changed later if the need arose -- but\nI wanted to mention that I had a reason for not doing it, and I'm\nstill not particularly convinced that we should do it.\n\nI'm really unhappy with 0002-0004. Before\nae65f6066dc3d19a55f4fdcd3b30003c5ad8dbed, any parameter included in\nthe startup message that wasn't in a short, hard-coded list was\ntreated as a request to set a GUC. That left no room for any other\ntype of protocol modification, so that commit carved out an exception,\nwhere when we something that starts with _pq_., it's assumed to be\nsetting some other kind of thing, not a GUC. But 0004 throws that out\nthe window and decides, nope, those are just GUCs, too. Even if we\ndon't have a specific reason today why we'd ever want such a thing, it\nseems short-sighted to give up on the possibility that in the future\nwe will. Because if we implement what this patch wants to do in this\nway, basically consuming the entire namespace that\nae65f6066dc3d19a55f4fdcd3b30003c5ad8dbed created in on shot, and then\nlater we want the sort of thing that I'm postulating, we'll have to\nmanufacture another new namespace for that need.\n\nAnd it seems to me that there are other ways we could do this. For\nexample, suppose we introduced just one new protocol parameter; for\nthe sake of argument, I'll call it _pq_.protocol_set. If the client\nsends this parameter and the server accepts it, then the client knows\nthat the server supports a new protocol message ProtocolSetParameter,\nwhich is the only way to set GUCs in the new PROTOCOL_EXTENSION\ncategory. New libpq functions with names like, I don't know,\nPQsetProtocolParameter(), are added to send such messages, and they\nreturn an error if there are network problems or whatever, or if the\nserver didn't accept the _pq_.protocol_set option at startup time.\n\nWith this kind of design, you're just consuming one element of the\n_pq_ namespace, and the next person who wants to do something can\nconsume one more element, and we'll be able to go on for a very long\ntime without running out of room. This is really how I intended this\nmechanism to be used, and the only real downside I see as compared to\nwhat you've done is that you can't set the protocol GUCs in the\nstartup packet, but must set them afterward via separate messages. If\nthat's a problem, then the proposal I just outline needs modification\n... but no matter what we do exactly, I don't want the very first\nprotocol extension we ever add to eat up all of _pq_. I intended that\nto support decades worth of extensibility, not just one patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 11:45:00 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
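To make the shape of that proposal a little more tangible, here is an illustrative-only C sketch of the client side. Nothing in it is real libpq code or API: FakeConn is invented, PQsetProtocolParameter(), ProtocolSetParameter and _pq_.protocol_set are the hypothetical names used in the message above, and the parameter name passed in main() is likewise made up.

#include <stdbool.h>
#include <stdio.h>

typedef struct
{
    bool protocol_set_supported;   /* did the server accept _pq_.protocol_set? */
    char last_error[256];
} FakeConn;

/* Called once startup negotiation (NegotiateProtocolVersion or not) is done. */
static void
startup_finished(FakeConn *conn, bool server_accepted_protocol_set)
{
    conn->protocol_set_supported = server_accepted_protocol_set;
}

/* Fails cleanly when the server never accepted _pq_.protocol_set. */
static bool
PQsetProtocolParameter(FakeConn *conn, const char *name, const char *value)
{
    if (!conn->protocol_set_supported)
    {
        snprintf(conn->last_error, sizeof(conn->last_error),
                 "server does not support setting protocol parameter \"%s\"", name);
        return false;
    }
    /* a real implementation would send a ProtocolSetParameter message here */
    printf("would send ProtocolSetParameter(%s = %s)\n", name, value);
    return true;
}

int
main(void)
{
    FakeConn conn = {0};

    startup_finished(&conn, false);            /* e.g. talking to an old server */
    if (!PQsetProtocolParameter(&conn, "wire_protocol_role", "pooler"))
        printf("error: %s\n", conn.last_error);

    startup_finished(&conn, true);             /* server accepted _pq_.protocol_set */
    PQsetProtocolParameter(&conn, "wire_protocol_role", "pooler");
    return 0;
}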
{
"msg_contents": "On Thu, 14 Mar 2024 at 16:45, Robert Haas <[email protected]> wrote:\n> I feel bad arguing about the patches that you think are a slam-dunk,\n> but I find myself disagreeing with the design choices.\n\nNo worries, thanks a lot for responding. I'm happy to discuss this\ndesign further. I didn't necessarily mean these patches were a\nslam-dunk. I mainly meant that these first few patches were not\nspecific to any protocol change, but are changes we should agree on\nbefore any change to the protocol is possible at all. Based on your\nresponse, we currently disagree on a bunch of core things.\n\nI'll first try to summarize my view on (future) protocol changes and\nwhy I think the current core design in this patchset is the correct\npath forward, and then go into some details inline in your response\nbelow:\n\nIn my view there can be, **by definition,** only two general types of\nprotocol changes:\n1. A \"simple protocol change\": This is one that requires agreement by\nboth the client and server, that they understand the new message types\ninvolved in this change. e.g. the ParameterSet message proposal (this\nmessage type is either supported or it's not)\n2. A \"parameterized protocol change\": This requires the same as 1, but\nshould also allow some parameterization from the client, e.g. for the\ncompression proposal, the client should specify what compression\nalgorithm the server should use to compress data when sending it to\nthe client.\n\nClient and Server can agree that a \"simple protocol change\" is\nsupported by both advertising a minimum minor protocol version. And\nfor a \"parameterized protocol change\" the client can send a _pq_\nparameter in the startup packet.\n\nSo, new _pq_ parameters should only ever be introduced for\nparameterized protocol changes. They are not meant to advertise\nsupport, they are meant to configure protocol features. For a\nnon-configurable protocol feature, we'd simply bump the protocol\nversion. And since _pq_ parameters thus always control some setting at\nthe session level, we can simply store it as a GUC, because that's how\nwe store all our parameters at a session level.\n\n> Regarding 0001, I considered making this change as part of\n> ae65f6066dc3d19a55f4fdcd3b30003c5ad8dbed but decided against it,\n> because it seemed like it was making the assumption that we always\n> wanted to initiate new connections with the latest protocol version\n> that we know how to accept, and I wasn't sure that would always be\n> true.\n\nI think given the automatic downgrade supported by the\nNegotiateProtocolVersion, there's no real down-side to requesting the\nnewest version by default. The only downside I can see is when\nconnecting to other applications (i.e. non PostgreSQL servers) that\nimplement the postgres protocol, but don't implement\nNegotiateProtocolVersion. But for that I added the\nmax_protocol_version connection option in 0006 (of my latest\npatchset).\n\n> I'm really unhappy with 0002-0004.\n\nJust to be clear, (afaict) your argument below seems to only really be\nabout 0004, not about 0002 or 0003. Was there anything about 0002 &\n0003 that you were unhappy with? 0002 & 0003 are not dependent an 0004\nimho. 
Because even when not making _pq_ parameters map to GUCs, we'd\nstill need to change libpq to not instantly close the connection\nwhenever a _pq_ parameter (or new protocol version) is not supported\nby the server (which is what 0002 & 0003 do).\n\n> That left no room for any other\n> type of protocol modification, so that commit carved out an exception,\n> where when we something that starts with _pq_., it's assumed to be\n> setting some other kind of thing, not a GUC.\n\nOkay, our interpretation is very different here. From my perspective\nintroducing a non-GUC namespace is NOT AT ALL the benefit of the _pq_\nprefix. The main benefit (imho) is that it allows sending an\n\"optional\" parameter (i.e GUC) in the startup packet. So, one where if\nthe server doesn't recognize it the connection attempt still succeeds.\nIf you specify a normal GUC in the connection parameters and the\nserver doesn't know about it, the server will close the connection.\nSo, to be able to send a GUC that depends on the protocol and/or\nserver version in an optional way, you'd need to wait for an extra\nnetwork roundtrip until the server tells you what protocol and/or\nserver version they are.\n\n> But 0004 throws that out\n> the window and decides, nope, those are just GUCs, too. Even if we\n> don't have a specific reason today why we'd ever want such a thing, it\n> seems short-sighted to give up on the possibility that in the future\n> we will.\n\nSince I believe a _pq_ parameter should only be used to control\nsettings at the session level. I don't think it would be short-sighted\nto give-up on the possibility to store them as anything else as GUCs.\nBecause in the many years that we've had GUCs, we've been able to\nstore all session settings using that infrastructure. BUT PLEASE NOTE:\nI don't think we are giving up on the thing you're describing (see\nexplanation in final part of this email)\n\n> With this kind of design, you're just consuming one element of the\n> _pq_ namespace, and the next person who wants to do something can\n> consume one more element, and we'll be able to go on for a very long\n> time without running out of room. This is really how I intended this\n> mechanism to be used, and the only real downside I see as compared to\n> what you've done is that you can't set the protocol GUCs in the\n> startup packet, but must set them afterward via separate messages. If\n> that's a problem, then the proposal I just outline needs modification\n\nI very much think that's a problem. This would mean an extra roundtrip\nat connection startup. Which, as I described above, is to me the whole\nbenefit of the _pq_ namespace.\n\n> ... but no matter what we do exactly, I don't want the very first\n> protocol extension we ever add to eat up all of _pq_. I intended that\n> to support decades worth of extensibility, not just one patch.\n\nThis seems to be the core of your argument. But honestly, I don't\nunderstand this logic at all. Why do you think that assigning _pq_\nparameters to GUCs **for the ones that match an existing GUC** would\nhave such a far reaching effect into the future. There's only a\nhandful of _pq_ parameters being proposed on the mailinglist at the\nmoment. Even if we implement all of those as GUCs, and in the future,\nwe'd want a _pq_ parameter that does not map to GUC (which I\npersonally doubt will ever be the case). 
Then we can simply change the\nserver code like so and do something special for that parameter:\n\n }\n+ else if (strcmp(nameptr, \"_pq_.some_non_guc_parameter\") == 0)\n+ {\n+     // Do something with the parameter\n+ }\n else if (strncmp(nameptr, \"_pq_.\", 5) == 0 &&\n          !find_option(nameptr, false, true, ERROR))\n {\n     /*\n      * We report unknown protocol extensions using the\n      * NegotiateProtocolVersion message instead of erroring\n      */\n\n\nThis would be completely backwards compatible afaict, because\n_pq_.some_non_guc_parameter would not have been a GUC before. So the\nonly thing that would have happened if you sent it, is that you would\nget back that the server doesn't support it in the\nNegotiateProtocolVersion packet (just like what is happening currently\nsince ae65f6066dc3d19a55f4fdcd3b30003c5ad8dbed).\n\nSO TO BE VERY CLEAR: (afaict) interpreting _pq_ parameters as GUCs\nright now does not limit our ability to do something differently for\nnew _pq_ parameters that we introduce in the future.\n\n\n",
"msg_date": "Thu, 14 Mar 2024 18:54:19 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 1:54 PM Jelte Fennema-Nio <[email protected]> wrote:\n> In my view there can be, **by definition,** only two general types of\n> protocol changes:\n> 1. A \"simple protocol change\": This is one that requires agreement by\n> both the client and server, that they understand the new message types\n> involved in this change. e.g. the ParameterSet message proposal (this\n> message type is either supported or it's not)\n> 2. A \"parameterized protocol change\": This requires the same as 1, but\n> should also allow some parameterization from the client, e.g. for the\n> compression proposal, the client should specify what compression\n> algorithm the server should use to compress data when sending it to\n> the client.\n\nYou seem to be supposing here that all protocol changes will consist\nof adding new message types. While I think that will be a common\npattern, I do not think it will be a universal one. I do agree,\nhowever, that every possible variation of the protocol is either\nBoolean-valued (i.e. are we doing X or not?) or something more\ncomplicated (i.e. how exactly are doing X?).\n\n> Client and Server can agree that a \"simple protocol change\" is\n> supported by both advertising a minimum minor protocol version. And\n> for a \"parameterized protocol change\" the client can send a _pq_\n> parameter in the startup packet.\n>\n> So, new _pq_ parameters should only ever be introduced for\n> parameterized protocol changes. They are not meant to advertise\n> support, they are meant to configure protocol features. For a\n> non-configurable protocol feature, we'd simply bump the protocol\n> version. And since _pq_ parameters thus always control some setting at\n> the session level, we can simply store it as a GUC, because that's how\n> we store all our parameters at a session level.\n\nThis is definitely not how I was thinking about it. I was imagining\nthat we wanted to reserve bumping the protocol version for more\nsignificant changes, and that we'd use _pq_ parameters for relatively\nminor new functionality whether Boolean-valued or otherwise.\n\n> I think given the automatic downgrade supported by the\n> NegotiateProtocolVersion, there's no real down-side to requesting the\n> newest version by default. The only downside I can see is when\n> connecting to other applications (i.e. non PostgreSQL servers) that\n> implement the postgres protocol, but don't implement\n> NegotiateProtocolVersion. But for that I added the\n> max_protocol_version connection option in 0006 (of my latest\n> patchset).\n\nYou might be right. This is a minor point that's not worth arguing\nabout too much.\n\n> > I'm really unhappy with 0002-0004.\n>\n> Just to be clear, (afaict) your argument below seems to only really be\n> about 0004, not about 0002 or 0003. Was there anything about 0002 &\n> 0003 that you were unhappy with? 0002 & 0003 are not dependent an 0004\n> imho. Because even when not making _pq_ parameters map to GUCs, we'd\n> still need to change libpq to not instantly close the connection\n> whenever a _pq_ parameter (or new protocol version) is not supported\n> by the server (which is what 0002 & 0003 do).\n\nI completely agree that we need to avoid slamming the connection shut.\nWhat I don't agree with is taking the list of protocol extensions that\nthe server knows about and putting it into an array of strings that\nthe user can see. I don't want the user to know or care so much about\nwhat's happening at the wire protocol level. 
The user is entitled to\nknow whether PQsetProtocolParameter() will work or not, and the user\nis entitled to know whether it has a chance of working next time if it\ndidn't work this time, and when it fails, the user is entitled to a\ngood error message explaining the reason for the failure. But the user\nis not entitled to know what negotiation took place over the wire to\nfigure that out. They shouldn't need to know that the _pq_ namespace\nexists, and they shouldn't need to know whether we negotiated the\navailability or unavailability of PQsetProtocolParameter() using [a]\nthe protocol minor version number, [b] the protocol major version\nnumber, [c] a protocol option called parameter_set, or [d] a protocol\noption called bananas_foster. Those are all things that might change\nin the future.\n\nJust as a for instance, I had a thought that we might accumulate a few\nnew message types controlled by protocol options (ParameterSet,\nAlwaysSendTypeInBinaryFormat, whatever else) while keeping the\nprotocol version as 3.0, and then eventually bump the protocol version\nto 3.1 where all of that would be mandatory functionality. So the\nprotocol parameters wouldn't be specified any more when using 3.1, but\nthey would be specified when talking to older 3.0 servers. That\ndifference shouldn't be visible to the user. The user should only know\nthe result of the negotiation.\n\n> Okay, our interpretation is very different here. From my perspective\n> introducing a non-GUC namespace is NOT AT ALL the benefit of the _pq_\n> prefix. The main benefit (imho) is that it allows sending an\n> \"optional\" parameter (i.e GUC) in the startup packet. So, one where if\n> the server doesn't recognize it the connection attempt still succeeds.\n> If you specify a normal GUC in the connection parameters and the\n> server doesn't know about it, the server will close the connection.\n\nBut this is another example of a problem that can *easily* be fixed\nwithout using up the entirety of the _pq_ namespace. You can introduce\n_pq_.dont_error_on_unknown_gucs=1 or\n_pq_.dont_error_on_these_gucs='foo, bar, baz'. The distinction between\nthe startup packet containing whizzbang=frob and instead containing\n_pq_.whizzbang=frob shouldn't be just whether an error is thrown if we\ndon't know anything about whizzbang.\n\n> > ... but no matter what we do exactly, I don't want the very first\n> > protocol extension we ever add to eat up all of _pq_. I intended that\n> > to support decades worth of extensibility, not just one patch.\n>\n> This seems to be the core of your argument. But honestly, I don't\n> understand this logic at all. Why do you think that assigning _pq_\n> parameters to GUCs **for the ones that match an existing GUC** would\n> have such a far reaching effect into the future. There's only a\n> handful of _pq_ parameters being proposed on the mailinglist at the\n> moment. Even if we implement all of those as GUCs, and in the future,\n> we'd want a _pq_ parameter that does not map to GUC (which I\n> personally doubt will ever be the case). Then we can simply change the\n> server code like so and do something special for that parameter:\n\nI guess I'm in the same position as you are -- your argument doesn't\nreally make any sense to me. That also has the unfortunate\ndisadvantage of making it difficult for me to explain why I don't\nagree with you, but let me just tick off a few things that I'm\nthinking about here:\n\n1. Connection poolers. 
If I'm talking to pgpool and pgpool is talking\nto the server, and pgpool and I agree to use compression, that's\ncompletely separate from whether pgpool and the server are using\ncompression. If I have to interrogate the compression state by\nexecuting \"SHOW some_compression_guc\", then I'm going to get the wrong\nanswer unless pgpool runs a full SQL parser on every command that I\nexecute and intercepts the ones that touch protocol parameters. That's\nbound to be expensive and unreliable -- consider something like SELECT\ncurrent_setting('some_compression_guc') || ' ' ||\ncurrent_setting('some_other_guc') which isn't half as pathological as\nit first looks. I want to be able to know the state of my protocol\nparameters by calling libpq functions that answer my questions\ndefinitively based on libpq's own internal state. libpq itself *must*\nknow what compression I'm using on my connection; the server's answer\nmay be different.\n\n2. Clarity of meaning across versions. Let's say we add a protocol\noption in the future that expands the message-type byte into a\ntwo-byte word. Failure of the two sides to agree on the value of this\nprotocol option is, fairly obviously, a catastrophe. I assume that if\nwe actually did something like this, there's a fair chance it would be\nprotocol version 4.0 rather than an option to 3.whatever, but it's a\ngood example of something that might someday need to be changed that\nis not just a new message type and about which the communicating\nparties must absolutely agree. Let's say we do as you propose and have\na GUC wide_message_types = {true | false}. Now, what happens when a\nsneaky user of an older libpq, which does not know about this option,\ntries to connect to a newer server? As I see it, in your proposal, the\nclient thinks they're just setting a GUC, but the server thinks we're\ncompletely changing up the wire protocol. Disaster ensues. From my\npoint of view, the problem is created by the fact that you're mixing\ntogether two things which ought to be kept well-separated -- the act\nof negotiating what protocol variant we're using, on the one hand, and\nthe setting of particular GUCs to particular values, on the other.\n\n3. Generally, and maybe this is just an expansion of the previous\npoint, it feels to me like you've conflated the thing you want to do\nright now with what everybody who wants to modify the protocol will\never want to do in the future. It's just all GUCs, all the time! But\nthe GUC model is actually a poor fit in all kinds of scenarios, which\nis why we have all kinds of other ways to configure things too, like\nconnection parameters for instance. Now, to be fair, it's often useful\nto expose values that are configured through some other means as\nread-only GUCs, so the dividing line between GUCs and other things\ndoes get a bit sloppy. And we're already using client_encoding, which\nis a GUC, for something that really ought not to have been handled\nthat way ... because to take the connection pooler example again,\nthere's no reason -- other than bad wire-protocol design -- why the\nencoding being used between the client and the pooler needs to match\nthe encoding being used between the pooler and the server. 
But in my\nview, this isn't evidence that we should continue to muddy the\ndistinction between things that ought to be protocol parameters and\nthings that ought to be GUCs; rather, it's evidence of the need to\nmake the distinction between the two as crisp as we possibly can.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 16:33:41 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
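To make the startup-packet mechanics argued about above concrete, here is a minimal sketch of how a v3.0 startup packet carrying one ordinary GUC and one _pq_-prefixed option is laid out on the wire. It is not code from any patch in this thread; the option name _pq_.example_option is invented purely for illustration, and only the documented framing (Int32 length, Int32 protocol version, NUL-terminated name/value pairs, trailing NUL) is assumed.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>   /* htonl */

static size_t
build_startup_packet(char *buf, size_t buflen)
{
    /* name/value pairs; "_pq_.example_option" is a made-up protocol option */
    const char *kv[] = {
        "user", "alice",
        "database", "appdb",
        "client_encoding", "UTF8",
        "_pq_.example_option", "on",
        NULL
    };
    uint32_t version = htonl((3 << 16) | 0);    /* protocol 3.0 */
    size_t off = 8;                             /* room for length + version */

    for (int i = 0; kv[i] != NULL; i++)
    {
        size_t len = strlen(kv[i]) + 1;         /* include the NUL */

        if (off + len + 1 > buflen)
            return 0;                           /* caller's buffer too small */
        memcpy(buf + off, kv[i], len);
        off += len;
    }
    buf[off++] = '\0';                          /* terminator after last pair */

    uint32_t total = htonl((uint32_t) off);     /* length includes itself */
    memcpy(buf, &total, 4);
    memcpy(buf + 4, &version, 4);
    return off;
}

int
main(void)
{
    char buf[512];
    size_t len = build_startup_packet(buf, sizeof(buf));

    printf("startup packet is %zu bytes\n", len);
    return 0;
}

Servers new enough to implement NegotiateProtocolVersion report an unrecognized _pq_ option back to the client instead of failing the connection, which is what makes including such an option speculatively possible at all.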
{
"msg_contents": "On 14.03.24 21:33, Robert Haas wrote:\n> You seem to be supposing here that all protocol changes will consist\n> of adding new message types. While I think that will be a common\n> pattern, I do not think it will be a universal one.\n\nAs an additional data point, the column encryption patch that is \ncurrently on hiatus [0] uses a protocol extension to change the format \nof existing message types.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/[email protected]\n\n> This is definitely not how I was thinking about it. I was imagining\n> that we wanted to reserve bumping the protocol version for more\n> significant changes, and that we'd use _pq_ parameters for relatively\n> minor new functionality whether Boolean-valued or otherwise.\n\nIt appears there are several different perspectives about this. My \nintuition was that a protocol version change indicates something that we \neventually want all client libraries to support. Whereas protocol \nextensions are truly opt-in.\n\nFor example, if we didn't have the ParameterStatus message and someone \nwanted to add it, then this ought to be a protocol version change, so \nthat we eventually get everyone to adopt it.\n\nBut, for example, the column encryption feature is not something I'd \nexpect all client libraries to implement, so it can be opt-in.\n\n(I believe we have added a number of new protocol messages since the \nbeginning of the 3.0 protocol, without any version change, so \"new \nprotocol message\" might not always be a good example to begin with.)\n\nI fear that if we prefer protocol extensions over version increases, \nthen we'd get a very fragmented landscape of different client libraries \nsupporting different combinations of things.\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 14:50:27 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Ugh, I seem to have somehow missed this response completely.\n\nOn Thu, 14 Mar 2024 at 21:33, Robert Haas <[email protected]> wrote:\n> While I think that will be a common\n> pattern, I do not think it will be a universal one. I do agree,\n> however, that every possible variation of the protocol is either\n> Boolean-valued (i.e. are we doing X or not?) or something more\n> complicated (i.e. how exactly are doing X?).\n\nAgreed\n\n> This is definitely not how I was thinking about it. I was imagining\n> that we wanted to reserve bumping the protocol version for more\n> significant changes, and that we'd use _pq_ parameters for relatively\n> minor new functionality whether Boolean-valued or otherwise.\n\nYeah, we definitely think differently here then. To me bumping the\nminor protocol version shouldn't be a thing that we would need to\ncarefully consider. It should be easy to do, and probably done often.\n\n> The user is entitled to\n> know whether PQsetProtocolParameter() will work or not, and the user\n> is entitled to know whether it has a chance of working next time if it\n> didn't work this time, and when it fails, the user is entitled to a\n> good error message explaining the reason for the failure. But the user\n> is not entitled to know what negotiation took place over the wire to\n> figure that out. They shouldn't need to know that the _pq_ namespace\n> exists, and they shouldn't need to know whether we negotiated the\n> availability or unavailability of PQsetProtocolParameter() using [a]\n> the protocol minor version number, [b] the protocol major version\n> number, [c] a protocol option called parameter_set, or [d] a protocol\n> option called bananas_foster. Those are all things that might change\n> in the future.\n\nSo, what approach of checking feature support are you envisioning\ninstead? A function for every feature?\nSomething like SupportsSetProtocolParameter, that returns an error\nmessage if it's not supported and NULL otherwise. And then add such a\nsupport function for every feature?\n\nI think I might agree with you that it would be nice for features that\ndepend on a protocol extension parameter, indeed because in the future\nwe might make them required and they don't have to update their logic\nthen. But for features only depending on the protocol version I\nhonestly don't see the point. A protocol version check is always going\nto continue working.\n\nI'm also not sure why you're saying a user is not entitled to this\ninformation. We already provide users of libpq a way to see the full\nPostgres version, and the major protocol version. I think allowing a\nuser to access this information is only a good thing. But I agree that\nproviding easy to use feature support functions is a better user\nexperience in some cases.\n\n> But this is another example of a problem that can *easily* be fixed\n> without using up the entirety of the _pq_ namespace. You can introduce\n> _pq_.dont_error_on_unknown_gucs=1 or\n> _pq_.dont_error_on_these_gucs='foo, bar, baz'. The distinction between\n> the startup packet containing whizzbang=frob and instead containing\n> _pq_.whizzbang=frob shouldn't be just whether an error is thrown if we\n> don't know anything about whizzbang.\n\nNice idea, but that sadly won't work well in practice. Because old PG\nversions don't know about _pq_.dont_error_on_unknown_gucs. 
So that\nwould mean we'd have to wait another 5 years (and probably more) until\nwe could set non-_pq_-prefixed GUCs safely in the startup message,\nusing this approach.\n\nTwo side-notes:\n1. I realized there is a second benefit to using _pq_ for all GUCs\nthat change the protocol level that I didn't mention in my previous\nemail: It allows clients to assume that _pq_ prefixed GUCs will change\nthe wire-protocol and that they should not allow a user of the client\nto set them willy-nilly. I'll go into this benefit more in the rest of\nthis email.\n2. To clarify: I'm suggesting that a startup packet containing\n_pq_.whizzbang would actually set the _pq_.whizzbang GUC, and not the\nwhizzbang GUC. i.e. the _pq_ prefix is not stripped-off when parsing\nthe startup packet.\n\n> I guess I'm in the same position as you are -- your argument doesn't\n> really make any sense to me. That also has the unfortunate\n> disadvantage of making it difficult for me to explain why I don't\n> agree with you, but let me just tick off a few things that I'm\n> thinking about here:\n\nThanks for listing these thoughts. That makes it much easier to\ndiscuss something concrete.\n\n> 1. Connection poolers. If I'm talking to pgpool and pgpool is talking\n> to the server, and pgpool and I agree to use compression, that's\n> completely separate from whether pgpool and the server are using\n> compression. If I have to interrogate the compression state by\n> executing \"SHOW some_compression_guc\", then I'm going to get the wrong\n> answer unless pgpool runs a full SQL parser on every command that I\n> execute and intercepts the ones that touch protocol parameters. That's\n> bound to be expensive an unreliable -- consider something like SELECT\n> current_setting('some_compression_guc') || ' ' |\n> current_setting('some_other_guc') which isn't half as pathological as\n> it first looks.\n\nTotally agreed that we shouldn't rely on poolers parsing queries.\n\n> I want to be able to know the state of my protocol\n> parameters by calling libpq functions that answer my questions\n> definitively based on libpq's own internal state. libpq itself *must*\n> know what compression I'm using on my connection; the server's answer\n> may be different.\n\nI think that totally makes sense that libpq should be able to answer\nthose questions without contacting the server, and indeed some\nintrospection should be added for that. But being able to introspect\nwhat the server thinks the setting is seems quite useful too. That\nstill doesn't solve the problem of poolers though. How about\nintroducing a new ParameterGet message type too (matching the proposed\nParameterSet), so that poolers can easily parse and intercept that\nmessage type.\n\n> 2. Clarity of meaning across versions.\n> ...\n> As I see it, in your proposal, the\n> client thinks they're just setting a GUC, but the server thinks we're\n> completely changing up the wire protocol. Disaster ensues. From my\n> point of view, the problem is created by the fact that you're mixing\n> together two things which ought to be kept well-separated -- the act\n> of negotiating what protocol variant we're using, on the one hand, and\n> the setting of particular GUCs to particular values, on the other.\n\nI totally agree that this is a problem that needs to be fixed. But I\nfeel like it is fixed by my patchset, due to two things:\n1. By requiring the _pq_ prefix for GUCs that change the wire protocol\n2. 
By using PGC_PROTOCOL to indicate that those GUCs can only be\nchanged using ParameterSet\n\nThis is the exact way how I protect the new \\parameterset meta-command\nin psql from patch 0009\n\n+ if (strncmp(\"_pq_.\", pset.parameterset_args[0], 5) == 0)\n+ {\n+ pg_log_error(\"\\\\parameterset cannot be used to change protocol\nextensions parameters\");\n+ goto error;\n+ }\n\n> 3. Generally, and maybe this is just an expansion of the previous\n> point, it feels to me like you've conflated the thing you want to do\n> right now with what everybody who wants to modify the protocol will\n> ever want to do in the future. It's just all GUCs, all the time! But\n> the GUC model is actually a poor fit in all kinds of scenarios, which\n> is why we have all kinds of other ways to configure things too, like\n> connection parameters for instance. Now, to be fair, it's often useful\n> to expose values that are configured through some other means as\n> read-only GUCs, so the dividing line between GUCs and other things\n> does get a bit sloppy.\n\nI don't understand this argument at all. You mention specifically\nconnection parameters as an argument against using GUCs, but\nconnection parameters are also GUCs. They are GUCs with the\nPGC_BACKEND GucContext. Adding a PGC_PROTOCOL GucContext like Tom\nsuggested is a really natural and well working extension to the\nexisting GUC system imho.\n\n> rather, it's evidence of the need to\n> make the distinction between the two as crisp as we possibly can.\n\nPrefixing such options with _pq_ is my suggestion of making that\ndifference very crisp.\n\n\n",
"msg_date": "Thu, 4 Apr 2024 18:45:39 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
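The psql guard quoted above boils down to a prefix test. As a rough illustration, a client or pooler that wanted to keep callers from touching protocol extension parameters through its ordinary settings path could use a helper along these lines; the function name and the sample option value are invented here.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* True if the name lives in the reserved protocol-extension namespace. */
static bool
is_protocol_extension_name(const char *name)
{
    return strncmp(name, "_pq_.", 5) == 0;
}

int
main(void)
{
    const char *names[] = {"work_mem", "_pq_.example_option", NULL};

    for (int i = 0; names[i] != NULL; i++)
        printf("%s -> %s\n", names[i],
               is_protocol_extension_name(names[i])
               ? "reject: protocol extension parameter"
               : "ok to set normally");
    return 0;
}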
{
"msg_contents": "On Thu, 4 Apr 2024 at 14:50, Peter Eisentraut <[email protected]> wrote:\n> It appears there are several different perspectives about this. My\n> intuition was that a protocol version change indicates something that we\n> eventually want all client libraries to support. Whereas protocol\n> extensions are truly opt-in.\n>\n> For example, if we didn't have the ParameterStatus message and someone\n> wanted to add it, then this ought to be a protocol version change, so\n> that we eventually get everyone to adopt it.\n>\n> But, for example, the column encryption feature is not something I'd\n> expect all client libraries to implement, so it can be opt-in.\n\nI think that is a good way of deciding between version bump vs\nprotocol extension parameter. But I'd like to make one\nclarification/addition to this logic, if the protocol feature already\nrequires opt-in from the client because of how the feature works, then\nthere's no need for a protocol extension parameter. e.g. if you're\ncolumn-encryption feature would require the client to send a\nColumnDecrypt message before the server would exhibit any behaviour\nrelating to the column-encryption feature, then the client can simply\n\"support\" the feature by never sending the ColumnDecrypt message.\nThus, in such cases a protocol extension parameter would not be\nnecessary, even if the feature is considered opt-in.\n\n> (I believe we have added a number of new protocol messages since the\n> beginning of the 3.0 protocol, without any version change, so \"new\n> protocol message\" might not always be a good example to begin with.)\n\nPersonally, I feel like this is something we should change. IMHO, we\nshould get to a state where protocol minor version bumps are so\nlow-risk that we can do them whenever we add message types. Right now\nthere are basically multiple versions of the 3.0 protocol, which is\nvery confusing to anyone implementing it. Different servers\nimplementing the 3.0 protocol without the NegotiateVersion message is\na good example of that.\n\n> I fear that if we prefer protocol extensions over version increases,\n> then we'd get a very fragmented landscape of different client libraries\n> supporting different combinations of things.\n\n+1\n\n\n",
"msg_date": "Thu, 4 Apr 2024 18:45:44 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
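For reference, the message under discussion here is NegotiateProtocolVersion ('v'), which carries the newest minor version the server supports plus the names of any _pq_ options it did not recognize. A rough, transport-free sketch of decoding its body (assuming the type byte and length have already been consumed) could look like this:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>   /* ntohl, htonl */

static void
decode_negotiate_protocol_version(const char *body, size_t len)
{
    uint32_t newest_minor;
    uint32_t n_unrecognized;
    size_t off = 8;

    if (len < 8)
        return;                             /* malformed message */
    memcpy(&newest_minor, body, 4);
    memcpy(&n_unrecognized, body + 4, 4);
    newest_minor = ntohl(newest_minor);
    n_unrecognized = ntohl(n_unrecognized);

    printf("server supports up to minor protocol version %u\n",
           (unsigned) newest_minor);
    for (uint32_t i = 0; i < n_unrecognized && off < len; i++)
    {
        printf("option not recognized by server: %s\n", body + off);
        off += strlen(body + off) + 1;      /* skip NUL-terminated name */
    }
}

int
main(void)
{
    /* Example body: newest minor = 0, one unrecognized option name. */
    char body[64];
    uint32_t v = htonl(0);
    uint32_t n = htonl(1);
    const char *opt = "_pq_.example_option";

    memcpy(body, &v, 4);
    memcpy(body + 4, &n, 4);
    memcpy(body + 8, opt, strlen(opt) + 1);
    decode_negotiate_protocol_version(body, 8 + strlen(opt) + 1);
    return 0;
}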
{
"msg_contents": "Attached is a rebased patchset",
"msg_date": "Thu, 4 Apr 2024 19:10:34 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 12:45, Jelte Fennema-Nio <[email protected]> wrote:\n\n> On Thu, 4 Apr 2024 at 14:50, Peter Eisentraut <[email protected]>\n> wrote:\n> > It appears there are several different perspectives about this. My\n> > intuition was that a protocol version change indicates something that we\n> > eventually want all client libraries to support. Whereas protocol\n> > extensions are truly opt-in.\n> >\n> > For example, if we didn't have the ParameterStatus message and someone\n> > wanted to add it, then this ought to be a protocol version change, so\n> > that we eventually get everyone to adopt it.\n> >\n> > But, for example, the column encryption feature is not something I'd\n> > expect all client libraries to implement, so it can be opt-in.\n>\n> I think that is a good way of deciding between version bump vs\n> protocol extension parameter. But I'd like to make one\n> clarification/addition to this logic, if the protocol feature already\n> requires opt-in from the client because of how the feature works, then\n> there's no need for a protocol extension parameter. e.g. if you're\n> column-encryption feature would require the client to send a\n> ColumnDecrypt message before the server would exhibit any behaviour\n> relating to the column-encryption feature, then the client can simply\n> \"support\" the feature by never sending the ColumnDecrypt message.\n> Thus, in such cases a protocol extension parameter would not be\n> necessary, even if the feature is considered opt-in.\n>\n> > (I believe we have added a number of new protocol messages since the\n> > beginning of the 3.0 protocol, without any version change, so \"new\n> > protocol message\" might not always be a good example to begin with.)\n>\n> Personally, I feel like this is something we should change. IMHO, we\n> should get to a state where protocol minor version bumps are so\n> low-risk that we can do them whenever we add message types. Right now\n> there are basically multiple versions of the 3.0 protocol, which is\n> very confusing to anyone implementing it. Different servers\n> implementing the 3.0 protocol without the NegotiateVersion message is\n> a good example of that.\n>\n\nTotally agree.\n\n\n>\n> > I fear that if we prefer protocol extensions over version increases,\n> > then we'd get a very fragmented landscape of different client libraries\n> > supporting different combinations of things.\n>\n> +1\nDave\n\n> +1\n>\n\nOn Thu, 4 Apr 2024 at 12:45, Jelte Fennema-Nio <[email protected]> wrote:On Thu, 4 Apr 2024 at 14:50, Peter Eisentraut <[email protected]> wrote:\n> It appears there are several different perspectives about this. My\n> intuition was that a protocol version change indicates something that we\n> eventually want all client libraries to support. Whereas protocol\n> extensions are truly opt-in.\n>\n> For example, if we didn't have the ParameterStatus message and someone\n> wanted to add it, then this ought to be a protocol version change, so\n> that we eventually get everyone to adopt it.\n>\n> But, for example, the column encryption feature is not something I'd\n> expect all client libraries to implement, so it can be opt-in.\n\nI think that is a good way of deciding between version bump vs\nprotocol extension parameter. But I'd like to make one\nclarification/addition to this logic, if the protocol feature already\nrequires opt-in from the client because of how the feature works, then\nthere's no need for a protocol extension parameter. e.g. 
if you're\ncolumn-encryption feature would require the client to send a\nColumnDecrypt message before the server would exhibit any behaviour\nrelating to the column-encryption feature, then the client can simply\n\"support\" the feature by never sending the ColumnDecrypt message.\nThus, in such cases a protocol extension parameter would not be\nnecessary, even if the feature is considered opt-in.\n\n> (I believe we have added a number of new protocol messages since the\n> beginning of the 3.0 protocol, without any version change, so \"new\n> protocol message\" might not always be a good example to begin with.)\n\nPersonally, I feel like this is something we should change. IMHO, we\nshould get to a state where protocol minor version bumps are so\nlow-risk that we can do them whenever we add message types. Right now\nthere are basically multiple versions of the 3.0 protocol, which is\nvery confusing to anyone implementing it. Different servers\nimplementing the 3.0 protocol without the NegotiateVersion message is\na good example of that.Totally agree. \n\n> I fear that if we prefer protocol extensions over version increases,\n> then we'd get a very fragmented landscape of different client libraries\n> supporting different combinations of things.\n+1 Dave\n+1",
"msg_date": "Fri, 5 Apr 2024 08:20:44 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 8:50 AM Peter Eisentraut <[email protected]> wrote:\n> It appears there are several different perspectives about this. My\n> intuition was that a protocol version change indicates something that we\n> eventually want all client libraries to support. Whereas protocol\n> extensions are truly opt-in.\n\nHmm. This doesn't seem like a bad way to make a distinction to me,\nexcept that I fear it would be mushy in practice. For example:\n\n> For example, if we didn't have the ParameterStatus message and someone\n> wanted to add it, then this ought to be a protocol version change, so\n> that we eventually get everyone to adopt it.\n>\n> But, for example, the column encryption feature is not something I'd\n> expect all client libraries to implement, so it can be opt-in.\n\nI agree that column encryption might not necessarily be supported by\nall client libraries, but equally, the ParameterStatus message is just\nfor the benefit of the client. A client that doesn't care about the\ncontents of such a message is free to ignore it, and would be better\noff if it weren't sent in the first place; it's just extra bytes on\nthe wire that aren't needed for anything. So why would we want to\nforce everyone to adopt that, if it didn't exist already?\n\nI also wonder how the protocol negotiation for column encryption is\nactually going to work. What are the actual wire protocol changes that\nare needed? What does the server need to know from the client, or the\nclient from the server, about what is supported?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Apr 2024 08:55:41 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 12:45 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Yeah, we definitely think differently here then. To me bumping the\n> minor protocol version shouldn't be a thing that we would need to\n> carefully consider. It should be easy to do, and probably done often.\n\nOften?\n\nI kind of hope that the protocol starts to evolve a bit more than it\nhas, but I don't want a continuous stream of changes. That will be\nvery hard to test and verify correctness, and a hassle for drivers to\nkeep up with, and a mess for compatibility.\n\n> So, what approach of checking feature support are you envisioning\n> instead? A function for every feature?\n> Something like SupportsSetProtocolParameter, that returns an error\n> message if it's not supported and NULL otherwise. And then add such a\n> support function for every feature?\n\nYeah, something like that.\n\n> I think I might agree with you that it would be nice for features that\n> depend on a protocol extension parameter, indeed because in the future\n> we might make them required and they don't have to update their logic\n> then. But for features only depending on the protocol version I\n> honestly don't see the point. A protocol version check is always going\n> to continue working.\n\nSure, if we introduce it based on the protocol version then we don't\nneed anything else.\n\n> I'm also not sure why you're saying a user is not entitled to this\n> information. We already provide users of libpq a way to see the full\n> Postgres version, and the major protocol version. I think allowing a\n> user to access this information is only a good thing. But I agree that\n> providing easy to use feature support functions is a better user\n> experience in some cases.\n\nI guess that's a fair point. But I'm worried that if we expose too\nmuch of the internals, we won't be able to change things later.\n\n> > But this is another example of a problem that can *easily* be fixed\n> > without using up the entirety of the _pq_ namespace. You can introduce\n> > _pq_.dont_error_on_unknown_gucs=1 or\n> > _pq_.dont_error_on_these_gucs='foo, bar, baz'. The distinction between\n> > the startup packet containing whizzbang=frob and instead containing\n> > _pq_.whizzbang=frob shouldn't be just whether an error is thrown if we\n> > don't know anything about whizzbang.\n>\n> Nice idea, but that sadly won't work well in practice. Because old PG\n> versions don't know about _pq_.dont_error_on_unknown_gucs. So that\n> would mean we'd have to wait another 5 years (and probably more) until\n> we could set non-_pq_-prefixed GUCs safely in the startup message,\n> using this approach.\n\nHmm. I guess that is a problem.\n\n> Two side-notes:\n> 1. I realized there is a second benefit to using _pq_ for all GUCs\n> that change the protocol level that I didn't mention in my previous\n> email: It allows clients to assume that _pq_ prefixed GUCs will change\n> the wire-protocol and that they should not allow a user of the client\n> to set them willy-nilly. I'll go into this benefit more in the rest of\n> this email.\n> 2. To clarify: I'm suggesting that a startup packet containing\n> _pq_.whizzbang would actually set the _pq_.whizzbang GUC, and not the\n> whizzbang GUC. i.e. the _pq_ prefix is not stripped-off when parsing\n> the startup packet.\n\nI really intended the _pq_ prefix as a way of taking something out of\nthe GUC namespace, not as a part of the GUC namespace that users would\nsee. And I'm reluctant to go back on that. 
If we want to make\npg_protocol.${NAME} mean a wire protocol parameter, well maybe there's\nsomething to that idea [insert caveats here]. But doesn't _pq_ look\nlike something that was intended to be internal? That's certainly how\nI intended it.\n\n> > I want to be able to know the state of my protocol\n> > parameters by calling libpq functions that answer my questions\n> > definitively based on libpq's own internal state. libpq itself *must*\n> > know what compression I'm using on my connection; the server's answer\n> > may be different.\n>\n> I think that totally makes sense that libpq should be able to answer\n> those questions without contacting the server, and indeed some\n> introspection should be added for that. But being able to introspect\n> what the server thinks the setting is seems quite useful too. That\n> still doesn't solve the problem of poolers though. How about\n> introducing a new ParameterGet message type too (matching the proposed\n> ParameterSet), so that poolers can easily parse and intercept that\n> message type.\n\nWouldn't libpq already know what value it last set? Or is this needed\nbecause it doesn't know what the other side's default is?\n\n> 2. By using PGC_PROTOCOL to indicate that those GUCs can only be\n> changed using ParameterSet\n\nHmm, OK. I guess if the PGC_PROTOCOL flag makes it so that the GUC can\nonly be set using ParameterSet, and it also makes them\nnon-transactional, then it's fine. So to be clear, I can't set these\nin postgresql.conf, or postgresql.auto.conf, or via ALTER $ANYTHING,\nor via SET, or in any other way than by sending ParameterStatus\nmessages. And when I send a ParameterStatus message, it doesn't matter\nif I'm in a good transaction, an aborted transaction, or no\ntransaction at all, and the setting change takes effect regardless of\nthat and regardless of any subsequent rollbacks. Is that right?\n\nI feel like maybe it's not, because you seem to be thinking that you'd\nalso set these in the startup packet, at least...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Apr 2024 10:02:08 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
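As a sketch of the per-feature support functions being discussed: nothing like this exists in libpq today, and the version cutoff below is purely illustrative, but the idea is that such a check would be layered over existing calls like PQprotocolVersion() and PQserverVersion(), so callers never have to reason about _pq_ names or minor version numbers themselves.

#include <stdio.h>
#include <libpq-fe.h>

/* Hypothetical: returns NULL if supported, else a reason it is not. */
static const char *
supports_set_protocol_parameter(PGconn *conn)
{
    if (PQstatus(conn) != CONNECTION_OK)
        return "not connected";
    if (PQprotocolVersion(conn) < 3)
        return "server speaks a pre-3.0 protocol";
    /* Placeholder rule: pretend the feature arrives in some future release. */
    if (PQserverVersion(conn) < 180000)
        return "server predates ParameterSet support (illustrative cutoff)";
    return NULL;
}

int
main(void)
{
    PGconn *conn = PQconnectdb("");     /* rely on PG* environment variables */
    const char *why = supports_set_protocol_parameter(conn);

    if (why != NULL)
        printf("cannot use protocol parameters: %s\n", why);
    else
        printf("protocol parameters are available\n");
    PQfinish(conn);
    return 0;
}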
{
"msg_contents": "On Thu, Apr 4, 2024 at 1:10 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Attached is a rebased patchset\n\nWe should keep talking about this, but I think we're far too close to\nthe wire at this point to think about committing anything for v17 at\nthis point. These are big changes, they haven't been thoroughly\nreviewed by anyone AFAICT, and we don't have consensus on what we\nought to be doing. I know that's probably not what you want to hear,\nbut realistically, I think that's the only reasonable decision at this\npoint.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Apr 2024 10:04:19 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 5 Apr 2024 at 16:04, Robert Haas <[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 1:10 PM Jelte Fennema-Nio <[email protected]> wrote:\n> > Attached is a rebased patchset\n>\n> We should keep talking about this, but I think we're far too close to\n> the wire at this point to think about committing anything for v17 at\n> this point. These are big changes, they haven't been thoroughly\n> reviewed by anyone AFAICT, and we don't have consensus on what we\n> ought to be doing. I know that's probably not what you want to hear,\n> but realistically, I think that's the only reasonable decision at this\n> point.\n\nAgreed on not considering this for PG17 nor this commitfest anymore. I\nchanged the commit fest entry accordingly.\n\n\n",
"msg_date": "Fri, 5 Apr 2024 17:28:03 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 5 Apr 2024 at 16:02, Robert Haas <[email protected]> wrote:\n> Often?\n>\n> I kind of hope that the protocol starts to evolve a bit more than it\n> has, but I don't want a continuous stream of changes. That will be\n> very hard to test and verify correctness, and a hassle for drivers to\n> keep up with, and a mess for compatibility.\n\nI definitely think protocol changes require a lot more scrutiny than\nmany other things, given their hard/impossible to change nature.\n\nBut I do think that we shouldn't be at all averse to the act of\nbumping the protocol version itself. If we have a single small\nprotocol change in one release, then imho it's no problem to bump the\nprotocol version. Bumping the version should be painless. So we\nshouldn't be inclined to push an already agreed upon protocol change\nto the next release, because there are some more protocol changes in\nthe pipeline that won't make it in the current one.\n\nI don't think this would be any harder for drivers to keep up with,\nthen if we'd bulk all changes together. If driver developers only want\nto support changes in bulk changes, they can simply choose not to\nsupport 3.1 at all if they want and wait until 3.2 to support\neverything in bulk then.\n\n> > So, what approach of checking feature support are you envisioning\n> > instead? A function for every feature?\n> > Something like SupportsSetProtocolParameter, that returns an error\n> > message if it's not supported and NULL otherwise. And then add such a\n> > support function for every feature?\n>\n> Yeah, something like that.\n> ...\n>\n> > I'm also not sure why you're saying a user is not entitled to this\n> > information. We already provide users of libpq a way to see the full\n> > Postgres version, and the major protocol version. I think allowing a\n> > user to access this information is only a good thing. But I agree that\n> > providing easy to use feature support functions is a better user\n> > experience in some cases.\n>\n> I guess that's a fair point. But I'm worried that if we expose too\n> much of the internals, we won't be able to change things later.\n\nI'll take a look at redesigning the protocol parameter stuff. To work\nwith dedicated functions instead.\n\n> I really intended the _pq_ prefix as a way of taking something out of\n> the GUC namespace, not as a part of the GUC namespace that users would\n> see. And I'm reluctant to go back on that. If we want to make\n> pg_protocol.${NAME} mean a wire protocol parameter, well maybe there's\n> something to that idea [insert caveats here]. But doesn't _pq_ look\n> like something that was intended to be internal? That's certainly how\n> I intended it.\n\nI agree that _pq_ does look internal and doesn't clearly indicate that\nit's a protocol related change. But sadly the _pq_ prefix is the only\none that doesn't error in startup packets, waiting another 5 years\nuntil pg_protocol is allowed in the startup packet doesn't seem like a\nreasonable solution either.\n\nHow about naming the GUCs pg_protocol.${NAME}, but still requiring the\n_pq_ prefix in the StartupPacket. That way only client libraries would\nhave to see this internal prefix and they could remap it someway. I\nsee two options for that:\n1. At the server replace the _pq_ prefix with pg_protocol. So\n_pq_.${NAME} would map to pg_protocol.${name}\n2. 
At the server replace the _pq_.pg_protocol prefix with pg_protocol.\nSo _pq_.pg_protocol.${NAME} would map to pg_protocol.${name}.\n\nI guess you prefer option 2, because that would still leave lots of\nspace to do something with the rest of the _pq_ space, i.e.\n_pq_.magic_pixie_dust can still be used for something different than a\nGUC.\n\nBikeshedding: I think I prefer protocol.${NAME} over\npg_protocol.${NAME}, it's shorter and it seems obvious that protocol\nis the postgres protocol in this context.\n\nThis should be a fairly simple change to make.\n\n> Wouldn't libpq already know what value it last set? Or is this needed\n> because it doesn't know what the other side's default is?\n\nlibpq could/should indeed know this, but for debugging/testing\npurposes it is quite useful to have a facility to read the server side\nvalue. I think defaults should always be whatever was happening if the\nparameter wasn't specified before, so knowing the server default is\nnot something the client needs to worry about (i.e. the default is\ndefined as part of the protocol spec).\n\n> Hmm, OK. I guess if the PGC_PROTOCOL flag makes it so that the GUC can\n> only be set using ParameterSet, and it also makes them\n> non-transactional, then it's fine. So to be clear, I can't set these\n> in postgresql.conf, or postgresql.auto.conf, or via ALTER $ANYTHING,\n> or via SET, or in any other way than by sending ParameterStatus\n> messages. And when I send a ParameterStatus message, it doesn't matter\n> if I'm in a good transaction, an aborted transaction, or no\n> transaction at all, and the setting change takes effect regardless of\n> that and regardless of any subsequent rollbacks. Is that right?\n>\n> I feel like maybe it's not, because you seem to be thinking that you'd\n> also set these in the startup packet, at least...\n\nSetting PGC_PROTOCOL gucs would be allowed in the startup packet,\nwhich is fine afaict because that's also something that's part of the\nprotocol level and is thus fully controlled by client libraries and\npoolers) But other than that: Yes, conf files, ALTER, and SET cannot\nchange these GUCs.\n\n\n",
"msg_date": "Fri, 5 Apr 2024 18:09:29 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
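A rough sketch of the name remapping proposed above, i.e. a startup-packet option _pq_.whizzbang surfacing as a GUC under a visible protocol namespace. The protocol. target namespace and the helper itself are hypothetical, and real startup-packet processing in the server does far more than this.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns a malloc'd remapped name, or NULL if this is not a _pq_ option. */
static char *
remap_protocol_option_name(const char *startup_name)
{
    const char *prefix = "_pq_.";
    const char *target = "protocol.";
    size_t prefixlen = strlen(prefix);
    char *result;
    size_t need;

    if (strncmp(startup_name, prefix, prefixlen) != 0)
        return NULL;
    need = strlen(target) + strlen(startup_name + prefixlen) + 1;
    result = malloc(need);
    if (result != NULL)
        snprintf(result, need, "%s%s", target, startup_name + prefixlen);
    return result;
}

int
main(void)
{
    char *mapped = remap_protocol_option_name("_pq_.example_option");

    printf("%s\n", mapped ? mapped : "(not a protocol option)");
    free(mapped);
    return 0;
}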
{
"msg_contents": "On Fri, 5 Apr 2024 at 12:09, Jelte Fennema-Nio <[email protected]> wrote:\n\n> On Fri, 5 Apr 2024 at 16:02, Robert Haas <[email protected]> wrote:\n> > Often?\n> >\n> > I kind of hope that the protocol starts to evolve a bit more than it\n> > has, but I don't want a continuous stream of changes. That will be\n> > very hard to test and verify correctness, and a hassle for drivers to\n> > keep up with, and a mess for compatibility.\n>\n> I definitely think protocol changes require a lot more scrutiny than\n> many other things, given their hard/impossible to change nature.\n>\n> But I do think that we shouldn't be at all averse to the act of\n> bumping the protocol version itself. If we have a single small\n> protocol change in one release, then imho it's no problem to bump the\n> protocol version. Bumping the version should be painless. So we\n> shouldn't be inclined to push an already agreed upon protocol change\n> to the next release, because there are some more protocol changes in\n> the pipeline that won't make it in the current one.\n>\n> I don't think this would be any harder for drivers to keep up with,\n> then if we'd bulk all changes together. If driver developers only want\n> to support changes in bulk changes, they can simply choose not to\n> support 3.1 at all if they want and wait until 3.2 to support\n> everything in bulk then.\n>\n> > > So, what approach of checking feature support are you envisioning\n> > > instead? A function for every feature?\n> > > Something like SupportsSetProtocolParameter, that returns an error\n> > > message if it's not supported and NULL otherwise. And then add such a\n> > > support function for every feature?\n> >\n> > Yeah, something like that.\n> > ...\n> >\n> > > I'm also not sure why you're saying a user is not entitled to this\n> > > information. We already provide users of libpq a way to see the full\n> > > Postgres version, and the major protocol version. I think allowing a\n> > > user to access this information is only a good thing. But I agree that\n> > > providing easy to use feature support functions is a better user\n> > > experience in some cases.\n> >\n> > I guess that's a fair point. But I'm worried that if we expose too\n> > much of the internals, we won't be able to change things later.\n>\n> I'll take a look at redesigning the protocol parameter stuff. To work\n> with dedicated functions instead.\n>\n+1\n\n>\n> > I really intended the _pq_ prefix as a way of taking something out of\n> > the GUC namespace, not as a part of the GUC namespace that users would\n> > see. And I'm reluctant to go back on that. If we want to make\n> > pg_protocol.${NAME} mean a wire protocol parameter, well maybe there's\n> > something to that idea [insert caveats here]. But doesn't _pq_ look\n> > like something that was intended to be internal? That's certainly how\n> > I intended it.\n>\n\nIs this actually used in practice? If so, how ?\n\n>\n> I agree that _pq_ does look internal and doesn't clearly indicate that\n> it's a protocol related change. But sadly the _pq_ prefix is the only\n> one that doesn't error in startup packets, waiting another 5 years\n> until pg_protocol is allowed in the startup packet doesn't seem like a\n> reasonable solution either.\n>\n> How about naming the GUCs pg_protocol.${NAME}, but still requiring the\n> _pq_ prefix in the StartupPacket. That way only client libraries would\n> have to see this internal prefix and they could remap it someway. I\n> see two options for that:\n> 1. 
At the server replace the _pq_ prefix with pg_protocol. So\n> _pq_.${NAME} would map to pg_protocol.${name}\n> 2. At the server replace the _pq_.pg_protocol prefix with pg_protocol.\n> So _pq_.pg_protocol.${NAME} would map to pg_protocol.${name}.\n>\n> I guess you prefer option 2, because that would still leave lots of\n> space to do something with the rest of the _pq_ space, i.e.\n> _pq_.magic_pixie_dust can still be used for something different than a\n> GUC.\n>\n> Bikeshedding: I think I prefer protocol.${NAME} over\n> pg_protocol.${NAME}, it's shorter and it seems obvious that protocol\n> is the postgres protocol in this context.\n>\n> This should be a fairly simple change to make.\n>\n> > Wouldn't libpq already know what value it last set? Or is this needed\n> > because it doesn't know what the other side's default is?\n>\n> libpq could/should indeed know this, but for debugging/testing\n> purposes it is quite useful to have a facility to read the server side\n> value. I think defaults should always be whatever was happening if the\n> parameter wasn't specified before, so knowing the server default is\n> not something the client needs to worry about (i.e. the default is\n> defined as part of the protocol spec).\n>\n> > Hmm, OK. I guess if the PGC_PROTOCOL flag makes it so that the GUC can\n> > only be set using ParameterSet, and it also makes them\n> > non-transactional, then it's fine. So to be clear, I can't set these\n> > in postgresql.conf, or postgresql.auto.conf, or via ALTER $ANYTHING,\n> > or via SET, or in any other way than by sending ParameterStatus\n> > messages. And when I send a ParameterStatus message, it doesn't matter\n> > if I'm in a good transaction, an aborted transaction, or no\n> > transaction at all, and the setting change takes effect regardless of\n> > that and regardless of any subsequent rollbacks. Is that right?\n> >\n> > I feel like maybe it's not, because you seem to be thinking that you'd\n> > also set these in the startup packet, at least...\n>\n> Setting PGC_PROTOCOL gucs would be allowed in the startup packet,\n> which is fine afaict because that's also something that's part of the\n> protocol level and is thus fully controlled by client libraries and\n> poolers) But other than that: Yes, conf files, ALTER, and SET cannot\n> change these GUCs.\n>\n+1\n\nDave",
"msg_date": "Fri, 5 Apr 2024 12:30:01 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Dave Cramer <[email protected]> writes:\n> On Fri, 5 Apr 2024 at 12:09, Jelte Fennema-Nio <[email protected]> wrote:\n>> Setting PGC_PROTOCOL gucs would be allowed in the startup packet,\n>> which is fine afaict because that's also something that's part of the\n>> protocol level and is thus fully controlled by client libraries and\n>> poolers) But other than that: Yes, conf files, ALTER, and SET cannot\n>> change these GUCs.\n\n> +1\n\nI don't buy that argument, actually. libpq, and pretty much every\nother client AFAIK, has provisions to let higher code levels insert\nrandom options into the startup packet. So to make this work libpq\nwould have to filter or at least inspect such options, which is\nlogic that doesn't exist and doesn't seem nice to need.\n\nThe other problem with adding these things in the startup packet\nis that when you send that packet, you don't know what the server\nversion is and hence don't know if it will take these options.\n\nWhat's so bad about insisting that these options must be sent in a\nseparate message?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Apr 2024 12:43:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 12:09 PM Jelte Fennema-Nio <[email protected]> wrote:\n> But I do think that we shouldn't be at all averse to the act of\n> bumping the protocol version itself. If we have a single small\n> protocol change in one release, then imho it's no problem to bump the\n> protocol version. Bumping the version should be painless. So we\n> shouldn't be inclined to push an already agreed upon protocol change\n> to the next release, because there are some more protocol changes in\n> the pipeline that won't make it in the current one.\n\nI think I half-agree with this. Let's say we all agree that the world\nwill end unless we make wire protocol changes A and B, and for\nwhatever reason we also agree that these changes should be handled via\na protocol version bump rather than any other method, but only the\npatch for A is sufficiently stable by the end of the release cycle.\nThen we commit A and bump the protocol version and the next release we\ndo the same for B, hopefully before the world ends. In this sense this\nis just like CATALOG_VERSION_NO or XLOG_PAGE_MAGIC. We don't postpone\ncommits because they'd require bumping those values; we just bump the\nvalues when it's necessary.\n\nBut on the other hand, I find it a bit hard to believe that the\nstatement \"Bumping the version should be painless\" will ever\ncorrespond to reality. Since we've not done a hard wire protocol break\nin a very long time, I assume that we would not want to start now. But\nthat also means that when the PG version with protocol version 3.5\ncomes out, that server is going to have to be compatible with 3.4,\n3.3, 3.2, 3.1, and 3.0. How will we test that it really is? We could\ntest against libpqs from older server versions, but that quickly\nbecomes awkward, because it means that there won't be tests that run\nas part of a regular build, but it'll have to all be done in the\nbuildfarm or CI with something like the cross-version upgrade tests we\nalready have. Maybe we'd be better off adding a libpq connection\noption that forces the use of a specific minor protocol version, but\nthen we'll need backward-compatibility code in libpq basically\nforever. But maybe we need that anyway to avoid older and newer\nservers being unable to communicate.\n\nPlus, you've got all of the consequences for non-core drivers, which\nhave to both add support for the new wire protocol - if they don't\nwant to seem outdated and eventually obsolete - and also test that\nthey're still compatible with all supported server versions.\nConnection poolers have the same set of problems. The whole thing is\nalmost a hole with no bottom. Keeping up with core changes in this\narea could become a massive undertaking for lots and lots of people,\nsome of whom may be the sole maintainer of some important driver that\nnow needs a whole bunch of work.\n\nI'm not sure how much it improves things if we imagine adding feature\nflags to the existing protocol versions, rather than whole new\nprotocol versions, but at least it cuts down on the assumption that\nadopting new features is mandatory, and that such features are\ncumulative. If a driver wants to support TDE but not protocol\nparameters or protocol parameters but not TDE, who are we to say no?\nIf in supporting those things we bump the protocol version to 3.2, and\nthen 3.3 fixes a huge performance problem, are drivers going to be\nrequired to add support for features they don't care about to get the\nperformance fixes? 
I see some benefit in bumping the protocol version\nfor major changes, or for changes that we have an important reason to\nmake mandatory, or to make previously-optional features for which\nsupport has become in practical terms universal part of the base\nfeature set. But I'm very skeptical of the idea that we should just\nhandle as many things as possible via a protocol version bump. We've\nbeen avoiding protocol version bumps like the plague since forever,\nand swinging all the way to the other extreme doesn't sound like the\nright idea to me.\n\n> How about naming the GUCs pg_protocol.${NAME}, but still requiring the\n> _pq_ prefix in the StartupPacket. That way only client libraries would\n> have to see this internal prefix and they could remap it someway. I\n> see two options for that:\n> 1. At the server replace the _pq_ prefix with pg_protocol. So\n> _pq_.${NAME} would map to pg_protocol.${name}\n> 2. At the server replace the _pq_.pg_protocol prefix with pg_protocol.\n> So _pq_.pg_protocol.${NAME} would map to pg_protocol.${name}.\n>\n> I guess you prefer option 2, because that would still leave lots of\n> space to do something with the rest of the _pq_ space, i.e.\n> _pq_.magic_pixie_dust can still be used for something different than a\n> GUC.\n\nI'm not sure what I think about this. Do we need these new GUCs to be\nboth PGC_PROTOCOL *and also* live in a separate namespace? I see the\nneed for the former pretty clearly: if these kinds of things are to be\npart of the GUC system (which wasn't my initial bias, but whatever)\nthen they need to have some important behavioral differences from\nother GUCs and so we need a flag to signal that. But what problem are\nwe solving by also giving them special-looking names, and are we sure\nwe wouldn't rather solve that problem some other way?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Apr 2024 12:48:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
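On the testing point raised above, the minimal cross-version probe a driver's test matrix might run against each supported server version could look like the following. It uses only long-standing libpq calls (PQserverVersion, PQprotocolVersion, PQparameterStatus) and relies on the usual PG* environment variables for connection details.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb("");
    const char *sv;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    sv = PQparameterStatus(conn, "server_version");
    printf("server version number: %d\n", PQserverVersion(conn));
    printf("protocol major version: %d\n", PQprotocolVersion(conn));
    printf("server_version parameter: %s\n", sv ? sv : "(unknown)");
    PQfinish(conn);
    return 0;
}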
{
"msg_contents": "On Fri, 5 Apr 2024 at 18:30, Dave Cramer <[email protected]> wrote:\n>> > I really intended the _pq_ prefix as a way of taking something out of\n>> > the GUC namespace, not as a part of the GUC namespace that users would\n>> > see. And I'm reluctant to go back on that. If we want to make\n>> > pg_protocol.${NAME} mean a wire protocol parameter, well maybe there's\n>> > something to that idea [insert caveats here]. But doesn't _pq_ look\n>> > like something that was intended to be internal? That's certainly how\n>> > I intended it.\n>\n>\n> Is this actually used in practice? If so, how ?\n\nNo, it's not used for anything at the moment. This whole thread is\nbasically about trying to agree on how we want to make protocol\nchanges in the future in a somewhat standardized way. But using the\ntools available that we have to not break connecting to old postgres\nservers: ProtocolVersionNegotation messages, minor version numbers,\nand _pq_ parameters in the startup message. All of those have so far\nbeen completely theoretical and have not appeared in any client-server\ncommunication.\n\n\n",
"msg_date": "Fri, 5 Apr 2024 18:48:38 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 5 Apr 2024 at 18:43, Tom Lane <[email protected]> wrote:\n> I don't buy that argument, actually. libpq, and pretty much every\n> other client AFAIK, has provisions to let higher code levels insert\n> random options into the startup packet. So to make this work libpq\n> would have to filter or at least inspect such options, which is\n> logic that doesn't exist and doesn't seem nice to need.\n\nlibpq actually doesn't support doing this (only by putting them in the\n\"options\" parameter, but _pq_ parameters would not be allowed there),\nbut indeed many other clients do allow this and indeed likely don't\nhave logic to filter/disallow _pq_ prefixed parameters.\n\nThis seems very easy to address though: Only parse _pq_ options when\nprotocol version 3.1 is requested by the client, and otherwise always\nreport them as \"not supported\". Then clients upgrading to 3.1, they\nshould filter/disallow _pq_ parameters to be arbitrarily set. I don't\nthink that's hard/not nice to add, it's literally a prefix check for\nthe \"_pq_.\" string.\n\n> The other problem with adding these things in the startup packet\n> is that when you send that packet, you don't know what the server\n> version is and hence don't know if it will take these options.\n\n(imho) the whole point of the _pq_ options is that they don't trigger\nan error when they are requested by the client, but not supported by\nthe server. So I don't understand your problem here.\n\n> What's so bad about insisting that these options must be sent in a\n> separate message?\n\nTo not require an additional roundtrip waiting for the server to respond.\n\n\n",
"msg_date": "Fri, 5 Apr 2024 18:56:02 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
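A rough sketch of the version-gated rule described above, with the enum and the sample option name invented for illustration: the server would apply _pq_ options only when the client asked for 3.1 or newer, and otherwise just report them back as unsupported rather than erroring out.

#include <stdio.h>
#include <string.h>

typedef enum
{
    OPTION_APPLY,                   /* treat as a protocol option and apply it */
    OPTION_REPORT_UNSUPPORTED,      /* echo back via NegotiateProtocolVersion */
    OPTION_ORDINARY_GUC             /* not a _pq_ option at all */
} StartupOptionAction;

static StartupOptionAction
classify_startup_option(const char *name, int requested_minor)
{
    if (strncmp(name, "_pq_.", 5) != 0)
        return OPTION_ORDINARY_GUC;
    if (requested_minor >= 1)       /* client asked for 3.1 or newer */
        return OPTION_APPLY;
    return OPTION_REPORT_UNSUPPORTED;
}

int
main(void)
{
    printf("%d\n", classify_startup_option("_pq_.example_option", 0));
    printf("%d\n", classify_startup_option("_pq_.example_option", 1));
    printf("%d\n", classify_startup_option("application_name", 1));
    return 0;
}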
{
"msg_contents": ">\n>\n> Plus, you've got all of the consequences for non-core drivers, which\n> have to both add support for the new wire protocol - if they don't\n> want to seem outdated and eventually obsolete - and also test that\n> they're still compatible with all supported server versions.\n> Connection poolers have the same set of problems. The whole thing is\n> almost a hole with no bottom. Keeping up with core changes in this\n> area could become a massive undertaking for lots and lots of people,\n> some of whom may be the sole maintainer of some important driver that\n> now needs a whole bunch of work.\n>\n\nWe already have this in many places. Headers or functions change and\nextensions have to fix their code.\ncatalog changes force drivers to change their code.\nThis argument blocks any improvement to the protocol. I don't think it's\nunreasonable to expect maintainers to keep up.\nWe could make it easier by having a specific list for maintainers, but that\ndoesn't change the work.\n\n\n\n> I'm not sure how much it improves things if we imagine adding feature\n> flags to the existing protocol versions, rather than whole new\n> protocol versions, but at least it cuts down on the assumption that\n> adopting new features is mandatory, and that such features are\n> cumulative. If a driver wants to support TDE but not protocol\n> parameters or protocol parameters but not TDE, who are we to say no?\n> If in supporting those things we bump the protocol version to 3.2, and\n> then 3.3 fixes a huge performance problem, are drivers going to be\n> required to add support for features they don't care about to get the\n> performance fixes? I see some benefit in bumping the protocol version\n> for major changes, or for changes that we have an important reason to\n> make mandatory, or to make previously-optional features for which\n> support has become in practical terms universal part of the base\n> feature set. But I'm very skeptical of the idea that we should just\n> handle as many things as possible via a protocol version bump. We've\n> been avoiding protocol version bumps like the plague since forever,\n> and swinging all the way to the other extreme doesn't sound like the\n> right idea to me.\n>\n\n+1 for not swinging too far here. But I don't think it should be a non\nstarter.\nDave\n\n\n\nPlus, you've got all of the consequences for non-core drivers, which\nhave to both add support for the new wire protocol - if they don't\nwant to seem outdated and eventually obsolete - and also test that\nthey're still compatible with all supported server versions.\nConnection poolers have the same set of problems. The whole thing is\nalmost a hole with no bottom. Keeping up with core changes in this\narea could become a massive undertaking for lots and lots of people,\nsome of whom may be the sole maintainer of some important driver that\nnow needs a whole bunch of work.We already have this in many places. Headers or functions change and extensions have to fix their code. catalog changes force drivers to change their code. This argument blocks any improvement to the protocol. I don't think it's unreasonable to expect maintainers to keep up. 
We could make it easier by having a specific list for maintainers, but that doesn't change the work.\n\nI'm not sure how much it improves things if we imagine adding feature\nflags to the existing protocol versions, rather than whole new\nprotocol versions, but at least it cuts down on the assumption that\nadopting new features is mandatory, and that such features are\ncumulative. If a driver wants to support TDE but not protocol\nparameters or protocol parameters but not TDE, who are we to say no?\nIf in supporting those things we bump the protocol version to 3.2, and\nthen 3.3 fixes a huge performance problem, are drivers going to be\nrequired to add support for features they don't care about to get the\nperformance fixes? I see some benefit in bumping the protocol version\nfor major changes, or for changes that we have an important reason to\nmake mandatory, or to make previously-optional features for which\nsupport has become in practical terms universal part of the base\nfeature set. But I'm very skeptical of the idea that we should just\nhandle as many things as possible via a protocol version bump. We've\nbeen avoiding protocol version bumps like the plague since forever,\nand swinging all the way to the other extreme doesn't sound like the\nright idea to me.+1 for not swinging too far here. But I don't think it should be a non starter.Dave",
"msg_date": "Fri, 5 Apr 2024 14:06:50 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 5 Apr 2024 at 18:48, Robert Haas <[email protected]> wrote:\n> Maybe we'd be better off adding a libpq connection\n> option that forces the use of a specific minor protocol version, but\n> then we'll need backward-compatibility code in libpq basically\n> forever. But maybe we need that anyway to avoid older and newer\n> servers being unable to communicate.\n\nI think this would be good because it makes testing easy and, like you\nsaid, I think we'll need this backward-compatibility code in libpq\nanyway to be able to connect to old servers. To have even better and\nmore realistic test coverage though, I think we might also want to\nactually test new libpq against old postgres servers and vice-versa in\na build farm animal though.\n\n> Plus, you've got all of the consequences for non-core drivers, which\n> have to both add support for the new wire protocol - if they don't\n> want to seem outdated and eventually obsolete - and also test that\n> they're still compatible with all supported server versions.\n\nI think for clients/drivers, the work would generally be pretty\nminimal. For almost all proposed changes, clients can \"support\" the\nprotocol version update by simply not using the new features, e.g. a\nclient can \"support\" the ParameterSet feature, by simply never sending\nthe ParameterSet message. So binding it to a protocol version bump\ndoesn't make it any harder for that client to support that protocol\nversion. I'm not saying that is the case for all protocol changes, but\nbased on what's being proposed so far that's definitely a very common\ntheme. Overall, I think this is something to discuss for each protocol\nchange in isolation: i.e. how to make supporting the new feature as\npainless as possible for clients/drivers.\n\n> Connection poolers have the same set of problems.\n\nFor connection poolers this is indeed a bigger hassle, because they at\nleast need to be able to handle all the new message types that a\nclient can send and maybe do something special for them. But I think\nif we're careful to keep connection poolers in mind when designing the\nfeatures themselves then I think this isn't necessarily a problem. And\nprobably indeed for the features that we think are hard for connection\npoolers to implement, we should be using protocol extension parameter\nfeature flags. But I think a lot of protocol would be fairly trivial\nfor a connection pooler to support.\n\n> The whole thing is\n> almost a hole with no bottom. Keeping up with core changes in this\n> area could become a massive undertaking for lots and lots of people,\n> some of whom may be the sole maintainer of some important driver that\n> now needs a whole bunch of work.\n\nI agree with Dave here, if you want to benefit from new features\nthere's some expectation to keep up with the changes. But to be clear,\nwe'd still support old protocol versions too. So we wouldn't break\nconnecting using those clients, they simply wouldn't benefit from some\nof the new features. I think that's acceptable.\n\n> I'm not sure how much it improves things if we imagine adding feature\n> flags to the existing protocol versions, rather than whole new\n> protocol versions, but at least it cuts down on the assumption that\n> adopting new features is mandatory, and that such features are\n> cumulative. 
If a driver wants to support TDE but not protocol\n> parameters or protocol parameters but not TDE, who are we to say no?\n> If in supporting those things we bump the protocol version to 3.2, and\n> then 3.3 fixes a huge performance problem, are drivers going to be\n> required to add support for features they don't care about to get the\n> performance fixes?\n\nI think there's an important trade-off here. On one side we don't want\nto make maintainers of clients/poolers do lots of work to support\nfeatures they don't care about. And on the other side it seems quite\nuseful to limit the amount of feature combinations that are used it\nthe wild (both for users and for us) e.g. the combinations of\nbackwards compatibility testing you were talking about would explode\nif every protocol change was a feature flag. I think this trade-off is\nsomething we should be deciding on based on the specific protocol\nchange. But if work needed to \"support\" the feature is \"minimal\"\n(to-be-defined exactly what we consider minimal), I think making it\npart of a protocol version bump is reasonable.\n\n> I see some benefit in bumping the protocol version\n> for major changes, or for changes that we have an important reason to\n> make mandatory, or to make previously-optional features for which\n> support has become in practical terms universal part of the base\n> feature set. But I'm very skeptical of the idea that we should just\n> handle as many things as possible via a protocol version bump. We've\n> been avoiding protocol version bumps like the plague since forever,\n> and swinging all the way to the other extreme doesn't sound like the\n> right idea to me.\n\nI think there's two parts to a protocol version bump:\n1. The changes that cause us to consider a protocol bump\n2. The actual act of bumping the protocol version\n\nI think 1 is a thing we should be careful about every time (especially\nregarding impact on clients/poolers). But 2 shouldn't be something\nthat we should consider dangerous/scary. I think that every change\nthat we make to the protocol (no matter how minor or backwards\ncompatible it is), should be accompanied with a protocol version bump.\nThis isn't what has happened in the past, and it makes it quite hard\nto understand what \"supporting\" a specific protocol version actually\nmeans. e.g. PgBouncer currently supports protocol 3.0, but doesn't\nsupport the NegotiateProtocolVersion message (I'm working on fixing\nthat).\n\nTo take it to the extreme: I think we should get to a state, where if\nwe bump the protocol version at the client and server side without\nactually making any protocol changes, everything should continue to\nwork fine. If we'd do that right now, then libpq wouldn't be able to\nconnect to old postgres versions anymore.\n\n> I'm not sure what I think about this. Do we need these new GUCs to be\n> both PGC_PROTOCOL *and also* live in a separate namespace? I see the\n> need for the former pretty clearly: if these kinds of things are to be\n> part of the GUC system (which wasn't my initial bias, but whatever)\n> then they need to have some important behavioral differences from\n> other GUCs and so we need a flag to signal that. But what problem are\n> we solving by also giving them special-looking names, and are we sure\n> we wouldn't rather solve that problem some other way?\n\nClients might want to allow the user of the client to change regular\nparameters using ParameterSet (e.g. 
so that a connection pooler can\nintercept those ParameterSet messages and change its own behaviour if\nthe parameter name is pgbouncer.pool_mode). But they wouldn't want a\nuser to set any parameters that change the wire-protocol this way. And\nbecause an old client might connect to a new server a simple\nhard-coded list of parameters at the client side is not sufficient.\n\nI can see two ways around this:\n1. Using a well-known prefix or namespace for parameters that change\nthe wire protocol. (exact prefix to be bikeshedded on)\n2. Using a hard-coded list at the client AND disallow changing\nPGC_PROTOCOL parameters at the server if the negotiated protocol\nversion is lower than the version this parameter was introduced in AND\nbump the protocol version whenever we add a new PGC_PROTOCOL\nparameter.\n\nI think 1 is easier to implement at the client side, as it only\nrequires a prefix comparison instead of keeping track of a list.\n\n\n",
"msg_date": "Sun, 7 Apr 2024 00:14:03 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On 05.04.24 14:55, Robert Haas wrote:\n> I also wonder how the protocol negotiation for column encryption is\n> actually going to work. What are the actual wire protocol changes that\n> are needed? What does the server need to know from the client, or the\n> client from the server, about what is supported?\n\nI have just posted an updated patch for that: [0]\n\nThe protocol changes can be inspected in the diffs for\n\ndoc/src/sgml/protocol.sgml\nsrc/backend/access/common/printtup.c\nsrc/interfaces/libpq/fe-protocol3.c\n\nThere are various changes, including new messages, additional fields in \nexisting messages, and some more flag bits in existing fields.\n\nIt all works, so I don't have any requests or anything in this thread, \nbut it would be good to get some feedback if I'm using this wrong. \nAFAICT, that patch was the first public one that ever tried to make use \nof the protocol extension facility, so I was mainly guessing about the \nintended way to use this.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/f63fe170-cef2-4914-be00-ef9222456505%40eisentraut.org\n\n\n\n",
"msg_date": "Wed, 10 Apr 2024 12:35:52 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Sat, Apr 6, 2024 at 6:14 PM Jelte Fennema-Nio <[email protected]> wrote:\n> I think for clients/drivers, the work would generally be pretty\n> minimal. For almost all proposed changes, clients can \"support\" the\n> protocol version update by simply not using the new features, ...\n\nI mean, I agree if a particular protocol version bump does nothing\nother than signal the presence of some optional, ignorable feature,\nthen it doesn't cause a problem if we force clients to support it. But\nthat seems a bit like saying that eating wild mushrooms is fine\nbecause some of them aren't poisonous. The point is that if we roll\nout two protocol changes, A and B, each of which requires the client\nto make some change in order to work with the newer protocol version,\nthen using version numbers as the gating mechanism requires that the\nclient can't support the newer of those two changes without also\nsupporting the older one. Using feature flags doesn't impose that\nconstraint, which I think is a plus.\n\n> I think there's an important trade-off here. On one side we don't want\n> to make maintainers of clients/poolers do lots of work to support\n> features they don't care about. And on the other side it seems quite\n> useful to limit the amount of feature combinations that are used it\n> the wild (both for users and for us) e.g. the combinations of\n> backwards compatibility testing you were talking about would explode\n> if every protocol change was a feature flag.\n\nThis is a good point, and I agree. It's a little hard to discuss this\nin the abstract. I wouldn't want to have feature flags that say:\n\n1. I support rows containing an extra-large number of bytes.\n2. I support rows containing an extra-large number of columns.\n3. I support queries returning an extra-large number of rows.\n\nIt is quite likely that there might be bugs that only manifest with\nparticular combinations of those flags; knowing that someone who\nsupports (3) must also support (1) and (2) seems like it would make\neveryone's life easier. But on the other hand, I wouldn't mind having\nfeature flags that say:\n\n1. I support transparent column encryption.\n2. I support letting a connection pooler lock down the session role in\na way that can't be reversed from SQL.\n3. I support compression.\n\nIt's still not theoretically impossible that those features could\ninteract with each other in some way, but it seems a lot less likely.\nHere, I think a driver ought to be able to choose any subset of these\nthings and support only the ones they care about, without having to\nworry about the server deciding to do something for which the driver\nis unprepared. Also, in a case like this, if there are bugs that only\noccur in certain combinations, we have to test for and fix those\nanyway, because even if a driver supports all of those features,\nthey're not all going to be used for every connection.\n\n> I think there's two parts to a protocol version bump:\n> 1. TI ahe changes that cause us to consider a protocol bump\n> 2. The actual act of bumping the protocol version\n>\n> I think 1 is a thing we should be careful about every time (especially\n> regarding impact on clients/poolers). But 2 shouldn't be something\n> that we should consider dangerous/scary. 
I think that every change\n> that we make to the protocol (no matter how minor or backwards\n> compatible it is), should be accompanied with a protocol version bump.\n> This isn't what has happened in the past, and it makes it quite hard\n> to understand what \"supporting\" a specific protocol version actually\n> means. e.g. PgBouncer currently supports protocol 3.0, but doesn't\n> support the NegotiateProtocolVersion message (I'm working on fixing\n> that).\n\nI'm generally not a fan of giving things version numbers and then not\nchanging the version numbers when you change the thing, but I find\nmyself reluctant to apply that principle to this case. I think it's\nbad that we keep adding functions to libpq and sometimes changing the\nbehavior of existing functions and never, ever bumping the libpq .so\nversion. I've seen that cause real, practical problems. It means for\nexample that you can't make an RPM depend on libpq.so.5 and expect\nthat to do anything meaningful -- every version going back forever is\nversion 5, even if it doesn't contain the functions (or other behavior\nchanges) that are needed for some program compiled against a newer\nversion of PostgreSQL to work.\n\nBut the wire protocol changes very slowly, and I think that is a\ndifference that actually matters quite a bit here. Broadly speaking, I\ncan use a psq\n\n> To take it to the extreme: I think we should get to a state, where if\n> we bump the protocol version at the client and server side without\n> actually making any protocol changes, everything should continue to\n> work fine. If we'd do that right now, then libpq wouldn't be able to\n> connect to old postgres versions anymore.\n\nI think I agree with this, but it seems like a bootstrapping problem\nand nothing more.\n\n>\n> > I'm not sure what I think about this. Do we need these new GUCs to be\n> > both PGC_PROTOCOL *and also* live in a separate namespace? I see the\n> > need for the former pretty clearly: if these kinds of things are to be\n> > part of the GUC system (which wasn't my initial bias, but whatever)\n> > then they need to have some important behavioral differences from\n> > other GUCs and so we need a flag to signal that. But what problem are\n> > we solving by also giving them special-looking names, and are we sure\n> > we wouldn't rather solve that problem some other way?\n>\n> Clients might want to allow the user of the client to change regular\n> parameters using ParameterSet (e.g. so that a connection pooler can\n> intercept those ParameterSet messages and change its own behaviour if\n> the parameter name is pgbouncer.pool_mode). But they wouldn't want a\n> user to set any parameters that change the wire-protocol this way. And\n> because an old client might connect to a new server a simple\n> hard-coded list of parameters at the client side is not sufficient.\n>\n> I can see two ways around this:\n> 1. Using a well-known prefix or namespace for parameters that change\n> the wire protocol. (exact prefix to be bikeshedded on)\n> 2. Using a hard-coded list at the client AND disallow changing\n> PGC_PROTOCOL parameters at the server if the negotiated protocol\n> version is lower than the version this parameter was introduced in AND\n> bump the protocol version whenever we add a new PGC_PROTOCOL\n> parameter.\n>\n> I think 1 is easier to implement at the client side, as it only\n> requires a prefix comparison instead of keeping track of a list.\n\n\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Apr 2024 13:43:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "[ Hit send too early, sorry. ]\n\nOn Mon, Apr 15, 2024 at 1:43 PM Robert Haas <[email protected]> wrote:\n> But the wire protocol changes very slowly, and I think that is a\n> difference that actually matters quite a bit here. Broadly speaking, I\n> can use a psq\n\n...a psql that I just built today to talk to a server from many years\nago, and everything is fine. Sure, there are some marginal wire\nprotocol changes around the edges, but not in places that are going to\nreally affect the ability of psql to communicate. But I wouldn't try\nto run a psql built against 16 with a libpq even one major release\nold, and the other direction (older psql, newer libpq) also carries\nsome (albeit fewer) risks. So the two situations aren't really\nentirely comparable, I feel. I don't quite know what to make of that\nas a practical matter: surely it can't be right to use protocol\nversion 3.0 to refer to a bunch of different things. But at the same\ntime, surely we don't want clients to start panicking and bailing out\nwhen everything would have been fine.\n\n> > To take it to the extreme: I think we should get to a state, where if\n> > we bump the protocol version at the client and server side without\n> > actually making any protocol changes, everything should continue to\n> > work fine. If we'd do that right now, then libpq wouldn't be able to\n> > connect to old postgres versions anymore.\n>\n> I think I agree with this, but it seems like a bootstrapping problem\n> and nothing more.\n\nThat is, once we figure out how we want backward compatibility to work\nin general, I think we'll probably get pretty close to the state you\nwant here pretty quickly.\n\n> > Clients might want to allow the user of the client to change regular\n> > parameters using ParameterSet (e.g. so that a connection pooler can\n> > intercept those ParameterSet messages and change its own behaviour if\n> > the parameter name is pgbouncer.pool_mode). But they wouldn't want a\n> > user to set any parameters that change the wire-protocol this way. And\n> > because an old client might connect to a new server a simple\n> > hard-coded list of parameters at the client side is not sufficient.\n> >\n> > I can see two ways around this:\n> > 1. Using a well-known prefix or namespace for parameters that change\n> > the wire protocol. (exact prefix to be bikeshedded on)\n> > 2. Using a hard-coded list at the client AND disallow changing\n> > PGC_PROTOCOL parameters at the server if the negotiated protocol\n> > version is lower than the version this parameter was introduced in AND\n> > bump the protocol version whenever we add a new PGC_PROTOCOL\n> > parameter.\n> >\n> > I think 1 is easier to implement at the client side, as it only\n> > requires a prefix comparison instead of keeping track of a list.\n\nI'm unconvinced that we should let ParameterSet change\nnon-PGC_PROTOCOL GUCs. The pooler can agree on a list of protocol GUCs\nwith the end client that differs from what the server agreed with the\npooler - e.g., it can add pgbouncer.pool_mode to the list. But for\ntruly non-protocol GUCs, we have a lot of ways to set those already.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Apr 2024 13:52:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, 15 Apr 2024 at 19:43, Robert Haas <[email protected]> wrote:\n>\n> On Sat, Apr 6, 2024 at 6:14 PM Jelte Fennema-Nio <[email protected]> wrote:\n> > I think for clients/drivers, the work would generally be pretty\n> > minimal. For almost all proposed changes, clients can \"support\" the\n> > protocol version update by simply not using the new features, ...\n>\n> I mean, I agree if a particular protocol version bump does nothing\n> other than signal the presence of some optional, ignorable feature,\n> then it doesn't cause a problem if we force clients to support it. But\n> that seems a bit like saying that eating wild mushrooms is fine\n> because some of them aren't poisonous. The point is that if we roll\n> out two protocol changes, A and B, each of which requires the client\n> to make some change in order to work with the newer protocol version,\n> then using version numbers as the gating mechanism requires that the\n> client can't support the newer of those two changes without also\n> supporting the older one. Using feature flags doesn't impose that\n> constraint, which I think is a plus.\n\nI think we're in agreement here, i.e. it depends on the situation if a\nfeature flag or version bump is more appropriate. I think the\nguidelines could be as follows:\n1. For protocol changes that require \"extremely minimal\" work from\nclients & poolers: bump the protocol version.\n2. For \"niche\" features that require some work from clients and/or\npoolers: use a protocol parameter feature flag.\n3. For anything in between, let's discuss on the thread for that\nspecific protocol change on the tradeoffs.\n\nOn Mon, 15 Apr 2024 at 19:52, Robert Haas <[email protected]> wrote:\n> surely it can't be right to use protocol\n> version 3.0 to refer to a bunch of different things. But at the same\n> time, surely we don't want clients to start panicking and bailing out\n> when everything would have been fine.\n\nI feel like the ProtocolVersionNegotiation should make sure people\ndon't panic and bail out. And if not, then feature flags won't help\nwith this either. Because clients would then still bail out if some\nfeature is not supported.\n\n> I'm unconvinced that we should let ParameterSet change\n> non-PGC_PROTOCOL GUCs. The pooler can agree on a list of protocol GUCs\n> with the end client that differs from what the server agreed with the\n> pooler - e.g., it can add pgbouncer.pool_mode to the list. But for\n> truly non-protocol GUCs, we have a lot of ways to set those already.\n\nI feel like you're glossing over something fairly important here. How\nexactly would the client know about pgbouncer.pool_mode? Are you\nenvisioning a list of GUCs which can be changed using ParameterSet,\nwhich the server then sends to the client during connection startup\n(using presumably some new protocol message)? If so, then I feel this\nsame problem still exists. How would the client know which of those\nGUCs change wire-protocol behaviour and which don't? It still would\nneed a hardcoded list (now including pgbouncer.pool_mode and maybe\nmore) of things that a user is allowed to change using ParameterSet.\nSo I think a well-known prefix would still be applicable.\n\nTo be clear, imho the well-known prefix discussion is separate from\nthe discussion about whether Postgres should throw an ERROR when\nParameterSet is used to change any non-PGC_PROTOCOL GUC. I'd be fine\nwith disallowing that if that seems better/safer/clearer to you\n(although I'd love to hear your exact concerns about this). 
But I'd\nstill want a well-known prefix for protocol parameters. Because that\nprefix is not for the benefit of the server, it's for the benefit of\nthe client and pooler. So the client/pooler can error if any dangerous\nGUC is being changed, because the server would accept it and change\nthe wire-protocol accordingly.\n\n\n",
"msg_date": "Mon, 15 Apr 2024 21:37:47 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, 15 Apr 2024 at 15:38, Jelte Fennema-Nio <[email protected]> wrote:\n\n> On Mon, 15 Apr 2024 at 19:43, Robert Haas <[email protected]> wrote:\n> >\n> > On Sat, Apr 6, 2024 at 6:14 PM Jelte Fennema-Nio <[email protected]> wrote:\n> > > I think for clients/drivers, the work would generally be pretty\n> > > minimal. For almost all proposed changes, clients can \"support\" the\n> > > protocol version update by simply not using the new features, ...\n> >\n> > I mean, I agree if a particular protocol version bump does nothing\n> > other than signal the presence of some optional, ignorable feature,\n> > then it doesn't cause a problem if we force clients to support it. But\n> > that seems a bit like saying that eating wild mushrooms is fine\n> > because some of them aren't poisonous. The point is that if we roll\n> > out two protocol changes, A and B, each of which requires the client\n> > to make some change in order to work with the newer protocol version,\n> > then using version numbers as the gating mechanism requires that the\n> > client can't support the newer of those two changes without also\n> > supporting the older one. Using feature flags doesn't impose that\n> > constraint, which I think is a plus.\n>\n> I think we're in agreement here, i.e. it depends on the situation if a\n> feature flag or version bump is more appropriate. I think the\n> guidelines could be as follows:\n> 1. For protocol changes that require \"extremely minimal\" work from\n> clients & poolers: bump the protocol version.\n> 2. For \"niche\" features that require some work from clients and/or\n> poolers: use a protocol parameter feature flag.\n> 3. For anything in between, let's discuss on the thread for that\n> specific protocol change on the tradeoffs.\n>\n\nMy first thought here is that all of the above is subjective and we will\nend up discussing all of the above.\nNo real argument just an observation.\n\n>\n> On Mon, 15 Apr 2024 at 19:52, Robert Haas <[email protected]> wrote:\n> > surely it can't be right to use protocol\n> > version 3.0 to refer to a bunch of different things. But at the same\n> > time, surely we don't want clients to start panicking and bailing out\n> > when everything would have been fine.\n>\n> I feel like the ProtocolVersionNegotiation should make sure people\n> don't panic and bail out. And if not, then feature flags won't help\n> with this either. Because clients would then still bail out if some\n> feature is not supported.\n>\n\nI don't think a client should ever bail out. They may not support something\nbut IMO bailing out is not an option.\n\nDave\n\n>\n>\n\nOn Mon, 15 Apr 2024 at 15:38, Jelte Fennema-Nio <[email protected]> wrote:On Mon, 15 Apr 2024 at 19:43, Robert Haas <[email protected]> wrote:\n>\n> On Sat, Apr 6, 2024 at 6:14 PM Jelte Fennema-Nio <[email protected]> wrote:\n> > I think for clients/drivers, the work would generally be pretty\n> > minimal. For almost all proposed changes, clients can \"support\" the\n> > protocol version update by simply not using the new features, ...\n>\n> I mean, I agree if a particular protocol version bump does nothing\n> other than signal the presence of some optional, ignorable feature,\n> then it doesn't cause a problem if we force clients to support it. But\n> that seems a bit like saying that eating wild mushrooms is fine\n> because some of them aren't poisonous. 
The point is that if we roll\n> out two protocol changes, A and B, each of which requires the client\n> to make some change in order to work with the newer protocol version,\n> then using version numbers as the gating mechanism requires that the\n> client can't support the newer of those two changes without also\n> supporting the older one. Using feature flags doesn't impose that\n> constraint, which I think is a plus.\n\nI think we're in agreement here, i.e. it depends on the situation if a\nfeature flag or version bump is more appropriate. I think the\nguidelines could be as follows:\n1. For protocol changes that require \"extremely minimal\" work from\nclients & poolers: bump the protocol version.\n2. For \"niche\" features that require some work from clients and/or\npoolers: use a protocol parameter feature flag.\n3. For anything in between, let's discuss on the thread for that\nspecific protocol change on the tradeoffs.My first thought here is that all of the above is subjective and we will end up discussing all of the above.No real argument just an observation. \n\nOn Mon, 15 Apr 2024 at 19:52, Robert Haas <[email protected]> wrote:\n> surely it can't be right to use protocol\n> version 3.0 to refer to a bunch of different things. But at the same\n> time, surely we don't want clients to start panicking and bailing out\n> when everything would have been fine.\n\nI feel like the ProtocolVersionNegotiation should make sure people\ndon't panic and bail out. And if not, then feature flags won't help\nwith this either. Because clients would then still bail out if some\nfeature is not supported.I don't think a client should ever bail out. They may not support something but IMO bailing out is not an option. Dave",
"msg_date": "Mon, 15 Apr 2024 15:47:40 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, 15 Apr 2024 at 21:47, Dave Cramer <[email protected]> wrote:\n>> On Mon, 15 Apr 2024 at 19:52, Robert Haas <[email protected]> wrote:\n>> > surely it can't be right to use protocol\n>> > version 3.0 to refer to a bunch of different things. But at the same\n>> > time, surely we don't want clients to start panicking and bailing out\n>> > when everything would have been fine.\n>>\n>> I feel like the ProtocolVersionNegotiation should make sure people\n>> don't panic and bail out. And if not, then feature flags won't help\n>> with this either. Because clients would then still bail out if some\n>> feature is not supported.\n>\n> I don't think a client should ever bail out. They may not support something but IMO bailing out is not an option.\n\n\nOn Thu, 18 Apr 2024 at 21:01, Robert Haas <[email protected]> wrote:\n> On Thu, Apr 18, 2024 at 1:49 PM Jelte Fennema-Nio <[email protected]> wrote:\n> > IMHO that means that we should also bump the protocol version for this\n> > change, because it's changing the wire protocol by adding a new\n> > parameter format code. And it does so in a way that does not depend on\n> > the new protocol extension.\n>\n> I think we're more or less covering the same ground we did on the\n> other thread here -- in theory I don't love the fact that we never\n> bump the protocol version when we change stuff, but in practice if we\n> start bumping it every time we do anything I think it's going to just\n> break a bunch of stuff without any real benefit.\n\n(the second quoted message comes from Peter his column encryption\nthread, but responding here to keep this discussion in one place)\n\nI really don't understand what exactly you're worried about. What do\nyou expect will break when bumping the protocol version? As Dave said,\nclients should never bail out due to protocol version differences.\n\nWhen the server supports a lower version than the client, the client\nshould disable certain features because it gets the\nProtocolVersionNegotiation message. This is also true if we don't bump\nthe version. Negotiating a lower version actually makes it clearer for\nthe client what features to disable. Using the reported postgres\nversion for this might not, because a connection pooler in the middle\nmight not support the features that the client wants and thus throw an\nerror (e.g. due to the client sending unknown messages) even if the\nbacking Postgres server would support these features. Not to mention\nnon-postgresql servers that implement the PostgreSQL protocol (of\nwhich there are more and more).\n\nWhen the server supports a higher version, the client never even\nnotices this because the server will silently accept and only enable\nthe features of the lower version. So this could never cause breakage,\nas from the client's perspective the server didn't bump their protocol\nversion.\n\nSo, I don't understand why you seem to view bumping the protocol\nversion with so much negativity. We're also bumping PG versions every\nyear. Afaik people only like that, partially because it's immediately\nclear that certain features (e.g. MERGE) are not supported when\nconnecting to older servers. To me the same is true for bumping the\nprotocol version. There are no downsides to bumping it, only upsides.\n\n\n",
"msg_date": "Thu, 18 Apr 2024 21:34:07 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
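A minimal sketch of the client-side behaviour described in the message above, assuming a 3.1 bump that adds ParameterSet as proposed in this thread; the struct and field names are invented for illustration and are not libpq APIs:

    #include <stdbool.h>

    /* Sketch: state a client might keep about the negotiated protocol. */
    typedef struct ProtocolState
    {
        int  minor_version;          /* minor version actually negotiated */
        bool can_use_parameter_set;  /* feature assumed to arrive in 3.1 */
    } ProtocolState;

    /* On receiving NegotiateProtocolVersion, accept the server's (lower)
     * minor version and disable newer optional features instead of
     * bailing out. */
    static void
    handle_negotiate_protocol_version(ProtocolState *state, int server_minor)
    {
        state->minor_version = server_minor;
        if (server_minor < 1)
            state->can_use_parameter_set = false;
    }

When the server supports a higher minor version than the client requested, no such message is sent and nothing changes on the client side, which matches the "silently accept and only enable the features of the lower version" behaviour described above.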
{
"msg_contents": "On Thu, Apr 18, 2024 at 3:34 PM Jelte Fennema-Nio <[email protected]> wrote:\n> I really don't understand what exactly you're worried about. What do\n> you expect will break when bumping the protocol version? As Dave said,\n> clients should never bail out due to protocol version differences.\n\nSure, and I should never forget to take out the trash or mow the lawn.\n\n> So, I don't understand why you seem to view bumping the protocol\n> version with so much negativity. We're also bumping PG versions every\n> year. Afaik people only like that, partially because it's immediately\n> clear that certain features (e.g. MERGE) are not supported when\n> connecting to older servers. To me the same is true for bumping the\n> protocol version. There are no downsides to bumping it, only upsides.\n\nI see it the exact opposite way around.\n\nIf we go with Peter's approach, every driver that supports his feature\nwill work perfectly, and every driver that doesn't will work exactly\nas it does today. The risk of breaking anything is as near to zero as\nhuman developers can reasonably hope to achieve. Nobody who doesn't\ncare about the feature will have to lift a single finger, today,\ntomorrow, or ever. That's absolutely brilliant.\n\nIf we instead go with your approach, then anyone who wants to support\n3.2 when it materializes will have to also support 3.1, which means\nthey have to support this feature. That's not a terrible burden, but\nit's not a necessary one either. Also, even just 3.1 is going to break\nsomething for somebody. There's just no way that we've left the\nprotocol version unchanged for this long and the first change we make\ndoesn't cause some collateral damage.\n\nSure, those are minor downsides in the grand scheme of things. But\nAFAICS the only downside of Peter's approach that you've alleged is\nthat doesn't involve bumping the version number. Of course, if bumping\nthe version number is an intrinsic good, then no further justification\nis required, but I don't buy that. I do not believe that users or\nmaintainers will throw us a pizza party when they find out that we've\nchanged the version number. There's no reason for anyone to be happy\nabout that for its own sake.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Apr 2024 16:17:07 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, 18 Apr 2024 at 22:17, Robert Haas <[email protected]> wrote:\n> If we go with Peter's approach, every driver that supports his feature\n> will work perfectly, and every driver that doesn't will work exactly\n> as it does today. The risk of breaking anything is as near to zero as\n> human developers can reasonably hope to achieve. Nobody who doesn't\n> care about the feature will have to lift a single finger, today,\n> tomorrow, or ever. That's absolutely brilliant.\n>\n> If we instead go with your approach, then anyone who wants to support\n> 3.2 when it materializes will have to also support 3.1, which means\n> they have to support this feature.\n\nTo clarify: My proposed approach is to use a protocol extension\nparameter for to enable the new messages that the server can send\n(like Peter is doing now). And **in addition to that** gate the new\nBind format type behind a feature switch. There is literally nothing\nclients will have to do to \"support\" that feature (except for\nrequesting a higher version protocol). Your suggestion of not bumping\nthe version but still allowing the new format type on version 3.0\ndoesn't have any advantage afaict, except secretly hiding from any\npooler in the middle that such a format type might be sent.\n\n> Also, even just 3.1 is going to break\n> something for somebody. There's just no way that we've left the\n> protocol version unchanged for this long and the first change we make\n> doesn't cause some collateral damage.\n\nSure, but the exact same argument holds for protocol extension\nparameters. We've never set them, so they are bound to break something\nthe first time. My whole point is that once we bite that bullet, the\nnext protocol parameters and protocol version bumps won't cause such\nbreakage.\n\n> Sure, those are minor downsides in the grand scheme of things. But\n> AFAICS the only downside of Peter's approach that you've alleged is\n> that doesn't involve bumping the version number. Of course, if bumping\n> the version number is an intrinsic good, then no further justification\n> is required, but I don't buy that. I do not believe that users or\n> maintainers will throw us a pizza party when they find out that we've\n> changed the version number. There's no reason for anyone to be happy\n> about that for its own sake.\n\nAs a connection pooler maintainer I would definitely love it if every\nprotocol change required either a protocol version parameter or a\nprotocol version bump. That way I can easily check every release if\nthe protocol changed by looking at two things, instead of diffing the\nprotocol docs for some tiny \"supposedly irrelevant\" change was made.\n\n\n",
"msg_date": "Thu, 18 Apr 2024 23:36:19 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, 18 Apr 2024 at 23:36, Jelte Fennema-Nio <[email protected]> wrote:\n> To clarify: My proposed approach is to use a protocol extension\n> parameter for to enable the new messages that the server can send\n> (like Peter is doing now). And **in addition to that** gate the new\n> Bind format type behind a feature switch.\n\nugh, correction: gate the new Bind format type behind a **protocol bump**\n\n\n",
"msg_date": "Thu, 18 Apr 2024 23:49:27 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 5:36 PM Jelte Fennema-Nio <[email protected]> wrote:\n> To clarify: My proposed approach is to use a protocol extension\n> parameter for to enable the new messages that the server can send\n> (like Peter is doing now). And **in addition to that** gate the new\n> Bind format type behind a feature switch. There is literally nothing\n> clients will have to do to \"support\" that feature (except for\n> requesting a higher version protocol). Your suggestion of not bumping\n> the version but still allowing the new format type on version 3.0\n> doesn't have any advantage afaict, except secretly hiding from any\n> pooler in the middle that such a format type might be sent.\n\nThat's a fair point, but I'm still not seeing much practical\nadvantage. It's unlikely that a client is going to set a random bit in\na format parameter for no reason.\n\n> As a connection pooler maintainer I would definitely love it if every\n> protocol change required either a protocol version parameter or a\n> protocol version bump. That way I can easily check every release if\n> the protocol changed by looking at two things, instead of diffing the\n> protocol docs for some tiny \"supposedly irrelevant\" change was made.\n\nPerhaps this is the root of our disagreement, or at least part of it.\nI completely agree that it is important for human beings to be able to\nunderstand whether, and how, the wire protocol has changed from one\nrelease to another. I think it would be useful to document that, and\nmaybe some agreement to start actually bumping the version number\nwould come out of that, either immediately or eventually. But I don't\nthink bumping the protocol version first is going to help anything. If\nyou know that something has changed at least one time in the release,\nyou still have to figure out what it was, and whether there were any\nmore of them that, presumably, would not bump the protocol version\nbecause there would be no good reason to do that more than once per\nmajor release. Not only that, but it's entirely possible that someone\ncould fail to realize that they were supposed to bump the protocol\nversion, or have some reason not to do it in a particular instance, so\neven if there are no bumps at all in a particular release cycle, that\ndoesn't prove that there are no changes that you would have liked to\nknow about.\n\nSaid differently, I think bumping the protocol version should be,\nfirst and foremost, a way of telling the computer on the end of the\nconnection something that it needs to know. There is a separate\nproblem of making sure that human maintainers know what they need to\nknow, and I think we're doing that quite poorly right now, but I think\nyou might be conflating those two problems a bit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Apr 2024 10:26:03 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, 22 Apr 2024 at 16:26, Robert Haas <[email protected]> wrote:\n> That's a fair point, but I'm still not seeing much practical\n> advantage. It's unlikely that a client is going to set a random bit in\n> a format parameter for no reason.\n\nI think you're missing an important point of mine here. The client\nwouldn't be \"setting a random bit in a format parameter for no\nreason\". The client would decide it is allowed to set this bit,\nbecause the PG version it connected to supports column encryption\n(e.g. PG18). But this completely breaks protocol and application layer\nseparation.\n\nIt doesn't seem completely outside of the realm of possibility for a\npooler to gather some statistics on the amount of Bind messages that\nuse text vs binary query parameters. That's very easily doable now,\nwhile looking only at the protocol layer. If a client then sets the\nnew format parameter bit, this pooler could then get confused and\nclose the connection.\n\n> Perhaps this is the root of our disagreement, or at least part of it.\n> I completely agree that it is important for human beings to be able to\n> understand whether, and how, the wire protocol has changed from one\n> release to another.\n\nI think this is partially the reason for our disagreement, but I think\nthere are at least two other large reasons:\n\n1. I strongly believe minor protocol version bumps after the initial\n3.1 one can be made painless for clients/poolers (so the ones to\n3.2/3.3/etc). Similar to how TLS 1.3 can be safely introduced, and not\nhaving to worry about breaking TLS 1.2 communication. Once clients and\npoolers implement version negotiation support for 3.1, there's no\nreason for version negation support to work for 3.0 and 3.1 to then\nsuddenly break on the 3.2 bump. To be clear, I'm talking about the act\nof bumping the version here, not the actual protocol changes. So\nassuming zero/near-zero client implementation effort for the new\nfeatures (like never setting the newly supported bit in a format\nparameter), then bumping the protocol version for these new features\ncan never have negative consequences.\n\n2. I very much want to keep a clear split between the protocol layer\nand the application layer of our communication. And these layers merge\nwhenever (like you say) \"the wire protocol has changed from one\nrelease to another\", but no protocol version bump or protocol\nextension is used to indicate that. When that happens the only way for\na client to know what valid wire protocol messages are according to\nthe server, is by checking the server version. This completely breaks\nthe separation between layers. So, while checking the server version\nindeed works for direct client to postgres communication, it starts to\nbreak down whenever you put a pooler inbetween (as explained in the\nexample earlier in this email). And it breaks down even more when\nconnecting to servers that implement the Postgres wire protocol, but\nare not postgres at all, like CockroachDB. Right now libpq and other\npostgres drivers can be used to talk to these other servers and\npoolers, but if we start mixing protocol and application layer stuff\nthen eventually that will stop being the case.\n\nAfaict from your responses, you disagree with 1. However, it's not at\nall clear to me what exact problems you're worried about. It sounds\nlike you don't know either, and it's more that you're worried about\nthings breaking for not yet known reasons. 
I hoped to take away/reduce\nthose worries using some arguments in a previous email (quoted below),\nbut you didn't respond to those arguments, so I'm not sure if they\nwere able to change your mind.\n\nOn Thu, 18 Apr 2024 at 21:34, Jelte Fennema-Nio <[email protected]> wrote:\n> When the server supports a lower version than the client, the client\n> should disable certain features because it gets the\n> ProtocolVersionNegotiation message. This is also true if we don't bump\n> the version. Negotiating a lower version actually makes it clearer for\n> the client what features to disable. Using the reported postgres\n> version for this might not, because a connection pooler in the middle\n> might not support the features that the client wants and thus throw an\n> error (e.g. due to the client sending unknown messages) even if the\n> backing Postgres server would support these features. Not to mention\n> non-postgresql servers that implement the PostgreSQL protocol (of\n> which there are more and more).\n>\n> When the server supports a higher version, the client never even\n> notices this because the server will silently accept and only enable\n> the features of the lower version. So this could never cause breakage,\n> as from the client's perspective the server didn't bump their protocol\n> version.\n\n\n",
"msg_date": "Mon, 22 Apr 2024 23:19:44 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, Apr 22, 2024 at 5:19 PM Jelte Fennema-Nio <[email protected]> wrote:\n> On Mon, 22 Apr 2024 at 16:26, Robert Haas <[email protected]> wrote:\n> > That's a fair point, but I'm still not seeing much practical\n> > advantage. It's unlikely that a client is going to set a random bit in\n> > a format parameter for no reason.\n>\n> I think you're missing an important point of mine here. The client\n> wouldn't be \"setting a random bit in a format parameter for no\n> reason\". The client would decide it is allowed to set this bit,\n> because the PG version it connected to supports column encryption\n> (e.g. PG18). But this completely breaks protocol and application layer\n> separation.\n\nI can't see what the problem is here. If the client is connected to a\ndatabase that contains encrypted columns, and its response to seeing\nan encrypted column is to set this bit, that's fine and nothing should\nbreak. If a client doesn't know about encrypted columns and sets that\nbit at random, that will break things, and formally I think that's a\nrisk, because I don't believe we document anywhere that you shouldn't\nset unused bits in the format mask. But practically, it's not likely.\n(And also, maybe we should document that you shouldn't do that.)\n\n> It doesn't seem completely outside of the realm of possibility for a\n> pooler to gather some statistics on the amount of Bind messages that\n> use text vs binary query parameters. That's very easily doable now,\n> while looking only at the protocol layer. If a client then sets the\n> new format parameter bit, this pooler could then get confused and\n> close the connection.\n\nRight, this is the kind of risk I was worried about. I think it's\nsimilar to my example of a client setting an unused bit for no reason\nand breaking everything. Here, you've hypothesized a pooler that tries\nto interpret the bit and just errors out when it sees something it\ndoesn't understand. I agree that *formally* this is enough to justify\nbumping the protocol version, but I think *practically* it isn't,\nbecause the incompatibility is so minor as to inconvenience almost\nnobody, whereas changing the protocol version affects everybody.\n\nLet's consider a hypothetical country much like Canada except that\nthere are three official languages rather than two: English, French,\nand Robertish. Robertish is just like English except that the meanings\nof the words cabbage and rutabaga are reversed. Shall we mandate that\nall signs in the country be printed in three languages rather than\ntwo? Formally, we ought, because the substantial minority of our\nhypothetical country that proudly speaks Robertish as their mother\ntongue will not want to feel that they are second class citizens. But\npractically, there are very few situations where the differences\nbetween the two languages are going to inconvenience anyone. Indeed,\nthe French speakers might be a bit put out if English is effectively\nrepresented twice on every sign while their mother tongue is there\nonly once. Of course, people are entitled to organize their countries\npolitically in any way that works for the people who live in them, but\nas a practical matter, English and Robertish are mutually\nintelligible.\n\nAnd so here. If someone codes a connection pooler in the way you\nsuppose, then it will break. 
But, first of all, they probably won't do\nthat, both because it's not particularly likely that someone wants to\ngather that particular set of statistics and also because erroring out\nseems like an overreaction. And secondly, let's imagine that we do\nbump the protocol version and think about whether and how that solves\nthe problem. A client will request from the pooler a version 3.1\nconnection and the pooler will say, sorry, no can do, I only\nunderstand 3.0. So the client will now say, oh ok, no problem, I'm\ngoing to refrain from setting that parameter format bit. Cool, right?\n\nWell, no, not really. First, now the client application is probably\nbroken. If the client is varying its behavior based on the server's\nprotocol version, that must mean that it cares about accessing\nencrypted columns, and that means that the bit in question is not an\noptional feature. So actually, the fact that the pooler can force the\nclient to downgrade hasn't fixed anything at all.\n\nSecond, if the connection pooler were written to do something other\nthan close the connection, like say mask out the one bit that it knows\nhow to deal with or have an \"unknown\" bucket to count values that it\ndoesn't recognize, then it wouldn't have needed to care about the\nprotocol version in the first place. It would have been better off not\neven knowing, because then it wouldn't have forced a downgrade onto\nthe client application for no real reason. Throwing an error wasn't a\nwrong decision on the part of the person writing the pooler, but there\nare other things they could have done that would have been less\nbrittle.\n\nThird, applications, drivers, and connection poolers now all need to\nworry about handling downgrades smoothly. If a connection pooler\nrequests a v3.1 connection to the server and gets v3.0, it had better\nmake sure that it only advertises 3.0 to the client. If the client\nrequests v3.0, the pooler had better make sure to either request v3.0\nfrom the server. Or alternatively, the pooler can be prepared to\ntranslate between 3.0 and 3.1 wherever that's needed, in either\ndirection. But it's not at all clear what that would look like for\nsomething like TCE. Will the pooler arrange to encrypt parameters\ndestined for encrypted tables if the client doesn't do so? Will it\narrange to decrypt values coming from encrypted tables if the client\ndoesn't understand encryption? It's possible someone will code that\nsort of thing, but I bet a lot of people won't bother. In general, I\nthink we'll quickly end up with a bunch of different protocol versions\n-- say, 3.0 through 3.4 -- but people will thoroughly test with only\none or two of them and support for the others will either be buggy\nbecause it wasn't tested or work anyway because the differences didn't\nreally matter in the first place.\n\n> 1. I strongly believe minor protocol version bumps after the initial\n> 3.1 one can be made painless for clients/poolers (so the ones to\n> 3.2/3.3/etc). Similar to how TLS 1.3 can be safely introduced, and not\n> having to worry about breaking TLS 1.2 communication. Once clients and\n> poolers implement version negotiation support for 3.1, there's no\n> reason for version negation support to work for 3.0 and 3.1 to then\n> suddenly break on the 3.2 bump. To be clear, I'm talking about the act\n> of bumping the version here, not the actual protocol changes. 
So\n> assuming zero/near-zero client implementation effort for the new\n> features (like never setting the newly supported bit in a format\n> parameter), then bumping the protocol version for these new features\n> can never have negative consequences.\n\nI do like the idea of being able to introduce new versions without\nbreaking things, but I think that if the TLS folks bumped the protocol\nversion for something as minor as what we're talking about here, there\nwould quickly be so many TLS versions that the result would be\nunmanageable. I suspect that they either never make small changes and\nbatch everything up for the next rev, or they slip small changes into\nexisting protocol versions as I propose that we do here. I have zero\nobjection to bumping the protocol version when there is a real\nquestion of mutual intelligibility, and zero objection to trying to\nreduce friction around version bumps. But my current view, which I\nreserve the right to revise at a later time, is that a change that\n99.99+% of people can safely ignore is not a sufficient reason for a\nversion bump.\n\n> 2. I very much want to keep a clear split between the protocol layer\n> and the application layer of our communication. And these layers merge\n> whenever (like you say) \"the wire protocol has changed from one\n> release to another\", but no protocol version bump or protocol\n> extension is used to indicate that. When that happens the only way for\n> a client to know what valid wire protocol messages are according to\n> the server, is by checking the server version. This completely breaks\n> the separation between layers. So, while checking the server version\n> indeed works for direct client to postgres communication, it starts to\n> break down whenever you put a pooler inbetween (as explained in the\n> example earlier in this email). And it breaks down even more when\n> connecting to servers that implement the Postgres wire protocol, but\n> are not postgres at all, like CockroachDB. Right now libpq and other\n> postgres drivers can be used to talk to these other servers and\n> poolers, but if we start mixing protocol and application layer stuff\n> then eventually that will stop being the case.\n\nIn practice, it's already the case. If such databases don't share code\nwith PostgreSQL, it seems impossible that the replication subprotocol\nworks in any meaningful way. It seems very likely that there are other\ndark corners of the protocol where things don't work either. And TCE\nwill be another one, but bumping the protocol version doesn't fix\nthat.\n\nI kind of feel bad arguing so much about this - I don't think the urge\nto bump the protocol version when we change the protocol is a bad one\nin concept. And it sounds like you've done more work with software\nthat cares about the protocol outside of PostgreSQL itself than I\nhave. So maybe you're right and I'm all wet. But I can't understand\nwhy you don't see practical problems with frequent version bumps. It's\nnot just about the one-time effort of getting everything that doesn't\ncurrently understand how to negotiate a version to do so. It's about\nhow everyone acts on that information, or doesn't, and whether the end\nresult of all of those individual decisions is better or worse for the\ncommunity as a whole.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Apr 2024 11:03:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
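As an aside for readers following the negotiation mechanics above, here is a minimal byte-level sketch (Python, assuming the documented v3 StartupMessage and NegotiateProtocolVersion layouts) of a client asking for a hypothetical 3.1 and falling back when the server or pooler answers with the newest version it will speak. The 3.1 number is purely illustrative, and the parsing assumes the version field carries the full major/minor word.

```python
import struct

def build_startup_message(user, database, minor=1, extra=None):
    """StartupMessage: int32 length, int32 version (major<<16 | minor),
    then NUL-terminated key/value pairs and a final NUL terminator."""
    version = (3 << 16) | minor                    # 3.1 here, illustrative only
    params = {"user": user, "database": database, **(extra or {})}
    body = struct.pack("!i", version)
    for key, value in params.items():
        body += key.encode() + b"\x00" + value.encode() + b"\x00"
    body += b"\x00"
    return struct.pack("!i", len(body) + 4) + body

def parse_negotiate_protocol_version(payload):
    """Body of a NegotiateProtocolVersion ('v') reply: the newest protocol
    version the server will speak, then the startup options it did not
    recognize (each a NUL-terminated string)."""
    newest, n_unknown = struct.unpack_from("!ii", payload)
    names, off = [], 8
    for _ in range(n_unknown):
        end = payload.index(b"\x00", off)
        names.append(payload[off:end].decode())
        off = end + 1
    return newest >> 16, newest & 0xFFFF, names

if __name__ == "__main__":
    print(build_startup_message("alice", "appdb").hex())
    # A 3.0-only server or pooler would answer roughly like this, after
    # which the client either continues as 3.0 or gives up.
    reply = struct.pack("!ii", (3 << 16) | 0, 0)
    print(parse_negotiate_protocol_version(reply))   # (3, 0, [])
```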
{
"msg_contents": "On Mon, Apr 22, 2024 at 2:20 PM Jelte Fennema-Nio <[email protected]> wrote:\n> 1. I strongly believe minor protocol version bumps after the initial\n> 3.1 one can be made painless for clients/poolers (so the ones to\n> 3.2/3.3/etc). Similar to how TLS 1.3 can be safely introduced, and not\n> having to worry about breaking TLS 1.2 communication.\n\nApologies for focusing on a single portion of your argument, but this\nclaim in particular stuck out to me. To my understanding, IETF worried\na _lot_ about breaking TLS 1.2 implementations with the TLS 1.3\nchange, to the point that TLS 1.3 clients and servers advertise\nthemselves as TLS 1.2 and sneak the actual version used into a TLS\nextension (roughly analogous to the _pq_ stuff). I vaguely recall that\nthe engineering work done for that update was pretty far from\npainless.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 23 Apr 2024 10:39:15 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
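A small aside on the TLS mechanism referenced above: a TLS 1.3 ClientHello keeps its legacy_version field pinned at 0x0303 (TLS 1.2) and carries the versions it really offers in the supported_versions extension (type 43 in RFC 8446), which is roughly the same move as tucking new capabilities into _pq_.* startup options instead of the headline protocol version. A sketch of just those bytes:

```python
import struct

TLS12, TLS13 = 0x0303, 0x0304
SUPPORTED_VERSIONS_EXT = 43        # RFC 8446, section 4.2.1

def supported_versions_extension(versions):
    """Encode the ClientHello supported_versions extension: extension type,
    extension length, then a one-byte vector length and two bytes per version."""
    body = bytes([2 * len(versions)]) + b"".join(struct.pack("!H", v) for v in versions)
    return struct.pack("!HH", SUPPORTED_VERSIONS_EXT, len(body)) + body

if __name__ == "__main__":
    # The ClientHello's legacy_version field still says TLS 1.2 ...
    legacy_version = struct.pack("!H", TLS12)
    # ... while the real offer (1.3, then 1.2) rides in an extension that
    # older implementations are expected to ignore if they don't know it.
    ext = supported_versions_extension([TLS13, TLS12])
    print(legacy_version.hex(), ext.hex())
```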
{
"msg_contents": "On Tue, 23 Apr 2024 at 17:03, Robert Haas <[email protected]> wrote:\n> If a client doesn't know about encrypted columns and sets that\n> bit at random, that will break things, and formally I think that's a\n> risk, because I don't believe we document anywhere that you shouldn't\n> set unused bits in the format mask. But practically, it's not likely.\n> (And also, maybe we should document that you shouldn't do that.)\n\nCurrently Postgres errors out when you set anything other than 0 or 1\nin the format field. And it's already documented that these are the\nonly allowed values: \"Each must presently be zero (text) or one\n(binary).\"\n\n> Let's consider a hypothetical country much like Canada except that\n> there are three official languages rather than two: English, French,\n> and Robertish.\n\nI don't really understand the point you are trying to make with this\nanalogy. Sure an almost equivalent unused language maybe shouldn't be\nput on the signs, but having two different names (aka protocol\nversions) for English and Robertish seems quite useful to avoid\nconfusion.\n\n> And so here. If someone codes a connection pooler in the way you\n> suppose, then it will break. But, first of all, they probably won't do\n> that, both because it's not particularly likely that someone wants to\n> gather that particular set of statistics and also because erroring out\n> seems like an overreaction.\n\nIs it really that much of an overreaction? Postgres happily errors on\nany weirdness in the protocol including unknown format codes. Why\nwouldn't a pooler/proxy in the middle be allowed to do the same? The\npooler has no clue what the new format means, maybe it requires some\nsession state to be passed on to the server to be interpreted\ncorrectly. The pooler would be responsible for syncing that session\nstate but might not have synced it because it doesn't know that the\nformat code requires that. An example of this would be a compressed\nvalue of which the compression algorithm is configured using a GUC.\nThus the pooler being strict in what it accepts would be a good way of\nnotifying the client that the pooler needs to be updated (or the\nfeature not used).\n\nBut I do agree that it seems quite unlikely that a pooler would\nimplement it like that though. However, a pooler logging a warning\nseems not that crazy. And in that case the connection might not be\nclosed, but the logs would be flooded.\n\n> And secondly, let's imagine that we do\n> bump the protocol version and think about whether and how that solves\n> the problem. A client will request from the pooler a version 3.1\n> connection and the pooler will say, sorry, no can do, I only\n> understand 3.0. So the client will now say, oh ok, no problem, I'm\n> going to refrain from setting that parameter format bit. Cool, right?\n>\n> Well, no, not really. First, now the client application is probably\n> broken. If the client is varying its behavior based on the server's\n> protocol version, that must mean that it cares about accessing\n> encrypted columns, and that means that the bit in question is not an\n> optional feature. 
So actually, the fact that the pooler can force the\n> client to downgrade hasn't fixed anything at all.\n\nIt seems quite a lot nicer if the client can safely fallback to not\nwriting data using the new format code, instead of getting an error\nfrom the pooler/flooding the pooler logs/having the pooler close the\nconnection.\n\n> Third, applications, drivers, and connection poolers now all need to\n> worry about handling downgrades smoothly. If a connection pooler\n> requests a v3.1 connection to the server and gets v3.0, it had better\n> make sure that it only advertises 3.0 to the client.\n\nThis seems quite straightforward to solve from a pooler perspective:\n1. Connect to the server first to find out what the maximum version is\nthat it support\n2. If a client asks for a higher version, advertise the server version\n\n> If the client\n> requests v3.0, the pooler had better make sure to either request v3.0\n> from the server. Or alternatively, the pooler can be prepared to\n> translate between 3.0 and 3.1 wherever that's needed, in either\n> direction. But it's not at all clear what that would look like for\n> something like TCE. Will the pooler arrange to encrypt parameters\n> destined for encrypted tables if the client doesn't do so? Will it\n> arrange to decrypt values coming from encrypted tables if the client\n> doesn't understand encryption?\n\nThis is a harder problem, and indeed one I hadn't considered much.\n From a pooler perspective I really would like to be able to have just\none pool of connections to the server all initially connected with the\n3.1 protocol and use those 3.1 server connections for clients that use\nthe 3.1 protocol but also for clients that use the 3.0 protocol.\nCreating separate pools of server connections for different protocol\nversions is super annoying to manage for operators (e.g. how should\nthese pools be sized). One of the reasons I'm proposing the\nParameterSet message is to avoid this problem for protocol parameters\ne.g. PgBouncer could then set TCE=off on server connection with\nTCE=on, just before it hands it to a client with TCE=off.\n\nI can think of a few ways of solving this issue for protocol versions:\n1. Introduce a ProtocolVersionSet message, which could be send on\nhandoff to downgrade/upgrade the protocol version at the postgres side\n2. Feature-flag all protocol changes behind protocol parameters, so\nParameterSet can be used to enable/disable everything\n3. Feature-flag most protocol changes using protocol parameters, but\nallow protocol changes to be made using a version bump when the server\nresponds the exact same way to all messages a client can send using\nthe previous protocol version. So don't allow a protocol version bump\nto add extra fields to existing messages, nor introduce new message\ntypes that are not sent as a 1-to-1 response to a new message type\nsent by the client, nor send existing message types more often than\nwas expected in the previous protocol version.\n\nI would prefer the 3rd option. 1 seems strange when considering\nprotocol changes that only impact the handshake, such as lengthening\nthe cancel key[1], also it requires adding yet another message type.\nAnd 2 seems an overly strict version of 3.\n\n[1]: https://www.postgresql.org/message-id/flat/508d0505-8b7a-4864-a681-e7e5edfe32aa%40iki.fi\n\n> It's possible someone will code that\n> sort of thing, but I bet a lot of people won't bother. 
In general, I\n> think we'll quickly end up with a bunch of different protocol versions\n> -- say, 3.0 through 3.4 -- but people will thoroughly test with only\n> one or two of them and support for the others will either be buggy\n> because it wasn't tested or work anyway because the differences didn't\n> really matter in the first place.\n\nI think this is overly pessimistic. I'm pretty sure clients and\npoolers will want to support multiple protocol versions, to be able to\ntalk to old clients/servers. I do think we should make supporting\nmultiple versions as easy as possible though.\n\n> In practice, it's already the case. If such databases don't share code\n> with PostgreSQL, it seems impossible that the replication subprotocol\n> works in any meaningful way.\n\nI think the fact that the replication subprotocol is gated behind the\n\"replication=true\" StartupMessage parameter makes it very easy to\ncheck for support.\n\n> It seems very likely that there are other\n> dark corners of the protocol where things don't work either. And TCE\n> will be another one, but bumping the protocol version doesn't fix\n> that.\n\nTo be clear, gating the new TCE format code behind a protocol version\nbump is **not only useful detect non-support for TCE** in such other\nservers. But it can also be used to detect that this other server\nactually **does support TCE**. If the postgresql server version is\nused to indicate such support, then these non-Postgres servers now\nneed to pretend that they actually are Postgres servers by sending the\nsame version number in the server_version GUC ParameterStatus message.\n\n> But I can't understand\n> why you don't see practical problems with frequent version bumps. It's\n> not just about the one-time effort of getting everything that doesn't\n> currently understand how to negotiate a version to do so. It's about\n> how everyone acts on that information, or doesn't, and whether the end\n> result of all of those individual decisions is better or worse for the\n> community as a whole.\n\nI do see practical problems. But I see the exact same practical\nproblems when encoding new protocol feature support in the postgres\nserver version number instead of the protocol version number. But\nencoding protocol feature support in the server version introduces\nother issues, such as not being able to detect that some non-Postgres\nserver supports TCE.\n\n\n",
"msg_date": "Mon, 6 May 2024 18:19:50 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
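To make the "what should a pooler do with an unknown format code" question concrete, here is a rough sketch of the check a proxy could apply to the parameter format codes in a Bind message body. The strict/warn policy knob is exactly the judgment call being debated above, and the value 2 in the demo is a made-up future code; the server itself currently accepts only 0 and 1.

```python
import struct

def read_cstr(buf, off):
    end = buf.index(b"\x00", off)
    return buf[off:end].decode(), end + 1

def check_bind_format_codes(body, policy="warn"):
    """Inspect the parameter format codes in a Bind ('B') message body.
    0 = text, 1 = binary; anything else is unknown to this pooler."""
    off = 0
    _portal, off = read_cstr(body, off)
    _stmt, off = read_cstr(body, off)
    (n_codes,) = struct.unpack_from("!h", body, off)
    off += 2
    codes = struct.unpack_from("!%dh" % n_codes, body, off)
    unknown = [c for c in codes if c not in (0, 1)]
    if unknown:
        if policy == "error":
            raise ValueError(f"unrecognized parameter format codes: {unknown}")
        print(f"warning: passing through unknown format codes {unknown}")
    return codes

if __name__ == "__main__":
    # A truncated Bind body: empty portal, statement "s1", two format codes,
    # text (0) and a hypothetical future code 2; the rest of the message
    # (parameter values, result format codes) is not needed for this check.
    body = b"\x00" + b"s1\x00" + struct.pack("!hhh", 2, 0, 2)
    print(check_bind_format_codes(body))          # warns, returns (0, 2)
```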
{
"msg_contents": "On Tue, 23 Apr 2024 at 19:39, Jacob Champion\n<[email protected]> wrote:\n>\n> On Mon, Apr 22, 2024 at 2:20 PM Jelte Fennema-Nio <[email protected]> wrote:\n> > 1. I strongly believe minor protocol version bumps after the initial\n> > 3.1 one can be made painless for clients/poolers (so the ones to\n> > 3.2/3.3/etc). Similar to how TLS 1.3 can be safely introduced, and not\n> > having to worry about breaking TLS 1.2 communication.\n>\n> Apologies for focusing on a single portion of your argument, but this\n> claim in particular stuck out to me. To my understanding, IETF worried\n> a _lot_ about breaking TLS 1.2 implementations with the TLS 1.3\n> change, to the point that TLS 1.3 clients and servers advertise\n> themselves as TLS 1.2 and sneak the actual version used into a TLS\n> extension (roughly analogous to the _pq_ stuff). I vaguely recall that\n> the engineering work done for that update was pretty far from\n> painless.\n\nMy bad... I guess TLS 1.3 was a bad example, due to it changing the\nhandshake itself so significantly.\n\n\n",
"msg_date": "Mon, 6 May 2024 18:22:25 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 5 Apr 2024 at 18:30, Dave Cramer <[email protected]> wrote:\n> On Fri, 5 Apr 2024 at 12:09, Jelte Fennema-Nio <[email protected]> wrote:\n>> I'll take a look at redesigning the protocol parameter stuff. To work\n>> with dedicated functions instead.\n>\n> +1\n\nIt's been a while, but I now actually took the time to look into this.\nAnd I ran into a problem that I'd like to get some feedback on before\ncontinuing the implementation:\n\nIf we're not setting the protocol parameter in the StartupMessage,\nthere's currently no way for us to know if the protocol parameter is\nsupported by the server. If protocol parameters were unchangable then\nthat would be fine, but the whole point of introducing ParameterSet is\nto make it possible to change protocol parameters on an existing\nconnection. Having the function SupportsProtocolCompression return\nfalse, even though you can enable compression just fine, only because\nwe didn't ask for compression when connecting seems quite silly and\nconfusing.\n\nI see five ways around this problem and would love some feedback on\nwhich you think is best (or if you can think of any other/better\nones):\n1. Have protocol parameters always be GUC_REPORT, so that the presence\nof a ParameterStatus message during connection startup can be used as\na way of detecting support for the protocol parameter.\n2. Make libpq always send each known protocol parameter in the\nStartupMessage to check for their support, even if the connection\nstring does not contain the related parameters (set them to their\ndefault value then). Then the non-presence of the parameter in the\nNegotiateProtocolVersion message can be used reliably to determine\nsupport for the feature. We could even disallow changing a protocol\nparameter at the server side using ParameterSet if it was not\nrequested in the StartupMessage.\n3. Very similar to 1, but require explicit user input in the\nconnection string to request the feature on connection startup by\nhaving the user explicitly provide its default value. If it's not\nrequested on connection startup assume its unsupported and disallow\nusage of the feature (even if the server might actually support it).\n4. Make SupportsProtocolCompression return a tri-state, SUPPORTED,\nUNSUPPORTED, UNKNOWN. If it's UNKNOWN people could send a ParameterSet\nmessage themselves to check for feature support after connection\nstartup. We could even recognize this and change the state that\nSupportProtocolCompression function to return SUPPORTED/UNSUPPORTED on\nfuture calls according to the server response.\n5. Basically the same as 4 but automatically send a ParameterSet\nmessage internally when calling SupportsProtocolCompression and the\nstate is UNKNOWN, so we'd only ever return SUPPORTED or UNSUPPORTED.\n\nThe above options are listed in my order of preference, below some\nreasoning why:\n\n1 and 2 would increase the bandwidth used during connection handshake\nslightly for each protocol parameter that we would add, but they have\nthe best user experience IMHO.\n\nI slightly prefer 1 over 2 because there is another argument to be\nmade for always having protocol parameters be GUC_REPORT: these\nparameters change what message types a client can send or receive. So\nit makes sense to me to have the server tell the client what the\ncurrent value of such a parameter is. This might not be a strong\nargument though, because this value would only ever change due to user\ninteraction. 
But still, one might imagine scenarios where the value\nthat the client sent is not exactly what the server would set the\nparameter to on receiving that value from the client. e.g. for\nprotocol compression, maybe the client sends a list of prefered\ncompression methods and the server would send a ParameterStatus\ncontaining only the specific compression method that it will use when\nsending messages to the client.\n\n3 seems also an acceptable option to me. While having slightly worse\nuser experience than 2, it allows the user of the client to make the\ndecision if the extra bandwidth during connection startup is worth it\nto be able to enable the feature later.\n\n4 assumes that we want people to be able to trigger sending\nParameterSet messages for every protocol parameter. I'm not sure we'd\nwant to give that ability in all cases.\n\n5 would require SupportsProtocolCompression to also have a\nnon-blocking version, which bloats our API more than I'd like. Also as\na user you wouldn't be able to know if SupportsProtocolCompression\nwill do a network request or not.\n\nPS. This is only a problem for feature detection for features relying\non protocol parameters, feature-support relying on protocol version\nbumps are easy to detect based on the NegotiateProtocolVersion\nmessage.\n\n\n",
"msg_date": "Thu, 16 May 2024 11:21:56 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
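A sketch of how options 2 and 4 above might look from the client side: probe each known _pq_.* option in the StartupMessage with a default value, then classify it from the server's NegotiateProtocolVersion reply (absent from the unrecognized list means accepted). The option names are placeholders taken from this thread, not settings any released server implements.

```python
from enum import Enum

class Support(Enum):
    SUPPORTED = "supported"
    UNSUPPORTED = "unsupported"
    UNKNOWN = "unknown"        # not probed at startup (options 4/5 territory)

def startup_parameters(user, database, probes):
    """Startup key/value pairs, probing each _pq_.* option with a default."""
    params = {"user": user, "database": database}
    params.update(probes)
    return params

def classify(probed, unrecognized):
    """NegotiateProtocolVersion lists the startup options the server did not
    recognize; anything we probed that is absent from that list was accepted.
    (If no NegotiateProtocolVersion arrives at all, the server predates the
    mechanism and everything probed should be treated as unsupported.)"""
    return {name: Support.UNSUPPORTED if name in unrecognized else Support.SUPPORTED
            for name in probed}

if __name__ == "__main__":
    probes = {"_pq_.connection_compression": "", "_pq_.report_parameters": ""}
    print(startup_parameters("alice", "appdb", probes))
    # Pretend the server recognized compression but not the other probe.
    print(classify(probes, {"_pq_.report_parameters"}))
```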
{
"msg_contents": "On Thu, May 16, 2024 at 5:22 AM Jelte Fennema-Nio <[email protected]> wrote:\n> If we're not setting the protocol parameter in the StartupMessage,\n> there's currently no way for us to know if the protocol parameter is\n> supported by the server. If protocol parameters were unchangable then\n> that would be fine, but the whole point of introducing ParameterSet is\n> to make it possible to change protocol parameters on an existing\n> connection. Having the function SupportsProtocolCompression return\n> false, even though you can enable compression just fine, only because\n> we didn't ask for compression when connecting seems quite silly and\n> confusing.\n\nYou're probably not going to like this answer, but I feel like this is\nanother sign that you're trying to use the protocol extensibility\nfacilities in the wrong way. In my first reply to the thread, I\nproposed having the client send _pq_.protocol_set=1 in that startup\nmessage. If the server accepts that message, then you can send\nwhatever set of message types are associated with that option, which\ncould include messages to list known settings, as well as messages to\nset them. Alternatively, if we used a wire protocol bump for this, you\ncould request version 3.1 and everything that I just said still\napplies. In other words, I feel that if you had adopted the design\nthat I proposed back in March, or some variant of it, the problem\nyou're having now wouldn't exist.\n\nIMHO, we need to negotiate the language that we're going to use to\ncommunicate before we start communicating. We should find out which\nprotocol version we're using, and what protocol options are accepted,\nbased on sending a StartupMessage and receiving a reply. Then, after\nthat, having established a common language, we can do whatever. I\nthink you're trying to meld those two steps into one, which is\nunderstandable from the point of view of saving a round trip, but I\njust don't see it working out well. What we can do in the startup\nmessage is extremely limited, because any startup messages we generate\ncan't break existing servers, and also because of the concerns I\nraised earlier about leaving room for more extension in the future.\nOnce we get past the startup message negotiation, the sky's the limit!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 May 2024 11:28:47 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
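The two-step shape proposed here, negotiate a capability at startup and only then use the new message type, could look roughly like the sketch below. Both the _pq_.protocol_set option and the ParameterSet wire format (a placeholder type byte followed by two NUL-terminated strings) are invented for illustration; the thread has not settled on an actual message layout.

```python
import struct

class ProtocolSession:
    """Tracks whether the server accepted the (hypothetical)
    _pq_.protocol_set capability during startup negotiation."""

    def __init__(self, accepted_options):
        self.can_set_parameters = "_pq_.protocol_set" in accepted_options

    def parameter_set(self, name, value):
        """Encode a hypothetical ParameterSet frontend message: placeholder
        type byte, int32 length, then two NUL-terminated strings.
        Only legal once the capability was negotiated at startup."""
        if not self.can_set_parameters:
            raise RuntimeError("server did not accept _pq_.protocol_set; "
                               "fall back to not using the feature")
        body = name.encode() + b"\x00" + value.encode() + b"\x00"
        return b"?" + struct.pack("!i", len(body) + 4) + body   # '?' is made up

if __name__ == "__main__":
    session = ProtocolSession(accepted_options={"_pq_.protocol_set"})
    print(session.parameter_set("compression", "lz4").hex())
```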
{
"msg_contents": "On Thu, 16 May 2024 at 17:29, Robert Haas <[email protected]> wrote:\n> You're probably not going to like this answer, but I feel like this is\n> another sign that you're trying to use the protocol extensibility\n> facilities in the wrong way. In my first reply to the thread, I\n> proposed having the client send _pq_.protocol_set=1 in that startup\n> message. If the server accepts that message, then you can send\n> whatever set of message types are associated with that option, which\n> could include messages to list known settings, as well as messages to\n> set them. Alternatively, if we used a wire protocol bump for this, you\n> could request version 3.1 and everything that I just said still\n> applies. In other words, I feel that if you had adopted the design\n> that I proposed back in March, or some variant of it, the problem\n> you're having now wouldn't exist.\n\nI don't really understand the benefit of your proposal over option 2\nthat I proposed. Afaict you're proposing that for e.g. compression we\nfirst set _pq_.supports_compression=1 in the StartupMessage and use\nthat to do feature detection, and then after we get the response we\nsend ParameterSet(\"compression\", \"gzip\"). To me this is pretty much\nidentical to option 2, except that it introduces an extra round trip\nfor no benefit (as far as I can see). Why not go for option 2 and send\n_pq_.compression=gzip in the StartupMessage directly.\n\n> IMHO, we need to negotiate the language that we're going to use to\n> communicate before we start communicating. We should find out which\n> protocol version we're using, and what protocol options are accepted,\n> based on sending a StartupMessage and receiving a reply. Then, after\n> that, having established a common language, we can do whatever. I\n> think you're trying to meld those two steps into one, which is\n> understandable from the point of view of saving a round trip, but I\n> just don't see it working out well.\n\nI think not increasing the number of needed round trips in the startup\nof a connection is extremely important. I think it's so important that\nI honestly don't think we should merge a protocol change that\nintroduces an extra round trip without a VERY good reason, and this\nround trip should only be needed when actually using the feature.\n\n> What we can do in the startup\n> message is extremely limited, because any startup messages we generate\n> can't break existing servers, and also because of the concerns I\n> raised earlier about leaving room for more extension in the future.\n> Once we get past the startup message negotiation, the sky's the limit!\n\nSure, what we can do in the StartupMessage is extremely limited, but\nwhat it does allow is passing arbitrary key value pairs to the server.\nBut by only using _pq_.feature_name=1, we're effectively only using\nthe key part of the key value pair. Limiting ourselves even more, by\nthrowing half of our communication channel away, seems like a bad idea\nto me. But maybe I'm just not understanding the problem you're seeing\nwith using the value too.\n\n\n",
"msg_date": "Thu, 16 May 2024 18:09:18 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, May 16, 2024 at 12:09 PM Jelte Fennema-Nio <[email protected]> wrote:\n> I don't really understand the benefit of your proposal over option 2\n> that I proposed. Afaict you're proposing that for e.g. compression we\n> first set _pq_.supports_compression=1 in the StartupMessage and use\n> that to do feature detection, and then after we get the response we\n> send ParameterSet(\"compression\", \"gzip\"). To me this is pretty much\n> identical to option 2, except that it introduces an extra round trip\n> for no benefit (as far as I can see). Why not go for option 2 and send\n> _pq_.compression=gzip in the StartupMessage directly.\n\nUgh, it's so hard to communicate clearly about this stuff. I didn't\nreally have any thought that we'd ever try to handle something as\ncomplicated as compression using ParameterSet. I tend to agree that\nfor compression I'd like to see the startup packet contain more than\n_pq_.compression=1, but I'm not sure what would happen after that\nexactly. If the client asks for _pq_.compression=lz4 and the server\ntells the client that it doesn't understand _pq_.compression at all,\nthen everybody's on the same page: no compression. But, if the server\nunderstands the option but isn't OK with the proposed value, what\nhappens then? Does it send a NegotiateCompressionType message after\nthe NegotiateProtocolVersion, for example? That seems like it could\nlead to the client having to be prepared for a lot of NegotiateX\nmessages somewhere down the road.\n\nI think at some point in the past we had discussed having the client\nlist all the algorithms it supported in the argument to\n_pq_.compression, and then the server would respond with the algorithm\nit wanted use, or maybe a list of algorithms that it could allow, and\nthen we'd go from there. But I'm not entirely sure if that's the right\nidea, either.\n\nChanging compression algorithms in mid-stream is tricky, too. If I\ntell the server \"hey, turn on server-to-client compression!\" then I\nneed to be able to identify where exactly that happens. Any messages\nalready sent by the server and not yet processed by me, or any\nmessages sent after that but before the server handles my request, are\ngoing to be uncompressed. Then, at some point, I'll start getting\ncompressed data. If the compressed data is framed inside some message\ntype created for that purpose, like I get a CompressedMessage message\nand then I decompress to get the actual message, this is simpler to\nmanage. But even then, it's tricky if the protocol shifts. If I tell\nthe server, you know what, gzip was a bad choice, I want lz4, I'll\nneed to know where the switch happens to be able to decompress\nproperly.\n\nI don't know if we want to support changing compression algorithms in\nmid-stream. I don't think there's any reason we can't, but it might be\na bunch of work for something that nobody really cares about. Not\nsure.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 May 2024 12:57:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, May 16, 2024 at 6:57 AM Robert Haas <[email protected]> wrote:\n>\n> Ugh, it's so hard to communicate clearly about this stuff. I didn't\n> really have any thought that we'd ever try to handle something as\n> complicated as compression using ParameterSet. I tend to agree that\n> for compression I'd like to see the startup packet contain more than\n> _pq_.compression=1, but I'm not sure what would happen after that\n> exactly. If the client asks for _pq_.compression=lz4 and the server\n> tells the client that it doesn't understand _pq_.compression at all,\n> then everybody's on the same page: no compression. But, if the server\n> understands the option but isn't OK with the proposed value, what\n> happens then? Does it send a NegotiateCompressionType message after\n> the NegotiateProtocolVersion, for example? That seems like it could\n> lead to the client having to be prepared for a lot of NegotiateX\n> messages somewhere down the road.\n>\n> I think at some point in the past we had discussed having the client\n> list all the algorithms it supported in the argument to\n> _pq_.compression, and then the server would respond with the algorithm\n> it wanted use, or maybe a list of algorithms that it could allow, and\n> then we'd go from there. But I'm not entirely sure if that's the right\n> idea, either.\n\nAs currently implemented [1], the client sends the server the list of\nall compression algorithms it is willing to accept, and the server can\nuse one of them. If the server knows what `_pq_.compression` means\nbut doesn't actually support any compression, it will both send the\nclient its empty list of supported algorithms and just never send any\ncompressed messages, and everyone involved will be (relatively) happy.\nThere is a libpq function that a client can use to check what\ncompression is in use if a client *really* doesn't want to continue\nwith the conversation without compression, but 99% of the time I can't\nsee why a client wouldn't prefer to continue using a connection with\nwhatever compression the server supports (or even none) without more\nexplicit negotiation. (Unlike TLS, where automagically picking\nbetween using and not using TLS has strange security implications and\neffects, compression is a convenience feature for everyone involved.)\n\n> Changing compression algorithms in mid-stream is tricky, too. If I\n> tell the server \"hey, turn on server-to-client compression!\" then I\n> need to be able to identify where exactly that happens. Any messages\n> already sent by the server and not yet processed by me, or any\n> messages sent after that but before the server handles my request, are\n> going to be uncompressed. Then, at some point, I'll start getting\n> compressed data. If the compressed data is framed inside some message\n> type created for that purpose, like I get a CompressedMessage message\n> and then I decompress to get the actual message, this is simpler to\n> manage. But even then, it's tricky if the protocol shifts. If I tell\n> the server, you know what, gzip was a bad choice, I want lz4, I'll\n> need to know where the switch happens to be able to decompress\n> properly.\n>\n> I don't know if we want to support changing compression algorithms in\n> mid-stream. I don't think there's any reason we can't, but it might be\n> a bunch of work for something that nobody really cares about. 
Not\n> sure.\n\nAs the protocol layer is currently designed [1], it explicitly makes\nit very easy to change/restart compression streams, specifically for\nthis use case (and in particular for the general connection pooler use\ncase). Compressed data is already framed in a `CompressedData`\nmessage, and that message has a header byte that corresponds to an\nenum value for which algorithm is currently in use. Any time the\ncompression stream was restarted by the sender, the first\n`CompressedData` message will set that byte, and then the client will\nrestart its decompression stream with the chosen algorithm from that\npoint. For `CompressedData` messages that continue using the\nalready-established stream, the byte is simply set to 0. (This is\nalso how the \"each side sends a list\" form of negotiation is able to\nwork without additional round trips, as the `CompressedData` framing\nitself communicates which compression algorithm has been selected.)\n\n[1] https://www.postgresql.org/message-id/CACzsqT5-7xfbz%2BSi35TBYHzerNX3XJVzAUH9AewQ%2BPp13fYBoQ%40mail.gmail.com\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Thu, 16 May 2024 07:39:04 -1000",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
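Going by the framing described in this message (a CompressedData body whose first byte names the algorithm, with 0 meaning continue the current stream), a receiver could look roughly like the sketch below. The layout and algorithm numbering mirror the proposal as described here, none of it is in a released PostgreSQL, and zlib stands in for whichever codecs are ultimately chosen.

```python
import zlib

ALGORITHMS = {1: "zlib"}        # numbering is illustrative, not from the patch

class CompressedDataReceiver:
    """Decode a stream of (proposed) CompressedData message bodies.
    A non-zero first byte restarts decompression with that algorithm;
    a zero first byte continues the already-established stream."""

    def __init__(self):
        self._decompressor = None

    def feed(self, body):
        algo, payload = body[0], body[1:]
        if algo != 0:
            if ALGORITHMS.get(algo) != "zlib":
                raise ValueError(f"unsupported compression algorithm {algo}")
            self._decompressor = zlib.decompressobj()
        if self._decompressor is None:
            raise ValueError("continuation before any algorithm was chosen")
        return self._decompressor.decompress(payload)

def compressed_data(algo_byte, payload):
    """Frame one CompressedData body: algorithm byte + compressed bytes."""
    return bytes([algo_byte]) + payload

if __name__ == "__main__":
    comp = zlib.compressobj()
    chunk1 = comp.compress(b"RowDescription...") + comp.flush(zlib.Z_SYNC_FLUSH)
    chunk2 = comp.compress(b"DataRow...") + comp.flush(zlib.Z_SYNC_FLUSH)
    rx = CompressedDataReceiver()
    print(rx.feed(compressed_data(1, chunk1)))   # restart with algorithm 1
    print(rx.feed(compressed_data(0, chunk2)))   # continue the same stream
```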
{
"msg_contents": "On Thu, 16 May 2024 at 18:57, Robert Haas <[email protected]> wrote:\n> Ugh, it's so hard to communicate clearly about this stuff. I didn't\n> really have any thought that we'd ever try to handle something as\n> complicated as compression using ParameterSet.\n\nOkay, then it's definitely very hard to communicate clearly about\nthis. Because being able to re-configure compression using\nParameterSet is exactly the type of thing I want to be able to do in\nPgBouncer. Otherwise a connection from PgBouncer to Postgres cannot be\nhanded off to any client, because its compression state cannot be\nchanged on the fly to the state that a client expects, if the first\none wants lz4 compression, and a second one wants zstd compression.\nThere's no way the same connection can be reused for both unless you\ndecompress and recompress in pgbouncer, which is expensive to do.\nBeing able to reconfigure the stream to compress the messages in the\nexpected form is much cheaper.\n\n> Does it send a NegotiateCompressionType message after\n> the NegotiateProtocolVersion, for example? That seems like it could\n> lead to the client having to be prepared for a lot of NegotiateX\n> messages somewhere down the road.\n\nLike Jacob explains, you'd want to allow the client to provide a list\nof options in order of preference. And then have the server respond\nwith a ParameterStatus message saying what it ended up being. So no\nnew NegotiateXXX messages are needed, as long as we make sure any\n_pq_.xxx falls back to reporting its default value on failure. This is\nexactly why I said I prefer option 1 of the options I listed, because\nwe need _pq_.xxx messages to report their current value to the client.\n\nTo be clear, this is not special to compression. This applies to ALL\nproposed protocol parameters. The server should fall back to some\n\"least common denominator\" if it doesn't understand (part of) the\nvalue for the protocol parameter that's provided by the client,\npossibly falling back to disabling the protocol extension completely.\n\n> Changing compression algorithms in mid-stream is tricky, too. If I\n> tell the server \"hey, turn on server-to-client compression!\"\n\nYes it is tricky, but it's something that it would need to support\nimho. And Jacob actually implemented it this way, so I feel like we're\ndiscussing a non-problem here.\n\n> I don't know if we want to support changing compression algorithms in\n> mid-stream. I don't think there's any reason we can't, but it might be\n> a bunch of work for something that nobody really cares about.\n\nAgain, I guess I wasn't clear at all in my previous emails and/or\ncommit messages. Connection poolers care **very much** about this.\nPoolers need to be able to re-configure any protocol parameter to be\nable to pool the same server connection across clients with\ndifferently configured protocol parameters. Again: This is the primary\nreason for me wanting to introduce the ParameterSet message.\n\n\n",
"msg_date": "Thu, 16 May 2024 23:01:56 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
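The PgBouncer scenario above, sketched as handoff logic: when a pooled server connection's compression setting differs from what the next client negotiated, the pooler asks the server to reconfigure rather than recompressing every message itself. The ParameterSet call and the _pq_.connection_compression name are stand-ins for whatever the final design ends up being.

```python
class PooledServerConnection:
    """A server connection as a pooler sees it: the compression setting it
    currently has, plus a way to ask the server to change it."""

    def __init__(self, send_parameter_set, compression="off"):
        # send_parameter_set stands in for however the proposed ParameterSet
        # message ends up being encoded and sent.
        self._send_parameter_set = send_parameter_set
        self.compression = compression

    def hand_off_to(self, client_compression):
        """Reconfigure the server side to match the next client, rather than
        decompressing and recompressing every message in the pooler."""
        if client_compression != self.compression:
            self._send_parameter_set("_pq_.connection_compression",
                                     client_compression)
            self.compression = client_compression

if __name__ == "__main__":
    def fake_send(name, value):
        print(f"ParameterSet({name} = {value}) sent to server")
    conn = PooledServerConnection(fake_send, compression="lz4")
    conn.hand_off_to("zstd")   # first client used lz4, this one wants zstd
    conn.hand_off_to("zstd")   # same setting: nothing to send
```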
{
"msg_contents": "On Thu, May 16, 2024 at 1:39 PM Jacob Burroughs\n<[email protected]> wrote:\n> As currently implemented [1], the client sends the server the list of\n> all compression algorithms it is willing to accept, and the server can\n> use one of them. If the server knows what `_pq_.compression` means\n> but doesn't actually support any compression, it will both send the\n> client its empty list of supported algorithms and just never send any\n> compressed messages, and everyone involved will be (relatively) happy.\n> There is a libpq function that a client can use to check what\n> compression is in use if a client *really* doesn't want to continue\n> with the conversation without compression, but 99% of the time I can't\n> see why a client wouldn't prefer to continue using a connection with\n> whatever compression the server supports (or even none) without more\n> explicit negotiation. (Unlike TLS, where automagically picking\n> between using and not using TLS has strange security implications and\n> effects, compression is a convenience feature for everyone involved.)\n\nThis all seems sensible to me.\n\n> As the protocol layer is currently designed [1], it explicitly makes\n> it very easy to change/restart compression streams, specifically for\n> this use case (and in particular for the general connection pooler use\n> case). Compressed data is already framed in a `CompressedData`\n> message, and that message has a header byte that corresponds to an\n> enum value for which algorithm is currently in use. Any time the\n> compression stream was restarted by the sender, the first\n> `CompressedData` message will set that byte, and then the client will\n> restart its decompression stream with the chosen algorithm from that\n> point. For `CompressedData` messages that continue using the\n> already-established stream, the byte is simply set to 0. (This is\n> also how the \"each side sends a list\" form of negotiation is able to\n> work without additional round trips, as the `CompressedData` framing\n> itself communicates which compression algorithm has been selected.)\n\nOK, so you made it so that compressed data is fully self-identifying.\nHence, there's no need to worry if something gets changed: the\nreceiver, if properly implemented, can't help but notice. The only\ndownside that I can see to this design is that you only have one byte\nto identify the compression algorithm, but that doesn't actually seem\nlike a real problem at all, because I expect the number of supported\ncompression algorithms to grow very slowly. I think it would take\ncenturies, possibly millenia, before we started to get short of\nidentifiers. So, cool.\n\nBut, in your system, how does the client ask the server to switch to a\ndifferent compression algorithm, or to restart the compression stream?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2024 09:15:20 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, May 17, 2024 at 3:15 AM Robert Haas <[email protected]> wrote:\n>\n> OK, so you made it so that compressed data is fully self-identifying.\n> Hence, there's no need to worry if something gets changed: the\n> receiver, if properly implemented, can't help but notice. The only\n> downside that I can see to this design is that you only have one byte\n> to identify the compression algorithm, but that doesn't actually seem\n> like a real problem at all, because I expect the number of supported\n> compression algorithms to grow very slowly. I think it would take\n> centuries, possibly millenia, before we started to get short of\n> identifiers. So, cool.\n>\n> But, in your system, how does the client ask the server to switch to a\n> different compression algorithm, or to restart the compression stream?\n\nI was leaving the details around triggering that for this conversation\nand in that patch just designing the messages in a way that would\nallow adding the reconfiguration/restarting to be easily added in a\nbackwards-compatible way in a future patch. I would imagine that an\nexplicit `ParameterSet` call that sets `_pq_.connection_compression`\n(or whatever the implementation details turn out to be) would also\ntrigger a compressor restart, and when restarted it would pick an\nalgorithm/configuration based on the new value of the parameter rather\nthan the one used at connection establishment.\n\n\n\n-- \nJacob Burroughs | Staff Software Engineer\nE: [email protected]\n\n\n",
"msg_date": "Fri, 17 May 2024 07:26:45 -1000",
"msg_from": "Jacob Burroughs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, May 17, 2024 at 1:26 PM Jacob Burroughs\n<[email protected]> wrote:\n> I was leaving the details around triggering that for this conversation\n> and in that patch just designing the messages in a way that would\n> allow adding the reconfiguration/restarting to be easily added in a\n> backwards-compatible way in a future patch. I would imagine that an\n> explicit `ParameterSet` call that sets `_pq_.connection_compression`\n> (or whatever the implementation details turn out to be) would also\n> trigger a compressor restart, and when restarted it would pick an\n> algorithm/configuration based on the new value of the parameter rather\n> than the one used at connection establishment.\n\nHmm, OK, interesting. I suppose that I thought we were going to handle\nproblems like this by just adding bespoke messages for each case (e.g.\nCompressionSet). I wasn't thinking it would be practical to make a\nmessage whose remit was as general as what it seems like Jelte wants\nto do with ParameterSet, because I'm not sure everything can be\nhandled that simply. It's not an exact analogy, but when you want to\nstop the server, it's not enough to say that you want to change from\nthe Running state to the Stopped state. You have to specify which type\nof shutdown should be used to make the transition. You could even have\nmore complicated cases, where one side says \"prepare to do X\" and the\nother side eventually says \"OK, I'm prepared\" and the first side says\n\"great, now activate X\" and the other side eventually says \"OK, it's\nactivate, please confirm that you've also activated it from your side\"\nand the first side eventually says \"OK, I confirm that\". I think the\nfear that we're going to run into cases where such complex handshaking\nis necessary is a major reason why I'm afraid of Jelte's ideas about\nParameterSet: it seems much more opinionated to me than he seems to\nthink it is.\n\nTo the extent that I can wrap my head around what Jelte is proposing,\nand all signs point to that extent being quite limited, I suppose I\nwas thinking of something like his option (2). That is, I assumed that\na client would request all the optional features that said client\nmight wish to use at connection startup time. But I can see how that\nassumption might be somewhat problematic in a connection-pooling\nenvironment. Still, at least to me, it seems better than trying to\nrely on GUC_REPORT. My opinion is (1) GUC_REPORT isn't a particularly\nwell-designed mechanism so I dislike trying to double down on it and\n(2) trying to mix these protocol-level parameters and the\ntransactional GUCs we have together in a single mechanism seems\npotentially problematic and (3) I'm still not particularly happy about\nthe idea of making protocol parameters into GUCs in the first place.\nPerhaps these are all minority positions, but I can't tell you what\neveryone thinks, only what I think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2024 15:24:26 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 17 May 2024 at 21:24, Robert Haas <[email protected]> wrote:\n> I think the\n> fear that we're going to run into cases where such complex handshaking\n> is necessary is a major reason why I'm afraid of Jelte's ideas about\n> ParameterSet: it seems much more opinionated to me than he seems to\n> think it is.\n\nI think that fear is valid, and I agree that we might want to add a\nbespoke message for cases where ParameterSet is not enough. But as far\nas I can tell ParameterSet would at least cover all the protocol\nchanges that have been suggested so far. Using an opinionated but\nlimited message type for 90% of the cases and a bespoke message for\nthe last 10% seems much better to me than having a bespoke one for\neach (especially if currently none of the protocol proposals fall into\nthe last 10%).\n\n> To the extent that I can wrap my head around what Jelte is proposing,\n> and all signs point to that extent being quite limited, I suppose I\n> was thinking of something like his option (2). That is, I assumed that\n> a client would request all the optional features that said client\n> might wish to use at connection startup time. But I can see how that\n> assumption might be somewhat problematic in a connection-pooling\n> environment.\n\nTo be clear, I'd also be totally fine with my option (2). I'm\npersonally slightly leaning towards my option (1), due to the reasons\nlisted before. But connection poolers could request all the protocol\nextensions at the start just fine (using the default \"disabled\" value)\nto check for support. So I think option (2) would probably be the most\nconservative, i.e. we could always decide that option (1) is fine in\nsome future release.\n\n> Still, at least to me, it seems better than trying to\n> rely on GUC_REPORT. My opinion is (1) GUC_REPORT isn't a particularly\n> well-designed mechanism so I dislike trying to double down on it\n\nI agree that GUC_REPORT is not particularly well designed,\ncurrently... But even in its current form it's already a very\neffective mechanism for connection poolers to find out to which value\na specific GUC is set to, and if something similar to patch 0014 would\nbe merged my main gripe with GUC_REPORT would be gone. Tracking GUC\nsettings by using ParameterSet would actually be harder, because if\nParameterSet errors at Postgres then the connection pooler would have\nto roll back its cache of that setting. While with the GUC_REPORT\nresponse from Postgres it can simply rely on Postgres telling the\npooler that the GUC has changed, even rollbacks are handled correctly\nthis way.\n\n> and\n> (2) trying to mix these protocol-level parameters and the\n> transactional GUCs we have together in a single mechanism seems\n> potentially problematic\n\nI don't understand what potential problems you're worried about here.\nCould you clarify?\n\n> and (3) I'm still not particularly happy about\n> the idea of making protocol parameters into GUCs in the first place.\n\nSimilar to the above: Could you clarify why you're not happy about that?\n\n> Perhaps these are all minority positions, but I can't tell you what\n> everyone thinks, only what I think.\n\nI'd love to hear some opinions from others on these design choices. So\nfar it seems like we're the only two that have opinions on this (which\nseems hard to believe) and our opinions are clearly conflicting. And\nabove all I'd like to move forward with this, be it my way or yours\n(although I'd prefer my way of course ;) )\n\n\n",
"msg_date": "Sat, 18 May 2024 01:33:29 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
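To illustrate the pooler-side bookkeeping discussed here: ParameterStatus ('S') bodies are trivial to parse, and remembering the last reported value per server connection is enough to work out what must change before handing that connection to a client with different expectations. The sketch below only reports the differences; whether they get applied with SET statements or a future ParameterSet-style message is the open question.

```python
def parse_parameter_status(body):
    """ParameterStatus body: NUL-terminated parameter name, then value."""
    name, _, rest = body.partition(b"\x00")
    value, _, _ = rest.partition(b"\x00")
    return name.decode(), value.decode()

class ServerConnState:
    """Last value the server reported for each GUC_REPORT parameter."""

    def __init__(self):
        self.reported = {}

    def on_parameter_status(self, body):
        name, value = parse_parameter_status(body)
        self.reported[name] = value

    def settings_to_reconcile(self, client_expects):
        """Parameters whose server-side value differs from what the client
        being handed this connection last saw."""
        return {name: want for name, want in client_expects.items()
                if self.reported.get(name) != want}

if __name__ == "__main__":
    conn = ServerConnState()
    conn.on_parameter_status(b"DateStyle\x00ISO, MDY\x00")
    conn.on_parameter_status(b"client_encoding\x00UTF8\x00")
    print(conn.settings_to_reconcile({"DateStyle": "German, DMY",
                                      "client_encoding": "UTF8"}))
```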
{
"msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> On Fri, 17 May 2024 at 21:24, Robert Haas <[email protected]> wrote:\n>> Perhaps these are all minority positions, but I can't tell you what\n>> everyone thinks, only what I think.\n\n> I'd love to hear some opinions from others on these design choices. So\n> far it seems like we're the only two that have opinions on this (which\n> seems hard to believe) and our opinions are clearly conflicting. And\n> above all I'd like to move forward with this, be it my way or yours\n> (although I'd prefer my way of course ;) )\n\nI got around to looking through this thread in preparation for next\nweek's patch review session. I have a couple of opinions to offer:\n\n1. Protocol versions suck. Bumping them is seldom a good answer,\nand should never be done if you have any finer-grained negotiation\nmechanism available. My aversion to this is over thirty years old:\nI learned that lesson from watching the GIF87-to-GIF89 transition mess.\nAuthors of GIF-writing tools tended to take the easy way out and write\n\"GIF89\" in the header whether they were actually using any of the new\nversion's features or not. This led to an awful lot of pictures that\ncouldn't be read by available GIF-displaying tools, for no good reason\nwhatsoever. The PNG committee, a couple years later, reacted to that\nmess by designing PNG to have no version number whatsoever, and yet\nbe extensible in a fine-grained way. (Basically, a PNG file is made\nup of labeled chunks. If a reader doesn't recognize a particular\nchunk code, it can still tell whether the chunk is \"critical\" or not,\nand thereby decide if it must give up or can proceed while ignoring\nthat chunk.)\n\nSo overall, I have a strong preference for using the _pq_.xxx\nmechanism instead of a protocol version bump. I do not believe\nthe latter has any advantage.\n\n2. I share Robert's suspicion of equating protocol parameters\nwith GUCs. The GUC mechanism is quite opinionated and already\nserves multiple masters. In particular, the fact that GUC\nsettings are normally transactional does not play nice with\nthe way protocol parameters need to behave. Yeah, no doubt you\ncould add another dollop of complexity to guc.c to make parameters\nwork differently from other GUCs, but I think it's the wrong design\ndirection. We should handle protocol parameters with a separate\nmechanism. It's not, for instance, clear to me that protocol\nparameters should be exposed at the SQL level at all; but if we\ndon't feel they need to be available via SHOW and pg_settings,\nwhat benefit is guc.c really bringing to the table?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2024 14:12:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
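The PNG convention Tom describes is easy to show concretely, since criticality lives in the case of the first letter of the four-byte chunk type: a reader needs no version number and no registry of every chunk ever defined to know whether it may skip something. A small reader sketch:

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def iter_chunks(data):
    """Yield (type, is_critical, payload) for each chunk in a PNG file.
    An uppercase first letter in the chunk type marks it as critical;
    unknown ancillary (lowercase) chunks can simply be skipped."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    off = len(PNG_SIGNATURE)
    while off < len(data):
        (length,) = struct.unpack_from("!I", data, off)
        ctype = data[off + 4:off + 8]
        payload = data[off + 8:off + 8 + length]
        is_critical = not (ctype[0] & 0x20)     # bit 5 clear == uppercase
        yield ctype.decode("ascii"), is_critical, payload
        off += 12 + length                      # length + type + data + CRC

def chunk(ctype, payload=b""):
    return (struct.pack("!I", len(payload)) + ctype + payload
            + struct.pack("!I", zlib.crc32(ctype + payload)))

if __name__ == "__main__":
    ihdr = struct.pack("!IIBBBBB", 1, 1, 8, 0, 0, 0, 0)   # 1x1 grayscale
    png = (PNG_SIGNATURE + chunk(b"IHDR", ihdr)
           + chunk(b"tEXt", b"Comment\x00hi") + chunk(b"IEND"))
    for name, critical, _ in iter_chunks(png):
        print(name, "critical" if critical else "ancillary, safe to ignore")
```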
{
"msg_contents": "On Thu, May 23, 2024 at 11:12 AM Tom Lane <[email protected]> wrote:\n> If a reader doesn't recognize a particular\n> chunk code, it can still tell whether the chunk is \"critical\" or not,\n> and thereby decide if it must give up or can proceed while ignoring\n> that chunk.)\n\nWould it be good to expand on that idea of criticality? IIRC one of\nJelte's complaints earlier was that middleware has to know all the\nextension types anyway, to be able to figure out whether it has to do\nsomething about them or not. HTTP has the concept of hop-by-hop vs\nend-to-end headers for related reasons.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 23 May 2024 11:25:55 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Jacob Champion <[email protected]> writes:\n> Would it be good to expand on that idea of criticality? IIRC one of\n> Jelte's complaints earlier was that middleware has to know all the\n> extension types anyway, to be able to figure out whether it has to do\n> something about them or not. HTTP has the concept of hop-by-hop vs\n> end-to-end headers for related reasons.\n\nYeah, perhaps. We'd need to figure out just which classes we need\nto divide protocol parameters into, and then think about a way for\ncode to understand which class a parameter falls into even when\nit doesn't specifically know that parameter. That seems possible\nthough. PNG did it with spelling rules for the chunk labels.\nHere, since we don't yet have any existing _pq_.xxx parameter names,\nwe could maybe say that the names shall follow a pattern like\n\"_pq_.class.param\". (That works only if the classes are\nnon-overlapping, an assumption not yet justified by evidence;\nbut we could do something more complicated if we have to.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2024 14:40:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
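A toy version of the naming rule sketched here: if protocol parameters were spelled _pq_.<class>.<param>, middleware could decide how to treat a parameter it has never seen purely from the class segment. The class names below are invented for illustration; which classes would actually be needed is the open question.

```python
# Invented classes: how a proxy treats a parameter it does not recognize.
CLASS_POLICY = {
    "endtoend": "forward untouched",        # only the two endpoints care
    "hopbyhop": "must understand or strip", # affects each link separately
}

def classify_parameter(name):
    """Split a hypothetical '_pq_.<class>.<param>' name into its parts and
    look up how an intermediary should handle an unknown parameter."""
    prefix, _, rest = name.partition("_pq_.")
    if prefix or "." not in rest:
        return name, None, "not a classed protocol parameter"
    cls, _, param = rest.partition(".")
    return param, cls, CLASS_POLICY.get(cls, "unknown class: reject")

if __name__ == "__main__":
    for n in ("_pq_.endtoend.binary_oids",
              "_pq_.hopbyhop.compression",
              "_pq_.frobnicate"):
        print(n, "->", classify_parameter(n))
```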
{
"msg_contents": "On Thu, May 23, 2024 at 2:12 PM Tom Lane <[email protected]> wrote:\n> I got around to looking through this thread in preparation for next\n> week's patch review session. I have a couple of opinions to offer:\n\nI agree with these opinions. Independently of that, I'm glad you shared them.\n\nI think part of the reason we ended up with the protocol parameters =\nGUCs thing is because you seemed to be concurring with that approach\nupthread. I think it was Jelte's idea originally, but I interpreted\nsome of your earlier remarks to be supporting it. I'm not sure whether\nyou've revised your opinion, or just refined it, or whether we\nmisinterpreted your earlier remarks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 May 2024 15:45:26 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> I think part of the reason we ended up with the protocol parameters =\n> GUCs thing is because you seemed to be concurring with that approach\n> upthread. I think it was Jelte's idea originally, but I interpreted\n> some of your earlier remarks to be supporting it. I'm not sure whether\n> you've revised your opinion, or just refined it, or whether we\n> misinterpreted your earlier remarks.\n\nI don't recall exactly what I thought earlier, but now I think we'd\nbe better off with separate infrastructure. guc.c is unduly complex\nalready. Perhaps there are bits of it that could be factored out\nand shared, but I bet not a lot.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2024 16:00:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, May 23, 2024 at 4:00 PM Tom Lane <[email protected]> wrote:\n> I don't recall exactly what I thought earlier, but now I think we'd\n> be better off with separate infrastructure. guc.c is unduly complex\n> already. Perhaps there are bits of it that could be factored out\n> and shared, but I bet not a lot.\n\nOK. That seems fine to me, but I bet Jelte is going to disagree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 24 May 2024 09:28:32 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 24 May 2024 at 15:28, Robert Haas <[email protected]> wrote:\n>\n> On Thu, May 23, 2024 at 4:00 PM Tom Lane <[email protected]> wrote:\n> > I don't recall exactly what I thought earlier, but now I think we'd\n> > be better off with separate infrastructure. guc.c is unduly complex\n> > already. Perhaps there are bits of it that could be factored out\n> > and shared, but I bet not a lot.\n>\n> OK. That seems fine to me, but I bet Jelte is going to disagree.\n\nI indeed disagree. I think the effort needed to make guc.c handle\nprotocol parameters is extremely little. The 0011 patch are all the\nchanges that are needed to achieve that, and that patch only adds 65\nadditional lines. And only 15 of those 65 lines actually have to do\nanything somewhat weird, to be able to handle the transactionality\ndiscrepancy between protocol parameters and other GUCs. The other 50\nlines are (imho) really clean and fit perfectly with the way guc.c is\ncurrently structured (i.e. they add PGC_PROTOCOL and PGC_SU_PROTOCOL\nin a really obvious way)\n\nSeparating it from the GUC infrastructure will mean we need to\nduplicate a lot of the infrastructure. Assuming we don't care about\nSHOW or pg_settings (which I agree are not super important), the\nthings that we would want for protocol parameters to have that guc.c\ngives us for free are:\n1. Reporting the value of the parameter to the client (done using\nParameterStatus)\n2. Parsing and validating of the input, bool, int, enum, etc, but also\ncheck_hook and assign_hook.\n3. Logic in all connection poolers to change GUC values to the\nclient's expected values whenever a server connection is handed off to\na client\n4. Permission checking, if we want some protocol extensions to only be\nconfigurable by a highly privileged user\n\nAll of those things would have to be duplicated/re-implemented if we\nmake protocol parameters their own dedicated thing. Doing that work\nseems like a waste of time to me, and would imho add much more\ncomplexity than the proposed 65 lines of code in 0011.\n\n\n",
"msg_date": "Sat, 25 May 2024 00:08:57 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
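For comparison, a rough sketch of the minimum a separate, non-GUC registry would have to re-provide from the list above: per-type parsing and validation plus reporting the effective value back to the client, with the pooler syncing and permission checks left out. It is only meant to show the shape of the duplication being argued about, not how the server would actually structure it.

```python
class ProtocolParameter:
    """One protocol parameter: a validator plus a report callback, which is
    roughly what the GUC machinery would otherwise provide for free."""

    def __init__(self, name, default, parse, report):
        self.name, self.value = name, default
        self._parse, self._report = parse, report

    def set(self, raw):
        self.value = self._parse(raw)            # raises on invalid input
        self._report(self.name, self.value)      # e.g. send ParameterStatus

def parse_bool(raw):
    lowered = raw.lower()
    if lowered in ("on", "true", "1"):
        return True
    if lowered in ("off", "false", "0"):
        return False
    raise ValueError(f"invalid boolean: {raw!r}")

if __name__ == "__main__":
    registry = {}
    def report(name, value):
        print(f"would send ParameterStatus({name} = {value})")
    p = ProtocolParameter("report_parameters", False, parse_bool, report)
    registry[p.name] = p
    registry["report_parameters"].set("on")
```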
{
"msg_contents": "On Thu, 23 May 2024 at 20:40, Tom Lane <[email protected]> wrote:\n>\n> Jacob Champion <[email protected]> writes:\n> > Would it be good to expand on that idea of criticality? IIRC one of\n> > Jelte's complaints earlier was that middleware has to know all the\n> > extension types anyway, to be able to figure out whether it has to do\n> > something about them or not. HTTP has the concept of hop-by-hop vs\n> > end-to-end headers for related reasons.\n>\n> Yeah, perhaps. We'd need to figure out just which classes we need\n> to divide protocol parameters into, and then think about a way for\n> code to understand which class a parameter falls into even when\n> it doesn't specifically know that parameter.\n\nI think this class is so rare, that it's not worth complicating the\ndiscussion on new protocol features even more. AFAICT there is only\none proposed protocol change that does not need any pooler support\n(apart from syncing the feature value when re-assigning the\nconnectin): Automatic binary encoding for a list of types\n\nAll others need some support from poolers, at the very least they need\nnew message types to not error out. But in many cases more complex\nstuff is needed.\n\n\n",
"msg_date": "Sat, 25 May 2024 12:23:11 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, 23 May 2024 at 20:12, Tom Lane <[email protected]> wrote:\n>\n> Jelte Fennema-Nio <[email protected]> writes:\n> > On Fri, 17 May 2024 at 21:24, Robert Haas <[email protected]> wrote:\n> >> Perhaps these are all minority positions, but I can't tell you what\n> >> everyone thinks, only what I think.\n>\n> > I'd love to hear some opinions from others on these design choices. So\n> > far it seems like we're the only two that have opinions on this (which\n> > seems hard to believe) and our opinions are clearly conflicting. And\n> > above all I'd like to move forward with this, be it my way or yours\n> > (although I'd prefer my way of course ;) )\n>\n> I got around to looking through this thread in preparation for next\n> week's patch review session. I have a couple of opinions to offer:\n>\n> 1. Protocol versions suck. Bumping them is seldom a good answer,\n> and should never be done if you have any finer-grained negotiation\n> mechanism available. My aversion to this is over thirty years old:\n> I learned that lesson from watching the GIF87-to-GIF89 transition mess.\n> Authors of GIF-writing tools tended to take the easy way out and write\n> \"GIF89\" in the header whether they were actually using any of the new\n> version's features or not. This led to an awful lot of pictures that\n> couldn't be read by available GIF-displaying tools, for no good reason\n> whatsoever. The PNG committee, a couple years later, reacted to that\n> mess by designing PNG to have no version number whatsoever, and yet\n> be extensible in a fine-grained way. (Basically, a PNG file is made\n> up of labeled chunks. If a reader doesn't recognize a particular\n> chunk code, it can still tell whether the chunk is \"critical\" or not,\n> and thereby decide if it must give up or can proceed while ignoring\n> that chunk.)\n>\n> So overall, I have a strong preference for using the _pq_.xxx\n> mechanism instead of a protocol version bump. I do not believe\n> the latter has any advantage.\n\nI'm not necessarily super opposed to only using the _pq_.xxx\nmechanism. I mainly think it's silly to have a protocol version number\nand then never use it. And I feel some of the proposed changes don't\nreally benefit from being able to be turned on-and-off by themselves.\nMy rule of thumb would be:\n1. Things that a modern client/pooler would always request: version bump\n2. Everything else: _pq_.xxx\n\nOf the proposed changes so far on the mailing list the only 2 that\nwould fall under 1 imho are:\n1. The ParameterSet message\n2. Longer than 32bit secret in BackendKeyData\n\nI also don't think the GIF situation you describe translates fully to\nthis discussion. We have active protocol version negotiation, so if a\nserver doesn't support protocol 3.1 a client is expected to fall back\nto the 3.0 protocol when communicating. Of course you can argue that a\nbadly behaved client will fail to connect when it gets a downgrade\nrequest from the server, but that same argument can be made about a\nserver not reporting support for a _pq_.xxx parameter that every\nmodern client/pooler requests. So I don't think there's a practical\ndifference in the problem you're describing.\n\n\n\nBut again if I'm alone in this, then I don't\n\n\n",
"msg_date": "Sat, 25 May 2024 12:39:58 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "(pressed send to early)\n\nOn Sat, 25 May 2024 at 12:39, Jelte Fennema-Nio <[email protected]> wrote:\n> But again if I'm alone in this, then I don't\n\n... mind budging on this to move this decision along. Using _pq_.xxx\nparameters for all protocol changes would totally be acceptable to me.\n\n\n",
"msg_date": "Sat, 25 May 2024 12:43:46 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Sat, 25 May 2024 at 06:40, Jelte Fennema-Nio <[email protected]> wrote:\n\n> On Thu, 23 May 2024 at 20:12, Tom Lane <[email protected]> wrote:\n> >\n> > Jelte Fennema-Nio <[email protected]> writes:\n> > > On Fri, 17 May 2024 at 21:24, Robert Haas <[email protected]>\n> wrote:\n> > >> Perhaps these are all minority positions, but I can't tell you what\n> > >> everyone thinks, only what I think.\n> >\n> > > I'd love to hear some opinions from others on these design choices. So\n> > > far it seems like we're the only two that have opinions on this (which\n> > > seems hard to believe) and our opinions are clearly conflicting. And\n> > > above all I'd like to move forward with this, be it my way or yours\n> > > (although I'd prefer my way of course ;) )\n> >\n> > I got around to looking through this thread in preparation for next\n> > week's patch review session. I have a couple of opinions to offer:\n> >\n> > 1. Protocol versions suck. Bumping them is seldom a good answer,\n> > and should never be done if you have any finer-grained negotiation\n> > mechanism available. My aversion to this is over thirty years old:\n> > I learned that lesson from watching the GIF87-to-GIF89 transition mess.\n> > Authors of GIF-writing tools tended to take the easy way out and write\n> > \"GIF89\" in the header whether they were actually using any of the new\n> > version's features or not. This led to an awful lot of pictures that\n> > couldn't be read by available GIF-displaying tools, for no good reason\n> > whatsoever. The PNG committee, a couple years later, reacted to that\n> > mess by designing PNG to have no version number whatsoever, and yet\n> > be extensible in a fine-grained way. (Basically, a PNG file is made\n> > up of labeled chunks. If a reader doesn't recognize a particular\n> > chunk code, it can still tell whether the chunk is \"critical\" or not,\n> > and thereby decide if it must give up or can proceed while ignoring\n> > that chunk.)\n> >\n> > So overall, I have a strong preference for using the _pq_.xxx\n> > mechanism instead of a protocol version bump. I do not believe\n> > the latter has any advantage.\n>\n> I'm not necessarily super opposed to only using the _pq_.xxx\n> mechanism.\n\n\nI find it interesting that up to now nobody has ever used this mechanism.\n\n\n> I mainly think it's silly to have a protocol version number\n> and then never use it. And I feel some of the proposed changes don't\n> really benefit from being able to be turned on-and-off by themselves.\n> My rule of thumb would be:\n> 1. Things that a modern client/pooler would always request: version bump\n> 2. Everything else: _pq_.xxx\n>\n\nHave to agree, why have a protocol version and then just not use it ?\n\n>\n> Of the proposed changes so far on the mailing list the only 2 that\n> would fall under 1 imho are:\n> 1. The ParameterSet message\n> 2. Longer than 32bit secret in BackendKeyData\n>\n> I also don't think the GIF situation you describe translates fully to\n> this discussion. We have active protocol version negotiation, so if a\n> server doesn't support protocol 3.1 a client is expected to fall back\n> to the 3.0 protocol when communicating.\n\n\nAlso agree. 
Isn't the point of having a version number to figure out what\nfeatures the client wants and subsequently the server can provide?\n\n> Of course you can argue that a\n> badly behaved client will fail to connect when it gets a downgrade\n> request from the server, but that same argument can be made about a\n> server not reporting support for a _pq_.xxx parameter that every\n> modern client/pooler requests. So I don't think there's a practical\n> difference in the problem you're describing.\n>\n\n+1\n\n>\n>\n>\n> But again if I'm alone in this, then I don't\n>\n\nI would prefer to see a well defined protocol handshaking mechanism rather\nthan some strange _pq.xxx dance.\n\nDave",
"msg_date": "Sat, 25 May 2024 06:53:32 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, May 24, 2024 at 6:09 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Separating it from the GUC infrastructure will mean we need to\n> duplicate a lot of the infrastructure. Assuming we don't care about\n> SHOW or pg_settings (which I agree are not super important), the\n> things that we would want for protocol parameters to have that guc.c\n> gives us for free are:\n> 1. Reporting the value of the parameter to the client (done using\n> ParameterStatus)\n> 2. Parsing and validating of the input, bool, int, enum, etc, but also\n> check_hook and assign_hook.\n> 3. Logic in all connection poolers to change GUC values to the\n> client's expected values whenever a server connection is handed off to\n> a client\n> 4. Permission checking, if we want some protocol extensions to only be\n> configurable by a highly privileged user\n>\n> All of those things would have to be duplicated/re-implemented if we\n> make protocol parameters their own dedicated thing. Doing that work\n> seems like a waste of time to me, and would imho add much more\n> complexity than the proposed 65 lines of code in 0011.\n\nI think that separating (1) and (3) is going to make us happy rather\nthan sad. For example, if a client speaks to an intermediate pooler\nwhich speaks to a server, the client-pooler connection could have\ndifferent values from the pooler-server connection, and then if\neverything is done via ParameterStatus messages it's actually more\ncomplicated for the pooler, which will have to edit ParameterStatus\nmessages as they pass through, and possibly also create new ones out\nof nothing. If we separate things, the pooler can always pass\nParameterStatus messages unchanged, and only has to worry about the\nseparate infrastructure for handling these new things.\n\nSaid differently, I'd agree with you if (a) ParameterStatus weren't so\ndubiously designed and (b) all poolers were going to want every\nprotocol parameter to match on the input side and the output side. And\nmaybe in practice (b) will happen, but I want to leave the door open\nto cases where it doesn't, and as soon as that happens, I think this\nbecomes a hassle, whereas separate mechanisms don't really hurt much.\nAs you and I discussed a bit in person last week, two for loops rather\nthan one in the pooler isn't really that big of a deal. IMHO, that's a\nsmall price to pay for an increased chance of not boxing ourselves\ninto a corner depending on how these parameters end up getting used in\npractice.\n\nAs for (2) and (4), I agree there's some duplication, but I think it's\nquite minor. We have existing facilities for parsing integers and\nbooleans that are reused by a lot of code already, and this is just\none more place that can use them. That infrastructure is not\nGUC-specific. The permissions-checking stuff isn't either. The vast\nbulk of the GUC infrastructure is concerned with (1) allowing for GUCs\nto be set from many different sources and (2) handling their\ntransactional nature. We don't need or want either of those\ncharacteristics here, so a lot of that complexity just isn't needed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 4 Jun 2024 10:47:20 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
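A minimal sketch of the "two for loops rather than one" idea for a pooler hand-off, purely illustrative; the structs, the position-based matching, and the sync helper are assumptions rather than any real pooler's code.

#include <stdio.h>
#include <string.h>

/* Hypothetical per-connection state a pooler might keep. */
typedef struct Param
{
    const char *name;
    const char *value;
} Param;

typedef struct ConnState
{
    Param gucs[8];              /* learned from ParameterStatus messages */
    int   n_gucs;
    Param protocol_params[8];   /* learned via the separate new mechanism */
    int   n_protocol_params;
} ConnState;

/* Stub: a real pooler would emit SET statements or ParameterSet messages. */
static void
sync_param(const char *kind, const Param *want)
{
    printf("re-sync %s %s=%s on hand-off\n", kind, want->name, want->value);
}

static void
hand_off(const ConnState *server, const ConnState *client)
{
    /* Parameters are matched by position here purely for brevity. */

    /* Loop 1: regular GUCs, tracked through ParameterStatus. */
    for (int i = 0; i < client->n_gucs; i++)
        if (i >= server->n_gucs ||
            strcmp(server->gucs[i].value, client->gucs[i].value) != 0)
            sync_param("GUC", &client->gucs[i]);

    /* Loop 2: protocol parameters, with their own bookkeeping. */
    for (int i = 0; i < client->n_protocol_params; i++)
        if (i >= server->n_protocol_params ||
            strcmp(server->protocol_params[i].value,
                   client->protocol_params[i].value) != 0)
            sync_param("protocol parameter", &client->protocol_params[i]);
}

int
main(void)
{
    ConnState server = {{{"DateStyle", "ISO"}}, 1, {{"_pq_.report_parameters", "false"}}, 1};
    ConnState client = {{{"DateStyle", "German"}}, 1, {{"_pq_.report_parameters", "true"}}, 1};

    hand_off(&server, &client);
    return 0;
}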
{
"msg_contents": "On Sat, May 25, 2024 at 6:40 AM Jelte Fennema-Nio <[email protected]> wrote:\n> Of the proposed changes so far on the mailing list the only 2 that\n> would fall under 1 imho are:\n> 1. The ParameterSet message\n> 2. Longer than 32bit secret in BackendKeyData\n\nYeah. I wonder how Heikki thinks he can do (2) without breaking\neverything. Maybe just adding an extra, optional, longer field onto\nthe existing message and hoping that client implementations ignore the\nextra field? If that's not good enough, then I don't understand how\nthat can be done without breaking compatibility in a fundamental and\nrelatively serious way -- at which point maybe bumping the protocol\nversion is the right thing to do.\n\nReally, what I'm strongly opposed to is not bumping the version, but\nrather doing things that break compatibility such that we need to bump\nthe version. *If* we have a problem that's sufficiently serious to\njustify breaking compatibility anyway, then we don't really lose\nanything by bumping the version, and indeed we gain something. I just\nwant us to be searching for ways to avoid breaking interoperability,\nrather than seeking them out. If it becomes impossible for a PG 18 (or\nwhatever version) server to communicate with earlier servers without\nspecifying special options, or worse yet at all, then a lot of people\nare going to be very sad about that. We will get a bunch of complaints\nand a bunch of frustrated users, and they will not be impressed by\nvague claims of necessity or desirability. They'll just be mad.\n\nThe question for me here is not \"what is the theoretically right thing\nto do?\" but rather \"what am I going to tell angry users when they\ndemand to know why I committed the patch that broke this?\". \"The old\nway was insecure so we had to change it\" might be a good enough reason\nfor people to calm down and stop yelling at me, but \"it's no use\nhaving a protocol version if we never bump it\" definitely won't be.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 4 Jun 2024 11:00:20 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, 4 Jun 2024 at 16:47, Robert Haas <[email protected]> wrote:\n> I think that separating (1) and (3) is going to make us happy rather\n> than sad.\n\nAttached is a new patchset where I keep protocol parameters completely\nseparate from GUCs. I do think I indeed like it better this way,\nalthough I'm not entirely sure about the newly proposed\nNegotiateProtocolParameter message (which we discussed in person last\nweek). I'm looking forward to hearing what you think of these newly\nproposed changes.\n\nPatch 1 & 2: Minor code changes with zero effect until we actually\nbump the protocol version or add protocol parameters. I hope these can\nbe merged rather soon to reduce the number of patches in the patchset.\nPatch 3: Similar to 1 & 2 in that it has no actual effect yet. But\nafter bumping the version this would be a user visible API change, so\nI expect it requires a bit more discussion.\nPatch 4: Adds a new connection option, but initially all parameters\nthat it takes have the same effect.\nPatch 5: Starts using the new connection option from Patch 4\nPatch 6: Libpq changes to start handling NegotiateProtocolVersion in a\nmore complex way. (nothing changes in practice yet)\nPatch 7: Bump the protocol version to 3.2 (see commit message for why\nnot bumping to 3.1)\nPatch 8: The main change: Infrastructure and protocol support to be\nable to add protocol parameters\nPatch 9: Adds a report_parameters protocol extension as a POC for the\nchanges in the previous patch.\n\nSee the commit messages for more details.\n\nOn Tue, 4 Jun 2024 at 17:00, Robert Haas <[email protected]> wrote:\n> Yeah. I wonder how Heikki thinks he can do (2) without breaking\n> everything. Maybe just adding an extra, optional, longer field onto\n> the existing message and hoping that client implementations ignore the\n> extra field? If that's not good enough, then I don't understand how\n> that can be done without breaking compatibility in a fundamental and\n> relatively serious way -- at which point maybe bumping the protocol\n> version is the right thing to do.\n\nFYI Heikki his patchset is here:\nhttps://www.postgresql.org/message-id/flat/508d0505-8b7a-4864-a681-e7e5edfe32aa%40iki.fi\n\nAfaict there's no way to implement this with old clients supporting\nthe new message. So it would need to be opt-in from the client\nperspective, either using a version bump or a protocol parameter (e.g.\nlarge_cancel=true). IMHO a version bump would make more sense for this\none.\n\n> Really, what I'm strongly opposed to is not bumping the version, but\n> rather doing things that break compatibility such that we need to bump\n> the version. *If* we have a problem that's sufficiently serious to\n> justify breaking compatibility anyway, then we don't really lose\n> anything by bumping the version, and indeed we gain something. I just\n> want us to be searching for ways to avoid breaking interoperability,\n> rather than seeking them out. If it becomes impossible for a PG 18 (or\n> whatever version) server to communicate with earlier servers without\n> specifying special options, or worse yet at all, then a lot of people\n> are going to be very sad about that. We will get a bunch of complaints\n> and a bunch of frustrated users, and they will not be impressed by\n> vague claims of necessity or desirability. They'll just be mad.\n\nAs discussed in person last week: I think we agree on this. There is\nobviously some moment where it becomes acceptable e.g. 
libpq45 not\nbeing able to connect to PG11 (at least by default) would probably be\nreasonable. But currently I don't think we're there yet, given the\ncurrent level of support for NegotiateProtocolVersion in the wild.\nWhile many PG servers support this message, no pooler supports it at\nthe moment. Not being able to connect to a pooler by default is indeed\ngoing to make people quite angry.\n\nSo that's why in this new version of the patchset, libpq continues to\nconnect using the 3.0 protocol version by default. However, as soon as\na user asks for a feature in their connection string that requires a\nprotocol parameter, or when they explicitly ask for a newer protocol\nversion to be used, then both the newest protocol version supported by\nthe client as well as all known protocol parameters are requested.\n\nAgain as discussed in person, this is meant to prevent ossification of\nthe protocol. For servers that don't support NegotiateProtocolVersion\ncurrently, a client asking for a new protocol version is just as\nbreaking as a client asking for an unknown protocol parameter (i.e.\nthe server will close the connection, because it doesn't know what to\ndo). Since sending either of these anyway is a breaking change in the\nStartupMessage, we might as well break it as much as possible, in the\nhopes that clients/servers/poolers implement NegotiateProtocolVersion\ncorrectly. Which means afterwards we can safely add new protocol\nparameters again, as well as safely bump the minor protocol version\nagain without worrying about breaking the ecosystem (because now\nfeature+version negotiation is implemented everywhere). Not bumping\nthe version number, or not adding a protocol parameter, has the risk\nof people implementing NegotiateProtocolVersion in such a way that it\nonly works for the thing that we changed the first time (i.e. only\nversion negotiation or only feature negotiation, instead of both)\n\n> The question for me here is not \"what is the theoretically right thing\n> to do?\" but rather \"what am I going to tell angry users when they\n> demand to know why I committed the patch that broke this?\". \"The old\n> way was insecure so we had to change it\" might be a good enough reason\n> for people to calm down and stop yelling at me, but \"it's no use\n> having a protocol version if we never bump it\" definitely won't be.\n\nI think that is totally fair, see also my reason to bump the protocol\nversion to 3.2 instead of 3.1 in the commit message (i.e. because\nPgBouncer currently mistakenly reports 3.1 as supported).",
"msg_date": "Wed, 5 Jun 2024 16:06:30 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
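The opt-in startup behaviour described above boils down to a small decision on the libpq side. The following is a sketch under the assumptions stated in the message (plain 3.0 by default; the newest version plus all known protocol parameters once the user opts in); the function and encoding are illustrative, not the patch's actual code.

#include <stdbool.h>

/* Sketch, not actual libpq code: stay on protocol 3.0 by default, and only
 * send the newest version plus all known protocol parameters once the user
 * opts in, either explicitly or by asking for a feature that needs one. */
typedef struct StartupChoice
{
    int  protocol_version;          /* encoded as major * 10000 + minor */
    bool send_protocol_parameters;  /* include _pq_.* entries in StartupMessage */
} StartupChoice;

StartupChoice
choose_startup(bool user_requested_newer_version,
               bool user_requested_param_feature)
{
    StartupChoice c = {30000, false};   /* default: plain 3.0, nothing extra */

    if (user_requested_newer_version || user_requested_param_feature)
    {
        c.protocol_version = 30002;     /* 3.2, the bump proposed in patch 7 */
        c.send_protocol_parameters = true;
    }
    return c;
}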
{
"msg_contents": "On Wed, Jun 5, 2024 at 10:06 AM Jelte Fennema-Nio <[email protected]> wrote:\n> FYI Heikki his patchset is here:\n> https://www.postgresql.org/message-id/flat/508d0505-8b7a-4864-a681-e7e5edfe32aa%40iki.fi\n>\n> Afaict there's no way to implement this with old clients supporting\n> the new message. So it would need to be opt-in from the client\n> perspective, either using a version bump or a protocol parameter (e.g.\n> large_cancel=true). IMHO a version bump would make more sense for this\n> one.\n\nWell, hang on. The discussion on that thread suggests that this is\nonly going to break connections to servers that don't have\nNegotiateProtocolVersion. Personally, I think that's fairly OK. It's\ntrue that connections to, I guess, pre-9.3 servers will break, but\nthere shouldn't be tons of those left around. It's not great to break\nconnectivity to such servers, of course, but it seems kind of OK. What\nI really don't want is for v18 to break connections to v17 servers.\nThat would be exponentially more disruptive.\n\nI do take your point that poolers haven't added support for\nNegotiateProtocolVersion yet, but I bet that will change pretty\nquickly once there's a benefit to doing so. I think it's a lot easier\nfor people to install a new pooler version than a new server version,\nbecause the server has the data in it. Also, it's not like they\nhaven't had time.\n\nBut the real question here is whether we think the longer cancel key\nis really important enough to justify the partial compatibility break.\nI'm not deathly opposed to it, but I think it's debatable and I'm sure\nsome people are going to be unhappy.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Jun 2024 11:12:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Wed, 5 Jun 2024 at 17:12, Robert Haas <[email protected]> wrote:\n> Well, hang on. The discussion on that thread suggests that this is\n> only going to break connections to servers that don't have\n> NegotiateProtocolVersion.\n\nYes, correct.\n\n> What\n> I really don't want is for v18 to break connections to v17 servers.\n> That would be exponentially more disruptive.\n\nTotally agreed, and that's easily achievable because of the\nNegotiateProtocolVersion message. We're all good on v18 to v17\nconnections and protocol changes, as long as we make any new behaviour\ndepend on either a protocol parameter or a protocol version bump.\n\n> I do take your point that poolers haven't added support for\n> NegotiateProtocolVersion yet, but I bet that will change pretty\n> quickly once there's a benefit to doing so.\n\nI think at minimum we'll need a mechanism for people to connect using\na StartupMessage that doesn't break existing poolers. If users have no\nway to connect at all to their semi-recently installed connection\npooler with the newest libpq, then I expect we'll have many unhappy\nusers. I think it's debatable whether the compatible StartupMessage\nshould be opt-in or opt-out. I'd personally at minimum want to wait\none PG release before using the incompatible StartupMessage by\ndefault, just to give pooler installs some time to catch up.\n\n> I think it's a lot easier\n> for people to install a new pooler version than a new server version,\n> because the server has the data in it. Also, it's not like they\n> haven't had time.\n\nI agree that it's a lot easier to upgrade poolers than it is to\nupgrade PG, but still people are generally hesitant to upgrade stuff\nin their infrastructure. And I totally agree that poolers have had the\ntime to implement NegotiateProtocolVersion support, but I'm pretty\nsure many users will assign blame to libpq anyway.\n\n> But the real question here is whether we think the longer cancel key\n> is really important enough to justify the partial compatibility break.\n> I'm not deathly opposed to it, but I think it's debatable and I'm sure\n> some people are going to be unhappy.\n\nI think if it's an opt-in compatibility break, users won't care how\nimportant it is. It's either important enough to opt-in to this\ncompatibility break for them, or it's not and nothing changes for\nthem.\n\nMy feeling is that we're unlikely to find a feature that's worth\nbreaking compatibility (with poolers and pre-9.3 PGs) by default on\nits own. But after adding a few new protocol features, at some point\ntogether these features become worth this break. Especially, because\nby then 9.3 will be even older and poolers will have started to\nsupport NegotiateProtocolVersion (due to people complaining that they\ncannot connect with the new opt-in libpq break-compatibility flag).\nBut if we're going to wait until we have the one super important\nprotocol feature, then I don't see us moving forward.\n\nTo summarize: I'd like to make some relatively small opt-in change(s)\nin PG18 that breaks compatibility (with poolers and pre-9.3 PGs). So\nthat when we have an important enough reason to break compatibility by\ndefault, that break will be much less painful to do.\n\n\n",
"msg_date": "Wed, 5 Jun 2024 19:49:58 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Wed, Jun 5, 2024 at 1:50 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Totally agreed, and that's easily achievable because of the\n> NegotiateProtocolVersion message. We're all good on v18 to v17\n> connections and protocol changes, as long as we make any new behaviour\n> depend on either a protocol parameter or a protocol version bump.\n\nCool.\n\n> I think at minimum we'll need a mechanism for people to connect using\n> a StartupMessage that doesn't break existing poolers. If users have no\n> way to connect at all to their semi-recently installed connection\n> pooler with the newest libpq, then I expect we'll have many unhappy\n> users. I think it's debatable whether the compatible StartupMessage\n> should be opt-in or opt-out. I'd personally at minimum want to wait\n> one PG release before using the incompatible StartupMessage by\n> default, just to give pooler installs some time to catch up.\n\nI agree that we need such a mechanism, but if the driving feature is\ncancel-key length, I expect that opt-in isn't going to work very well.\nOpt-in seems like it would work well with compression or transparent\ncolumn encryption, but few users will specify a connection string\noption just to get a longer cancel key length, so the feature won't\nget much use if we do it that way. I won't be crushed if we decide to\nsomehow make it opt-in, but I kind of doubt that will happen. Would we\nmake everyone add longcancelkey=true to all their connection strings\nfor one release and then carry that connection parameter until the\nheat death of the universe even though after the 1-release transition\nperiod there would be no reason to ever use it? Would we rip the\nparameter back out after the transition period and break existing\nconnection strings? Would we have people write protocolversion=3.1 to\nopt in and then they could just keep that in the connection string\nwithout harm, or at least without harm until 3.2 comes out? I don't\nreally like any of these options that well.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Jun 2024 16:48:43 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Wed, 5 Jun 2024 at 22:48, Robert Haas <[email protected]> wrote:\n> I agree that we need such a mechanism, but if the driving feature is\n> cancel-key length, I expect that opt-in isn't going to work very well.\n> Opt-in seems like it would work well with compression or transparent\n> column encryption, but few users will specify a connection string\n> option just to get a longer cancel key length, so the feature won't\n> get much use if we do it that way.\n\nI know Neon wants to make use of this for their proxy (to encode some\ntenant_id into the key). So they might want to require people to\nopt-in when using their proxy.\n\n> I won't be crushed if we decide to\n> somehow make it opt-in, but I kind of doubt that will happen.\n\n> Would we\n> make everyone add longcancelkey=true to all their connection strings\n> for one release and then carry that connection parameter until the\n> heat death of the universe even though after the 1-release transition\n> period there would be no reason to ever use it? Would we rip the\n> parameter back out after the transition period and break existing\n> connection strings? Would we have people write protocolversion=3.1 to\n> opt in and then they could just keep that in the connection string\n> without harm, or at least without harm until 3.2 comes out? I don't\n> really like any of these options that well.\n\nI agree longcancelkey=true is not what we want. In my patch 0004, you\ncan specify max_protocol_version=latest to use the latest protocol\nversion as opt-in. This is a future proof version of\nprotocolversion=3.1 that you're proposing, because it will\nautomatically start using 3.2 when it comes out. So I think that\nsolves your concern here. (although maybe it should be called\nlatest-3.x or something, in case we ever want to add a 4.0 protocol,\nnaming is hard)\n\nI personally quite like the max_protocol_version connection parameter.\nI think even for testing it is pretty useful to tell libpq what\nprotocol version to try to connect as. It could even be accompanied\nwith a min_protocol_version, e.g. in case you only want the connection\nattempt to fail when the server does not support this more secure\ncancel key.\n\n\n",
"msg_date": "Wed, 5 Jun 2024 23:15:53 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
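For reference, an application using the proposed options might look like the sketch below. max_protocol_version, its "latest" value, and min_protocol_version are proposals from this thread; released libpq versions do not recognize these options and would reject such a connection string.

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
    /* Ask for the newest protocol libpq knows, but refuse to end up below
     * 3.2 (say, because we rely on the longer cancel key). Both options are
     * proposals from this thread, not released libpq behaviour. */
    PGconn *conn = PQconnectdb("dbname=postgres "
                               "max_protocol_version=latest "
                               "min_protocol_version=3.2");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return EXIT_FAILURE;
    }
    printf("protocol version in use: %d\n", PQprotocolVersion(conn));
    PQfinish(conn);
    return EXIT_SUCCESS;
}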
{
"msg_contents": "On Wed, Jun 5, 2024 at 5:16 PM Jelte Fennema-Nio <[email protected]> wrote:\n> I agree longcancelkey=true is not what we want. In my patch 0004, you\n> can specify max_protocol_version=latest to use the latest protocol\n> version as opt-in. This is a future proof version of\n> protocolversion=3.1 that you're proposing, because it will\n> automatically start using 3.2 when it comes out. So I think that\n> solves your concern here. (although maybe it should be called\n> latest-3.x or something, in case we ever want to add a 4.0 protocol,\n> naming is hard)\n>\n> I personally quite like the max_protocol_version connection parameter.\n> I think even for testing it is pretty useful to tell libpq what\n> protocol version to try to connect as. It could even be accompanied\n> with a min_protocol_version, e.g. in case you only want the connection\n> attempt to fail when the server does not support this more secure\n> cancel key.\n\nThis makes some sense to me. I don't think that I believe\nmax_protocol_version=latest is future-proof: just because I want to\nopt into this round of new features doesn't mean I feel the same way\nabout the next round. But it may still be a good idea.\n\nI suppose the semantics are that we try to connect with the version\nspecified by max_protocol_version and, if we get downgraded by the\nserver, we abort if we end up below min_protocol_version. I like those\nsemantics, and I think I also like having both parameters, but I'm not\n100% sure of the naming. It's a funny use of \"max\" and \"min\", because\nthe max is really what we're trying to do and the min is what we end\nup with, and those terms don't necessarily bring those ideas to mind.\nI don't have a better idea off-hand, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Jun 2024 21:02:33 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
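The semantics spelled out here reduce to a small client-side check. A sketch, assuming versions are compared as major * 10000 + minor (the encoding discussed further down for PQprotocolVersion); this is not libpq code.

#include <stdbool.h>
#include <stdio.h>

/* Sketch of the rule described above: connect asking for
 * max_protocol_version; if the server answers with NegotiateProtocolVersion
 * and offers something older, accept it only while it stays at or above
 * min_protocol_version. Versions are encoded as major * 10000 + minor. */
static bool
downgrade_acceptable(int requested, int offered, int minimum)
{
    if (offered > requested)
        return false;           /* the server may never "upgrade" us */
    return offered >= minimum;  /* otherwise abort the connection attempt */
}

int
main(void)
{
    /* Asked for 3.2, server only speaks 3.0, we require at least 3.1. */
    printf("%s\n", downgrade_acceptable(30002, 30000, 30001) ? "accept" : "abort");
    return 0;
}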
{
"msg_contents": "On Wed, Jun 5, 2024 at 9:03 PM Robert Haas <[email protected]> wrote:\n\n> It's a funny use of \"max\" and \"min\", because the max is really what we're\n> trying to do and the min is what we end\n> up with, and those terms don't necessarily bring those ideas to mind.\n\n\nrequested_protocol_version and minimum_protocol_version?\n\nOn Wed, Jun 5, 2024 at 9:03 PM Robert Haas <[email protected]> wrote: It's a funny use of \"max\" and \"min\", because the max is really what we're trying to do and the min is what we end\nup with, and those terms don't necessarily bring those ideas to mind.requested_protocol_version and minimum_protocol_version?",
"msg_date": "Wed, 5 Jun 2024 23:49:06 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, 6 Jun 2024 at 03:03, Robert Haas <[email protected]> wrote:\n> This makes some sense to me. I don't think that I believe\n> max_protocol_version=latest is future-proof: just because I want to\n> opt into this round of new features doesn't mean I feel the same way\n> about the next round. But it may still be a good idea.\n\nI think for most people the only reason not to opt-in to improvements\n(even if they are small) is if those improvements break something\nelse. Once the NegotiateProtocolVersion message is implemented\neverywhere in the ecosystem, nothing should break when going from e.g.\n3.2 to 3.4. So for the majority of the people I think\nmax_protocol_version=latest is what they'll want to use once the\necosystem has caught up. Of course there will be people that want\ntight control, but they can set max_protocol_version=3.2 instead.\n\n> I suppose the semantics are that we try to connect with the version\n> specified by max_protocol_version and, if we get downgraded by the\n> server, we abort if we end up below min_protocol_version.\n\nCorrect\n\n> I like those\n> semantics, and I think I also like having both parameters, but I'm not\n> 100% sure of the naming. It's a funny use of \"max\" and \"min\", because\n> the max is really what we're trying to do and the min is what we end\n> up with, and those terms don't necessarily bring those ideas to mind.\n> I don't have a better idea off-hand, though.\n\nI borrowed this terminology from the the ssl_min_protocol_version and\nssl_max_protocol_version connection options that we already have.\nThose basically have the same semantics as what I'm proposing here,\nbut for the TLS protocol version instead of the Postgres protocol\nversion. I'm also not a huge fan of the min_protocol_version and\nmax_protocol_version names, but staying consistent with existing\noptions seems quite nice.\n\nLooking at ssl_max_protocol_version closer though, to stay really\nconsistent I'd have to change \"latest\" to be renamed to empty string\n(i.e. there is no max_protocol_version). I think I might prefer\nstaying consistent over introducing an imho slightly clearer name.\nAnother way to stay consistent would of course be also adding \"latest\"\nas an option to ssl_max_protocol_version? What do you think?\n\nI'll look into adding min_protocol_version to the patchset soonish.\nSome review of the existing code in the first few patches would\ndefinitely be appreciated.\n\n\n",
"msg_date": "Thu, 6 Jun 2024 11:12:30 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, Jun 6, 2024 at 5:12 AM Jelte Fennema-Nio <[email protected]> wrote:\n> Looking at ssl_max_protocol_version closer though, to stay really\n> consistent I'd have to change \"latest\" to be renamed to empty string\n> (i.e. there is no max_protocol_version). I think I might prefer\n> staying consistent over introducing an imho slightly clearer name.\n> Another way to stay consistent would of course be also adding \"latest\"\n> as an option to ssl_max_protocol_version? What do you think?\n\nAs I see it, the issue here is whether the default value would ever be\ndifferent from the latest value. If not, then using blank to mean the\nlatest seems fine, but if so, then I'd expect blank to mean the\ndefault version and latest to mean the latest version.\n\n> I'll look into adding min_protocol_version to the patchset soonish.\n> Some review of the existing code in the first few patches would\n> definitely be appreciated.\n\nYeah, I understand, and I do want to do that, but keep in mind I've\nalready spent considerable time on this patch set, way more than most\nothers, and if I want to get it committed I'm nowhere close to being\ndone. It's probably multiple weeks of additional work for me, and I\nthink I've probably already spent close to a week on this, and I only\nwork ~48 weeks a year, and there are ~300 patches in the CommitFest.\nPlus, right now there is no possibility of actually committing\nanything until after we branch. And, respectfully, I feel like there\nhas to be some give and take here. I've been trying to give this patch\nset higher priority because it's in an area that I know something\nabout and have opinions about and also because I can tell that you're\nkind of frustrated and I don't want you to leave the development\ncommunity. But, at the same time, I don't think you've done much to\nhelp me get my patches committed, and while you have done some review\nof other people's patches, it doesn't seem to often be the kind of\ndetailed, line-by-line review that is needed to get most patches\ncommitted. So I'm curious how you expect this system to scale.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 Jun 2024 12:01:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, 6 Jun 2024 at 18:01, Robert Haas <[email protected]> wrote:\n> As I see it, the issue here is whether the default value would ever be\n> different from the latest value. If not, then using blank to mean the\n> latest seems fine, but if so, then I'd expect blank to mean the\n> default version and latest to mean the latest version.\n\nAlright, that's fair. And we already seem to follow that pattern:\nThere's currently no connection option that has a default that's not\nthe empty string, but still accepts the empty string as an argument.\n\n> > I'll look into adding min_protocol_version to the patchset soonish.\n> > Some review of the existing code in the first few patches would\n> > definitely be appreciated.\n>\n> Yeah, I understand, and I do want to do that, but keep in mind I've\n> already spent considerable time on this patch set, way more than most\n> others, and if I want to get it committed I'm nowhere close to being\n> done. It's probably multiple weeks of additional work for me, and I\n> think I've probably already spent close to a week on this, and I only\n> work ~48 weeks a year, and there are ~300 patches in the CommitFest.\n\nI very much appreciate the time you spent on this patchset so far. I\nmainly thought that instead of only discussing the more complex parts\nof the patchset, it would be nice to also actually move forward a\nlittle bit too. And the first 3 patches in this patchset are very\nsmall and imho straightforward improvements.\n\nTo be clear, I'm not saying that should be all on you. I think those\nfirst three patches can be reviewed by pretty much anyone.\n\n> Plus, right now there is no possibility of actually committing\n> anything until after we branch.\n\nTotally fair, but even a LGTM on one of the patches would be quite nice.\n\n> And, respectfully, I feel like there\n> has to be some give and take here. I've been trying to give this patch\n> set higher priority because it's in an area that I know something\n> about and have opinions about and also because I can tell that you're\n> kind of frustrated and I don't want you to leave the development\n> community.\n\nThank you for giving it a higher priority, it's definitely appreciated\nand noticed.\n\n> But, at the same time, I don't think you've done much to\n> help me get my patches committed, and while you have done some review\n> of other people's patches, it doesn't seem to often be the kind of\n> detailed, line-by-line review that is needed to get most patches\n> committed. So I'm curious how you expect this system to scale.\n\nOf course there's always the possibility to review more. But I don't\nreally agree with this summary of my review activity. I did see your\npatches related to the incremental backup stuff. They looked\ninteresting, but at the time from an outside perspective it didn't\nseem like those threads needed my reviews to progress (a bunch of\npeople more knowledgable on the topic were already responding). So I\nspent my time mainly on threads where I felt I could add something\nuseful, and often that was more on the design front than the exact\ncode. Honestly that's what triggered this whole patchset in the first\nplace: Adding infrastructure for protocol changes so that the several\nother threads that try to introduce protocol changes can actually move\nforward, instead of being in limbo forever.\n\nRegarding line-by-line reviews, imho I definitely do that for the\nsmaller patches I tend to review (even if they are part of a bigger\npatchset). 
But the bigger ones I don't think line-by-line reviews are\nsuper helpful at the start, so I generally comment more on the design\nin those cases.\n\n\n",
"msg_date": "Thu, 6 Jun 2024 21:27:36 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, Jun 6, 2024 at 3:27 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Of course there's always the possibility to review more. But I don't\n> really agree with this summary of my review activity.\n\nNonetheless, I need to take a break from this to work on some of my\nown stuff. I'll circle back around to it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 7 Jun 2024 15:51:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Wed, Jun 5, 2024 at 10:06 AM Jelte Fennema-Nio <[email protected]> wrote:\n> Patch 1 & 2: Minor code changes with zero effect until we actually\n> bump the protocol version or add protocol parameters. I hope these can\n> be merged rather soon to reduce the number of patches in the patchset.\n\n0001 looks like a bug fix that can (and probably should) be committed\nand back-patched. What would reduce work for me is if the commit\nmessage explained why these changes are necessary and to which stable\nbranches they should be applied and, if that's not all of them, the\nreason why back-patching should stop at some particular release. The\nchange to pqTraceOutput_NegotiateProtocolVersion is easy to\nunderstand: the current code doesn't dump the message format as\ndocumented. It probably doesn't help that the documentation is missing\na word -- it should say \"Then, for each protocol option...\". It's less\nobvious why the change to fe-connect.c is needed. Since most cases\nseem to be handled in a few common places, it would be good to mention\nin the commit message (or maybe a comment) why this one needs separate\nhandling.\n\nI agree with 0002 except for the change from PG_PROTOCOL_MINOR(proto)\n> PG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST) to proto > PG_PROTOCOL_LATEST.\nI prefer that test the way it is; I think the intent is clearer with\nthe existing code.\n\n> Patch 3: Similar to 1 & 2 in that it has no actual effect yet. But\n> after bumping the version this would be a user visible API change, so\n> I expect it requires a bit more discussion.\n\nI don't know if this is the right idea or not. An alternative would be\nto leave this alone and add PQprotocolMinorVersion().\n\n> Patch 4: Adds a new connection option, but initially all parameters\n> that it takes have the same effect.\n\nGenerally seems OK, but:\n- The commit message needs spelling and grammar checking.\n- dispsize 3 isn't long enough for 3.10, or more immediately, \"latest\".\n- This is storing the major protocol version in a variable called\n\"minor\" and the minor protocol version in a variable called \"major\".\n- I think PGMAXPROTOCOLVERSION needs to be added to the docs.\n\n> Patch 5: Starts using the new connection option from Patch 4\n\nNot sure yet whether I like this approach.\n\n> Patch 6: Libpq changes to start handling NegotiateProtocolVersion in a\n> more complex way. (nothing changes in practice yet)\n\n+ * NegotiateProtocolVersion message. So we only want to send a\n\nonly->don't\n\n+ * protocol version by default. Since either of those would cause a\n\n\"default. Since\" => \"default, since\"\n\n+ char *conn_string_value = *(char **) ((char *) conn +\nparam->conn_connection_string_value_offset);\n+ char *server_value = *(char **) ((char *) conn +\nparam->conn_connection_string_value_offset);\n+ char *supported_value = *(char **) ((char *) conn +\nparam->conn_connection_string_value_offset);\n\nI have some difficulty understanding how these calculations would\nproduce different answers.\n\n+ libpq_append_conn_error(conn, \"invalid NegotiateProtocolVersion\nmessage, server version is newer than client version\");\n+ libpq_append_conn_error(conn, \"invalid NegotiateProtocolVersion\nmessage, negative protocol parameter count\");\n+ libpq_append_conn_error(conn, \"unexpected NegotiateProtocolVersion\nmessage, server supports requested features\");\n\nThese messages don't seem good. 
First, I don't think that telling the\nuser that there's a problem with a specific wire protocol message is\nvery user-friendly. Second, the use of a comma to glue two\nsemi-related thoughts together is not a great practice in English in\ngeneral and is certainly something we should try to avoid in our error\nmessages. Third, the first and last of these aren't very clear about\nwhat the problem actually is. I can only understand it from reading\nthe code.\n\nMaybe these messages could be rephrased as \"unable to negotiate\nprotocol version: blah\". Like \"unable to negotiate protocol version:\nserver requests downgrade to higher-numbered version\" or \"unable to\nnegotiate protocol version: server negotiates but asks for no\nchanges\".\n\nI don't think I completely understand what's going on in this patch\nyet. I'm not sure that it can be committed on its own, and I think it\nmight need more polishing, including on comments and the commit\nmessage.\n\n> Patch 7: Bump the protocol version to 3.2 (see commit message for why\n> not bumping to 3.1)\n\nGood commit message. The point is arguable, so putting forth your best\nargument is important.\n\n> Patch 8: The main change: Infrastructure and protocol support to be\n> able to add protocol parameters\n> Patch 9: Adds a report_parameters protocol extension as a POC for the\n> changes in the previous patch.\n\nMy general impression on first looking at these patches is that a lot\nof the ideas make sense but that they don't seem very close to being\ncommittable.\n\nIt's not very clear how these new messages integrate into the overall\nprotocol flow. The documentation makes the negative statement that\nthey can't be used as part of the extended query protocol, but that\njust begs the question of where they can be used. I think there should\nbe an update to protocol-flow.html here. For example, consider the\n\"Simple Query\" section of that page, which begins \"A simple query\ncycle is initiated by the frontend sending a Query message to the\nbackend.\" It goes on to describe what happens afterward. A similar\ndiscussion seems to be needed here, or maybe two of them,\n\nThe patch touches src/interfaces/libpq (which is good) but does not\nupdate the libpq documentation (which is bad).\n\nThe documentation for NegotiateProtocolParameter is almost identical\nto the documentation for SetProtocolParameterComplete. I would have\nexpected the former to include a field giving guidance about values\nthat might be legal in the future, and the latter to include an error\nmessage, rather than just an error indicator.\n\nI wonder whether we could define 3.2 to report on all supported\nprotocol parameters even if they weren't in the startup message, to\navoid having to jam a lot of stuff we don't really care about into the\nstartup message.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Jun 2024 13:53:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
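The three quoted lines read the same offset three times (confirmed downthread as a copy-paste mistake). The presumable intent is one offset per value, roughly as below; the struct shape and the two extra field names are guesses for illustration only.

#include <stddef.h>

/* Guessed shape, for illustration: each value the client tracks for a
 * protocol parameter lives at its own offset inside the connection object. */
typedef struct ProtocolParameterDef
{
    size_t conn_connection_string_value_offset; /* name from the quoted snippet */
    size_t conn_server_value_offset;            /* hypothetical */
    size_t conn_supported_value_offset;         /* hypothetical */
} ProtocolParameterDef;

void
read_parameter_values(char *conn, const ProtocolParameterDef *param,
                      char **conn_string_value,
                      char **server_value,
                      char **supported_value)
{
    /* One distinct offset per value, instead of the same one three times. */
    *conn_string_value = *(char **) (conn + param->conn_connection_string_value_offset);
    *server_value = *(char **) (conn + param->conn_server_value_offset);
    *supported_value = *(char **) (conn + param->conn_supported_value_offset);
}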
{
"msg_contents": "On Wed, 12 Jun 2024 at 19:53, Robert Haas <[email protected]> wrote:\n> 0001 looks like a bug fix that can (and probably should) be committed\n> and back-patched.\n\nI moved this patch to its own thread, together with a bunch of other\nfixes to the libpq tracing logic:\nhttps://www.postgresql.org/message-id/flat/CAGECzQSoPHtZ4xe0raJ6FYSEiPPS%2BYWXBhOGo%2BY1YecLgknF3g%40mail.gmail.com\n\nI left the patch numbering intact though.\n\n> I agree with 0002 except for the change from PG_PROTOCOL_MINOR(proto)\n> > PG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST) to proto > PG_PROTOCOL_LATEST.\n> I prefer that test the way it is; I think the intent is clearer with\n> the existing code.\n\nGreat to hear! I reverted that change to the check back to what it was now.\n\n> > Patch 3: Similar to 1 & 2 in that it has no actual effect yet. But\n> > after bumping the version this would be a user visible API change, so\n> > I expect it requires a bit more discussion.\n>\n> I don't know if this is the right idea or not. An alternative would be\n> to leave this alone and add PQprotocolMinorVersion().\n\nI considered that, but that makes the API a lot more annoying to use\nfor end users when they want to compare to a version, especially when\nthey want to include the major version too.\n\nPQprotocolVersion() >= 30001\n\nvs\n\nPQprotocolVersion() > 3 || (PQprotocolVersion() == 3 &&\nPQprotocolVersionMinor() >= 1)\n\nGiven that PQprotocolVersion currently has no practical use, because\nit always returns 3 in practice. I personally think that changing the\nbehaviour of API in the way I suggested is the best option here.\n\n> > Patch 4: Adds a new connection option, but initially all parameters\n> > that it takes have the same effect.\n>\n> Generally seems OK, but:\n\nFixed\n\n> > Patch 5: Starts using the new connection option from Patch 4\n>\n> Not sure yet whether I like this approach.\n\nCould you explain a bit more what you don't like about it?\n\n> > Patch 6: Libpq changes to start handling NegotiateProtocolVersion in a\n> > more complex way. (nothing changes in practice yet)\n>\n> + * NegotiateProtocolVersion message. So we only want to send a\n>\n> only->don't\n>\n> + * protocol version by default. Since either of those would cause a\n>\n> \"default. Since\" => \"default, since\"\n\nFixed\n\n> I have some difficulty understanding how these calculations would\n> produce different answers.\n\nOops, copy paste mistake, fixed\n\n> + libpq_append_conn_error(conn, \"invalid NegotiateProtocolVersion\n> message, server version is newer than client version\");\n> + libpq_append_conn_error(conn, \"invalid NegotiateProtocolVersion\n> message, negative protocol parameter count\");\n> + libpq_append_conn_error(conn, \"unexpected NegotiateProtocolVersion\n> message, server supports requested features\");\n>\n> These messages don't seem good.\n\nFair enough. I changed them a bit now, do you think they are good now?\n\n> I don't think I completely understand what's going on in this patch\n> yet. I'm not sure that it can be committed on its own, and I think it\n> might need more polishing, including on comments and the commit\n> message.\n\nYeah, I think it probably makes sense to combine this with 0008,\nbecause there is so much interaction between the two. For now I\nhaven't done that yet though.\n\n> > Patch 7: Bump the protocol version to 3.2 (see commit message for why\n> > not bumping to 3.1)\n>\n> Good commit message. 
The point is arguable, so putting forth your best\n> argument is important.\n\nJust to clarify: do you agree with the point now, after that argument?\n\n> > Patch 8: The main change: Infrastructure and protocol support to be\n> > able to add protocol parameters\n> > Patch 9: Adds a report_parameters protocol extension as a POC for the\n> > changes in the previous patch.\n>\n> My general impression on first looking at these patches is that a lot\n> of the ideas make sense but that they don't seem very close to being\n> committable.\n\nI totally agree that there's definitely significant work/discussion\nthat needs to happen before these are committable. Patch 8 was\nbasically my first implementation of my interpretation of our\nin-person conversation at PGConf.dev. I mainly meant these last two\npatches as an initial start for further discussion, and to see if this\nwas indeed the direction in which to progress. Sorry if I didn't make\nthat clear.\n\n> It's not very clear how these new messages integrate into the overall\n> protocol flow. The documentation makes the negative statement that\n> they can't be used as part of the extended query protocol, but that\n> just begs the question of where they can be used. I think there should\n> be an update to protocol-flow.html here. For example, consider the\n> \"Simple Query\" section of that page, which begins \"A simple query\n> cycle is initiated by the frontend sending a Query message to the\n> backend.\" It goes on to describe what happens afterward. A similar\n> discussion seems to be needed here, or maybe two of them,\n\nI changed the second paragraph a bit to hopefully clarify this. Is it\nindeed clearer like this?\n\n> The patch touches src/interfaces/libpq (which is good) but does not\n> update the libpq documentation (which is bad).\n\nI definitely did update the libpq documentation for all the new public\nC APIs, but it seems I forgot to add docs for the report_parameters\nconnection option, so I fixed that now.\n\nNote: 0008 doesn't add any publicly facing libpq changes, so I don't\nthink updating the libpq docs make sense for that patch.\n\n> The documentation for NegotiateProtocolParameter is almost identical\n> to the documentation for SetProtocolParameterComplete. I would have\n> expected the former to include a field giving guidance about values\n> that might be legal in the future\n\nUgh, it seems I implemented the client side for that only half-baked\nand didn't include it in the docs in the message specification (only\nin the the report_parameters one). This is fixed now.\n\n> and the latter to include an error\n> message, rather than just an error indicator.\n\nI did consider an error message (and I think that is what we discussed\nin-person too). But during implementation a WARNING together with a\nsimple error indicator seemed nicer since that could hook into the\nexisting infrastructure for reporting warnings (both server and client\nside). e.g. you can now provide detail/context/errorcode in the\nwarning, without having to add all those to the\nSetProtocolParameterComplete message. I don't feel super strongly\neither way though, so If you prefer the error message to be part of\nthe SetProtocolParameterComplete message then I'm happy to change\nthat.\n\n> I wonder whether we could define 3.2 to report on all supported\n> protocol parameters even if they weren't in the startup message, to\n> avoid having to jam a lot of stuff we don't really care about into the\n> startup message.\n\nI think that's a good idea. 
I'll try to look into doing that soonish.",
"msg_date": "Mon, 24 Jun 2024 15:18:56 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
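For readers following the API discussion above, a minimal client-side sketch of the proposed semantics, where PQprotocolVersion() keeps returning 3 for protocol 3.0 but returns major * 10000 + minor (e.g. 30001) for later minor versions. This is an assumption taken from the patch under discussion; released libpq simply returns 3 here.

#include <stdbool.h>
#include <libpq-fe.h>

/*
 * Sketch only: assumes the PQprotocolVersion() behaviour proposed above
 * (still 3 for protocol 3.0, but major * 10000 + minor, e.g. 30001, for
 * protocol 3.1 and later).  Released libpq currently returns 3 here.
 */
static bool
have_protocol_3_1_or_newer(const PGconn *conn)
{
    return PQprotocolVersion(conn) >= 30001;
}

Under the alternative of a separate minor-version function, the same test needs the compound comparison quoted in the message above.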
{
"msg_contents": "On Mon, Jun 24, 2024 at 9:19 AM Jelte Fennema-Nio <[email protected]> wrote:\n> > > Patch 3: Similar to 1 & 2 in that it has no actual effect yet. But\n> > > after bumping the version this would be a user visible API change, so\n> > > I expect it requires a bit more discussion.\n> >\n> > I don't know if this is the right idea or not. An alternative would be\n> > to leave this alone and add PQprotocolMinorVersion().\n>\n> I considered that, but that makes the API a lot more annoying to use\n> for end users when they want to compare to a version, especially when\n> they want to include the major version too.\n>\n> PQprotocolVersion() >= 30001\n>\n> vs\n>\n> PQprotocolVersion() > 3 || (PQprotocolVersion() == 3 &&\n> PQprotocolVersionMinor() >= 1)\n>\n> Given that PQprotocolVersion currently has no practical use, because\n> it always returns 3 in practice. I personally think that changing the\n> behaviour of API in the way I suggested is the best option here.\n\nMmm, I was thinking of defining the new function to return the major\nand minor version in one number, while the existing function could\ncontinue to work as now. I see your point too, but I'd like to hear\nsome other opinions before we decide, because I think it's unclear\nwhat is actually best here. And also: surely this is not the hill to\ndie on. If it makes others a lot happier to do it one way or the\nother, I'm severely disinclined to spend energy arguing with those\npeople. And, on the basis of previous experience, this is exactly the\nsort of question about which opinions sometimes run quite strongly. I\nwould much rather swim with the current than get what I would myself\nprefer.\n\n> > > Patch 5: Starts using the new connection option from Patch 4\n> >\n> > Not sure yet whether I like this approach.\n>\n> Could you explain a bit more what you don't like about it?\n\nI don't dislike it; I'm just not sure whether it is the best approach\nto testing the remainder of the patch series. Perhaps the commit\nmessage could explain more why this approach was chosen and what value\nwe get from it.\n\n> Fair enough. I changed them a bit now, do you think they are good now?\n\nI'll try to re-review the patch set when I have a bit more time than I\ndo at this exact moment.\n\n> > > Patch 7: Bump the protocol version to 3.2 (see commit message for why\n> > > not bumping to 3.1)\n> >\n> > Good commit message. The point is arguable, so putting forth your best\n> > argument is important.\n>\n> Just to clarify: do you agree with the point now, after that argument?\n\nWell, here again, I would like to know what other people think. It\ndoesn't intrinsically matter to me that much what we do here, but it\nmatters to me a lot that extensive recriminations don't ensue\nafterwards.\n\n> I did consider an error message (and I think that is what we discussed\n> in-person too). But during implementation a WARNING together with a\n> simple error indicator seemed nicer since that could hook into the\n> existing infrastructure for reporting warnings (both server and client\n> side). e.g. you can now provide detail/context/errorcode in the\n> warning, without having to add all those to the\n> SetProtocolParameterComplete message. 
I don't feel super strongly\n> either way though, so If you prefer the error message to be part of\n> the SetProtocolParameterComplete message then I'm happy to change\n> that.\n\nMy general programming experience has been that any time I decide to\ninclude an error flag rather than an error message, I end up\nregretting it. It's possible that this case is, for some reason, an\nexception to that principle, but I feel like we need to nail down\nexactly what the protocol flow *and* the libpq API for these new\nmessages is going to be before I can really have an intelligent\nopinion about that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 Jun 2024 09:59:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, Jun 24, 2024 at 9:19 AM Jelte Fennema-Nio <[email protected]> wrote:\n> > I agree with 0002 except for the change from PG_PROTOCOL_MINOR(proto)\n> > > PG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST) to proto > PG_PROTOCOL_LATEST.\n> > I prefer that test the way it is; I think the intent is clearer with\n> > the existing code.\n>\n> Great to hear! I reverted that change to the check back to what it was now.\n\nI took another look at 0002 today:\n\n+ if (proto > PG_PROTOCOL_LATEST)\n+ FrontendProtocol = PG_PROTOCOL_LATEST;\n\nA few lines before this, we set FrontendProto = proto. Then we have\none error check, then this. How about instead changing the earlier\nassignment to FrontendProto = Max(proto, PG_PROTOCOL_LATEST), or an\nif-statement with similar effect? I don't think it practically matters\nbecause send_message_to_frontend() looks well-equipped to handle a\ngarbage value in FrontendProto, but it seems cleaner to set the value\nonce and not change it than to change it and then almost immediately\nchange it again. Also, if we do that, we could fix the comment e.g.\n\"Set FrontendProtocol now so that ereport() knows what format to send\nif we fail during startup, but limit the value to the newest protocol\nversion we actually support.\" And if we don't do that, then this new\nif-statement needs a comment of its own, and it should explain why we\ndidn't choose to merge this with the earlier assignment.\n\n- if (PG_PROTOCOL_MINOR(proto) >\nPG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST) ||\n- unrecognized_protocol_options != NIL)\n+ if (PG_PROTOCOL_MINOR(proto) >\nPG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST)\n+ || unrecognized_protocol_options != NIL)\n\nThis hunk can be dropped.\n\n- pq_sendint32(&buf, PG_PROTOCOL_LATEST);\n+ pq_sendint32(&buf, FrontendProtocol);\n\nThis looks good, but the function header comment needs to be\ncorrected, e.g. \"This lets the client know that they have either\nrequested a newer minor protocol version than we are able to speak, or\nat least one protocol option that we don't understand, or possibly\nboth. FrontendProtocol has already been set to the version requested\nby the client or the highest version we know how to speak, whichever\nis older. If the highest version that we know how to speak is too old\nfor the client, it can abandon the connection.\"\n\n> > > Patch 3: Similar to 1 & 2 in that it has no actual effect yet. But\n> > > after bumping the version this would be a user visible API change, so\n> > > I expect it requires a bit more discussion.\n> >\n> > I don't know if this is the right idea or not. An alternative would be\n> > to leave this alone and add PQprotocolMinorVersion().\n>\n> I considered that, but that makes the API a lot more annoying to use\n> for end users when they want to compare to a version, especially when\n> they want to include the major version too.\n>\n> PQprotocolVersion() >= 30001\n>\n> vs\n>\n> PQprotocolVersion() > 3 || (PQprotocolVersion() == 3 &&\n> PQprotocolVersionMinor() >= 1)\n>\n> Given that PQprotocolVersion currently has no practical use, because\n> it always returns 3 in practice. I personally think that changing the\n> behaviour of API in the way I suggested is the best option here.\n\nI respect that, but I don't want to get flamed for doing something\nthat might be controversial without anybody else endorsing it. I'll\ncommit this if it gets some support, but not otherwise. 
I'm willing to\ncommit a patch adding a new function even if nobody else votes, but\nnot this.\n\n> > > Patch 4: Adds a new connection option, but initially all parameters\n> > > that it takes have the same effect.\n> >\n> > Generally seems OK, but:\n>\n> Fixed\n\nNot entirely. The documentation of the environment variable gets the\nname of the environment variable wrong.\n\n> > I wonder whether we could define 3.2 to report on all supported\n> > protocol parameters even if they weren't in the startup message, to\n> > avoid having to jam a lot of stuff we don't really care about into the\n> > startup message.\n>\n> I think that's a good idea. I'll try to look into doing that soonish.\n\nIs this something that you still intend to do? It looks to me like\nthis would change some things as early as 0006.\n\nOther things about 0006:\n\n+ * Since some old servers and poolers don't support the\n+ * NegotiateProtocolVersion message. So we don't want to send a\n\nYou could say \"Some old servers ... message, so we don't\" or \"Some old\nservers ... message. So, we don't\" or \"Since some old servers ...\nmessage, we don't\" but it's not quite grammatical the way it is.\n\n+ * ask for the latest protocol version we support by\ndefault too.\n\nHow about \"default to the latest protocol version we support\"?\n\n+ for (const pg_protocol_parameter *param =\nKnownProtocolParameters; param->name; param++)\n+ {\n+ const char *value = *(char **) ((char *) conn\n+ param->conn_connection_string_value_offset);\n+\n+ if (value && value[0])\n+ {\n+ needs_new_protocol_features = true;\n+ break;\n+ }\n+ }\n\nI would personally prefer this if it were written with explicit tests,\nlike param->name != NULL and value[0] != '\\0' but I realize that's not\neveryone's preferred style.\n\nI guess I'm also a little unsure whether we need to deal with both\nNULL and \"\" here. Are they semantically different? Is it reasonable to\ntry to prevent NULL from occurring in the first place? Or maybe it's\nfine the way it is. Not sure.\n\nI also happened to notice that 0009 has a typo: report_paramters.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Aug 2024 16:10:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
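As a point of reference, here is the startup-parameter scan quoted above rewritten with the explicit comparisons suggested in the review. KnownProtocolParameters, pg_protocol_parameter, needs_new_protocol_features and conn_connection_string_value_offset are names from the patch set under review, not existing libpq code; this is a sketch, not the final hunk.

/* Scan the proposed protocol-parameter table, with explicit NULL and
 * empty-string tests as suggested in the review above. */
for (const pg_protocol_parameter *param = KnownProtocolParameters;
     param->name != NULL;
     param++)
{
    const char *value = *(char **) ((char *) conn +
                                    param->conn_connection_string_value_offset);

    if (value != NULL && value[0] != '\0')
    {
        needs_new_protocol_features = true;
        break;
    }
}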
{
"msg_contents": "On Wed, 7 Aug 2024 at 22:10, Robert Haas <[email protected]> wrote:\n> I took another look at 0002 today:\n\nApplied all 0002 feedback. Although I used Min(proto,\nPG_PROTOCOL_LATEST) because Max results in the wrong value.\n\n> I respect that, but I don't want to get flamed for doing something\n> that might be controversial without anybody else endorsing it. I'll\n> commit this if it gets some support, but not otherwise. I'm willing to\n> commit a patch adding a new function even if nobody else votes, but\n> not this.\n\nMakes sense. I'm not in too much of a hurry with this specific one. So\nI'll leave it like this for now and hopefully someone else responds.\nIf this becomes close to being the final unmerged patch of this\npatchset, I'll probably cut my losses and create a patch that adds a\nfunction instead.\n\n> Not entirely. The documentation of the environment variable gets the\n> name of the environment variable wrong.\n\nOops... fixed\n\n> > > I wonder whether we could define 3.2 to report on all supported\n> > > protocol parameters even if they weren't in the startup message, to\n> > > avoid having to jam a lot of stuff we don't really care about into the\n> > > startup message.\n> >\n> > I think that's a good idea. I'll try to look into doing that soonish.\n>\n> Is this something that you still intend to do? It looks to me like\n> this would change some things as early as 0006.\n\nYeah I still intend to do this, but haven't found the time yet. For\nnow I've moved all the protocol parameter stuff together in 0008\n(which still needs extra thinking. The only thing that 0006 now does\nis handle different protocol versions better as well as applying your\nother 0006 feedback, but it still errors for all protocol parameters.\n\n> I guess I'm also a little unsure whether we need to deal with both\n> NULL and \"\" here. Are they semantically different? Is it reasonable to\n> try to prevent NULL from occurring in the first place? Or maybe it's\n> fine the way it is. Not sure.\n\nNULL happens when report_parameters is not set at all by the user, and\n\"\" happens when the connection string contains report_parameters=''.\n\nIt's possible to prevent NULL from occurring, by setting the\n\"compiled\" field to the empty string in the PQconninfoOptions array.\nBut this means that an extra strdup will be done for every PGconn that\nis created to copy this empty string. None of the other entries in\nPQconninfoOptions use an empty string for the \"compiled\" field, all\nuse NULL. So I think it's best to use NULL for these protocol\nparameters too. Which then in turn means that we have to check for\nboth values.\n\n> I also happened to notice that 0009 has a typo: report_paramters.\n\nFixed",
"msg_date": "Wed, 14 Aug 2024 20:04:09 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
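A minimal sketch of the two points settled in this exchange (illustrative only, not the committed hunks): FrontendProtocol is assigned once while parsing the startup packet, clamped to the newest version the server can speak, and that same clamped value is later reported back in the NegotiateProtocolVersion message.

/* In the startup-packet path: assign once, clamped.  proto is the version
 * the client asked for; Min() is the usual PostgreSQL macro from c.h. */
FrontendProtocol = Min(proto, PG_PROTOCOL_LATEST);

/* Later, when building the NegotiateProtocolVersion reply, report the
 * clamped value instead of a hard-coded PG_PROTOCOL_LATEST: */
pq_beginmessage(&buf, 'v');      /* NegotiateProtocolVersion */
pq_sendint32(&buf, FrontendProtocol);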
{
"msg_contents": "On Wed, Aug 14, 2024 at 2:04 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Applied all 0002 feedback. Although I used Min(proto,\n> PG_PROTOCOL_LATEST) because Max results in the wrong value.\n\nPicky, picky. :-)\n\nCommitted.\n\n> Makes sense. I'm not in too much of a hurry with this specific one. So\n> I'll leave it like this for now and hopefully someone else responds.\n> If this becomes close to being the final unmerged patch of this\n> patchset, I'll probably cut my losses and create a patch that adds a\n> function instead.\n\nMaybe reorder the series to put that one later then.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Aug 2024 10:48:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On 14/08/2024 21:04, Jelte Fennema-Nio wrote:\n> On Wed, 7 Aug 2024 at 22:10, Robert Haas <[email protected]> wrote:\n>> I respect that, but I don't want to get flamed for doing something\n>> that might be controversial without anybody else endorsing it. I'll\n>> commit this if it gets some support, but not otherwise. I'm willing to\n>> commit a patch adding a new function even if nobody else votes, but\n>> not this.\n> \n> Makes sense. I'm not in too much of a hurry with this specific one. So\n> I'll leave it like this for now and hopefully someone else responds.\n> If this becomes close to being the final unmerged patch of this\n> patchset, I'll probably cut my losses and create a patch that adds a\n> function instead.\n\nI think Jelte's proposal on PQprotocolVersion() is OK. As he pointed \nout, the function is pretty useless as it is, so I doubt anyone is doing \nanything interesting with it. Perhaps we should even change it to return \n300000 for protocol version 3.0, and just leave a note in the docs like \n\"in older versions of libpq, this returned 3 for protocol version 3.0\".\n\nOn Wed, 7 Aug 2024 at 22:10, Robert Haas <[email protected]> wrote:\n>> > > Patch 7: Bump the protocol version to 3.2 (see commit message for why\n>> > > not bumping to 3.1)\n>> >\n>> > Good commit message. The point is arguable, so putting forth your best\n>> > argument is important.\n>>\n>> Just to clarify: do you agree with the point now, after that argument?\n> \n> Well, here again, I would like to know what other people think. It\n> doesn't intrinsically matter to me that much what we do here, but it\n> matters to me a lot that extensive recriminations don't ensue\n> afterwards.\n\nMakes sense to me. It's sad that pgbouncer had such a bug, but it makes \nsense to accommodate it. We're not going to run out of integers. This \ndeserves some more commentary in the docs I think. If I understand the \nplan correctly, if the client requests version 3.1, the server accepts \nit, but behaves exactly the same as 3.0. Or should 3.1 be forbidden \naltogether?\n\nOn the default for \"max_protocol_version\": I'm pretty disappointed if we \ncannot change the default to \"latest\". I realize that that won't work \nwith poolers that don't implement NegotiateProtocolVersion. But I'm \nafraid if we make the new protocol version opt-in, no one will use it, \nand the poolers etc still won't bother to implement \nNegotiateProtocolVersion for years to come, if ever. We can revisit this \ndecision later in the release cycle though. But I'd suggest changing the \ndefault to \"latest\" for now, so that more hackers working with \nconnection poolers will notice, and we get more testing of the new \nprotocol and the negotiation.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 16 Aug 2024 01:03:57 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
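For context on the default being debated here, a small usage sketch. max_protocol_version is the connection option proposed earlier in this thread and the value names follow that patch set; released libpq does not accept it, so treat the option itself as an assumption. The libpq calls (PQconnectdb, PQstatus, PQprotocolVersion, PQfinish) are the existing API.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* Ask for the newest protocol version the client knows (proposed option). */
    PGconn *conn = PQconnectdb("dbname=postgres max_protocol_version=latest");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
    else
        printf("protocol version in use: %d\n", PQprotocolVersion(conn));

    PQfinish(conn);
    return 0;
}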
{
"msg_contents": "On Thu, Aug 15, 2024 at 3:04 PM Heikki Linnakangas <[email protected]> wrote:\n> Perhaps we should even change it to return\n> 300000 for protocol version 3.0, and just leave a note in the docs like\n> \"in older versions of libpq, this returned 3 for protocol version 3.0\".\n\nI think that would absolutely break current code. It's not uncommon\n(IME) for hand-built clients wrapping libpq to make sure they're not\ntalking v2 before turning on some feature, and they're allowed to do\nthat with a PQprotocolVersion() == 3 check. A GitHub code search\nbrings up examples.\n\nAs for 30001: I don't see the value in modifying an exported API in\nthis way. Especially since we can add a new entry point that will be\nguaranteed not to break anyone, as Robert suggested. I think it's a\nPOLA violation at minimum; my understanding was that up until this\npoint, the value was incremented during major (incompatible) version\nbumps. And I think other users will have had the same understanding.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 15 Aug 2024 15:39:00 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 16 Aug 2024 at 00:39, Jacob Champion\n<[email protected]> wrote:\n>\n> On Thu, Aug 15, 2024 at 3:04 PM Heikki Linnakangas <[email protected]> wrote:\n> > Perhaps we should even change it to return\n> > 300000 for protocol version 3.0, and just leave a note in the docs like\n> > \"in older versions of libpq, this returned 3 for protocol version 3.0\".\n>\n> I think that would absolutely break current code. It's not uncommon\n> (IME) for hand-built clients wrapping libpq to make sure they're not\n> talking v2 before turning on some feature, and they're allowed to do\n> that with a PQprotocolVersion() == 3 check. A GitHub code search\n> brings up examples.\n\nCan you give a link for that code search and/or an example where\nsomeone used it like that in a real setting? The only example I could\nfind where someone used it at all was psycopg having a unittest for\ntheir python wrapper around this API, and they indeed used == 3.\n\n> As for 30001: I don't see the value in modifying an exported API in\n> this way. Especially since we can add a new entry point that will be\n> guaranteed not to break anyone, as Robert suggested. I think it's a\n> POLA violation at minimum; my understanding was that up until this\n> point, the value was incremented during major (incompatible) version\n> bumps. And I think other users will have had the same understanding.\n\nThe advantage is not introducing yet another API when we already have\none with a great name that no-one is currently using. The current API\nis in practice just a very convoluted way of writing 3. Also doing an\n== 3 check is obviously problematic, if people use this function they\nshould be using > 3 to be compatible with future versions. So if we\never introduce protocol version 4, then these (afaict theoretical)\nusers would break anyway.\n\n\n",
"msg_date": "Fri, 16 Aug 2024 09:04:52 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Thu, Aug 15, 2024 at 6:03 PM Heikki Linnakangas <[email protected]> wrote:\n> On the default for \"max_protocol_version\": I'm pretty disappointed if we\n> cannot change the default to \"latest\". I realize that that won't work\n> with poolers that don't implement NegotiateProtocolVersion. But I'm\n> afraid if we make the new protocol version opt-in, no one will use it,\n> and the poolers etc still won't bother to implement\n> NegotiateProtocolVersion for years to come, if ever. We can revisit this\n> decision later in the release cycle though. But I'd suggest changing the\n> default to \"latest\" for now, so that more hackers working with\n> connection poolers will notice, and we get more testing of the new\n> protocol and the negotiation.\n\nIn this regard, I think your proposed protocol change (bumping the\ncancel-key length) is different from all of the other protocol\nenhancement proposals that I can think of. Most people seem to be\ninterested in adding an optional feature that some clients might want\nand other clients might not care about. Peter Eisentraut's transparent\ncolumn encryption stuff is an example of that. What Jelte wants to do\nhere is too, really, because while these facilities seem like they\ncould be generally useful for poolers -- at least if we could agree on\nwhat to do and work out all the problems -- and could potentially be\nused by applications as well, there would no requirement that any\nparticular application use any of the new facilities and many of them\nwouldn't. So in that kind of world, it makes more sense to me to\ndefault to 3.0 unless the user indicates a desire to use a newer\nfeature. That way, we minimize breakage at very little cost. Desire to\nuse the new features can be expected to spur some development in\necosystem projects, and until that work gets done, many setups are\nunaffected.\n\nBut the cancel key is a whole different kind of thing. I don't expect\npeople to be motivated to add support for newer protocol versions just\nto get a longer cancel key. If we want people to adopt longer cancel\nkeys, we need to change the client default and accept that there's\ngoing to be a bunch of breakage until everybody fixes their code.\n\nBut is that actually a good idea?\n\nI have some misgivings about that. When somebody's stuff stops working\nbecause of some problem that boils down to \"we made the cancel key\nlonger,\" I think we're going to have some pretty irate users. I\nbelieve everybody would agree in a vacuum that making cancel keys\nlonger is probably a good idea, but it seems like a relatively minor\nbenefit for the amount of inconvenience we're going to be imposing on\neveryone. On the other hand, the logical conclusion of that argument\nis that we should never do it, which I don't really believe either.\nI'm actually kind of surprised that nobody else (that I've seen,\nanyway) has expressed concern about the fact that that proposal\ninvolves a protocol version bump. Have people not noticed? Does nobody\ncare? To me, that thread reads like \"I'm going to make a fire in the\nfireplace\" but then there's a footnote that reads \"by setting off a\nnuclear bomb\" but we're only talking about how warm and cozy the fire\nwill be. :-)\n\nI'm sure you're going to say \"it's worth it\" -- you wouldn't have\nwritten the patch otherwise -- but I wonder if it's going to feel\nworth it to everybody who has to deal with the downstream effects.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 16 Aug 2024 08:55:03 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On 16/08/2024 15:55, Robert Haas wrote:\n> On Thu, Aug 15, 2024 at 6:03 PM Heikki Linnakangas <[email protected]> wrote:\n>> On the default for \"max_protocol_version\": I'm pretty disappointed if we\n>> cannot change the default to \"latest\". I realize that that won't work\n>> with poolers that don't implement NegotiateProtocolVersion. But I'm\n>> afraid if we make the new protocol version opt-in, no one will use it,\n>> and the poolers etc still won't bother to implement\n>> NegotiateProtocolVersion for years to come, if ever. We can revisit this\n>> decision later in the release cycle though. But I'd suggest changing the\n>> default to \"latest\" for now, so that more hackers working with\n>> connection poolers will notice, and we get more testing of the new\n>> protocol and the negotiation.\n> \n> In this regard, I think your proposed protocol change (bumping the\n> cancel-key length) is different from all of the other protocol\n> enhancement proposals that I can think of. Most people seem to be\n> interested in adding an optional feature that some clients might want\n> and other clients might not care about. Peter Eisentraut's transparent\n> column encryption stuff is an example of that. What Jelte wants to do\n> here is too, really, because while these facilities seem like they\n> could be generally useful for poolers -- at least if we could agree on\n> what to do and work out all the problems -- and could potentially be\n> used by applications as well, there would no requirement that any\n> particular application use any of the new facilities and many of them\n> wouldn't. So in that kind of world, it makes more sense to me to\n> default to 3.0 unless the user indicates a desire to use a newer\n> feature. That way, we minimize breakage at very little cost. Desire to\n> use the new features can be expected to spur some development in\n> ecosystem projects, and until that work gets done, many setups are\n> unaffected.\n> \n> But the cancel key is a whole different kind of thing. I don't expect\n> people to be motivated to add support for newer protocol versions just\n> to get a longer cancel key. If we want people to adopt longer cancel\n> keys, we need to change the client default and accept that there's\n> going to be a bunch of breakage until everybody fixes their code.\n\nAgreed.\n\n> But is that actually a good idea?\n> \n> I have some misgivings about that. When somebody's stuff stops working\n> because of some problem that boils down to \"we made the cancel key\n> longer,\" I think we're going to have some pretty irate users. I\n> believe everybody would agree in a vacuum that making cancel keys\n> longer is probably a good idea, but it seems like a relatively minor\n> benefit for the amount of inconvenience we're going to be imposing on\n> everyone.\n\nAgreed.\n\n> On the other hand, the logical conclusion of that argument\n> is that we should never do it, which I don't really believe either.\n> I'm actually kind of surprised that nobody else (that I've seen,\n> anyway) has expressed concern about the fact that that proposal\n> involves a protocol version bump. Have people not noticed? Does nobody\n> care? To me, that thread reads like \"I'm going to make a fire in the\n> fireplace\" but then there's a footnote that reads \"by setting off a\n> nuclear bomb\" but we're only talking about how warm and cozy the fire\n> will be. 
:-)\n> \n> I'm sure you're going to say \"it's worth it\" -- you wouldn't have\n> written the patch otherwise -- but I wonder if it's going to feel\n> worth it to everybody who has to deal with the downstream effects.\n\nI didn't realize the issue with poolers is so widespread when I started \nworking on that patch this spring, and I gave up hoping to get it into \nv17 when that was brought up.\n\nNow, I think we should still do it, but it might not warrant changing \nthe default. Unfortunately that means that it will get very little \nadoption. It will only be adopted as a side-effect of some other changes \nthat make people change the protocol_version setting. Or after a very \nlong transition period.\n\nThat said, I think we *should* change the default for the time being, so \nthat developers working on the bleeding edge and building from git get \nsome exposure to it. Hopefully that will nudge some of the poolers to \nadopt NegotiateProtocolVersion sooner. But we revert the default to 3.0 \nbefore the v18 release.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 16 Aug 2024 17:51:14 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 16 Aug 2024 at 16:51, Heikki Linnakangas <[email protected]> wrote:\n> That said, I think we *should* change the default for the time being, so\n> that developers working on the bleeding edge and building from git get\n> some exposure to it. Hopefully that will nudge some of the poolers to\n> adopt NegotiateProtocolVersion sooner. But we revert the default to 3.0\n> before the v18 release.\n\nThat seems reasonable to me. To be clear, the latest PgBouncer release\nnow supports NegotiateProtocolVersion.\n\n\n",
"msg_date": "Fri, 16 Aug 2024 16:54:44 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Aug 16, 2024 at 10:51 AM Heikki Linnakangas <[email protected]> wrote:\n> Now, I think we should still do it, but it might not warrant changing\n> the default. Unfortunately that means that it will get very little\n> adoption. It will only be adopted as a side-effect of some other changes\n> that make people change the protocol_version setting. Or after a very\n> long transition period.\n>\n> That said, I think we *should* change the default for the time being, so\n> that developers working on the bleeding edge and building from git get\n> some exposure to it. Hopefully that will nudge some of the poolers to\n> adopt NegotiateProtocolVersion sooner. But we revert the default to 3.0\n> before the v18 release.\n\nI'm fine with changing the default for the time being. I'm not really\nsure what I think about changing it back before release. Maybe six\nmonths from now it will be clearer what the right thing to do is; or\nmaybe other people will be more certain of what The Right Thing To Do\nis than I am myself; but there's no rush to finalize a decision right\nthis minute.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 16 Aug 2024 11:34:34 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Aug 16, 2024 at 12:05 AM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Fri, 16 Aug 2024 at 00:39, Jacob Champion\n> <[email protected]> wrote:\n> >\n> > On Thu, Aug 15, 2024 at 3:04 PM Heikki Linnakangas <[email protected]> wrote:\n> > > Perhaps we should even change it to return\n> > > 300000 for protocol version 3.0, and just leave a note in the docs like\n> > > \"in older versions of libpq, this returned 3 for protocol version 3.0\".\n> >\n> > I think that would absolutely break current code. It's not uncommon\n> > (IME) for hand-built clients wrapping libpq to make sure they're not\n> > talking v2 before turning on some feature, and they're allowed to do\n> > that with a PQprotocolVersion() == 3 check. A GitHub code search\n> > brings up examples.\n>\n> Can you give a link for that code search and/or an example where\n> someone used it like that in a real setting?\n\nKeeping in mind that I'm responding to the change from 3 to 30000:\n\n https://github.com/search?q=PQprotocolVersion&type=code\n https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89\n\nBindings re-export this symbol in ways that basically just expand the\nweb of things to talk about. And there's hazards like\n\n https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864\n\nwhere the old client is fine, but new clients could be tricked into\nwriting similar checks as `>= 30000` -- which is wrong because older\nlibpqs use 3, haha, surprise, have fun with that!\n\n> The only example I could\n> find where someone used it at all was psycopg having a unittest for\n> their python wrapper around this API, and they indeed used == 3.\n\nI've written code that uses exact equality as well, because I cared\nabout the wire protocol version. Even if I hadn't, isn't the first\npublic example enough, since a GitHub search is going to be an\nundercount? What is your acceptable level of breakage?\n\nPeople who are testing against this have some reason to care about the\nunderlying protocol compatibility. I don't need to understand (or even\nagree with!) why they care; we've exported an API that allows them to\ndo something they find useful.\n\n> The advantage is not introducing yet another API when we already have\n> one with a great name\n\nSorry to move the goalposts, but when I say \"value\" I'm specifically\ntalking about value for clients/community, not value for patch writers\n(or even maintainers). A change here doesn't add any value for\nexisting clients when compared to a new API, since they've already\nwritten the version check code and are presumably happy with it. New\nclients that have reason to care about the minor version, whatever\nthat happens to mean for a protocol, can use new code.\n\nI'm not opposed to compatibility breaks when a client can see some\nvalue in what we've done -- but the dev being broken should at least\nbe able to say something like \"oh yeah, it was clearly broken before\nand I'm glad it works now\" or \"wow, what a security hole, I'm glad\nthey patched it\". That's not true here.\n\nlibpq is close to the base of a gigantic software stack and ecosystem.\nWe have an API, we have an SONAME, we have ways to introduce better\nAPIs while not breaking past clients. (And we can collect the list of\ncruft to discard in a future SONAME bump, so we don't have to carry it\nforever... 
but is the cost of this particularly large?)\n\n> that no-one is currently using.\n\nThis is demonstrably false. You just cited the example you found in\npsycopg, and all those bindings on GitHub have clients of their own,\nnot all of which are going to be easily searchable.\n\n> The current API\n> is in practice just a very convoluted way of writing 3.\n\nThere are versions of libpq still in support that can return 2, and\nclients above us have to write code against the whole spectrum.\n\n> Also doing an\n> == 3 check is obviously problematic, if people use this function they\n> should be using > 3 to be compatible with future versions.\n\nDepends on why they're checking. I regularly write test clients that\ndrop down beneath the libpq layer. I don't want to be silently\nupgraded.\n\nI think I remember some production Go-based libpq clients several\nyears ago that did similar things, dropping down to the packet level\nto do some sort of magic, but I can't remember exactly what now.\nThat's terrifying in the abstract, but it's not my decision or code to\nmaintain. The community is allowed to do things like that; we publish\na protocol definition in addition to an API that exposes the socket.\n\n> So if we\n> ever introduce protocol version 4, then these (afaict theoretical)\n> users would break anyway.\n\nYes -- not theoretical, since I am one of them! -- that's the point.\nSince we've already demonstrated that protocol details can leak up\nabove the API for the 2->3 change, a dev with reason to be paranoid\n(such as myself) can write a canary for the 3->4 change. \"Protocol 4.0\nnot yet supported\" can be a feature.\n\n--Jacob\n\n\n",
"msg_date": "Fri, 16 Aug 2024 10:44:07 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
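The >= 30000 trap described above can be made concrete with a small helper. This is a hypothetical sketch of what downstream code would need if the return value were changed: older libpq builds return the bare major version (2 or 3), while a changed build would return major * 10000 + minor.

#include <libpq-fe.h>

/*
 * Normalize PQprotocolVersion() across the two conventions discussed above:
 * a bare major version (2, 3) from older builds, or major * 10000 + minor
 * from a hypothetical changed build.  3 -> 30000, 30001 stays 30001.
 */
static int
normalized_protocol_version(const PGconn *conn)
{
    int v = PQprotocolVersion(conn);

    return (v < 10000) ? v * 10000 : v;
}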
{
"msg_contents": "On Fri, Aug 16, 2024 at 1:44 PM Jacob Champion\n<[email protected]> wrote:\n> https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89\n> https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864\n\nIMHO these examples establish beyond doubt that the existing function\nreally is being used in ways that would break if we committed the\nproposed patch. To be honest, I'm slightly surprised, because protocol\nversion 2 has been so dead for so long that I would not have\nanticipated people would even bother checking for it. But these\nexamples show that some people do. If Jacob found these examples this\neasily, there are probably a bunch of others.\n\nIt's not worth breaking existing code to avoid adding one new libpq\nentrypoint. Let's just add the new function and move on.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 16 Aug 2024 14:01:15 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On 16/08/2024 21:01, Robert Haas wrote:\n> On Fri, Aug 16, 2024 at 1:44 PM Jacob Champion\n> <[email protected]> wrote:\n>> https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89\n>> https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864\n> \n> IMHO these examples establish beyond doubt that the existing function\n> really is being used in ways that would break if we committed the\n> proposed patch. To be honest, I'm slightly surprised, because protocol\n> version 2 has been so dead for so long that I would not have\n> anticipated people would even bother checking for it. But these\n> examples show that some people do. If Jacob found these examples this\n> easily, there are probably a bunch of others.\n> \n> It's not worth breaking existing code to avoid adding one new libpq\n> entrypoint. Let's just add the new function and move on.\n\n+1. Jacob is right.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 16 Aug 2024 22:26:51 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 16 Aug 2024 at 15:26, Heikki Linnakangas <[email protected]> wrote:\n\n> On 16/08/2024 21:01, Robert Haas wrote:\n> > On Fri, Aug 16, 2024 at 1:44 PM Jacob Champion\n> > <[email protected]> wrote:\n> >>\n> https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89\n> >>\n> https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864\n> >\n> > IMHO these examples establish beyond doubt that the existing function\n> > really is being used in ways that would break if we committed the\n> > proposed patch. To be honest, I'm slightly surprised, because protocol\n> > version 2 has been so dead for so long that I would not have\n> > anticipated people would even bother checking for it. But these\n> > examples show that some people do. If Jacob found these examples this\n> > easily, there are probably a bunch of others.\n> >\n> > It's not worth breaking existing code to avoid adding one new libpq\n> > entrypoint. Let's just add the new function and move on.\n>\n> +1. Jacob is right.\n>\n\nFor those of us who don't use a function. How will this work ?\n\nDave\n\nOn Fri, 16 Aug 2024 at 15:26, Heikki Linnakangas <[email protected]> wrote:On 16/08/2024 21:01, Robert Haas wrote:\n> On Fri, Aug 16, 2024 at 1:44 PM Jacob Champion\n> <[email protected]> wrote:\n>> https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89\n>> https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864\n> \n> IMHO these examples establish beyond doubt that the existing function\n> really is being used in ways that would break if we committed the\n> proposed patch. To be honest, I'm slightly surprised, because protocol\n> version 2 has been so dead for so long that I would not have\n> anticipated people would even bother checking for it. But these\n> examples show that some people do. If Jacob found these examples this\n> easily, there are probably a bunch of others.\n> \n> It's not worth breaking existing code to avoid adding one new libpq\n> entrypoint. Let's just add the new function and move on.\n\n+1. Jacob is right.For those of us who don't use a function. How will this work ?Dave",
"msg_date": "Fri, 16 Aug 2024 15:45:14 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On 16/08/2024 22:45, Dave Cramer wrote:\n> On Fri, 16 Aug 2024 at 15:26, Heikki Linnakangas <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 16/08/2024 21:01, Robert Haas wrote:\n> > On Fri, Aug 16, 2024 at 1:44 PM Jacob Champion\n> > <[email protected]\n> <mailto:[email protected]>> wrote:\n> >>\n> https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89 <https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89>\n> >>\n> https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864 <https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864>\n> >\n> > IMHO these examples establish beyond doubt that the existing function\n> > really is being used in ways that would break if we committed the\n> > proposed patch. To be honest, I'm slightly surprised, because\n> protocol\n> > version 2 has been so dead for so long that I would not have\n> > anticipated people would even bother checking for it. But these\n> > examples show that some people do. If Jacob found these examples this\n> > easily, there are probably a bunch of others.\n> >\n> > It's not worth breaking existing code to avoid adding one new libpq\n> > entrypoint. Let's just add the new function and move on.\n> \n> +1. Jacob is right.\n> \n> \n> For those of us who don't use a function. How will this work ?\n\nSorry, I don't understand the question. This sub-thread is all about the \nlibpq PQprotocolVersion() function.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 16 Aug 2024 22:54:25 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 16 Aug 2024 at 15:54, Heikki Linnakangas <[email protected]> wrote:\n\n> On 16/08/2024 22:45, Dave Cramer wrote:\n> > On Fri, 16 Aug 2024 at 15:26, Heikki Linnakangas <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > On 16/08/2024 21:01, Robert Haas wrote:\n> > > On Fri, Aug 16, 2024 at 1:44 PM Jacob Champion\n> > > <[email protected]\n> > <mailto:[email protected]>> wrote:\n> > >>\n> >\n> https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89\n> <\n> https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89\n> >\n> > >>\n> >\n> https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864\n> <\n> https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864\n> >\n> > >\n> > > IMHO these examples establish beyond doubt that the existing\n> function\n> > > really is being used in ways that would break if we committed the\n> > > proposed patch. To be honest, I'm slightly surprised, because\n> > protocol\n> > > version 2 has been so dead for so long that I would not have\n> > > anticipated people would even bother checking for it. But these\n> > > examples show that some people do. If Jacob found these examples\n> this\n> > > easily, there are probably a bunch of others.\n> > >\n> > > It's not worth breaking existing code to avoid adding one new\n> libpq\n> > > entrypoint. Let's just add the new function and move on.\n> >\n> > +1. Jacob is right.\n> >\n> >\n> > For those of us who don't use a function. How will this work ?\n>\n> Sorry, I don't understand the question. This sub-thread is all about the\n> libpq PQprotocolVersion() function.\n>\n\nAdmittedly I'm a bit late into this discussion so I may be off base.\nUltimately we need to negotiate the protocol. From what I can tell for\nlibpq we are providing a function that returns a number, currently 3.\n\nThe proposal is to change it to something like 30000.\n\nUltimately this has to go over the wire so that clients that are\nimplementing the protocol themselves can respond to the new behaviour.\n\nWouldn't we have to send this number in the protocol negotiation ?\n\nDave\n\n>\n>\n\nOn Fri, 16 Aug 2024 at 15:54, Heikki Linnakangas <[email protected]> wrote:On 16/08/2024 22:45, Dave Cramer wrote:\n> On Fri, 16 Aug 2024 at 15:26, Heikki Linnakangas <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 16/08/2024 21:01, Robert Haas wrote:\n> > On Fri, Aug 16, 2024 at 1:44 PM Jacob Champion\n> > <[email protected]\n> <mailto:[email protected]>> wrote:\n> >>\n> https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89 <https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89>\n> >>\n> https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864 <https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864>\n> >\n> > IMHO these examples establish beyond doubt that the existing function\n> > really is being used in ways that would break if we committed the\n> > proposed patch. To be honest, I'm slightly surprised, because\n> protocol\n> > version 2 has been so dead for so long that I would not have\n> > anticipated people would even bother checking for it. But these\n> > examples show that some people do. 
If Jacob found these examples this\n> > easily, there are probably a bunch of others.\n> >\n> > It's not worth breaking existing code to avoid adding one new libpq\n> > entrypoint. Let's just add the new function and move on.\n> \n> +1. Jacob is right.\n> \n> \n> For those of us who don't use a function. How will this work ?\n\nSorry, I don't understand the question. This sub-thread is all about the \nlibpq PQprotocolVersion() function.Admittedly I'm a bit late into this discussion so I may be off base. Ultimately we need to negotiate the protocol. From what I can tell for libpq we are providing a function that returns a number, currently 3. The proposal is to change it to something like 30000. Ultimately this has to go over the wire so that clients that are implementing the protocol themselves can respond to the new behaviour.Wouldn't we have to send this number in the protocol negotiation ? Dave",
"msg_date": "Fri, 16 Aug 2024 16:03:13 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Aug 16, 2024 at 4:03 PM Dave Cramer <[email protected]> wrote:\n> Admittedly I'm a bit late into this discussion so I may be off base.\n> Ultimately we need to negotiate the protocol. From what I can tell for libpq we are providing a function that returns a number, currently 3.\n>\n> The proposal is to change it to something like 30000.\n>\n> Ultimately this has to go over the wire so that clients that are implementing the protocol themselves can respond to the new behaviour.\n>\n> Wouldn't we have to send this number in the protocol negotiation ?\n\nSee the discussion of the NegotiateProtocolVersion message which has\nbeen around for a long time but is still not supported by all clients.\n\nhttps://www.postgresql.org/docs/current/protocol.html\nhttps://www.postgresql.org/docs/current/protocol-message-formats.html\n\nNo changes to the format of that message are proposed. The startup\nmessage also contains a version number, and changes the format of that\nmessage are also not proposed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 16 Aug 2024 16:34:52 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
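To spell out how the requested version already travels on the wire (the question raised above): the StartupMessage carries a single 32-bit version number, built by the PG_PROTOCOL macros in src/include/libpq/pqcomm.h, so no new field is needed for a client to ask for a newer minor version.

/* From pqcomm.h: major version in the high 16 bits, minor in the low 16. */
#define PG_PROTOCOL(m, n)     (((m) << 16) | (n))
#define PG_PROTOCOL_MAJOR(v)  ((v) >> 16)
#define PG_PROTOCOL_MINOR(v)  ((v) & 0x0000ffff)

/* Protocol 3.0 is therefore sent as 196608; 3.2 would be 196610. */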
{
"msg_contents": "On Fri, 16 Aug 2024 at 19:44, Jacob Champion\n<[email protected]> wrote:\n> Keeping in mind that I'm responding to the change from 3 to 30000:\n\nLet me start with the fact that I agree we **shouldn't** change 3 to\n30000. And the proposed patch also specifically doesn't.\n\n> https://github.com/search?q=PQprotocolVersion&type=code\n> https://github.com/psycopg/psycopg2/blob/658afe4cd90d3e167d7c98d22824a8d6ec895b1c/tests/test_async.py#L89\n\nA **unittest** which is there just to add coverage for a method that\nthe driver exposes is not **actual usage** IMHO. Afaict Daniele (CCd\nfor confirmation) only added this code to add coverage for the API it\nre-exposes. Considering this as an argument against returning\ndifferent values from this function is akin to saying that we should\navoid changing the function if we would have had coverage for this\nfunction ourselves in our own libpq tests by checking for == 3.\nFinally, this function in psycopg was only added in 2020. That's a\ntime when having this function return anything other than 3 when\nconnecting to a supported Postgres version was already not possible\nfor 10 years.\n\n> https://github.com/infusion/PHP/blob/7ebefb6426bb4b4820a30cca5c3a10bfd757b6ea/ext/pgsql/pgsql.c#L864\n>\n> where the old client is fine, but new clients could be tricked into\n> writing similar checks as `>= 30000` -- which is wrong because older\n> libpqs use 3, haha, surprise, have fun with that!\n\nThis is indeed actual usage but, like any sensible production use, it\nactually uses >= 3 so nothing would break even if we changed to using\n30000. Rewording what you're saying to make it sound much less\nterrible: Users of the **new** API who fail to read the docs, and thus\nuse >= 30000 instead of >=3, would then cause breakage when linking to\nolder libpq versions. This seems extremely unlikely for anyone to do.\nBecause if you are a new user of the API, then why on earth would you\ncheck for 3.0 or larger. The last server that used a version lower\nthan 3.0 is 7.4 of which the final release was in 2010... So the only\nreason to use PQprotocolVersion in new code would be to check for\nversions higher than 3.0, for which the checks > 30000, and >= 30001\nwould both work! And finally this would only be the case if we change\nthe definition of 3.0 to 30000. Which as said above the proposed\npatch specifically doesn't, to avoid such confusion.\n\n> > The only example I could\n> > find where someone used it at all was psycopg having a unittest for\n> > their python wrapper around this API, and they indeed used == 3.\n>\n> I've written code that uses exact equality as well, because I cared\n> about the wire protocol version.\n\nSo, if you cared about the exact wire protocol version for your own\nusage. Then why wouldn't you want to know that the minor version\nchanged?\n\n> Even if I hadn't, isn't the first\n> public example enough, since a GitHub search is going to be an\n> undercount? What is your acceptable level of breakage?\n\nAs explained above. Any actual usage that is not a **unittest** of a\ndriver library.\n\n> People who are testing against this have some reason to care about the\n> underlying protocol compatibility. I don't need to understand (or even\n> agree with!) why they care; we've exported an API that allows them to\n> do something they find useful.\n\nYes, and assuming they only care about major version upgrades seems\nvery presumptuous. 
If they manually parse packets themselves or want\npackage traces to output the same data, then they'd want to pin to an\nexact protocol version, both minor and major. Just because we'd never\nhad a minor version bump before doesn't mean users of this API don't\ncare about being notified by them through the existing\nPQprotocolVersion API.\n\n> > The advantage is not introducing yet another API when we already have\n> > one with a great name\n>\n> Sorry to move the goalposts, but when I say \"value\" I'm specifically\n> talking about value for clients/community, not value for patch writers\n> (or even maintainers). A change here doesn't add any value for\n> existing clients when compared to a new API, since they've already\n> written the version check code and are presumably happy with it. New\n> clients that have reason to care about the minor version, whatever\n> that happens to mean for a protocol, can use new code.\n\nNot introducing new APIs definitely is useful to clients and the\ncommunity. Before users can use a new API, their libpq wrapper needs\nto expose this new function that calls it through FFI. First of all\nthis requires work from client authors. But the **key point being**:\nThe new function would not be present in old libpq versions. So for\nsome clients, the FFI wrapper would also not exist for those old libpq\nversions, or at least would fail. So now users before actually being\nable to check for a minor protocol version, they first need an up to\ndate libpq wrapper library for their language that exposes the new\nfunction, and then they'd still have to check their actual libpq\nversion, before they could finally check for the minor protocol\nversion...\n\nAlso as explained above, the few users that check for == 3 probably\n**actually care** about the exact version. And would want to have that\ncheck fail when the minor version changes.\n\n> > The current API\n> > is in practice just a very convoluted way of writing 3.\n>\n> There are versions of libpq still in support that can return 2, and\n> clients above us have to write code against the whole spectrum.\n\nOnly when connecting to PG7.4 which ended support in 2010. So in\npractice it has been returning only 3 for quite a while.\n\n> > Also doing an\n> > == 3 check is obviously problematic, if people use this function they\n> > should be using > 3 to be compatible with future versions.\n>\n> Depends on why they're checking. I regularly write test clients that\n> drop down beneath the libpq layer. I don't want to be silently\n> upgraded.\n>\n> I think I remember some production Go-based libpq clients several\n> years ago that did similar things, dropping down to the packet level\n> to do some sort of magic, but I can't remember exactly what now.\n> That's terrifying in the abstract, but it's not my decision or code to\n> maintain. The community is allowed to do things like that; we publish\n> a protocol definition in addition to an API that exposes the socket.\n\nI feel like we're agreeing here, but somehow you end with a different\nconclusion: People that care to check for the exact protocol version\nalso care about minor version bumps. In this Go client where someone\nwas doing weird packet level magic they cared about the **exact packet\nlayout**. Which is possible to change in minor version bumps. So by\nnot changing the return value of PQprotocolVersion, we'd be doing\nexactly what you said you don't want: We'd upgrade them silently!\n\n\n",
"msg_date": "Sat, 17 Aug 2024 11:32:03 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Sat, Aug 17, 2024 at 5:32 AM Jelte Fennema-Nio <[email protected]> wrote:\n> Not introducing new APIs definitely is useful to clients and the\n> community. Before users can use a new API, their libpq wrapper needs\n> to expose this new function that calls it through FFI. First of all\n> this requires work from client authors.\n\nSure, just like they do every other new libpq function.\n\n> But the **key point being**:\n> The new function would not be present in old libpq versions. So for\n> some clients, the FFI wrapper would also not exist for those old libpq\n> versions, or at least would fail. So now users before actually being\n> able to check for a minor protocol version, they first need an up to\n> date libpq wrapper library for their language that exposes the new\n> function, and then they'd still have to check their actual libpq\n> version, before they could finally check for the minor protocol\n> version...\n\nI feel like what you're really complaining about here is that libpq is\nnot properly versioned. We've just been calling it libpq.so.5 forever\ninstead of bumping the version number when we change stuff. Maybe we\nshould start doing that, because that's exactly what version numbers\nare for. Alternatively or in addition, maybe we should have a function\nin libpq that returns its own PostgreSQL version, because that would\nsolve this problem for all cases, whereas what you're proposing here\nonly solves it for this particular case (and at the risk of breaking\nthings for somebody).\n\nI just don't see why this particular change is special. We add new\nlibpq interfaces all the time and we don't do anything to make that\neasy for libpq clients to discover. If we implemented longer cancel\nkeys or protocol parameters or transparent column encryption without a\nprotocol version bump, clients would still need to figure out that\nthose features were present in the libpq they are linked against, just\nlike they presumably already need to worry about whether they're\nlinked a new-enough libpq to have any other feature that's been added\nsince forever ago. Sure, that's not great, but it doesn't seem any\nmore not great in this case than any other, and I'd rather see us come\nup with a nice general solution to that problem than hack this\nspecific case by redefining an existing function.\n\nAlso, I kind of wish you had brought this argument up earlier. Maybe\nyou did and I missed it, but I was under the impression that you were\njust arguing that \"nobody will notice or care,\" which is a quite\ndifferent argument than what you make here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 18 Aug 2024 23:43:56 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> I feel like what you're really complaining about here is that libpq is\n> not properly versioned. We've just been calling it libpq.so.5 forever\n> instead of bumping the version number when we change stuff. Maybe we\n> should start doing that, because that's exactly what version numbers\n> are for. Alternatively or in addition, maybe we should have a function\n> in libpq that returns its own PostgreSQL version, because that would\n> solve this problem for all cases, whereas what you're proposing here\n> only solves it for this particular case (and at the risk of breaking\n> things for somebody).\n\nNot really. *No* runtime test is adequate for discovery of a new\nlibrary API, because if you try to call a function that doesn't exist\nin the version you have, you will get a compile or link failure long\nbefore you can call any inquiry function.\n\nBumping the .so's major version just creates another way to fail\nat link time, so I'm not seeing how that would make this better.\n\n> I just don't see why this particular change is special. We add new\n> libpq interfaces all the time and we don't do anything to make that\n> easy for libpq clients to discover.\n\nIndeed. But we have actually paid a little bit of attention to that,\nin the form of inventing #define symbols that can be tested at compile\ntime. (There's an open item for 17 concerning failure to do that for\nsome new-in-17 APIs.) Yeah, it's grotty, but runtime checks aren't\nespecially relevant here.\n\nIn any case, please let us not abuse the wire protocol version number\nas an indicator of the libpq-to-application API version. They are\nfundamentally different things.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Aug 2024 00:53:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, 19 Aug 2024 at 05:44, Robert Haas <[email protected]> wrote:\n> I feel like what you're really complaining about here is that libpq is\n> not properly versioned. We've just been calling it libpq.so.5 forever\n> instead of bumping the version number when we change stuff.\n\nThere is PQlibVersion() that can be used for this. Which has existed\nsince 2010, so people can assume it exists.\n\n> I just don't see why this particular change is special.\n\nI didn't mean to say that it was, and I don't think the problem is\nenormous either. I mainly meant to say that there is not just a cost\nto Postgres maintainers when we introduce a new API. There's\ndefinitely a cost to users and client authors too.\n\n> Also, I kind of wish you had brought this argument up earlier. Maybe\n> you did and I missed it, but I was under the impression that you were\n> just arguing that \"nobody will notice or care,\" which is a quite\n> different argument than what you make here.\n\n\"nobody will notice or care\" was definitely my argument before Jacob\nresponded. Since Jacob his response I realize there are two valid use\ncases for PQprotocolVersion():\n\n1. Feature detection. For this my argument still is: people won't\nnotice. Many people won't have bothered to use the function and\neveryone else will have used >= 3 here.\n2. Pinning the protocol version, because they care that the exact\nprotocol details are the same. Here people will have used == 3, and\nthus their check will fail when we start to return a different version\nfrom PQprotocolVersion(). But that's actually what this usecase\ndesires. By creating a new function, we actually break the expectation\nof these people: i.e. they want the PQprotocolVersion() to return a\ndifferent version when the protocol changes.\n\nBefore Jacob responded I only considered the first case. So my\nargument was indeed basically: Let's reuse this currently useless\nfunction with the nice name, because no-one will care. But if people\nthought that the risk was too high, I didn't see huge downsides to\nintroducing a new API either.\n\nBut **now I actually feel much more strongly about reusing the same\nfunction**. Because by introducing a new function we actually break\nthe users of the second use-case.\n\nP.S. The docs for PQprotocolVersion[1] have never said that this\nfunction only returns the major protocol version. And by using the\nword \"Currently\" it has always suggested that new return values could\nbe introduced later, and thus for feature detection you should use >=\n3\n\n[1]: https://www.postgresql.org/docs/13/libpq-status.html#LIBPQ-PQPROTOCOLVERSION\n\n\n",
"msg_date": "Mon, 19 Aug 2024 09:29:59 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, Aug 19, 2024 at 3:30 AM Jelte Fennema-Nio <[email protected]> wrote:\n> But **now I actually feel much more strongly about reusing the same\n> function**. Because by introducing a new function we actually break\n> the users of the second use-case.\n>\n> P.S. The docs for PQprotocolVersion[1] have never said that this\n> function only returns the major protocol version. And by using the\n> word \"Currently\" it has always suggested that new return values could\n> be introduced later, and thus for feature detection you should use >=\n> 3\n\nIf somebody is using PQprotocolVersion() to detect the arrival of a\nnew protocol version, it stands to reason that they only care about\nnew major protocol versions, because that's what the function is\ndefined to tell you about. Anyone who has done a moderate amount of\nlooking into this area will understand that the protocol has a major\nversion number and a minor version number and that this function only\nreturns the former. Therefore, they should expect that the arrival of\na new minor protocol version won't change the return value of this\nfunction.\n\nI really don't understand why we're still arguing about this. It seems\nto me that we've established that there is some usage of the existing\nfunction, and that changing the return value will break something.\nSure, so far as we know that something is \"only\" regression tests, but\nthere's no guarantee that there couldn't be other code that we don't\nknow about that breaks worse, and even there isn't, who wants to break\nregression tests when there's nothing actually wrong? Now we could\ndecide we're going to do it anyway because of whatever reason we might\nhave, but it doesn't seem like that's what most people want to do.\n\nI feel like we're finally in a position to get some things done here\nand this doesn't seem like the point to get stuck on. YMMV, of course.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Aug 2024 10:16:01 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, 19 Aug 2024 at 16:16, Robert Haas <[email protected]> wrote:\n> If somebody is using PQprotocolVersion() to detect the arrival of a\n> new protocol version, it stands to reason that they only care about\n> new major protocol versions, because that's what the function is\n> defined to tell you about. Anyone who has done a moderate amount of\n> looking into this area will understand that the protocol has a major\n> version number and a minor version number and that this function only\n> returns the former. Therefore, they should expect that the arrival of\n> a new minor protocol version won't change the return value of this\n> function.\n\nWhat I'm trying to say is: I don't think there's any usecase where\npeople would care about a major bump, but not a minor bump. Especially\nkeeping in mind that a minor bump had never occurred when originally\ncreating this function. And because we never did it, there has so far\nbeen no definition of what is the actual difference between a major\nand a minor bump.\n\n> I really don't understand why we're still arguing about this. It seems\n> to me that we've established that there is some usage of the existing\n> function, and that changing the return value will break something.\n> Sure, so far as we know that something is \"only\" regression tests, but\n> there's no guarantee that there couldn't be other code that we don't\n> know about that breaks worse\n\nMy point is that the code that breaks, actually wants to be broken in this case.\n\n> and even there isn't, who wants to break\n> regression tests when there's nothing actually wrong?\n\nUpdating the regression test would be less work than adding support\nfor a new API. So if the main problem is\n\n> Now we could\n> decide we're going to do it anyway because of whatever reason we might\n> have, but it doesn't seem like that's what most people want to do.\n>\n> I feel like we're finally in a position to get some things done here\n> and this doesn't seem like the point to get stuck on. YMMV, of course.\n\nI'd love to hear a response from Jacob and Heikki on my arguments\nafter their last response. But if after reading those arguments they\nstill think we should add a new function, I'll update the patchset to\ninclude a new function.\n\n\n",
"msg_date": "Mon, 19 Aug 2024 22:53:46 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, Aug 19, 2024 at 1:54 PM Jelte Fennema-Nio <[email protected]> wrote:\n> My point is that the code that breaks, actually wants to be broken in this case.\n\nI'll turn this around then and assume for a moment that this is true:\nno matter what the use cases are, they all want to be broken for\ncorrectness. If this version change is allowed to break both the\nendpoints and any intermediaries on the connection, why have we chosen\n30001 as the new reported version as opposed to, say, 4?\n\nPut another way: for a middlebox on the connection (which may be\npassively observing, but also maybe actively adding new messages to\nthe stream), what is guaranteed to remain the same in the protocol\nacross a minor version bump? Hopefully the answer isn't \"nothing\"?\n\n--Jacob\n\n\n",
"msg_date": "Tue, 20 Aug 2024 06:48:00 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, 20 Aug 2024 at 15:48, Jacob Champion\n<[email protected]> wrote:\n> Put another way: for a middlebox on the connection (which may be\n> passively observing, but also maybe actively adding new messages to\n> the stream), what is guaranteed to remain the same in the protocol\n> across a minor version bump? Hopefully the answer isn't \"nothing\"?\n\nI think primarily we do a minor version bump because a major version\nbump would cause existing Postgres servers to throw an error for the\nconnection attempt (and we don't want that). While for a minor version\nbump they will instead send a NegotiateProtocolVersion message.\n\nIn practical terms I think that means for a minor version bump the\nformat of the StartupMessage cannot be changed. Changing anything else\nis fair game for a minor protocol version bump. I think we probably\nwould not want to change the format of ErrorResponse and\nNoticeResponse, since those can be sent by the server before the\nNegotiateProtocolVersion message. But I don't even think that is\nstrictly necessary, as long as clients would be able to parse both the\nold and new versions.\n\nNote that this definition arises from code and behaviour introduced in\nae65f6066dc3 in 2017. And PQprotocolVersion was introduced in\nefc3a25bb0 in 2003. So anyone starting to use the PQprotocolVersion\nfunction in between 2003 and 2017 had no way of knowing that there\nwould ever be a thing called a \"minor\" version, in which anything\nabout the protocol could be changed except for the StartupMessage.\n\n\n",
"msg_date": "Tue, 20 Aug 2024 16:26:05 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On 20/08/2024 16:48, Jacob Champion wrote:\n> On Mon, Aug 19, 2024 at 1:54 PM Jelte Fennema-Nio <[email protected]> wrote:\n>> My point is that the code that breaks, actually wants to be broken in this case.\n> \n> I'll turn this around then and assume for a moment that this is true:\n> no matter what the use cases are, they all want to be broken for\n> correctness. If this version change is allowed to break both the\n> endpoints and any intermediaries on the connection, why have we chosen\n> 30001 as the new reported version as opposed to, say, 4?\n\nThat's not a completely crazy idea, it crossed my mind too. And since we \nalready decided to skip protocol number 3.1, how about we jump directly \nto 3.4. That way:\n\nprotocol |\n version | PQProtocolVersion()\n\n 2 | 2 (in old unsupported library versions)\n 3.0 | 3\n 3.4 | 4\n 3.5 | 5\n\nand so forth.\n\nThis kind of assumes we'll never bump the major protocol version again. \nBut if we do, we could jump to 40000 at that point.\n\n> Put another way: for a middlebox on the connection (which may be\n> passively observing, but also maybe actively adding new messages to\n> the stream), what is guaranteed to remain the same in the protocol\n> across a minor version bump? Hopefully the answer isn't \"nothing\"?\n\nI don't think we can give any future guarantees like that. If you have a \nmiddlebox on the connection, it needs to fully understand all the \nprotocol versions it supports. It cannot safely pass through protocol \nversion 3.5 without knowing what changed between 3.4 and 3.5. If the \nmiddlebox only knows about protocol version 3.4, it should respond with \na NegotiateProtocolVersion packet to downgrade to 3.4, even if both ends \nof the connection could speak 3.5.\n\nThat seems a bit tangential to the PQprotocolVersion() function though. \nA middlebox like that would probably not use libpq.\n\nI'm actually not sure exactly what an application would use \nPQprotocolVersion() for. To check if a feature exists or not? None of \nthe features discussed so far really need an application to check that, \nbut if we introduce one, I think we'd want to add a better feature-check \nfunction for that purpose. Something like \"bool PQsupportsFeature(conn, \nconst char *feature_name)\" perhaps. If we introduce optional protocol \nfeatures rather than bump protocol version in the future, we'll need a \ndifferent mechanism than PQprotocolVersion() anyway.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 20 Aug 2024 18:24:11 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 11:24 AM Heikki Linnakangas <[email protected]> wrote:\n> That's not a completely crazy idea, it crossed my mind too. And since we\n> already decided to skip protocol number 3.1, how about we jump directly\n> to 3.4. That way:\n>\n> protocol |\n> version | PQProtocolVersion()\n>\n> 2 | 2 (in old unsupported library versions)\n> 3.0 | 3\n> 3.4 | 4\n> 3.5 | 5\n>\n> and so forth.\n>\n> This kind of assumes we'll never bump the major protocol version again.\n> But if we do, we could jump to 40000 at that point.\n\nI personally like this less than both (a) adding a new function and\n(b) redefining the existing function as Jelte proposes. It just seems\ntoo clever to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Aug 2024 11:45:58 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, 20 Aug 2024 at 17:46, Robert Haas <[email protected]> wrote:\n> I personally like this less than both (a) adding a new function and\n> (b) redefining the existing function as Jelte proposes. It just seems\n> too clever to me.\n\nAgreed, I'm not really seeing a benefit of returning 4 instead of\n30004. Both are new numbers that are higher than 3, so on existing\ncode they would have the same impact. But any new code would be more\nreadable when using version >= 30004 imho.\n\n\n",
"msg_date": "Tue, 20 Aug 2024 17:53:33 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 11:53 AM Jelte Fennema-Nio <[email protected]> wrote:\n> On Tue, 20 Aug 2024 at 17:46, Robert Haas <[email protected]> wrote:\n> > I personally like this less than both (a) adding a new function and\n> > (b) redefining the existing function as Jelte proposes. It just seems\n> > too clever to me.\n>\n> Agreed, I'm not really seeing a benefit of returning 4 instead of\n> 30004. Both are new numbers that are higher than 3, so on existing\n> code they would have the same impact. But any new code would be more\n> readable when using version >= 30004 imho.\n\nYes. And the major * 10000 + minor convention is used in other places\nalready, for PG versions, so it might already be familiar to some\npeople. I think if we're going to redefine an existing function, we\nmight as well just redefine it as you propose -- or perhaps even\nredefine it to return major * 10000 + minor always, instead of having\nthe strange exception for 3.0. I think I'm still on the side of not\nredefining it, but if we're going to redefine it, I think we should do\nwhat seems most elegant/logical and just accept that some code may\nbreak.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Aug 2024 12:02:34 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 9:02 AM Robert Haas <[email protected]> wrote:\n\n> Yes. And the major * 10000 + minor convention is used in other places\n> already, for PG versions, so it might already be familiar to some\n> people.\n>\n\nI'm wondering why we are indicating that minor versions of the protocol are\neven a real thing. We should just use integer version numbers. We are on\n3. The next one is 4 (the trailing .0 is just historical cruft just like\nwith our 3-digit PostgreSQL version number).\n\nv18 libpq-based clients, if they attempt to connect using v4 and fail, will\ntry again using the v3 connection. That will retain status quo behavior\nwhen something like a connection pooler doesn't understand the new\nreality. We can add a libpq option to prevent this auto-downgrade behavior.\n\nAt some point users will want to use something other than the v3 current\ntooling supports and will put pressure on those tools to change. In the\nmean-time our out-of-the-box behavior continues to work using the v3\nprotocol.\n\nFeature detection sounds great, and maybe we want to go there eventually,\nbut everyone understands progressive enhancement represented by version\nnumbering. A given major server version would only support a fixed and\nunchanging set of protocol versions between 3 and N. On the client, if N =\n7 then libpq would be able to choose both 7 and 3 as the version it tries\nout-of-the-box. We can use a libpq parameter to allow the client to\nspecify something between 4 and 6 (which may fail depending on poolers and\nwhat-not). If the chain of servers supports protocol version negotiation\nthen the attempt to connect using 7 can be auto-downgraded to anything\nbetween 3 and 6 (saving effort of a failed attempt and establishing a new\none.) Leaving the client the option to specify a minimum version of the\nprotocol it can accept.\n\nDavid J.\n\nOn Tue, Aug 20, 2024 at 9:02 AM Robert Haas <[email protected]> wrote:Yes. And the major * 10000 + minor convention is used in other places\nalready, for PG versions, so it might already be familiar to some\npeople.I'm wondering why we are indicating that minor versions of the protocol are even a real thing. We should just use integer version numbers. We are on 3. The next one is 4 (the trailing .0 is just historical cruft just like with our 3-digit PostgreSQL version number).v18 libpq-based clients, if they attempt to connect using v4 and fail, will try again using the v3 connection. That will retain status quo behavior when something like a connection pooler doesn't understand the new reality. We can add a libpq option to prevent this auto-downgrade behavior.At some point users will want to use something other than the v3 current tooling supports and will put pressure on those tools to change. In the mean-time our out-of-the-box behavior continues to work using the v3 protocol.Feature detection sounds great, and maybe we want to go there eventually, but everyone understands progressive enhancement represented by version numbering. A given major server version would only support a fixed and unchanging set of protocol versions between 3 and N. On the client, if N = 7 then libpq would be able to choose both 7 and 3 as the version it tries out-of-the-box. We can use a libpq parameter to allow the client to specify something between 4 and 6 (which may fail depending on poolers and what-not). 
If the chain of servers supports protocol version negotiation then the attempt to connect using 7 can be auto-downgraded to anything between 3 and 6 (saving effort of a failed attempt and establishing a new one.) Leaving the client the option to specify a minimum version of the protocol it can accept.David J.",
"msg_date": "Tue, 20 Aug 2024 09:42:05 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 12:42 PM David G. Johnston\n<[email protected]> wrote:\n> I'm wondering why we are indicating that minor versions of the protocol are even a real thing.\n\nBecause that concept is already a part of the existing wire protocol.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Aug 2024 12:44:05 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, 20 Aug 2024 at 18:42, David G. Johnston\n<[email protected]> wrote:\n> v18 libpq-based clients, if they attempt to connect using v4 and fail, will try again using the v3 connection. That will retain status quo behavior when something like a connection pooler doesn't understand the new reality.\n\nHaving connection latency double when connecting to an older Postgres\nis something I'd very much like to avoid. Reconnecting functionally\nretains the status quo, but it doesn't retain the expected perf\ncharacteristics. By using a minor protocol version we can easily avoid\nthis connection latency issue.\n\n\n",
"msg_date": "Tue, 20 Aug 2024 18:55:47 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 9:44 AM Robert Haas <[email protected]> wrote:\n\n> On Tue, Aug 20, 2024 at 12:42 PM David G. Johnston\n> <[email protected]> wrote:\n> > I'm wondering why we are indicating that minor versions of the protocol\n> are even a real thing.\n>\n> Because that concept is already a part of the existing wire protocol.\n>\n>\nRight...\n\n\"\nIf the major version requested by the client is not supported by the\nserver, the connection will be rejected ... If the minor version requested\nby the client is not supported by the server ... the server may either\nreject the connection or may respond with a NegotiateProtocolVersion\nmessage containing the highest minor protocol version which it supports.\nThe client may then choose either to continue with the connection using the\nspecified protocol version or to abort the connection.\n\"\n\nSo basically my proposal amounted to making every update a \"major version\nupdate\" and changing the behavior surrounding NegotiateProtocolVersion so\nit applies to major version differences. I'll stand by that change in\ndefinition. The current one doesn't seem all that useful anyway, and as we\nonly have a single version, definitely hasn't been materially implemented.\nOtherwise, at some point a client that knows both v3 and v4 will exist and\nits connection will be rejected instead of downgraded by a v3-only server\neven though such a downgrade would be possible. I suspect we'd go ahead\nand change the rule then - so why not just do so now, while getting rid of\nthe idea that minor versions are a thing.\n\nI suppose we could leave minor versions for patch releases of the main\nserver version - which still leaves the first new feature of a release\nincrementing the major version. That would be incidental to changing how\nwe handle major versions.\n\nDavid J.\n\nOn Tue, Aug 20, 2024 at 9:44 AM Robert Haas <[email protected]> wrote:On Tue, Aug 20, 2024 at 12:42 PM David G. Johnston\n<[email protected]> wrote:\n> I'm wondering why we are indicating that minor versions of the protocol are even a real thing.\n\nBecause that concept is already a part of the existing wire protocol.Right...\"If the major version requested by the client is not supported by the server, the connection will be rejected ... If the minor version requested by the client is not supported by the server ... the server may either reject the connection or may respond with a NegotiateProtocolVersion message containing the highest minor protocol version which it supports. The client may then choose either to continue with the connection using the specified protocol version or to abort the connection.\"So basically my proposal amounted to making every update a \"major version update\" and changing the behavior surrounding NegotiateProtocolVersion so it applies to major version differences. I'll stand by that change in definition. The current one doesn't seem all that useful anyway, and as we only have a single version, definitely hasn't been materially implemented. Otherwise, at some point a client that knows both v3 and v4 will exist and its connection will be rejected instead of downgraded by a v3-only server even though such a downgrade would be possible. I suspect we'd go ahead and change the rule then - so why not just do so now, while getting rid of the idea that minor versions are a thing.I suppose we could leave minor versions for patch releases of the main server version - which still leaves the first new feature of a release incrementing the major version. 
That would be incidental to changing how we handle major versions.David J.",
"msg_date": "Tue, 20 Aug 2024 10:01:55 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 7:26 AM Jelte Fennema-Nio <[email protected]> wrote:\n> In practical terms I think that means for a minor version bump the\n> format of the StartupMessage cannot be changed. Changing anything else\n> is fair game for a minor protocol version bump.\n\nI may be in a tiny minority here, but when I combine that statement\nwith your opinion from way upthread that\n\n> IMHO, we\n> should get to a state where protocol minor version bumps are so\n> low-risk that we can do them whenever we add message types\n\nthen I don't see this effort ending up in a healthy place or with a\nhappy ecosystem. Pick any IETF-managed protocol, add on the statement\n\"we get to change anything we want in a minor version, and we reserve\nthe right to do it every single year\", and imagine the chaos for\nanyone who doesn't have power over both servers and clients.\n\nTo me it seems that what you're proposing is indistinguishable from\nwhat most other protocols would consider a major version bump; it's\njust that you (reasonably) want existing clients to be able to\nnegotiate multiple major versions in one round trip.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 20 Aug 2024 10:02:55 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 1:02 PM David G. Johnston\n<[email protected]> wrote:\n> So basically my proposal amounted to making every update a \"major version update\" and changing the behavior surrounding NegotiateProtocolVersion so it applies to major version differences. I'll stand by that change in definition. The current one doesn't seem all that useful anyway, and as we only have a single version, definitely hasn't been materially implemented. Otherwise, at some point a client that knows both v3 and v4 will exist and its connection will be rejected instead of downgraded by a v3-only server even though such a downgrade would be possible. I suspect we'd go ahead and change the rule then - so why not just do so now, while getting rid of the idea that minor versions are a thing.\n>\n> I suppose we could leave minor versions for patch releases of the main server version - which still leaves the first new feature of a release incrementing the major version. That would be incidental to changing how we handle major versions.\n\nI don't see how this makes life any better for anyone. At some point\nin the future we may decide to make a protocol change that is big and\nbreaks a lot of stuff, but the current goals are all to make minor\nchanges that break as little stuff as possible. I think it's\nappropriate to call the latter a \"minor\" change and the former a\n\"major\" change. If we adopted this proposal, then we could end up in a\nsituation where versions 3 through 17 are all mostly compatible and\nthen version 18 is something totally different. It sounds much better\nto me to have versions 3.0 through 3.14 and then eventually 4.0. This\nis also what the person who designed the current protocol version\nnumbering scheme seems to have had in mind, even if the implementation\nto make it a reality has been a bit lacking.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Aug 2024 13:17:55 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 10:03 AM Jacob Champion <\[email protected]> wrote:\n\n> On Tue, Aug 20, 2024 at 7:26 AM Jelte Fennema-Nio <[email protected]>\n> wrote:\n> > In practical terms I think that means for a minor version bump the\n> > format of the StartupMessage cannot be changed. Changing anything else\n> > is fair game for a minor protocol version bump.\n>\n> I may be in a tiny minority here, but when I combine that statement\n> with your opinion from way upthread that\n>\n> > IMHO, we\n> > should get to a state where protocol minor version bumps are so\n> > low-risk that we can do them whenever we add message types\n>\n> To me it seems that what you're proposing is indistinguishable from\n> what most other protocols would consider a major version bump; it's\n> just that you (reasonably) want existing clients to be able to\n> negotiate multiple major versions in one round trip.\n>\n>\nThis makes more sense to me - a major version change is one where the\nserver fails to understand the incoming message(s) to the point that it\ncannot make decisions based upon contents.\n\nFramed up this way the two-part versioning works just fine and I concur\nthat PQversionNumber should go ahead and report 10000+minor (starting at 2)\nwith 3.0 remaining as-is since apparently negotiation down to 3.0 is\npossible here if the intermediate and/or final server have such ability.\n\nStill, instead of just failing immediately if 30002 is specified and\nrejected, falling back to trying 3.0 - unless configured to either not do\nthat or to only do 3.0 - is advised to help with the transition.\n\nDavid J.\n\nOn Tue, Aug 20, 2024 at 10:03 AM Jacob Champion <[email protected]> wrote:On Tue, Aug 20, 2024 at 7:26 AM Jelte Fennema-Nio <[email protected]> wrote:\n> In practical terms I think that means for a minor version bump the\n> format of the StartupMessage cannot be changed. Changing anything else\n> is fair game for a minor protocol version bump.\n\nI may be in a tiny minority here, but when I combine that statement\nwith your opinion from way upthread that\n\n> IMHO, we\n> should get to a state where protocol minor version bumps are so\n> low-risk that we can do them whenever we add message types\nTo me it seems that what you're proposing is indistinguishable from\nwhat most other protocols would consider a major version bump; it's\njust that you (reasonably) want existing clients to be able to\nnegotiate multiple major versions in one round trip.This makes more sense to me - a major version change is one where the server fails to understand the incoming message(s) to the point that it cannot make decisions based upon contents.Framed up this way the two-part versioning works just fine and I concur that PQversionNumber should go ahead and report 10000+minor (starting at 2) with 3.0 remaining as-is since apparently negotiation down to 3.0 is possible here if the intermediate and/or final server have such ability.Still, instead of just failing immediately if 30002 is specified and rejected, falling back to trying 3.0 - unless configured to either not do that or to only do 3.0 - is advised to help with the transition.David J.",
"msg_date": "Tue, 20 Aug 2024 10:21:54 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 8:24 AM Heikki Linnakangas <[email protected]> wrote:\n> > Put another way: for a middlebox on the connection (which may be\n> > passively observing, but also maybe actively adding new messages to\n> > the stream), what is guaranteed to remain the same in the protocol\n> > across a minor version bump? Hopefully the answer isn't \"nothing\"?\n>\n> I don't think we can give any future guarantees like that. If you have a\n> middlebox on the connection, it needs to fully understand all the\n> protocol versions it supports.\n\n(GMail has catastrophically unthreaded this conversation for me, so\napologies if I start responding out of order)\n\nMany protocols provide the list of assumptions that intermediates are\nallowed to make within a single group of compatible versions, even as\nthe protocol gets extended. If we choose to provide those, then our\n\"major version\" gains really useful semantics. See also the brief\n\"criticality\" tangent upthread.\n\n> That seems a bit tangential to the PQprotocolVersion() function though.\n> A middlebox like that would probably not use libpq.\n\nIt's applicable to the use case I was talking about with Jelte. A\nlibpq client dropping down to the socket level is relying on\n(implicit, currently undocumented/undecided, possibly incorrect!)\nintermediary guarantees that the protocol provides for a major\nversion. I'm hoping we can provide some, since we haven't broken\nanything yet. If we decide we can't, then so be it -- things will\nbreak either way -- but it's still strange to me that we'd be okay\nwith literally zero forward compatibility and still call that a \"minor\nversion\".\n\n--Jacob\n\n\n",
"msg_date": "Tue, 20 Aug 2024 10:30:53 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 10:31 AM Jacob Champion <\[email protected]> wrote:\n\n> If we decide we can't, then so be it -- things will\n> break either way -- but it's still strange to me that we'd be okay\n> with literally zero forward compatibility and still call that a \"minor\n> version\".\n>\n\nSemantic versioning guidelines are not something we are following,\nespecially here.\n\nOur protocol version is really just two-part; just like our server major\nversion used to be. We just happen to have named both parts here, unlike\nwith the historical server major version.\n\nWe never have implemented a protocol change during a minor server version\nupdate, it doesn't have (though maybe it needs?) a patch version part.\n\nDavid J.\n\nOn Tue, Aug 20, 2024 at 10:31 AM Jacob Champion <[email protected]> wrote: If we decide we can't, then so be it -- things will\nbreak either way -- but it's still strange to me that we'd be okay\nwith literally zero forward compatibility and still call that a \"minor\nversion\".\nSemantic versioning guidelines are not something we are following, especially here.Our protocol version is really just two-part; just like our server major version used to be. We just happen to have named both parts here, unlike with the historical server major version.We never have implemented a protocol change during a minor server version update, it doesn't have (though maybe it needs?) a patch version part.David J.",
"msg_date": "Tue, 20 Aug 2024 10:42:15 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 10:42 AM David G. Johnston\n<[email protected]> wrote:\n> Semantic versioning guidelines are not something we are following, especially here.\n\nI understand; the protocol is ours, and we'll do whatever we do in the\nend. I'm claiming that we can choose to provide semantics, and if we\ndo, those semantics will help people who are not here on the list to\ndefend their use cases.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 20 Aug 2024 10:46:30 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 10:46 AM Jacob Champion <\[email protected]> wrote:\n\n> On Tue, Aug 20, 2024 at 10:42 AM David G. Johnston\n> <[email protected]> wrote:\n> > Semantic versioning guidelines are not something we are following,\n> especially here.\n>\n> I understand; the protocol is ours, and we'll do whatever we do in the\n> end. I'm claiming that we can choose to provide semantics, and if we\n> do, those semantics will help people who are not here on the list to\n> defend their use cases.\n>\n>\nI was mostly just responding to your surprise given that we have a\ntrack-record here. I agree that our existing effective policy isn't all\nthat well documented, namely as to when the major component might change,\nand the fact that the minor component does not represent a \"bug fix\nrelease\".\n\nDavid J.\n\nOn Tue, Aug 20, 2024 at 10:46 AM Jacob Champion <[email protected]> wrote:On Tue, Aug 20, 2024 at 10:42 AM David G. Johnston\n<[email protected]> wrote:\n> Semantic versioning guidelines are not something we are following, especially here.\n\nI understand; the protocol is ours, and we'll do whatever we do in the\nend. I'm claiming that we can choose to provide semantics, and if we\ndo, those semantics will help people who are not here on the list to\ndefend their use cases.\nI was mostly just responding to your surprise given that we have a track-record here. I agree that our existing effective policy isn't all that well documented, namely as to when the major component might change, and the fact that the minor component does not represent a \"bug fix release\".David J.",
"msg_date": "Tue, 20 Aug 2024 10:55:46 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, 20 Aug 2024 at 19:02, David G. Johnston\n<[email protected]> wrote:\n> So basically my proposal amounted to making every update a \"major version update\" and changing the behavior surrounding NegotiateProtocolVersion so it applies to major version differences. I'll stand by that change in definition. The current one doesn't seem all that useful anyway, and as we only have a single version, definitely hasn't been materially implemented. Otherwise, at some point a client that knows both v3 and v4 will exist and its connection will be rejected instead of downgraded by a v3-only server even though such a downgrade would be possible. I suspect we'd go ahead and change the rule then - so why not just do so now, while getting rid of the idea that minor versions are a thing.\n\nIf we decide to never change the format of the StartupMessage again\n(which may be an okay thing to decide). Then I agree it would make\nsense to update the existing supported servers ASAP to be able to send\nback a NegotiateProtocolVersion message if they receive a 4.x\nStartupMessage, and the server only supports up to 3.x.\n\nHowever, even if we do that, I don't think it makes sense to start\nusing the 4.0 version straight away. Because many older postgres\nservers would still throw an error when receiving the 4.x request. By\nusing a 3.x version we are able to avoid those errors in the existing\necosystem. Basically, I think we should probably wait ~5 years again\nuntil we actually use a 4.0 version.\n\ni.e. I don't see serious benefits to using 4.0. The main benefit you\nseem to describe is: \"it's theoretically cleaner to use major version\nbumps\". And there is a serious downside: \"seriously breaking the\nexisting ecosystem\".\n\n> I suppose we could leave minor versions for patch releases of the main server version - which still leaves the first new feature of a release incrementing the major version. That would be incidental to changing how we handle major versions.\n\nHaving a Postgres server patch update change the protocol version that\nthe server supports sounds pretty scary to me.\n\n\n",
"msg_date": "Tue, 20 Aug 2024 21:55:22 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, 20 Aug 2024 at 19:31, Jacob Champion\n<[email protected]> wrote:\n> It's applicable to the use case I was talking about with Jelte. A\n> libpq client dropping down to the socket level is relying on\n> (implicit, currently undocumented/undecided, possibly incorrect!)\n> intermediary guarantees that the protocol provides for a major\n> version. I'm hoping we can provide some, since we haven't broken\n> anything yet. If we decide we can't, then so be it -- things will\n> break either way -- but it's still strange to me that we'd be okay\n> with literally zero forward compatibility and still call that a \"minor\n> version\".\n\nI think one compatibility guarantee that we might want to uphold is\nsomething like the following: After completing the initial connection\nsetup, a server should only send new message types or new fields on\nexisting message types when the client has specifically advertised\nsupport for that message type in one of two ways:\n1. By configuring a specific protocol parameter\n2. By sending a new message type or using a new field that implicitly\nadvertises support for the new message type/fields. In this case the\nmessage should be of a request-response style, the server cannot\nassume that after the request-response communication happened this new\nmessage type is still supported by the client.\n\nThe reasoning for this was discussed a while back upthread: This would\nbe to allow a connection pooler (like PgBouncer) to have a pool of the\nhighest minor version that the pooler supports e.g 3.8, but then it\ncould still hand out these connections to clients that connected to\nthe pooler using a lower version. Without having these clients receive\nmessages that they do not support.\n\nAnother way of describing this guarantee: If a client connects using\n3.8 and configures no protocol parameters, the client needs to handle\nanything 3.8 specific that the handshake requires (such as longer\ncancel token). But then after that handshake it can choose to send\nonly 3.0 packets and expect to receive only 3.0 packets back.\n\n\n",
"msg_date": "Tue, 20 Aug 2024 21:55:31 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 12:55 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Another way of describing this guarantee: If a client connects using\n> 3.8 and configures no protocol parameters, the client needs to handle\n> anything 3.8 specific that the handshake requires (such as longer\n> cancel token). But then after that handshake it can choose to send\n> only 3.0 packets and expect to receive only 3.0 packets back.\n\nThat guarantee (if adopted) would also make it possible for my use\ncase to proceed correctly, since a libpq client can still speak 3.0\npackets on the socket safely. But in that case, PQprotocolVersion\nshould keep returning 3, because there's an explicit reason to care\nabout the major version by itself.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 20 Aug 2024 14:48:19 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, 20 Aug 2024 at 23:48, Jacob Champion\n<[email protected]> wrote:\n> That guarantee (if adopted) would also make it possible for my use\n> case to proceed correctly, since a libpq client can still speak 3.0\n> packets on the socket safely.\n\nNot necessarily (at least not how I defined it). If a protocol\nparameter has been configured (possibly done by default by libpq),\nthen that might not be the case anymore. So, you'd also need to\ncompare the current values of the protocol parameters to their\nexpected value.\n\n> But in that case, PQprotocolVersion\n> should keep returning 3, because there's an explicit reason to care\n> about the major version by itself.\n\nI agree that there's a reason to care about the major version then,\nbut it might still be better to fail hard for people that care about\nprotocol details. Because when writing the code that checked\nPQprotocolVersion there was no definition at all of what could change\nduring a minor version bump. So there was no possibility to reason for\nthat person if they should be notified of a minor version bump or not.\nSo notifying the author of the code by failing the check would be\nerring on the safe side, because maybe they would need to update their\ncode that depends on the protocol details. If not, and they then\nrealize that our guarantee is strong enough for their use case, then\nthey could replace their check with something like:\n\nPQprotocolVersion() >= 3 && PQprotocolVersion() < 40000\n\nTo be clear, the argument for changing PQprotocolVersion() is\ndefinitely less strong if we'd provide such a guarantee. But I don't\nthink the problem is completely gone either.\n\n\n",
"msg_date": "Wed, 21 Aug 2024 00:17:09 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 3:17 PM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Tue, 20 Aug 2024 at 23:48, Jacob Champion\n> <[email protected]> wrote:\n> > That guarantee (if adopted) would also make it possible for my use\n> > case to proceed correctly, since a libpq client can still speak 3.0\n> > packets on the socket safely.\n>\n> Not necessarily (at least not how I defined it). If a protocol\n> parameter has been configured (possibly done by default by libpq),\n> then that might not be the case anymore. So, you'd also need to\n> compare the current values of the protocol parameters to their\n> expected value.\n\nWith your definition, I agree. But I was about to sneakily suggest\n(muahaha) that if you want to go that route, maybe protocol extensions\nneed to provide their own forward compatibility statements. Whether\nvia the same mechanism, or with something like criticality.\n\n> > But in that case, PQprotocolVersion\n> > should keep returning 3, because there's an explicit reason to care\n> > about the major version by itself.\n>\n> I agree that there's a reason to care about the major version then,\n> but it might still be better to fail hard for people that care about\n> protocol details.\n\nMaybe? In the span of a couple of days we've gone from \"minor versions\nare actually major versions and we will break all intermediaries all\nthe time\" to \"maybe not, actually\". It's difficult for me to reason\nthrough things that quickly. And on some level, that's fine and\nexpected, if we're still at the debate-and-design stage.\n\nBut personally I'd hoped that most of the conversation around\nsomething this disruptive would be about what's going to break and\nwhat's not, with the goal of making the broken set as small as\npossible in exchange for specific benefits. Instead it seems like use\ncases are having to justify themselves to avoid being broken, which is\nnot really the stance I want to see from a protocol maintainer.\nEspecially not if your stated goal is to bump versions whenever we\nwant (which, just for the record, I do not agree with).\n\nPut another way: we've seen that our protocol-version joint has rusted\n[1, 2]. I agree that needs to be fixed. But I also believe that we\nshouldn't try to smash the joint open with a hammer, and that belief\nseems philosophically at odds with the approach being taken upthread.\nSo if I'm the only one who feels this way, please someone let me know\nso I can bow out instead of throwing up roadblocks... I don't want to\nbe a distraction from incremental protocol improvements.\n\n--Jacob\n\n[1] https://en.wikipedia.org/wiki/Protocol_ossification\n[2] https://www.imperialviolet.org/2016/05/16/agility.html\n\n\n",
"msg_date": "Wed, 21 Aug 2024 12:14:35 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Wed, Aug 21, 2024 at 3:14 PM Jacob Champion\n<[email protected]> wrote:\n> Put another way: we've seen that our protocol-version joint has rusted\n> [1, 2]. I agree that needs to be fixed. But I also believe that we\n> shouldn't try to smash the joint open with a hammer, and that belief\n> seems philosophically at odds with the approach being taken upthread.\n\n+1. We are inevitably going to break some things, especially if we\nintroduce changes that affect everyone, such as longer cancel keys,\nrather than just optional features. But we owe it not only to our\nrather large user base but also to ourselves to minimize the extent of\nthe breakage as much as possible. I have yet to experience the\nsituation where I commit something that angers users and my life\nnevertheless improves afterward.\n\nPersonally, I'm 100% convinced at this point that we're arguing about\nthe wrong problem. Before, I didn't know for sure whether anyone would\nbe mad if we redefined PQprotocolVersion(), but now I know that there\nis at least one person who will be, and that's Jacob. If there's one\namong regular -hackers posters, there are probably more. Since Jelte\ndoesn't seem to want to produce the patch to add\nPQminorProtocolVersion(), I suggest that somebody else does that --\nJacob, do you want to? -- and we commit that and move on.\n\nThen we can get down to the business of actually changing some stuff\nat the protocol level. IMHO, that's what should be scary and/or\ncontroversial here, and it's also imperative that if we're going to do\nit, we do it soon. If we make the mistake of dumping a bunch of\nchanges that break half of the ecosystem into the tree just before\nfeature freeze, there's no time for us to fix anything more than\ntrivial problems. If more serious problems turn up, it's a revert. If\nwe start to get some of these changes made now, there's a lot more\nroom for error. Let's take advantage of the time available while we\nstill have it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Aug 2024 10:02:26 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, 23 Aug 2024 at 16:02, Robert Haas <[email protected]> wrote:\n> Personally, I'm 100% convinced at this point that we're arguing about\n> the wrong problem. Before, I didn't know for sure whether anyone would\n> be mad if we redefined PQprotocolVersion(), but now I know that there\n> is at least one person who will be, and that's Jacob.\n\nI could be interpreting Jacob his response incorrectly, but my\nunderstanding is that the type of protocol changes we would actually\ndo in this version bump, would determine if he's either mad or happy\nthat we redefined PQprotocolVersion.\n\n> If there's one\n> among regular -hackers posters, there are probably more. Since Jelte\n> doesn't seem to want to produce the patch to add\n> PQminorProtocolVersion(), I suggest that somebody else does that --\n> Jacob, do you want to? -- and we commit that and move on.\n\nLet's call it PQfullProtocolVersion and make it return 30002. I'm fine\nwith updating the patch. But I'll be unavailable for the next ~3\nweeks.\n\n> Then we can get down to the business of actually changing some stuff\n> at the protocol level. IMHO, that's what should be scary and/or\n> controversial here, and it's also imperative that if we're going to do\n> it, we do it soon.\n\nAgreed, but I don't think doing so is blocked on merging a\nPQfullProtocolVersion libpq function.\n\n\n",
"msg_date": "Fri, 23 Aug 2024 16:42:27 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Aug 23, 2024 at 7:42 AM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Fri, 23 Aug 2024 at 16:02, Robert Haas <[email protected]> wrote:\n> > Personally, I'm 100% convinced at this point that we're arguing about\n> > the wrong problem. Before, I didn't know for sure whether anyone would\n> > be mad if we redefined PQprotocolVersion(), but now I know that there\n> > is at least one person who will be, and that's Jacob.\n>\n> I could be interpreting Jacob his response incorrectly, but my\n> understanding is that the type of protocol changes we would actually\n> do in this version bump, would determine if he's either mad or happy\n> that we redefined PQprotocolVersion.\n\nYes, but my conclusion is pretty much the same: let's talk about the\nprotocol changes. If we get to the end and revert the new API because\nit's no longer adding anything -- e.g. because we've decided that\nminor versions no longer have any compatibility guarantees at all --\nso be it.\n\n> > If there's one\n> > among regular -hackers posters, there are probably more. Since Jelte\n> > doesn't seem to want to produce the patch to add\n> > PQminorProtocolVersion(), I suggest that somebody else does that --\n> > Jacob, do you want to? -- and we commit that and move on.\n>\n> Let's call it PQfullProtocolVersion and make it return 30002. I'm fine\n> with updating the patch. But I'll be unavailable for the next ~3\n> weeks.\n\nAgreed on the name. I've attached a reconfigured version of v15-0003,\nwith an extension that should hopefully not throw off the cfbot, and a\ncommit message that should hopefully not misrepresent the discussion\nso far?\n\nThanks,\n--Jacob",
"msg_date": "Fri, 23 Aug 2024 12:39:58 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Fri, Aug 23, 2024 at 3:40 PM Jacob Champion\n<[email protected]> wrote:\n> Agreed on the name. I've attached a reconfigured version of v15-0003,\n> with an extension that should hopefully not throw off the cfbot, and a\n> commit message that should hopefully not misrepresent the discussion\n> so far?\n\nLGTM. Objections?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Aug 2024 08:40:17 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 8:40 AM Robert Haas <[email protected]> wrote:\n> On Fri, Aug 23, 2024 at 3:40 PM Jacob Champion\n> <[email protected]> wrote:\n> > Agreed on the name. I've attached a reconfigured version of v15-0003,\n> > with an extension that should hopefully not throw off the cfbot, and a\n> > commit message that should hopefully not misrepresent the discussion\n> > so far?\n>\n> LGTM. Objections?\n\nHearing none, committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Sep 2024 11:57:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add new protocol message to change GUCs for usage with future\n protocol-only GUCs"
}
] |
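A minimal client-side sketch of the API discussed in the thread above, assuming a libpq that exports PQfullProtocolVersion() returning major * 10000 + minor (e.g. 30002 for protocol 3.2); the long-standing PQprotocolVersion() only reports the major version.

```
/*
 * Sketch only: report the negotiated wire protocol version from libpq.
 * Assumes PQfullProtocolVersion() as discussed above (major * 10000 + minor).
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=postgres");

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}

	/* Long-standing API: only the major version (3) is visible. */
	printf("major protocol version: %d\n", PQprotocolVersion(conn));

	/* API discussed in this thread: full version, e.g. 30002 for 3.2. */
	printf("full protocol version: %d\n", PQfullProtocolVersion(conn));

	PQfinish(conn);
	return 0;
}
```

Built with something like `cc check_proto.c -lpq`, this compiles only against a libpq new enough to export PQfullProtocolVersion().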
[
{
"msg_contents": "Hi,\n\nPer Coverity.\n\nThe function StartLogStreamer in (src/bin/pg_basebackup/pg_basebackup.c)\nHas a copy and paste error.\n\nThe commit affected is dc212340058b4e7ecfc5a7a81ec50e7a207bf288\n<http://dc21234>\n\nTrivial patch attached.\n\nBest regards,\nRanier Vilela",
"msg_date": "Sat, 30 Dec 2023 08:43:39 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix copy and paste error (src/bin/pg_basebackup/pg_basebackup.c)"
},
{
"msg_contents": "On Sat, 30 Dec 2023 at 12:44, Ranier Vilela <[email protected]> wrote:\n> The function StartLogStreamer in (src/bin/pg_basebackup/pg_basebackup.c)\n> Has a copy and paste error.\n\nNice catch. Patch looks good to me.\n\n\n",
"msg_date": "Sat, 30 Dec 2023 13:38:00 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix copy and paste error (src/bin/pg_basebackup/pg_basebackup.c)"
},
{
"msg_contents": "On Sat, Dec 30, 2023 at 01:38:00PM +0100, Jelte Fennema-Nio wrote:\n> On Sat, 30 Dec 2023 at 12:44, Ranier Vilela <[email protected]> wrote:\n> > The function StartLogStreamer in (src/bin/pg_basebackup/pg_basebackup.c)\n> > Has a copy and paste error.\n> \n> Nice catch. Patch looks good to me.\n\nPlease see my reply here:\nhttps://www.postgresql.org/message-id/[email protected]\n\nSo these will likely be handled in a single batch.\n--\nMichael",
"msg_date": "Sun, 31 Dec 2023 09:21:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix copy and paste error (src/bin/pg_basebackup/pg_basebackup.c)"
}
] |
[
{
"msg_contents": "Hi,\n\nPer Coverity.\n\nThe function scan_for_existing_tablespaces in\n(src/bin/pg_combinebackup/pg_combinebackup.c)\nHas a resource leak.\n\nThe function opendir opendir\n<https://pubs.opengroup.org/onlinepubs/009604599/functions/opendir.html>\nMust be freed with closedir\n<https://pubs.opengroup.org/onlinepubs/009604599/functions/closedir.html>\n\nThe commit affected is dc21234\n<http://dc212340058b4e7ecfc5a7a81ec50e7a207bf288>\n\nTrivial patch attached.\n\nBest regards,\nRanier Vilela",
"msg_date": "Sat, 30 Dec 2023 10:34:12 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix resource leak (src/bin/pg_combinebackup/pg_combinebackup.c)"
},
{
"msg_contents": "Hi Ranier,\n\nOn Sat, Dec 30, 2023 at 10:34:12AM -0300, Ranier Vilela wrote:\n> The function scan_for_existing_tablespaces in\n> (src/bin/pg_combinebackup/pg_combinebackup.c)\n> Has a resource leak.\n> \n> The function opendir opendir\n> <https://pubs.opengroup.org/onlinepubs/009604599/functions/opendir.html>\n> Must be freed with closedir\n> <https://pubs.opengroup.org/onlinepubs/009604599/functions/closedir.html>\n> \n> The commit affected is dc21234\n> <http://dc212340058b4e7ecfc5a7a81ec50e7a207bf288>\n\nThe community receives its own coverity reports. These are not public\nbut we are aware of the reports related to pg_basebackup and\npg_combinebackup as an effect of dc212340058b. Robert is planning to\nhandle all these AFAIK once the new year vacations and such cool down.\n\nIn short there is no need to worry here :)\n\nThanks for the patches.\n--\nMichael",
"msg_date": "Sun, 31 Dec 2023 09:18:49 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix resource leak (src/bin/pg_combinebackup/pg_combinebackup.c)"
}
] |
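For readers unfamiliar with the class of leak Coverity reports here, the sketch below shows the general opendir()/closedir() discipline under discussion. It is an illustration only, not the actual pg_combinebackup code; the function name scan_directory is made up.

```
/*
 * Illustration of the leak pattern discussed above: every successful
 * opendir() needs a matching closedir(), including on early-return paths.
 */
#include <dirent.h>
#include <stdio.h>

static int
scan_directory(const char *path)
{
	DIR		   *dir = opendir(path);
	struct dirent *de;

	if (dir == NULL)
	{
		fprintf(stderr, "could not open directory \"%s\"\n", path);
		return -1;
	}

	while ((de = readdir(dir)) != NULL)
	{
		/* ... inspect de->d_name ... */
	}

	closedir(dir);				/* the call whose absence Coverity flags */
	return 0;
}

int
main(void)
{
	return scan_directory(".") == 0 ? 0 : 1;
}
```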
[
{
"msg_contents": "Hi,\n\nWhat do you think of adding a NO RESET option to the SET ROLE command?\n\nRight now Postgres can enforce data security with roles and RLS, but\nrole-per-end-user doesn't really scale: Db connections are per-role, so a\nconnection pooler can't share connections across users. We can work around\nthis with policies that use session variables and checks against\ncurrent_user, but it seems like role-per end user would be more beautiful.\nIf SET ROLE had a NO RESET option, you could connect through a connection\npooler as a privileged user, but downgrade to the user's role for the\nduration of the session.\n\nThanks,\nEric\n\nHi,What do you think of adding a NO RESET option to the SET ROLE command?Right now Postgres can enforce data security with roles and RLS, but role-per-end-user doesn't really scale: Db connections are per-role, so a connection pooler can't share connections across users. We can work around this with policies that use session variables and checks against current_user, but it seems like role-per end user would be more beautiful. If SET ROLE had a NO RESET option, you could connect through a connection pooler as a privileged user, but downgrade to the user's role for the duration of the session.Thanks,Eric",
"msg_date": "Sat, 30 Dec 2023 10:16:59 -0600",
"msg_from": "Eric Hanson <[email protected]>",
"msg_from_op": true,
"msg_subject": "SET ROLE x NO RESET"
},
{
"msg_contents": "On 12/30/23 11:16, Eric Hanson wrote:\n> Hi,\n> \n> What do you think of adding a NO RESET option to the SET ROLE command?\n> \n> Right now Postgres can enforce data security with roles and RLS, but \n> role-per-end-user doesn't really scale: Db connections are per-role, so \n> a connection pooler can't share connections across users. We can work \n> around this with policies that use session variables and checks against \n> current_user, but it seems like role-per end user would be more \n> beautiful. If SET ROLE had a NO RESET option, you could connect through \n> a connection pooler as a privileged user, but downgrade to the user's \n> role for the duration of the session.\n\n+1\n\nI agree this would be useful.\n\nIn the meantime, in case it helps, see\n\n https://github.com/pgaudit/set_user\n\nSpecifically set_session_auth(text):\n-------------\nWhen set_session_auth(text) is called, the effective session and current \nuser is switched to the rolename supplied, irrevocably. Unlike \nset_user() or set_user_u(), it does not affect logging nor allowed \nstatements. If set_user.exit_on_error is \"on\" (the default), and any \nerror occurs during execution, a FATAL error is thrown and the backend \nsession exits.\n-------------\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sat, 30 Dec 2023 12:50:08 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE x NO RESET"
},
{
"msg_contents": "> On 30 Dec 2023, at 17:16, Eric Hanson <[email protected]> wrote:\n> \n> What do you think of adding a NO RESET option to the SET ROLE command?\n\nWhat I proposed some time ago is SET ROLE … GUARDED BY ‘password’, so that you could later: RESET ROLE WITH ‘password'\n\nhttps://www.postgresql.org/message-id/F9428C6E-4CCC-441D-A148-67BF36526D45%40kleczek.org\n\n—\nMIchal\nOn 30 Dec 2023, at 17:16, Eric Hanson <[email protected]> wrote:What do you think of adding a NO RESET option to the SET ROLE command?What I proposed some time ago is SET ROLE … GUARDED BY ‘password’, so that you could later: RESET ROLE WITH ‘password'https://www.postgresql.org/message-id/F9428C6E-4CCC-441D-A148-67BF36526D45%40kleczek.org—MIchal",
"msg_date": "Sat, 30 Dec 2023 23:19:04 +0100",
"msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE x NO RESET"
},
{
"msg_contents": "On 12/30/23 17:19, Michał Kłeczek wrote:\n> \n>> On 30 Dec 2023, at 17:16, Eric Hanson <[email protected]> wrote:\n>>\n>> What do you think of adding a NO RESET option to the SET ROLE command?\n> \n> What I proposed some time ago is SET ROLE … GUARDED BY ‘password’, so \n> that you could later: RESET ROLE WITH ‘password'\n\nI like that too, but see it as a separate feature. FWIW that is also \nsupported by the set_user extension referenced elsewhere on this thread.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sun, 31 Dec 2023 14:19:50 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE x NO RESET"
},
{
"msg_contents": "On Sun, Dec 31, 2023 at 2:20 PM Joe Conway <[email protected]> wrote:\n> On 12/30/23 17:19, Michał Kłeczek wrote:\n> >> On 30 Dec 2023, at 17:16, Eric Hanson <[email protected]> wrote:\n> >>\n> >> What do you think of adding a NO RESET option to the SET ROLE command?\n> >\n> > What I proposed some time ago is SET ROLE … GUARDED BY ‘password’, so\n> > that you could later: RESET ROLE WITH ‘password'\n>\n> I like that too, but see it as a separate feature. FWIW that is also\n> supported by the set_user extension referenced elsewhere on this thread.\n\nIMHO, the best solution here would be a protocol message to change the\nsession user. The pooler could use that repeatedly on the same\nsession, but refuse to propagate such messages from client\nconnections.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 12:36:38 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE x NO RESET"
},
{
"msg_contents": "\n\n> On 2 Jan 2024, at 18:36, Robert Haas <[email protected]> wrote:\n> \n> On Sun, Dec 31, 2023 at 2:20 PM Joe Conway <[email protected]> wrote:\n>> On 12/30/23 17:19, Michał Kłeczek wrote:\n>>>> On 30 Dec 2023, at 17:16, Eric Hanson <[email protected]> wrote:\n>>>> \n>>>> What do you think of adding a NO RESET option to the SET ROLE command?\n>>> \n>>> What I proposed some time ago is SET ROLE … GUARDED BY ‘password’, so\n>>> that you could later: RESET ROLE WITH ‘password'\n>> \n>> I like that too, but see it as a separate feature. FWIW that is also\n>> supported by the set_user extension referenced elsewhere on this thread.\n> \n> IMHO, the best solution here would be a protocol message to change the\n> session user. The pooler could use that repeatedly on the same\n> session, but refuse to propagate such messages from client\n> connections.\n\nI think that is a different use case and both are needed.\n\nIn my case I have scripts that I want to execute with limited privileges\nand make sure the scripts cannot escape the sandbox via RESET ROLE.\n\nThanks,\nMichal \n\n",
"msg_date": "Tue, 2 Jan 2024 23:23:23 +0100",
"msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE x NO RESET"
},
{
"msg_contents": "\n\n\n\n\nOn 12/31/23 1:19 PM, Joe Conway wrote:\n\nOn\n 12/30/23 17:19, Michał Kłeczek wrote:\n \n\n\nOn 30 Dec 2023, at 17:16, Eric Hanson\n <[email protected]> wrote:\n \n\n What do you think of adding a NO RESET option to the SET ROLE\n command?\n \n\n\n What I proposed some time ago is SET ROLE … GUARDED BY\n ‘password’, so that you could later: RESET ROLE WITH ‘password'\n \n\n\n I like that too, but see it as a separate feature. FWIW that is\n also supported by the set_user extension referenced elsewhere on\n this thread.\n \n That means you'd need to manage that password. ISTM a better\n mechanism to do this in SQL would be a method of changing roles that\n returns a nonce that you'd then use to reset your role. I believe\n that'd also make it practical to do this server-side in the various\n PLs.\n\n\n",
"msg_date": "Tue, 2 Jan 2024 16:53:55 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE x NO RESET"
},
{
"msg_contents": "On Tue, 2 Jan 2024 at 23:23, Michał Kłeczek <[email protected]> wrote:\n> > On 2 Jan 2024, at 18:36, Robert Haas <[email protected]> wrote:\n> > IMHO, the best solution here would be a protocol message to change the\n> > session user. The pooler could use that repeatedly on the same\n> > session, but refuse to propagate such messages from client\n> > connections.\n>\n> I think that is a different use case and both are needed.\n\n\nFYI I implemented something just now that's pretty much what Robert\nwas talking about:\nhttps://www.postgresql.org/message-id/flat/CAGECzQR%253D1t1TL-eS9HAjoGysdprPci5K7-C353PnON6W-_s9uw%2540mail.gmail.com\n\n> In my case I have scripts that I want to execute with limited privileges\n> and make sure the scripts cannot escape the sandbox via RESET ROLE.\n\nDepending on the desired workflow I think that could work for you too.\nBecause it allows you to do this (and use -f script.sql instead of -c\n'select ...):\n\n❯ psql \"user=postgres _pq_.protocol_managed_params=role options='-c\nrole=pg_read_all_data'\" -c 'select current_user; set role postgres'\n current_user\n──────────────────\n pg_read_all_data\n(1 row)\n\nERROR: 42501: parameter can only be set at the protocol level \"role\"\nLOCATION: set_config_with_handle, guc.c:3583\nTime: 0.667 ms\n\n\n",
"msg_date": "Wed, 3 Jan 2024 18:22:15 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE x NO RESET"
},
{
"msg_contents": "On Sat, Dec 30, 2023 at 10:16:59AM -0600, Eric Hanson wrote:\n> What do you think of adding a NO RESET option to the SET ROLE command?\n\nI've wanted this forever. Consider using this to implement user\nauthentication mechanisms in user-defined SQL functions that use `SET\nROLE` with `NO RESET` to \"login\" the user. One could implement JWT (or\nwhatever bearer token schemes) on the server side in PlPgSQL w/ pgcrypto\nthis way, with zero changes to PG itself, no protocol changes, etc.\n\nFor bearer token schemes one could acquire the token externally to the\nclient and then just `SELECT login(?)`, bind the token, and execute to\nlogin.\n\nNico\n-- \n\n\n",
"msg_date": "Wed, 3 Jan 2024 17:40:43 -0600",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE x NO RESET"
},
{
"msg_contents": "On Tue, Jan 02, 2024 at 12:36:38PM -0500, Robert Haas wrote:\n> IMHO, the best solution here would be a protocol message to change the\n> session user. The pooler could use that repeatedly on the same\n> session, but refuse to propagate such messages from client\n> connections.\n\nBut this requires upgrading clients too.\n\nIMO `SET ROLE .. NO RESET` would be terribly useful. One could build:\n\n - login systems (e.g., bearer tokens, passwords) in SQL / PlPgSQL / etc\n\n - sudo-like things\n\nThough maybe `NO RESET` isn't really needed to build these, since after\nall one could use an unprivileged role and a SECURITY DEFINER function\nthat does the `SET ROLE` following some user-defined authentication\nmethod, and so what if the client can RESET the role, since that brings\nit back to the otherwise unprivileged role.\n\nWho needs to RESET roles anyways? Answer: connection pools, but not\nevery connection is used via a pool. This brings up something: attempts\nto reset a NO RESET session need to fail in such a way that a connection\npool can detect this and disconnect, or else it needs to fail by\nterminating the connection altogether.\n\nNico\n-- \n\n\n",
"msg_date": "Wed, 3 Jan 2024 17:47:19 -0600",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE x NO RESET"
},
{
"msg_contents": "\n\n\n\n\nOn 1/3/24 5:47 PM, Nico Williams wrote:\n\n\nThough maybe `NO RESET` isn't really needed to build these, since after\nall one could use an unprivileged role and a SECURITY DEFINER function\nthat does the `SET ROLE` following some user-defined authentication\nmethod, and so what if the client can RESET the role, since that brings\nit back to the otherwise unprivileged role.\n\nAn unprivileged role that has the ability to assume any other\n role ;p\nAlso, last I checked you can't do SET ROLE in a security definer\n function.\n\n\n\nWho needs to RESET roles anyways? Answer: connection pools, but not\nevery connection is used via a pool. This brings up something: attempts\nto reset a NO RESET session need to fail in such a way that a connection\npool can detect this and disconnect, or else it needs to fail by\nterminating the connection altogether.\n\n\nRESET ROLE is also useful for setting up a \"superuser role\" that\n DBAs have access to via a NO INHERIT role. It allows them to do\n things like...\nSET ROLE su;\n-- Do some superuserly thing\nRESET ROLE;\nAdmittedly, the last step could be just to set their role back to\n themselves, but RESET ROLE removes the risk of typos.\n\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n",
"msg_date": "Thu, 4 Jan 2024 16:08:28 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE x NO RESET"
},
{
"msg_contents": "\n\n> On 3 Jan 2024, at 18:22, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> \n>> In my case I have scripts that I want to execute with limited privileges\n>> and make sure the scripts cannot escape the sandbox via RESET ROLE.\n> \n> Depending on the desired workflow I think that could work for you too.\n> Because it allows you to do this (and use -f script.sql instead of -c\n> 'select ...):\n> \n> ❯ psql \"user=postgres _pq_.protocol_managed_params=role options='-c\n> role=pg_read_all_data'\" -c 'select current_user; set role postgres'\n> current_user\n> ──────────────────\n> pg_read_all_data\n> (1 row)\n> \n> ERROR: 42501: parameter can only be set at the protocol level \"role\"\n> LOCATION: set_config_with_handle, guc.c:3583\n> Time: 0.667 ms\n\nMy scripts are actually Liquibase change logs.\nI’ve extended Liquibase so that each change set is executed with limited privileges.\n\nWhile doable with protocol level implementation, it would require support from PgJDBC.\n\n—\nMichal\n\n\n\n",
"msg_date": "Fri, 5 Jan 2024 07:10:39 +0100",
"msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE x NO RESET"
},
{
"msg_contents": "On Sat, Dec 30, 2023 at 11:50 AM Joe Conway <[email protected]> wrote:\n\n> In the meantime, in case it helps, see\n>\n> https://github.com/pgaudit/set_user\n>\n> Specifically set_session_auth(text):\n> -------------\n> When set_session_auth(text) is called, the effective session and current\n> user is switched to the rolename supplied, irrevocably. Unlike\n> set_user() or set_user_u(), it does not affect logging nor allowed\n> statements. If set_user.exit_on_error is \"on\" (the default), and any\n> error occurs during execution, a FATAL error is thrown and the backend\n> session exits.\n>\n\nThis helps, but has the downside (of course) of being a compiled extension\nwhich limits its use on hosted services and such unless they decide to\nsupport it.\n\nWould be really great if pooling could co-exist with per-user roles\nsomehow, I'm not the best to weigh in on how, but it's bottlenecking the\nwhole space of using roles per-user, and AFAICT this pattern would\notherwise be totally feasible and awesome, with all the progress that's\nbeen made in this space.\n\nEric\n\nOn Sat, Dec 30, 2023 at 11:50 AM Joe Conway <[email protected]> wrote:In the meantime, in case it helps, see\n\n https://github.com/pgaudit/set_user\n\nSpecifically set_session_auth(text):\n-------------\nWhen set_session_auth(text) is called, the effective session and current \nuser is switched to the rolename supplied, irrevocably. Unlike \nset_user() or set_user_u(), it does not affect logging nor allowed \nstatements. If set_user.exit_on_error is \"on\" (the default), and any \nerror occurs during execution, a FATAL error is thrown and the backend \nsession exits.This helps, but has the downside (of course) of being a compiled extension which limits its use on hosted services and such unless they decide to support it.Would be really great if pooling could co-exist with per-user roles somehow, I'm not the best to weigh in on how, but it's bottlenecking the whole space of using roles per-user, and AFAICT this pattern would otherwise be totally feasible and awesome, with all the progress that's been made in this space.Eric",
"msg_date": "Fri, 5 Jan 2024 11:48:03 -0600",
"msg_from": "Eric Hanson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SET ROLE x NO RESET"
}
] |
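A sketch of the pooling workflow Eric describes, written against today's libpq since the proposed NO RESET clause does not exist: the pooler connects as a privileged role and downgrades each leased session with a plain SET ROLE. The helper name lease_session_as() and the role names are made up for illustration; a real pooler would additionally need to block or detect RESET ROLE from the client, which is exactly the gap the proposal addresses.

```
/*
 * Sketch only: downgrade a pooled connection to an end-user role with
 * plain SET ROLE (the proposed NO RESET variant is not implemented).
 */
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

static int
lease_session_as(PGconn *conn, const char *role)
{
	char	   *quoted = PQescapeIdentifier(conn, role, strlen(role));
	char		sql[512];
	PGresult   *res;

	if (quoted == NULL)
		return -1;

	snprintf(sql, sizeof(sql), "SET ROLE %s", quoted);
	PQfreemem(quoted);

	res = PQexec(conn, sql);
	if (PQresultStatus(res) != PGRES_COMMAND_OK)
	{
		fprintf(stderr, "SET ROLE failed: %s", PQerrorMessage(conn));
		PQclear(res);
		return -1;
	}
	PQclear(res);
	return 0;
}

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=postgres user=pooler");

	if (PQstatus(conn) == CONNECTION_OK &&
		lease_session_as(conn, "end_user") == 0)
		printf("session leased as end_user\n");

	PQfinish(conn);
	return 0;
}
```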
[
{
"msg_contents": "Hi,\n\nWhile investigating something, I noticed it's not really possible to\ndisable syncscans for a particular parallel scan.\n\nThat is, with serial scans, we can do this:\n\n table_index_build_scan(heap, index, indexInfo, false, true,\n callback, state, NULL);\n\nwhere false means \"allow_sync=false\", i.e. disabling synchronized scans.\n\nHowever, if this is tied to a parallel scan, that's not possible. If you\ntry doing this:\n\n table_index_build_scan(heap, index, indexInfo, false, true,\n callback, state, parallel_scan);\n\nit hits this code in heapam_index_build_range_scan()\n\n /*\n * Parallel index build.\n *\n * Parallel case never registers/unregisters own snapshot. Snapshot\n * is taken from parallel heap scan, and is SnapshotAny or an MVCC\n * snapshot, based on same criteria as serial case.\n */\n Assert(!IsBootstrapProcessingMode());\n Assert(allow_sync);\n snapshot = scan->rs_snapshot;\n\nSadly, there's no explanation why parallel scans do not allow disabling\nsync scans just like serial scans - and it's not quite obvious to me.\n\nJust removing the assert is not sufficient to make this work, though.\nBecause just a little later we call heap_setscanlimits() which does:\n\n Assert(!scan->rs_inited); /* else too late to change */\n /* else rs_startblock is significant */\n Assert(!(scan->rs_base.rs_flags & SO_ALLOW_SYNC));\n\nI don't quite understand what the comments for the asserts say, but\nSO_ALLOW_SYNC seems to come from table_beginscan_parallel.\n\nPerhaps the goal is to prevent mismatch between the parallel and serial\nparts of the scans - OK, that probably makes sense. But I still don't\nunderstand why table_beginscan_parallel forces the SO_ALLOW_SYNC instead\nof having an allow_sync parameters too.\n\nIs there a reason why all parallel plans should / must be synchronized?\n\nWell, in fact they are not *required* because if I set\n\n synchronize_seqscans=off\n\nthis makes the initscan() to remove the SO_ALLOW_SYNC ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 31 Dec 2023 00:09:55 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why do parallel scans require syncscans (but not really)?"
},
{
"msg_contents": "Hi,\n\n\nZhang Mingli\nwww.hashdata.xyz\n\nHi, Tomas\nOn Dec 31, 2023 at 07:10 +0800, Tomas Vondra <[email protected]>, wrote:\n> Sadly, there's no explanation why parallel scans do not allow disabling\n> sync scans just like serial scans - and it's not quite obvious to me.\n\nFeel confused too.\n\n```\n\tAssert(!IsBootstrapProcessingMode());\n \tAssert(allow_sync);\n \tsnapshot = scan->rs_snapshot;\n```\n\nI dig this for a while and found that there a close relationship between field phs_syncscan(For parallel scan) and the allow_sync flag.\n\nIn table_block_parallelscan_initialize() ParallelTableScanDescData.phs_syncscan is set.\n\n\t/* compare phs_syncscan initialization to similar logic in initscan */\n\tbpscan->base.phs_syncscan = synchronize_seqscans &&\n\t\t!RelationUsesLocalBuffers(rel) &&\n\t\tbpscan->phs_nblocks > NBuffers / 4;\n\nAnd the allow_sync is always set to true in initscan(), when phs_syncscan is not NULL.\n\n\tif (scan->rs_base.rs_parallel != NULL)\n\t{\n\t\t/* For parallel scan, believe whatever ParallelTableScanDesc says. */\n\t\tif (scan->rs_base.rs_parallel->phs_syncscan)\n\t\t\tscan->rs_base.rs_flags |= SO_ALLOW_SYNC;\n\t\telse\n\t\t\tscan->rs_base.rs_flags &= ~SO_ALLOW_SYNC;\n\t}\n\nAnd phs_syncscan is used in table_block_parallelscan_startblock_init(),table_block_parallelscan_nextpage() to do sth of syncscan.\n\nBack to the Assertion, else branch means param scan(parallel scan desc) is not null and we are in parallel scan mode.\n\telse\n\t{\n\t\t/*\n\t\t * Parallel index build.\n\t\t *\n\t\t * Parallel case never registers/unregisters own snapshot. Snapshot\n\t\t * is taken from parallel heap scan, and is SnapshotAny or an MVCC\n\t\t * snapshot, based on same criteria as serial case.\n\t\t */\n\t\tAssert(!IsBootstrapProcessingMode());\n\t\tAssert(allow_sync);\n\t\tsnapshot = scan->rs_snapshot;\n\t}\n\nAgree with you that: why all parallel plans should / must be synchronized?\nParallel scan should have a choice about syncscan.\nBesides that I think there is a risk Assert(allow_sync), at least should use phs_syncscan field here to judge if allow_sync is true according to above.\nSo I guess, there should be an Assertion failure of Assert(allow_sync) in theory when we use a parallel scan but phs_syncscan is false.\n\n\t/* compare phs_syncscan initialization to similar logic in initscan */\n\tbpscan->base.phs_syncscan = synchronize_seqscans &&\n\t\t!RelationUsesLocalBuffers(rel) &&\n\t\tbpscan->phs_nblocks > NBuffers / 4;\n\nHowever, I didn’t produce it because phs_syncscan is set according to data size, even with some parallel cost GUCs set to 0.\nAnd if there is not enough data, we usually do not choose a parallel plan,\nlike this case: build a index with parallel scan on underlying tables.\n\n\n\n\n\n\n\n\n\n\nHi,\n\n\n\n\nZhang Mingli\nwww.hashdata.xyz\n\nHi, Tomas\n\n\n\nOn Dec 31, 2023 at 07:10 +0800, Tomas Vondra <[email protected]>, wrote:\nSadly, there's no explanation why parallel scans do not allow disabling\nsync scans just like serial scans - and it's not quite obvious to me.\n\nFeel confused too.\n\n```\n\tAssert(!IsBootstrapProcessingMode());\n \tAssert(allow_sync);\n \tsnapshot = scan->rs_snapshot;\n```\n\nI dig this for a while and found that there a close relationship between field phs_syncscan(For parallel scan) and the allow_sync flag.\n\nIn table_block_parallelscan_initialize() ParallelTableScanDescData.phs_syncscan is set.\n\n\t/* compare phs_syncscan initialization to similar logic in initscan */\n\tbpscan->base.phs_syncscan = 
synchronize_seqscans &&\n\t\t!RelationUsesLocalBuffers(rel) &&\n\t\tbpscan->phs_nblocks > NBuffers / 4;\n\nAnd the allow_sync is always set to true in initscan(), when phs_syncscan is not NULL.\n\n\tif (scan->rs_base.rs_parallel != NULL)\n\t{\n\t\t/* For parallel scan, believe whatever ParallelTableScanDesc says. */\n\t\tif (scan->rs_base.rs_parallel->phs_syncscan)\n\t\t\tscan->rs_base.rs_flags |= SO_ALLOW_SYNC;\n\t\telse\n\t\t\tscan->rs_base.rs_flags &= ~SO_ALLOW_SYNC;\n\t}\n\nAnd phs_syncscan is used in table_block_parallelscan_startblock_init(),table_block_parallelscan_nextpage() to do sth of syncscan.\n\nBack to the Assertion, else branch means param scan(parallel scan desc) is not null and we are in parallel scan mode.\n\telse\n\t{\n\t\t/*\n\t\t * Parallel index build.\n\t\t *\n\t\t * Parallel case never registers/unregisters own snapshot. Snapshot\n\t\t * is taken from parallel heap scan, and is SnapshotAny or an MVCC\n\t\t * snapshot, based on same criteria as serial case.\n\t\t */\n\t\tAssert(!IsBootstrapProcessingMode());\n\t\tAssert(allow_sync);\n\t\tsnapshot = scan->rs_snapshot;\n\t}\n\nAgree with you that: why all parallel plans should / must be synchronized?\nParallel scan should have a choice about syncscan.\nBesides that I think there is a risk Assert(allow_sync), at least should use phs_syncscan field here to judge if allow_sync is true according to above.\nSo I guess, there should be an Assertion failure of Assert(allow_sync) in theory when we use a parallel scan but phs_syncscan is false.\n\n\t/* compare phs_syncscan initialization to similar logic in initscan */\n\tbpscan->base.phs_syncscan = synchronize_seqscans &&\n\t\t!RelationUsesLocalBuffers(rel) &&\n\t\tbpscan->phs_nblocks > NBuffers / 4;\n\nHowever, I didn’t produce it because phs_syncscan is set according to data size, even with some parallel cost GUCs set to 0. \nAnd if there is not enough data, we usually do not choose a parallel plan,\nlike this case: build a index with parallel scan on underlying tables.",
"msg_date": "Sun, 31 Dec 2023 13:55:39 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do parallel scans require syncscans (but not really)?"
}
] |
[
{
"msg_contents": "Hi,\n\nI think we need to add pgstat_wait_event.c, wait_event_types.h and\nwait_event_funcs_data.c to pgindent/exclude_file_patterns, otherwise,\npgindent on the entire source tree after the source code compilation\nshows up these generated files. I've attached a patch to fix it.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 31 Dec 2023 07:24:02 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Exclude generated wait_event files from pgindent"
},
{
"msg_contents": "On Sun, Dec 31, 2023 at 07:24:02AM +0530, Bharath Rupireddy wrote:\n> I think we need to add pgstat_wait_event.c, wait_event_types.h and\n> wait_event_funcs_data.c to pgindent/exclude_file_patterns, otherwise,\n> pgindent on the entire source tree after the source code compilation\n> shows up these generated files. I've attached a patch to fix it.\n> \n> Thoughts?\n\nIndeed, will fix.\n--\nMichael",
"msg_date": "Sun, 31 Dec 2023 12:03:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Exclude generated wait_event files from pgindent"
}
] |
[
{
"msg_contents": "Hi,\n\nSome of the TAP tests are explicitly setting PGDATABASE environment\nvariable to 'postgres', which isn't needed because the TAP test's perl\nlibrary PostgreSQL::Test::Cluster already sets it once and for all.\nI've attached a patch that removes all such unneeded PGDATABASE\nsettings.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 31 Dec 2023 07:24:08 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove unneeded PGDATABASE setting from TAP tests"
},
{
"msg_contents": "On Sun, Dec 31, 2023 at 07:24:08AM +0530, Bharath Rupireddy wrote:\n> Some of the TAP tests are explicitly setting PGDATABASE environment\n> variable to 'postgres', which isn't needed because the TAP test's perl\n> library PostgreSQL::Test::Cluster already sets it once and for all.\n> I've attached a patch that removes all such unneeded PGDATABASE\n> settings.\n> \n> Thoughts?\n\nHmm. I don't see any reason to not do that as these are not really\nself-documenting either. How about 011_clusterdb_all.pl for\nPGDATABASE?\n--\nMichael",
"msg_date": "Sun, 31 Dec 2023 12:06:40 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove unneeded PGDATABASE setting from TAP tests"
},
{
"msg_contents": "On Sun, Dec 31, 2023 at 8:36 AM Michael Paquier <[email protected]> wrote:\n>\n> On Sun, Dec 31, 2023 at 07:24:08AM +0530, Bharath Rupireddy wrote:\n> > Some of the TAP tests are explicitly setting PGDATABASE environment\n> > variable to 'postgres', which isn't needed because the TAP test's perl\n> > library PostgreSQL::Test::Cluster already sets it once and for all.\n> > I've attached a patch that removes all such unneeded PGDATABASE\n> > settings.\n> >\n> > Thoughts?\n>\n> Hmm. I don't see any reason to not do that as these are not really\n> self-documenting either. How about 011_clusterdb_all.pl for\n> PGDATABASE?\n\nOh, yeah. We can remove that too, after all, PGDATABASE is being set\nto postgres the default database which PostgreSQL::Test::Cluster does\nset for us. I was earlier swayed by the comment atop the PGDATABASE\nsetting in 011_clusterdb_all.pl. Attached is v2 patch with this\nchange.\n\nWe perhaps can re-word and retain the comment in 011_clusterdb_all.pl,\nwhich I think unnecessary as the code around 'alldb' if-else condition\nin clusterdb.c can tell that.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 1 Jan 2024 13:52:10 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove unneeded PGDATABASE setting from TAP tests"
},
{
"msg_contents": "On Mon, Jan 01, 2024 at 01:52:10PM +0530, Bharath Rupireddy wrote:\n> Oh, yeah. We can remove that too, after all, PGDATABASE is being set\n> to postgres the default database which PostgreSQL::Test::Cluster does\n> set for us. I was earlier swayed by the comment atop the PGDATABASE\n> setting in 011_clusterdb_all.pl. Attached is v2 patch with this\n> change.\n\nThanks. Applied after keeping the comment in 011_clusterdb_all.pl,\nrewording it a bit.\n--\nMichael",
"msg_date": "Wed, 3 Jan 2024 10:30:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove unneeded PGDATABASE setting from TAP tests"
}
] |
[
{
"msg_contents": "Hi,\n\nIn order to ship libpq on Android, it needs to be built without a version\nsuffix on the .so file.\nThe attached patch will make sure that no version is added when built for\nAndroid.\nI was wondering if there are a) any comments on the approach and if I\nshould be handed in for a commitfest (currently waiting for the cooldown\nperiod after account activation, I am not sure how long that is)\n\nThank you for any feedback\nMatthias",
"msg_date": "Sun, 31 Dec 2023 06:08:04 +0100",
"msg_from": "Matthias Kuhn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Build versionless .so for Android"
},
{
"msg_contents": "Matthias Kuhn <[email protected]> writes:\n> In order to ship libpq on Android, it needs to be built without a version\n> suffix on the .so file.\n> The attached patch will make sure that no version is added when built for\n> Android.\n\nThis ... seems incredibly brain-dead. Does Android really not cope\nwith version-to-version ABI changes?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Dec 2023 01:19:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "On Sun, Dec 31, 2023 at 06:08:04AM +0100, Matthias Kuhn wrote:\n> In order to ship libpq on Android, it needs to be built without a version\n> suffix on the .so file.\n> The attached patch will make sure that no version is added when built for\n> Android.\n\nFWIW, I have mixed feelings about patching the code to treat\nandroid-linux as an exception while changing nothing in the\ndocumentation to explain why this is required. A custom patch may\nserve you better here, and note that you did not even touch the meson\npaths. See libpq_c_args in src/interfaces/libpq/meson.build as one\nexample. That's just my opinion, of course, still there are no\nbuildfarm members that would cover your patch.\n\n> I was wondering if there are a) any comments on the approach and if I\n> should be handed in for a commitfest (currently waiting for the cooldown\n> period after account activation, I am not sure how long that is)\n\nSuch discussions are adapted in a commit fest entry. No idea if there\nis a cooldown period after account creation before being able to touch\nthe CF app contents, though.\n--\nMichael",
"msg_date": "Sun, 31 Dec 2023 15:24:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "On 12/31/23 01:24, Michael Paquier wrote:\n> On Sun, Dec 31, 2023 at 06:08:04AM +0100, Matthias Kuhn wrote:\n>> I was wondering if there are a) any comments on the approach and if I\n>> should be handed in for a commitfest (currently waiting for the cooldown\n>> period after account activation, I am not sure how long that is)\n> \n> Such discussions are adapted in a commit fest entry. No idea if there\n> is a cooldown period after account creation before being able to touch\n> the CF app contents, though.\n\n\nFWIW I have expedited the cooldown period, so Matthias can log in right \naway.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sun, 31 Dec 2023 14:16:59 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "Thanks for all the comments and help.\n\nI have added the patch to the January CF.\n\nIt looks like meson does not currently support building for android, the\nfollowing output is what I get (but I have actually no experience with\nmeson):\n\n meson.build:320:2: ERROR: Problem encountered: unknown host system:\nandroid\n\nPlease find an attached patch which also includes a documentation section.\nI am happy to adjust if needed.\n\nKind regards\nMatthias",
"msg_date": "Mon, 1 Jan 2024 06:25:26 +0100",
"msg_from": "Matthias Kuhn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "On Sun, Dec 31, 2023 at 1:24 AM Michael Paquier <[email protected]> wrote:\n> FWIW, I have mixed feelings about patching the code to treat\n> android-linux as an exception while changing nothing in the\n> documentation to explain why this is required. A custom patch may\n> serve you better here, and note that you did not even touch the meson\n> paths. See libpq_c_args in src/interfaces/libpq/meson.build as one\n> example. That's just my opinion, of course, still there are no\n> buildfarm members that would cover your patch.\n\nIt's reasonable to want good comments -- the one in the patch (1)\ndoesn't explain why this is required and (2) suggests that it is only\nneeded when cross-compiling which seems surprising and (3) has a typo.\nBut if it's true that this is needed, I tentatively think we might do\nbetter to take the patch than force people to carry it out of tree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 12:43:01 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "Hi,\n\nAttached a patch with a (hopefully) better wording of the comment.\n\nI have unsuccessfully tried to find an official source for this policy.\nSo for reference some discussions about the topic:\n\n-\nhttps://stackoverflow.com/questions/11491065/linking-with-versioned-shared-library-in-android-ndk\n-\nhttps://stackoverflow.com/questions/18681401/how-can-i-remove-all-versioning-information-from-shared-object-files\n\nPlease let me know if I can help in any way\n\nMatthias\n\nOn Tue, Jan 2, 2024 at 6:43 PM Robert Haas <[email protected]> wrote:\n\n> On Sun, Dec 31, 2023 at 1:24 AM Michael Paquier <[email protected]>\n> wrote:\n> > FWIW, I have mixed feelings about patching the code to treat\n> > android-linux as an exception while changing nothing in the\n> > documentation to explain why this is required. A custom patch may\n> > serve you better here, and note that you did not even touch the meson\n> > paths. See libpq_c_args in src/interfaces/libpq/meson.build as one\n> > example. That's just my opinion, of course, still there are no\n> > buildfarm members that would cover your patch.\n>\n> It's reasonable to want good comments -- the one in the patch (1)\n> doesn't explain why this is required and (2) suggests that it is only\n> needed when cross-compiling which seems surprising and (3) has a typo.\n> But if it's true that this is needed, I tentatively think we might do\n> better to take the patch than force people to carry it out of tree.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>",
"msg_date": "Fri, 5 Jan 2024 01:00:05 +0100",
"msg_from": "Matthias Kuhn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "On Thu, Jan 4, 2024 at 7:00 PM Matthias Kuhn <[email protected]> wrote:\n> Attached a patch with a (hopefully) better wording of the comment.\n>\n> I have unsuccessfully tried to find an official source for this policy.\n> So for reference some discussions about the topic:\n>\n> - https://stackoverflow.com/questions/11491065/linking-with-versioned-shared-library-in-android-ndk\n> - https://stackoverflow.com/questions/18681401/how-can-i-remove-all-versioning-information-from-shared-object-files\n\nThis is interesting, but these links are also very old.\n\nI like the new wording of the comment better, but I think we need more\nconfirmation that this is correct before committing it. Anyone else\nhere familiar with building on Android?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 09:44:55 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "On 05.01.24 01:00, Matthias Kuhn wrote:\n> Attached a patch with a (hopefully) better wording of the comment.\n> \n> I have unsuccessfully tried to find an official source for this policy.\n> So for reference some discussions about the topic:\n> \n> - \n> https://stackoverflow.com/questions/11491065/linking-with-versioned-shared-library-in-android-ndk <https://stackoverflow.com/questions/11491065/linking-with-versioned-shared-library-in-android-ndk>\n> - \n> https://stackoverflow.com/questions/18681401/how-can-i-remove-all-versioning-information-from-shared-object-files <https://stackoverflow.com/questions/18681401/how-can-i-remove-all-versioning-information-from-shared-object-files>\n What I would like to see is a specific thing that you are trying to do \nthat doesn't work. Probably, you are writing a program that is meant to \nrun on Android, and you are linking it (provide command line), and then \nwhat happens? The linking fails? It fails to run? What is the error? \nCan you provide a minimal example? And so on.\n\n\n\n",
"msg_date": "Fri, 5 Jan 2024 15:57:23 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "On 01.01.24 06:25, Matthias Kuhn wrote:\n> It looks like meson does not currently support building for android, the \n> following output is what I get (but I have actually no experience with \n> meson):\n> \n> meson.build:320:2: ERROR: Problem encountered: unknown host system: \n> android\n\nFWIW, the meson source code contains numerous mentions of an 'android' \nplatform, so it seems like this is expected to work.\n\n\n\n",
"msg_date": "Fri, 5 Jan 2024 16:00:01 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-05 16:00:01 +0100, Peter Eisentraut wrote:\n> On 01.01.24 06:25, Matthias Kuhn wrote:\n> > It looks like meson does not currently support building for android, the\n> > following output is what I get (but I have actually no experience with\n> > meson):\n> > \n> > � � meson.build:320:2: ERROR: Problem encountered: unknown host system:\n> > android\n> \n> FWIW, the meson source code contains numerous mentions of an 'android'\n> platform, so it seems like this is expected to work.\n\nThis error is from our code, not meson's - the simplest fix would be to just\nmap android to linux, similar to how dragonflybsd is mapped to netbsd. Blind\nattempt attached.\n\nIt looks like that might be all that's required, as it looks like meson knows\nthat android doesn't properly deal with soversion.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 5 Jan 2024 12:34:06 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-05 15:57:23 +0100, Peter Eisentraut wrote:\n> On 05.01.24 01:00, Matthias Kuhn wrote:\n> > Attached a patch with a (hopefully) better wording of the comment.\n> > \n> > I have unsuccessfully tried to find an official source for this policy.\n> > So for reference some discussions about the topic:\n> > \n> > - https://stackoverflow.com/questions/11491065/linking-with-versioned-shared-library-in-android-ndk <https://stackoverflow.com/questions/11491065/linking-with-versioned-shared-library-in-android-ndk>\n> > - https://stackoverflow.com/questions/18681401/how-can-i-remove-all-versioning-information-from-shared-object-files <https://stackoverflow.com/questions/18681401/how-can-i-remove-all-versioning-information-from-shared-object-files>\n> What I would like to see is a specific thing that you are trying to do that\n> doesn't work. Probably, you are writing a program that is meant to run on\n> Android, and you are linking it (provide command line), and then what\n> happens? The linking fails? It fails to run? What is the error? Can you\n> provide a minimal example? And so on.\n\nI looked around for a bit, and couldn't really find good documentation around\nthis. The best I found is\nhttps://developer.android.com/ndk/guides/abis#native-code-in-app-packages\nwhich does strongly imply that the library names aren't versioned.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Jan 2024 13:46:58 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "What I try to do is packaging an app with androiddeployqt which fails with\nan error:\n\nThe bundled library lib/libpq.so.5 doesn't end with .so. Android only\nsupports versionless libraries ending with the .so suffix.\n\nThis error was introduced in response to this issue which contains hints\nabout the underlying problem:\n\nhttps://bugreports.qt.io/plugins/servlet/mobile#issue/QTBUG-101346\n\nI hope this sheds some light\nMatthias\n\nOn Fri, Jan 5, 2024, 21:57 Peter Eisentraut <[email protected]> wrote:\n\n> On 05.01.24 01:00, Matthias Kuhn wrote:\n> > Attached a patch with a (hopefully) better wording of the comment.\n> >\n> > I have unsuccessfully tried to find an official source for this policy.\n> > So for reference some discussions about the topic:\n> >\n> > -\n> >\n> https://stackoverflow.com/questions/11491065/linking-with-versioned-shared-library-in-android-ndk\n> <\n> https://stackoverflow.com/questions/11491065/linking-with-versioned-shared-library-in-android-ndk\n> >\n> > -\n> >\n> https://stackoverflow.com/questions/18681401/how-can-i-remove-all-versioning-information-from-shared-object-files\n> <\n> https://stackoverflow.com/questions/18681401/how-can-i-remove-all-versioning-information-from-shared-object-files\n> >\n> What I would like to see is a specific thing that you are trying to do\n> that doesn't work. Probably, you are writing a program that is meant to\n> run on Android, and you are linking it (provide command line), and then\n> what happens? The linking fails? It fails to run? What is the error?\n> Can you provide a minimal example? And so on.\n>\n>\n\nWhat I try to do is packaging an app with androiddeployqt which fails with an error:The bundled library lib/libpq.so.5 doesn't end with .so. Android only supports versionless libraries ending with the .so suffix.This error was introduced in response to this issue which contains hints about the underlying problem:https://bugreports.qt.io/plugins/servlet/mobile#issue/QTBUG-101346I hope this sheds some lightMatthiasOn Fri, Jan 5, 2024, 21:57 Peter Eisentraut <[email protected]> wrote:On 05.01.24 01:00, Matthias Kuhn wrote:\n> Attached a patch with a (hopefully) better wording of the comment.\n> \n> I have unsuccessfully tried to find an official source for this policy.\n> So for reference some discussions about the topic:\n> \n> - \n> https://stackoverflow.com/questions/11491065/linking-with-versioned-shared-library-in-android-ndk <https://stackoverflow.com/questions/11491065/linking-with-versioned-shared-library-in-android-ndk>\n> - \n> https://stackoverflow.com/questions/18681401/how-can-i-remove-all-versioning-information-from-shared-object-files <https://stackoverflow.com/questions/18681401/how-can-i-remove-all-versioning-information-from-shared-object-files>\n What I would like to see is a specific thing that you are trying to do \nthat doesn't work. Probably, you are writing a program that is meant to \nrun on Android, and you are linking it (provide command line), and then \nwhat happens? The linking fails? It fails to run? What is the error? \nCan you provide a minimal example? And so on.",
"msg_date": "Sun, 14 Jan 2024 18:37:00 +0700",
"msg_from": "Matthias Kuhn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "On 14.01.24 12:37, Matthias Kuhn wrote:\n> What I try to do is packaging an app with androiddeployqt which fails \n> with an error:\n> \n> The bundled library lib/libpq.so.5 doesn't end with .so. Android only \n> supports versionless libraries ending with the .so suffix.\n> \n> This error was introduced in response to this issue which contains hints \n> about the underlying problem:\n> \n> https://bugreports.qt.io/plugins/servlet/mobile#issue/QTBUG-101346 \n> <https://bugreports.qt.io/plugins/servlet/mobile#issue/QTBUG-101346>\n\nI think the scenario there is using dlopen(), possibly via Qt, possibly \nvia Java, so it's a very specific scenario, not necessarily indicative \nof a general requirement on Android, and apparently not using build-time \nlinking. It's quite conceivable that this issue would also exist with \nQt on other platforms.\n\nAs I mentioned before, Meson has Android support. Some people there \nclearly thought about it. So I suggest you try building PostgreSQL for \nyour Android environment using Meson and see what it produces.(*) If \nthe output files are the same as the autoconf/make build, then I would \nthink your scenario is nonstandard. If the output files are different, \nthen we should check that and consider changes to get them to match.\n\nIt's of course possible that Meson is wrong, too, but then we need to \nhave a broader analysis, because the implicit goal is to keep the two \nbuild systems for PostgreSQL consistent.\n\n(*) - This means specifically that the installation trees produced by \n\"make install-world-bin\" and \"meson install\" should produce exactly the \nsame set of files (same names in the same directories, not necessarily \nthe same contents).\n\n\n\n",
"msg_date": "Thu, 18 Jan 2024 09:27:55 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "Hi\n\nOn Thu, Jan 18, 2024 at 9:27 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 14.01.24 12:37, Matthias Kuhn wrote:\n> > What I try to do is packaging an app with androiddeployqt which fails\n> > with an error:\n> >\n> > The bundled library lib/libpq.so.5 doesn't end with .so. Android only\n> > supports versionless libraries ending with the .so suffix.\n> >\n> > This error was introduced in response to this issue which contains hints\n> > about the underlying problem:\n> >\n> > https://bugreports.qt.io/plugins/servlet/mobile#issue/QTBUG-101346\n> > <https://bugreports.qt.io/plugins/servlet/mobile#issue/QTBUG-101346>\n>\n> I think the scenario there is using dlopen(), possibly via Qt, possibly\n> via Java, so it's a very specific scenario, not necessarily indicative\n> of a general requirement on Android, and apparently not using build-time\n> linking. It's quite conceivable that this issue would also exist with\n> Qt on other platforms.\n>\n>\nI haven't experienced this issue with Qt on any other platform (and I build\nand run the same stack on windows, linux, macos and ios).\nAlso the links posted earlier in this thread hint to the same limitation of\nAndroid, unrelated to Qt.\n\n\n> As I mentioned before, Meson has Android support. Some people there\n> clearly thought about it. So I suggest you try building PostgreSQL for\n> your Android environment using Meson and see what it produces.(*) If\n> the output files are the same as the autoconf/make build, then I would\n> think your scenario is nonstandard. If the output files are different,\n> then we should check that and consider changes to get them to match.\n>\n> It's of course possible that Meson is wrong, too, but then we need to\n> have a broader analysis, because the implicit goal is to keep the two\n> build systems for PostgreSQL consistent.\n>\n\nWhen trying to build with meson, including the patch which was provided by\nAndres Freud (thanks),\nI am currently stuck with the following error:\n\nConfiguring pg_config_ext.h using configuration\n\n../src/tgresql-16-685bc9fc97.clean/src/include/meson.build:12:0: ERROR:\nFile port/android.h does not exist.\n\nKind regards\nMatthias\n\n\n>\n> (*) - This means specifically that the installation trees produced by\n> \"make install-world-bin\" and \"meson install\" should produce exactly the\n> same set of files (same names in the same directories, not necessarily\n> the same contents).\n>\n>\n\nHiOn Thu, Jan 18, 2024 at 9:27 AM Peter Eisentraut <[email protected]> wrote:On 14.01.24 12:37, Matthias Kuhn wrote:\n> What I try to do is packaging an app with androiddeployqt which fails \n> with an error:\n> \n> The bundled library lib/libpq.so.5 doesn't end with .so. Android only \n> supports versionless libraries ending with the .so suffix.\n> \n> This error was introduced in response to this issue which contains hints \n> about the underlying problem:\n> \n> https://bugreports.qt.io/plugins/servlet/mobile#issue/QTBUG-101346 \n> <https://bugreports.qt.io/plugins/servlet/mobile#issue/QTBUG-101346>\n\nI think the scenario there is using dlopen(), possibly via Qt, possibly \nvia Java, so it's a very specific scenario, not necessarily indicative \nof a general requirement on Android, and apparently not using build-time \nlinking. 
It's quite conceivable that this issue would also exist with \nQt on other platforms.\nI haven't experienced this issue with Qt on any other platform (and I build and run the same stack on windows, linux, macos and ios).Also the links posted earlier in this thread hint to the same limitation of Android, unrelated to Qt. \nAs I mentioned before, Meson has Android support. Some people there \nclearly thought about it. So I suggest you try building PostgreSQL for \nyour Android environment using Meson and see what it produces.(*) If \nthe output files are the same as the autoconf/make build, then I would \nthink your scenario is nonstandard. If the output files are different, \nthen we should check that and consider changes to get them to match.\n\nIt's of course possible that Meson is wrong, too, but then we need to \nhave a broader analysis, because the implicit goal is to keep the two \nbuild systems for PostgreSQL consistent.When trying to build with meson, including the patch which was provided by Andres Freud (thanks),I am currently stuck with the following error:Configuring pg_config_ext.h using configuration../src/tgresql-16-685bc9fc97.clean/src/include/meson.build:12:0: ERROR: File port/android.h does not exist.Kind regardsMatthias \n\n(*) - This means specifically that the installation trees produced by \n\"make install-world-bin\" and \"meson install\" should produce exactly the \nsame set of files (same names in the same directories, not necessarily \nthe same contents).",
"msg_date": "Fri, 19 Jan 2024 11:08:59 +0100",
"msg_from": "Matthias Kuhn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "On 19.01.24 11:08, Matthias Kuhn wrote:\n> When trying to build with meson, including the patch which was provided \n> by Andres Freud (thanks),\n> I am currently stuck with the following error:\n> \n> Configuring pg_config_ext.h using configuration\n> \n> ../src/tgresql-16-685bc9fc97.clean/src/include/meson.build:12:0: ERROR: \n> File port/android.h does not exist.\n\nI think there is actually small bug in the meson setup.\n\nIn the top-level meson.build, line 166 has\n\n portname = host_system\n\nbut then later around line 190, host_system is changed for some cases \n(including the android case that was added in the proposed patch). So \nthat won't work, and probably also won't work for the existing cases \nthere. The portname assignment needs to be moved later in the file. \nMaybe you can try that on your own.\n\n\n\n",
"msg_date": "Fri, 19 Jan 2024 15:00:40 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
},
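A minimal sketch of the reordering described in the message above, assuming a simplified top-level meson.build; the android-to-linux mapping and the surrounding logic are illustrative stand-ins, not the committed diff:

    # Normalize host_system first; the android branch is the case added by
    # the proposed patch.
    host_system = host_machine.system()

    if host_system == 'android'
      # Treat Android like Linux for PostgreSQL's port selection.
      host_system = 'linux'
    endif

    # Derive portname only after all host_system adjustments, so the port
    # header it selects (e.g. port/linux.h) actually exists.
    portname = host_system

With the assignment moved below the adjustments, a cross build whose host machine reports android no longer tries to include the non-existent port/android.h.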
{
"msg_contents": "Thanks,\n\nThis patch brought it further and indeed, the resulting libpq.so file is\nunversioned when built for Android while it's versioned when built for\nlinux.\n\nMatthias\n\nOn Fri, Jan 19, 2024 at 3:00 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 19.01.24 11:08, Matthias Kuhn wrote:\n> > When trying to build with meson, including the patch which was provided\n> > by Andres Freud (thanks),\n> > I am currently stuck with the following error:\n> >\n> > Configuring pg_config_ext.h using configuration\n> >\n> > ../src/tgresql-16-685bc9fc97.clean/src/include/meson.build:12:0: ERROR:\n> > File port/android.h does not exist.\n>\n> I think there is actually small bug in the meson setup.\n>\n> In the top-level meson.build, line 166 has\n>\n> portname = host_system\n>\n> but then later around line 190, host_system is changed for some cases\n> (including the android case that was added in the proposed patch). So\n> that won't work, and probably also won't work for the existing cases\n> there. The portname assignment needs to be moved later in the file.\n> Maybe you can try that on your own.\n>\n>\n\nThanks,This patch brought it further and indeed, the resulting libpq.so file is unversioned when built for Android while it's versioned when built for linux.MatthiasOn Fri, Jan 19, 2024 at 3:00 PM Peter Eisentraut <[email protected]> wrote:On 19.01.24 11:08, Matthias Kuhn wrote:\n> When trying to build with meson, including the patch which was provided \n> by Andres Freud (thanks),\n> I am currently stuck with the following error:\n> \n> Configuring pg_config_ext.h using configuration\n> \n> ../src/tgresql-16-685bc9fc97.clean/src/include/meson.build:12:0: ERROR: \n> File port/android.h does not exist.\n\nI think there is actually small bug in the meson setup.\n\nIn the top-level meson.build, line 166 has\n\n portname = host_system\n\nbut then later around line 190, host_system is changed for some cases \n(including the android case that was added in the proposed patch). So \nthat won't work, and probably also won't work for the existing cases \nthere. The portname assignment needs to be moved later in the file. \nMaybe you can try that on your own.",
"msg_date": "Fri, 19 Jan 2024 18:12:24 +0100",
"msg_from": "Matthias Kuhn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Build versionless .so for Android"
},
{
"msg_contents": "On 19.01.24 18:12, Matthias Kuhn wrote:\n> Thanks,\n> \n> This patch brought it further and indeed, the resulting libpq.so file is \n> unversioned when built for Android while it's versioned when built for \n> linux.\n\nOk, I have committed all of this now:\n\n- Fix for correct order of host_system and portname assignment in \nmeson.build.\n\n- Patch from Andres to map android to linux in meson.build.\n\n- Makefile.shlib support for Android (based on your patch, but I \nreworked it a bit).\n\nSo this should all work for you now.\n\n\n\n",
"msg_date": "Tue, 23 Jan 2024 20:42:54 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build versionless .so for Android"
}
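A hedged sketch of how the versioned versus versionless outcome can be expressed on the Meson side; the target name, the source and dependency variables, and the version numbers are placeholders, not PostgreSQL's actual build rules:

    # With version/soversion set, Meson emits libpq.so.5.16 plus libpq.so.5
    # and libpq.so symlinks; omitting them yields a single unversioned
    # libpq.so, which is what Android packaging tools such as
    # androiddeployqt expect.
    if host_system == 'android'
      libpq = shared_library('pq', libpq_sources, dependencies: libpq_deps)
    else
      libpq = shared_library('pq', libpq_sources, dependencies: libpq_deps,
        version: '5.16', soversion: '5')
    endif

This mirrors what the thread observed after the fixes: the same sources produce an unversioned libpq.so for Android and the usual versioned names elsewhere.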
]