[
{
"msg_contents": "Hi,\n\nI happened to notice a typo in pg_rotate_logfile in ipc/signalfuncs.c\n- the hint message wrongly mentions that pg_logfile_rotate is part of\nthe core; which is actually not. pg_logfile_rotate is an adminpack's\n1.0 SQL function dropped in 2.0. The core defines pg_rotate_logfile\nSQL function instead, so use that. Here's a patch to fix the typo.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 13 Feb 2024 02:02:21 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix a typo in pg_rotate_logfile"
},
{
"msg_contents": "> On 12 Feb 2024, at 21:32, Bharath Rupireddy <[email protected]> wrote:\n\n> I happened to notice a typo in pg_rotate_logfile in ipc/signalfuncs.c\n> - the hint message wrongly mentions that pg_logfile_rotate is part of\n> the core; which is actually not. pg_logfile_rotate is an adminpack's\n> 1.0 SQL function dropped in 2.0. The core defines pg_rotate_logfile\n> SQL function instead, so use that. Here's a patch to fix the typo.\n\nNice catch! This needs to be backpatched all the way down to 12 as that\nfunction wen't away a long time ago (it was marked as deprecated all the way\nback in 9.1).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 12 Feb 2024 21:39:06 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in pg_rotate_logfile"
},
{
"msg_contents": "On Mon, Feb 12, 2024 at 09:39:06PM +0100, Daniel Gustafsson wrote:\n>> On 12 Feb 2024, at 21:32, Bharath Rupireddy <[email protected]> wrote:\n>> I happened to notice a typo in pg_rotate_logfile in ipc/signalfuncs.c\n>> - the hint message wrongly mentions that pg_logfile_rotate is part of\n>> the core; which is actually not. pg_logfile_rotate is an adminpack's\n>> 1.0 SQL function dropped in 2.0. The core defines pg_rotate_logfile\n>> SQL function instead, so use that. Here's a patch to fix the typo.\n> \n> Nice catch! This needs to be backpatched all the way down to 12 as that\n> function wen't away a long time ago (it was marked as deprecated all the way\n> back in 9.1).\n\nThis is a bit strange because, with this patch, the HINT suggests using a\nfunction with the same name as the one it lives in. IIUC this is because\nadminpack's pg_logfile_rotate() uses pg_rotate_logfile(), while core's\npg_rotate_logfile() uses pg_rotate_logfile_v2(). I suppose trying to\nrename these might be more trouble than it's worth at this point, though...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Feb 2024 14:46:13 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in pg_rotate_logfile"
},
{
"msg_contents": "> On 12 Feb 2024, at 21:46, Nathan Bossart <[email protected]> wrote:\n> \n> On Mon, Feb 12, 2024 at 09:39:06PM +0100, Daniel Gustafsson wrote:\n>>> On 12 Feb 2024, at 21:32, Bharath Rupireddy <[email protected]> wrote:\n>>> I happened to notice a typo in pg_rotate_logfile in ipc/signalfuncs.c\n>>> - the hint message wrongly mentions that pg_logfile_rotate is part of\n>>> the core; which is actually not. pg_logfile_rotate is an adminpack's\n>>> 1.0 SQL function dropped in 2.0. The core defines pg_rotate_logfile\n>>> SQL function instead, so use that. Here's a patch to fix the typo.\n>> \n>> Nice catch! This needs to be backpatched all the way down to 12 as that\n>> function wen't away a long time ago (it was marked as deprecated all the way\n>> back in 9.1).\n> \n> This is a bit strange because, with this patch, the HINT suggests using a\n> function with the same name as the one it lives in. IIUC this is because\n> adminpack's pg_logfile_rotate() uses pg_rotate_logfile(), while core's\n> pg_rotate_logfile() uses pg_rotate_logfile_v2(). I suppose trying to\n> rename these might be more trouble than it's worth at this point, though...\n\nYeah, I doubt that's worth the churn.\n\nOn that note though, we might want to consider just dropping it altogether in\nv17 (while fixing the incorrect hint in backbranches)? I can't imagine\nadminpack 1.0 being in heavy use today, and skimming pgAdmin code it seems it's\nonly used in pgAdmin3 and not 4. Maybe it's time to simply drop old code?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 12 Feb 2024 21:59:15 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in pg_rotate_logfile"
},
{
"msg_contents": "On Tue, Feb 13, 2024 at 2:29 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> On that note though, we might want to consider just dropping it altogether in\n> v17 (while fixing the incorrect hint in backbranches)? I can't imagine\n> adminpack 1.0 being in heavy use today, and skimming pgAdmin code it seems it's\n> only used in pgAdmin3 and not 4. Maybe it's time to simply drop old code?\n\nhttps://codesearch.debian.net/search?q=pg_logfile_rotate&literal=1\nshows no users for it though. There's pgadmin3 using it\nhttps://github.com/search?q=repo%3Apgadmin-org%2Fpgadmin3%20pg_logfile_rotate&type=code,\nhowever the repo is archived. Surprisingly, core has to maintain the\nold code needed for adminpack 1.0 - pg_rotate_logfile_old SQL function\nand pg_rotate_logfile function in signalfuncs.c. These things could\nhave been moved to adminpack.c back then and pointed CREATE FUNCTION\npg_catalog.pg_logfile_rotate() to use it from adminpack.c. If we\ndecide to remove adminpack 1.0 version completely, the 1.0 functions\npg_file_read, pg_file_length and pg_logfile_rotate will also go away\nmaking adminpack code simpler.\n\nHaving said that, it's good to hear from others, preferably from\npgadmin developers - added Dave Page ([email protected]) in here for\ninputs.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Feb 2024 03:01:39 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix a typo in pg_rotate_logfile"
},
{
"msg_contents": "Hi\n\nOn Mon, 12 Feb 2024 at 21:31, Bharath Rupireddy <\[email protected]> wrote:\n\n> On Tue, Feb 13, 2024 at 2:29 AM Daniel Gustafsson <[email protected]> wrote:\n> >\n> > On that note though, we might want to consider just dropping it\n> altogether in\n> > v17 (while fixing the incorrect hint in backbranches)? I can't imagine\n> > adminpack 1.0 being in heavy use today, and skimming pgAdmin code it\n> seems it's\n> > only used in pgAdmin3 and not 4. Maybe it's time to simply drop old code?\n>\n> https://codesearch.debian.net/search?q=pg_logfile_rotate&literal=1\n> shows no users for it though. There's pgadmin3 using it\n>\n> https://github.com/search?q=repo%3Apgadmin-org%2Fpgadmin3%20pg_logfile_rotate&type=code\n> ,\n> however the repo is archived. Surprisingly, core has to maintain the\n> old code needed for adminpack 1.0 - pg_rotate_logfile_old SQL function\n> and pg_rotate_logfile function in signalfuncs.c. These things could\n> have been moved to adminpack.c back then and pointed CREATE FUNCTION\n> pg_catalog.pg_logfile_rotate() to use it from adminpack.c. If we\n> decide to remove adminpack 1.0 version completely, the 1.0 functions\n> pg_file_read, pg_file_length and pg_logfile_rotate will also go away\n> making adminpack code simpler.\n>\n> Having said that, it's good to hear from others, preferably from\n> pgadmin developers - added Dave Page ([email protected]) in here for\n> inputs.\n>\n\nAs it happens we're currently implementing a redesigned version of that\nfunctionality from pgAdmin III in pgAdmin 4. However, we are not using\nadminpack for it.\n\nFWIW, the reason for the weird naming is that originally all the\nfunctionality for reading/managing files was added entirely as the\nadminpack extension. It was only later that some of the functionality was\nmoved into core, and renamed along the way (everyone likes blue for their\nbikeshed right?). The old functions (albeit, rewritten to use the new core\nfunctions) were kept in adminpack for backwards compatibility.\n\nThat said, pgAdmin III has been out of support for many years, and as far\nas I know, it (and similarly old versions of EDB's PEM which was based on\nit) were the only consumers of adminpack. I would not be sad to see it\nremoved entirely - except for the fact that I fondly remember being invited\nto join -core immediately after a heated discussion with Tom about it!\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiOn Mon, 12 Feb 2024 at 21:31, Bharath Rupireddy <[email protected]> wrote:On Tue, Feb 13, 2024 at 2:29 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> On that note though, we might want to consider just dropping it altogether in\n> v17 (while fixing the incorrect hint in backbranches)? I can't imagine\n> adminpack 1.0 being in heavy use today, and skimming pgAdmin code it seems it's\n> only used in pgAdmin3 and not 4. Maybe it's time to simply drop old code?\n\nhttps://codesearch.debian.net/search?q=pg_logfile_rotate&literal=1\nshows no users for it though. There's pgadmin3 using it\nhttps://github.com/search?q=repo%3Apgadmin-org%2Fpgadmin3%20pg_logfile_rotate&type=code,\nhowever the repo is archived. Surprisingly, core has to maintain the\nold code needed for adminpack 1.0 - pg_rotate_logfile_old SQL function\nand pg_rotate_logfile function in signalfuncs.c. 
These things could\nhave been moved to adminpack.c back then and pointed CREATE FUNCTION\npg_catalog.pg_logfile_rotate() to use it from adminpack.c. If we\ndecide to remove adminpack 1.0 version completely, the 1.0 functions\npg_file_read, pg_file_length and pg_logfile_rotate will also go away\nmaking adminpack code simpler.\n\nHaving said that, it's good to hear from others, preferably from\npgadmin developers - added Dave Page ([email protected]) in here for\ninputs.As it happens we're currently implementing a redesigned version of that functionality from pgAdmin III in pgAdmin 4. However, we are not using adminpack for it.FWIW, the reason for the weird naming is that originally all the functionality for reading/managing files was added entirely as the adminpack extension. It was only later that some of the functionality was moved into core, and renamed along the way (everyone likes blue for their bikeshed right?). The old functions (albeit, rewritten to use the new core functions) were kept in adminpack for backwards compatibility.That said, pgAdmin III has been out of support for many years, and as far as I know, it (and similarly old versions of EDB's PEM which was based on it) were the only consumers of adminpack. I would not be sad to see it removed entirely - except for the fact that I fondly remember being invited to join -core immediately after a heated discussion with Tom about it! -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 14 Feb 2024 10:35:56 +0000",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in pg_rotate_logfile"
},
{
"msg_contents": "> On 14 Feb 2024, at 11:35, Dave Page <[email protected]> wrote:\n\n> That said, pgAdmin III has been out of support for many years, and as far as I know, it (and similarly old versions of EDB's PEM which was based on it) were the only consumers of adminpack. I would not be sad to see it removed entirely\n\nSearching on Github and Debian Codesearch I cannot find any reference to anyone\nusing any function from adminpack. With pgAdminIII being EOL it might be to\nremove it now rather than be on the hook to maintain it for another 5 years\nuntil v17 goes EOL. It'll still be around for years in V16->.\n\nIf anyone still uses pgAdminIII then I have a hard time believing they are\ndiligently updating to the latest major version of postgres..\n\nAttached is a diff to show what it would look like to remove adminpack (catalog\nversion bump omitted on purpose to avoid conflicts until commit).\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 14 Feb 2024 14:35:39 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in pg_rotate_logfile"
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 14 Feb 2024, at 11:35, Dave Page <[email protected]> wrote:\n>> That said, pgAdmin III has been out of support for many years, and as far as I know, it (and similarly old versions of EDB's PEM which was based on it) were the only consumers of adminpack. I would not be sad to see it removed entirely\n\n> Searching on Github and Debian Codesearch I cannot find any reference to anyone\n> using any function from adminpack. With pgAdminIII being EOL it might be to\n> remove it now rather than be on the hook to maintain it for another 5 years\n> until v17 goes EOL. It'll still be around for years in V16->.\n\nWorks for me.\n\n> Attached is a diff to show what it would look like to remove adminpack (catalog\n> version bump omitted on purpose to avoid conflicts until commit).\n\nI don't see any references you missed, so +1.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Feb 2024 10:04:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in pg_rotate_logfile"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 10:04:49AM -0500, Tom Lane wrote:\n> Daniel Gustafsson <[email protected]> writes:\n>> On 14 Feb 2024, at 11:35, Dave Page <[email protected]> wrote:\n>>> That said, pgAdmin III has been out of support for many years, and as\n>>> far as I know, it (and similarly old versions of EDB's PEM which was\n>>> based on it) were the only consumers of adminpack. I would not be sad\n>>> to see it removed entirely\n> \n>> Searching on Github and Debian Codesearch I cannot find any reference to anyone\n>> using any function from adminpack. With pgAdminIII being EOL it might be to\n>> remove it now rather than be on the hook to maintain it for another 5 years\n>> until v17 goes EOL. It'll still be around for years in V16->.\n> \n> Works for me.\n> \n>> Attached is a diff to show what it would look like to remove adminpack (catalog\n>> version bump omitted on purpose to avoid conflicts until commit).\n> \n> I don't see any references you missed, so +1.\n\nSeems reasonable to me, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Feb 2024 12:51:30 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in pg_rotate_logfile"
},
{
"msg_contents": "> On 14 Feb 2024, at 19:51, Nathan Bossart <[email protected]> wrote:\n> \n> On Wed, Feb 14, 2024 at 10:04:49AM -0500, Tom Lane wrote:\n>> Daniel Gustafsson <[email protected]> writes:\n>>> On 14 Feb 2024, at 11:35, Dave Page <[email protected]> wrote:\n>>>> That said, pgAdmin III has been out of support for many years, and as\n>>>> far as I know, it (and similarly old versions of EDB's PEM which was\n>>>> based on it) were the only consumers of adminpack. I would not be sad\n>>>> to see it removed entirely\n>> \n>>> Searching on Github and Debian Codesearch I cannot find any reference to anyone\n>>> using any function from adminpack. With pgAdminIII being EOL it might be to\n>>> remove it now rather than be on the hook to maintain it for another 5 years\n>>> until v17 goes EOL. It'll still be around for years in V16->.\n>> \n>> Works for me.\n>> \n>>> Attached is a diff to show what it would look like to remove adminpack (catalog\n>>> version bump omitted on purpose to avoid conflicts until commit).\n>> \n>> I don't see any references you missed, so +1.\n> \n> Seems reasonable to me, too.\n\nThanks! I'll put this in the next CF to keep it open for comments a bit\nlonger, but will close it early in the CF.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 14 Feb 2024 21:48:43 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in pg_rotate_logfile"
},
{
"msg_contents": "On Thu, Feb 15, 2024 at 2:18 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> >>> Searching on Github and Debian Codesearch I cannot find any reference to anyone\n> >>> using any function from adminpack. With pgAdminIII being EOL it might be to\n> >>> remove it now rather than be on the hook to maintain it for another 5 years\n> >>> until v17 goes EOL. It'll still be around for years in V16->.\n> >>\n> >> Works for me.\n> >>\n> >>> Attached is a diff to show what it would look like to remove adminpack (catalog\n> >>> version bump omitted on purpose to avoid conflicts until commit).\n> >>\n> >> I don't see any references you missed, so +1.\n> >\n> > Seems reasonable to me, too.\n>\n> Thanks! I'll put this in the next CF to keep it open for comments a bit\n> longer, but will close it early in the CF.\n\nLGTM.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 16 Feb 2024 12:52:47 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix a typo in pg_rotate_logfile"
},
{
"msg_contents": "> On 14 Feb 2024, at 21:48, Daniel Gustafsson <[email protected]> wrote:\n>> On 14 Feb 2024, at 19:51, Nathan Bossart <[email protected]> wrote:\n>> On Wed, Feb 14, 2024 at 10:04:49AM -0500, Tom Lane wrote:\n>>> Daniel Gustafsson <[email protected]> writes:\n\n>>>> Attached is a diff to show what it would look like to remove adminpack (catalog\n>>>> version bump omitted on purpose to avoid conflicts until commit).\n>>> \n>>> I don't see any references you missed, so +1.\n>> \n>> Seems reasonable to me, too.\n> \n> Thanks! I'll put this in the next CF to keep it open for comments a bit\n> longer, but will close it early in the CF.\n\nThis has now been pushed, adminpack has left the building.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 4 Mar 2024 21:04:14 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in pg_rotate_logfile"
}
]
[
{
"msg_contents": "Hi,\n\nPostgres has a good amount of code for dealing with backtraces - two\nGUCs backtrace_functions and backtrace_on_internal_error,\nerrbacktrace; all of which use core function set_backtrace from\nelog.c. I've not seen this code being tested at all, see code coverage\nreport - https://coverage.postgresql.org/src/backend/utils/error/elog.c.gcov.html.\n\nI think adding a simple test module (containing no .c files) with only\nTAP tests will help cover this code. I ended up having it as a\nseparate module under src/test/modules/test_backtrace as I was not\nable to find an existing TAP file in src/test to add these tests. I'm\nable to verify the backtrace related code with the attached patch\nconsistently. The TAP tests rely on the fact that the server emits\ntext \"BACKTRACE: \" to server logs before logging the backtrace, and\nthe backtrace contains the function name in which the error occurs.\nI've turned off query statement logging (set log_statement = none,\nlog_min_error_statement = fatal) so that the tests get to see the\nfunctions only in the backtrace. Although the CF bot is happy with the\nattached patch https://github.com/BRupireddy2/postgres/tree/add_test_module_for_bcktrace_functionality_v1,\nthere might be some more flakiness to it.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 13 Feb 2024 02:11:47 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add test module for verifying backtrace functionality"
},
{
"msg_contents": "On Tue, Feb 13, 2024 at 2:11 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Postgres has a good amount of code for dealing with backtraces - two\n> GUCs backtrace_functions and backtrace_on_internal_error,\n> errbacktrace; all of which use core function set_backtrace from\n> elog.c. I've not seen this code being tested at all, see code coverage\n> report - https://coverage.postgresql.org/src/backend/utils/error/elog.c.gcov.html.\n>\n> I think adding a simple test module (containing no .c files) with only\n> TAP tests will help cover this code. I ended up having it as a\n> separate module under src/test/modules/test_backtrace as I was not\n> able to find an existing TAP file in src/test to add these tests. I'm\n> able to verify the backtrace related code with the attached patch\n> consistently. The TAP tests rely on the fact that the server emits\n> text \"BACKTRACE: \" to server logs before logging the backtrace, and\n> the backtrace contains the function name in which the error occurs.\n> I've turned off query statement logging (set log_statement = none,\n> log_min_error_statement = fatal) so that the tests get to see the\n> functions only in the backtrace. Although the CF bot is happy with the\n> attached patch https://github.com/BRupireddy2/postgres/tree/add_test_module_for_bcktrace_functionality_v1,\n> there might be some more flakiness to it.\n>\n> Thoughts?\n\nRan pgperltidy on the new TAP test file added. Please see the attached v2 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 20 Feb 2024 11:30:00 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add test module for verifying backtrace functionality"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 11:30 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Feb 13, 2024 at 2:11 AM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Postgres has a good amount of code for dealing with backtraces - two\n> > GUCs backtrace_functions and backtrace_on_internal_error,\n> > errbacktrace; all of which use core function set_backtrace from\n> > elog.c. I've not seen this code being tested at all, see code coverage\n> > report - https://coverage.postgresql.org/src/backend/utils/error/elog.c.gcov.html.\n> >\n> > I think adding a simple test module (containing no .c files) with only\n> > TAP tests will help cover this code. I ended up having it as a\n> > separate module under src/test/modules/test_backtrace as I was not\n> > able to find an existing TAP file in src/test to add these tests. I'm\n> > able to verify the backtrace related code with the attached patch\n> > consistently. The TAP tests rely on the fact that the server emits\n> > text \"BACKTRACE: \" to server logs before logging the backtrace, and\n> > the backtrace contains the function name in which the error occurs.\n> > I've turned off query statement logging (set log_statement = none,\n> > log_min_error_statement = fatal) so that the tests get to see the\n> > functions only in the backtrace. Although the CF bot is happy with the\n> > attached patch https://github.com/BRupireddy2/postgres/tree/add_test_module_for_bcktrace_functionality_v1,\n> > there might be some more flakiness to it.\n> >\n> > Thoughts?\n>\n> Ran pgperltidy on the new TAP test file added. Please see the attached v2 patch.\n\nI've now moved the new TAP test file to src/test/modules/test_misc/t\nas opposed to a new test module to keep it simple. I was not sure why\nI hadn't done that in the first place.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 16 Mar 2024 09:55:53 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add TAP tests for backtrace functionality (was Re: Add test module\n for verifying backtrace functionality)"
},
{
"msg_contents": "On 16.03.24 05:25, Bharath Rupireddy wrote:\n>>> Postgres has a good amount of code for dealing with backtraces - two\n>>> GUCs backtrace_functions and backtrace_on_internal_error,\n>>> errbacktrace; all of which use core function set_backtrace from\n>>> elog.c. I've not seen this code being tested at all, see code coverage\n>>> report - https://coverage.postgresql.org/src/backend/utils/error/elog.c.gcov.html.\n>>>\n>>> I think adding a simple test module (containing no .c files) with only\n>>> TAP tests will help cover this code. I ended up having it as a\n>>> separate module under src/test/modules/test_backtrace as I was not\n>>> able to find an existing TAP file in src/test to add these tests. I'm\n>>> able to verify the backtrace related code with the attached patch\n>>> consistently. The TAP tests rely on the fact that the server emits\n>>> text \"BACKTRACE: \" to server logs before logging the backtrace, and\n>>> the backtrace contains the function name in which the error occurs.\n>>> I've turned off query statement logging (set log_statement = none,\n>>> log_min_error_statement = fatal) so that the tests get to see the\n>>> functions only in the backtrace. Although the CF bot is happy with the\n>>> attached patch https://github.com/BRupireddy2/postgres/tree/add_test_module_for_bcktrace_functionality_v1,\n>>> there might be some more flakiness to it.\n>>>\n>>> Thoughts?\n>>\n>> Ran pgperltidy on the new TAP test file added. Please see the attached v2 patch.\n> \n> I've now moved the new TAP test file to src/test/modules/test_misc/t\n> as opposed to a new test module to keep it simple. I was not sure why\n> I hadn't done that in the first place.\n\nNote that backtrace_on_internal_error has been removed, so this patch \nwill need to be adjusted for that.\n\nI suggest you consider joining forces with thread [0] where a \nreplacement for backtrace_on_internal_error would be discussed. Having \nsome test coverage for whatever is being developed there might be useful.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/CAGECzQTpdujCEt2SH4DBwRLoDq4HJArGDaxJSsWX0G=tNnzaVA@mail.gmail.com\n\n\n\n",
"msg_date": "Sun, 12 May 2024 13:46:29 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add TAP tests for backtrace functionality (was Re: Add test\n module for verifying backtrace functionality)"
}
]
[
{
"msg_contents": "Hello \n\n\n\nI would like to share a patch that adds a feature to libpq to automatically select the best client certificate to send to the server (if it requests one). This feature is inspired by this email discussion years ago: https://www.postgresql.org/message-id/200905081539.n48Fdl2Y003286%40no.baka.org. This feature is useful if libpq client needs to communicate with multiple TLS-enabled PostgreSQL servers with different TLS certificate setups. Instead of letting the application to figure out the right certificate for the right server, the patch allows libpq library itself to pick the most ideal client certificate to send to the server.\n\n\n\nCurrently, we rely on options “sslcert” and “sslkey” parameters on the client side to select a client certificate + private key to send to the server, the patch adds 2 new options. “sslcertdir” and “sslkeydir” to specify directories where all possible certificate and private key files are stored. The new options cannot be used with “sslcert” and “sslkey” at the same time.\n\n\n\nThe most ideal certificate selection is based on the trusted CA names sent by the server in “Certificate Request” handshake message; obtained by the client making a call to “SSL_get0_peer_CA_list()” function. This list of trusted CA names tells the client the list of “issuers” that this server can trust. Inside “sslcertdir”, If a client certificate candidate’s issuer name equals to one of the trusted CA names, then that is the certificate to use. Once a candidate certificate is identified, the patch will then look for a matching private key in “sslkeydir”. These actions are performed in certificate callback function (cert_cb), which gets called when server requests a client certificate during TLS handshake.\n\n\n\nThis patch requires OpenSSL version 1.1.1 or later to work. The feature will be disabled with older OpenSSL versions. Attached is a POC patch containing the described feature.\n\n\n\nLimitations:\n\n\n\nOne limitation of this feature is that it does not quite support the case where multiple private key files inside “sslkeydir” are encrypted with different passwords. When the client wants to find a matching private key from “sslkeydir”, it will always use the same password supplied by the client (via “sslpassword” option) to decrypt the private key it tries to access.\n\n\n\n\n\nAlso, no tap tests have been added to the patch to test this feature yet. So, to test this feature, we will need to prepare the environment manually:\n\n\n\n1. generate 2 root CA certificates (ca1 and ca2), which sign 2 sets of client and server certificates.\n\n2. configure the server to use a server certificate signed by either ca1 or ca2.\n\n3. put all client certificates and private keys (signed by both ca1 and ca2) into a directory (we will point\"sslcertdir\" and \"sslkeydir\" to this directory)\n\n4. based on the root CA certificate configured at the server side, the client will pick the certificate that the server can trust from specified \"sslcertdir\" and \"sslkeydir\" directories\n\n\n\nPlease let me know what you think. Any comments / feedback are greatly appreciated.\n\n\n\nBest regards\n\n\n\n================\nCary Huang\n\nHighgo Software (Canada)\n\nwww.highgo.ca",
"msg_date": "Mon, 12 Feb 2024 15:40:55 -0700",
"msg_from": "Cary Huang <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Patch] add multiple client certificate selection feature"
},
{
"msg_contents": "Hello\n\nI would like to share a version 2 patch for multiple client certificate selection feature with several enhancements over v1. I removed the extra parameter \"sslcertdir\" and \"sslkeydir\". Instead, I reuse the existing sslcert, ssldir and sslpassword parameters but allow multiple entries to be supplied separated by comma. This way, we are able to use a different sslpassword to decrypt different sslkey files based on the selected certificate. This was not possible in v1.\n\nWhen a client is doing a TLS handshake with a server that requires client certificate, the client will obtain a list of trusted CA names from the server and try to match it from the list of certificates provided via sslcert option. A client certificate is chosen if its issuer matches one of the server’s trusted CA names. Once a certificate is chosen, the corresponding private key and sslpassword (if required) will be used to establish a secured TLS connection.\n\nThe feature is useful when a libpq client needs to communicate with multiple TLS-enabled PostgreSQL server instances with different TLS certificate setups. Instead of letting the application to figure out what certificate to send to what server, we can configure all possible certificate candidates to libpq and have it choose the best one to use instead.\n\n \n\nHello Daniel\n\nSorry to bother. I am just wondering your opinion about this feature? Should this be added to commitfest for review? This feature involves certificates issued by different root CAs to test the its ability to pick the right certificate, so the existing ssl tap test’s certificate generation script needs an update to test this. I have not done so yet, because I would like to discuss with you first.\n\nAny comments and recommendations are welcome. Thank you!\n\n\n\n\n\nBest regards\n\nCary Huang",
"msg_date": "Fri, 01 Mar 2024 12:14:43 -0700",
"msg_from": "Cary Huang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] add multiple client certificate selection feature"
},
{
"msg_contents": "Hello \n\n\n\nI would like to share an updated patch that adds a feature to libpq to automatically select the best client certificate to send to the server (if it requests one). This feature is inspired by this email discussion years ago: https://www.postgresql.org/message-id/200905081539.n48Fdl2Y003286%40no.baka.org, which makes it easier for a single client to communicate TLS with multiple TLS-enabled PostgreSQL servers with different certificate setups.\n\n\n\nInstead of specifying just one sslcert, sslkey, or sslpassword, this patch allows multiple to be specified and libpq is able to pick the matching one to send to the PostgreSQL server based on the trusted CA names sent during TLS handshake.\n\n\n\nIf anyone finds it useful and would like to give it as try, I wrote a blog on how to test and verify this feature here: https://www.highgo.ca/2024/03/28/procedure-to-multiple-client-certificate-feature/\n\n\n\nthank you\n\n\n\nBest regards\n\n\n\nCary Huang",
"msg_date": "Thu, 11 Apr 2024 14:24:00 -0700",
"msg_from": "Cary Huang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] add multiple client certificate selection feature"
}
]
[
{
"msg_contents": "Hi hackers,\n\nI'd like to bring to your attention that I recently identified some\nfunctions in pgcrypto that are using PG_GETARG functions in a way that\ndoesn't match the expected function signature of the stored\nprocedures. This patch proposes a solution to address these\ninconsistencies and ensure proper alignment.\n\nThanks,\nShihao",
"msg_date": "Mon, 12 Feb 2024 23:30:40 -0500",
"msg_from": "shihao zhong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix incorrect PG_GETARG in pgcrypto"
},
{
"msg_contents": "On Mon, Feb 12, 2024 at 11:30:40PM -0500, shihao zhong wrote:\n> I'd like to bring to your attention that I recently identified some\n> functions in pgcrypto that are using PG_GETARG functions in a way that\n> doesn't match the expected function signature of the stored\n> procedures. This patch proposes a solution to address these\n> inconsistencies and ensure proper alignment.\n\nYou've indeed grabbed some historical inconsistencies here. Please\nnote that your patch has reversed diffs (for example, the SQL\ndefinition of pgp_sym_encrypt_bytea uses bytea,text,text as arguments\nand your resulting patch shows how HEAD does the job with\nbytea,bytea,bytea), but perhaps you have generated it with a command\nlike `git diff -R`? \n--\nMichael",
"msg_date": "Tue, 13 Feb 2024 17:36:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect PG_GETARG in pgcrypto"
},
{
"msg_contents": "On Tue, Feb 13, 2024 at 05:36:36PM +0900, Michael Paquier wrote:\n> You've indeed grabbed some historical inconsistencies here. Please\n> note that your patch has reversed diffs (for example, the SQL\n> definition of pgp_sym_encrypt_bytea uses bytea,text,text as arguments\n> and your resulting patch shows how HEAD does the job with\n> bytea,bytea,bytea), but perhaps you have generated it with a command\n> like `git diff -R`? \n\nThe reversed part of the patch put aside aside, I've double-checked\nyour patch and the inconsistencies seem to be all addressed in this\narea.\n\nThe symmetry that we have now between the bytea and text versions of\nthe functions is stunning, but I cannot really get excited about\nmerging all of them either as it would imply a bump of pgcrypto to\nupdate the prosrc of these functions, and we have to maintain runtime\ncompatibility with older versions.\n--\nMichael",
"msg_date": "Wed, 14 Feb 2024 09:08:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect PG_GETARG in pgcrypto"
},
{
"msg_contents": "On Tue, Feb 13, 2024 at 7:08 PM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Feb 13, 2024 at 05:36:36PM +0900, Michael Paquier wrote:\n> > You've indeed grabbed some historical inconsistencies here. Please\n> > note that your patch has reversed diffs (for example, the SQL\n> > definition of pgp_sym_encrypt_bytea uses bytea,text,text as arguments\n> > and your resulting patch shows how HEAD does the job with\n> > bytea,bytea,bytea), but perhaps you have generated it with a command\n> > like `git diff -R`?\n>\n> The reversed part of the patch put aside aside, I've double-checked\n> your patch and the inconsistencies seem to be all addressed in this\n> area.\nThanks for fixing and merging this patch, I appreciate it!\n\nThanks,\nShihao\n\n\n> The symmetry that we have now between the bytea and text versions of\n> the functions is stunning, but I cannot really get excited about\n> merging all of them either as it would imply a bump of pgcrypto to\n> update the prosrc of these functions, and we have to maintain runtime\n> compatibility with older versions.\n> --\n> Michael\n\n\n",
"msg_date": "Thu, 15 Feb 2024 20:35:12 -0500",
"msg_from": "shihao zhong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix incorrect PG_GETARG in pgcrypto"
},
{
"msg_contents": "On 2/16/24 02:35, shihao zhong wrote:\n> On Tue, Feb 13, 2024 at 7:08 PM Michael Paquier <[email protected]> wrote:\n>>\n>> On Tue, Feb 13, 2024 at 05:36:36PM +0900, Michael Paquier wrote:\n>>> You've indeed grabbed some historical inconsistencies here. Please\n>>> note that your patch has reversed diffs (for example, the SQL\n>>> definition of pgp_sym_encrypt_bytea uses bytea,text,text as arguments\n>>> and your resulting patch shows how HEAD does the job with\n>>> bytea,bytea,bytea), but perhaps you have generated it with a command\n>>> like `git diff -R`?\n>>\n>> The reversed part of the patch put aside aside, I've double-checked\n>> your patch and the inconsistencies seem to be all addressed in this\n>> area.\n> Thanks for fixing and merging this patch, I appreciate it!\n> \n\nShould this be marked as committed, or is there some remaining part?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 26 Feb 2024 14:47:27 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect PG_GETARG in pgcrypto"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 02:47:27PM +0100, Tomas Vondra wrote:\n> Should this be marked as committed, or is there some remaining part?\n\nThanks. I've missed the existence of [1]. It is now marked as\ncommitted. \n\n[1]: https://commitfest.postgresql.org/47/4822/\n--\nMichael",
"msg_date": "Tue, 27 Feb 2024 11:51:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect PG_GETARG in pgcrypto"
}
]
[
{
"msg_contents": "Hi,\n\nI noticed an assumption [1] at WALRead() call sites expecting the\nflushed WAL page to be zero-padded after the flush LSN. I think this\ncan't always be true as the WAL can get flushed after determining the\nflush LSN before reading it from the WAL file using WALRead(). I've\nhacked the code up a bit to check if that's true -\nhttps://github.com/BRupireddy2/postgres/tree/ensure_extra_read_WAL_page_is_zero_padded_at_the_end_WIP,\nthe tests hit the Assert(false); added. Which means, the zero-padding\ncomment around WALRead() call sites isn't quite right.\n\nI'm wondering why the WALRead() callers are always reading XLOG_BLCKSZ\ndespite knowing exactly how much to read. Is it to tell the OS to\nexplicitly fetch the whole page from the disk? If yes, the OS will do\nthat anyway because the page transfers from disk to OS page cache are\nalways in terms of disk block sizes, no?\n\nAlthough, there's no immediate problem with it right now, the\nassumption is going to be incorrect when reading WAL from WAL buffers\nusing WALReadFromBuffers -\nhttps://www.postgresql.org/message-id/CALj2ACV=C1GZT9XQRm4iN1NV1T=hLA_hsGWNx2Y5-G+mSwdhNg@mail.gmail.com.\n\nIf we have no reason, can the WALRead() callers just read how much\nthey want like walsender for physical replication? Attached a patch\nfor the change.\n\nThoughts?\n\n[1]\n /*\n * Even though we just determined how much of the page can be validly read\n * as 'count', read the whole page anyway. It's guaranteed to be\n * zero-padded up to the page boundary if it's incomplete.\n */\n if (!WALRead(state, cur_page, targetPagePtr, XLOG_BLCKSZ, tli,\n &errinfo))\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 13 Feb 2024 11:47:06 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Do away with zero-padding assumption before WALRead()"
},
{
"msg_contents": "Hi,\n\nOn Tue, 13 Feb 2024 at 09:17, Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I noticed an assumption [1] at WALRead() call sites expecting the\n> flushed WAL page to be zero-padded after the flush LSN. I think this\n> can't always be true as the WAL can get flushed after determining the\n> flush LSN before reading it from the WAL file using WALRead(). I've\n> hacked the code up a bit to check if that's true -\n> https://github.com/BRupireddy2/postgres/tree/ensure_extra_read_WAL_page_is_zero_padded_at_the_end_WIP,\n> the tests hit the Assert(false); added. Which means, the zero-padding\n> comment around WALRead() call sites isn't quite right.\n>\n> I'm wondering why the WALRead() callers are always reading XLOG_BLCKSZ\n> despite knowing exactly how much to read. Is it to tell the OS to\n> explicitly fetch the whole page from the disk? If yes, the OS will do\n> that anyway because the page transfers from disk to OS page cache are\n> always in terms of disk block sizes, no?\n\nI am curious about the same. The page size and disk block size could\nbe different, so the reason could be explicitly fetching the whole\npage from the disk as you said. Is this the reason or are there any\nother benefits of always reading XLOG_BLCKSZ instead of reading the\nsufficient part? I tried to search in older threads and code comments\nbut I could not find an explanation.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 15 Feb 2024 13:19:35 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do away with zero-padding assumption before WALRead()"
},
{
"msg_contents": "At Tue, 13 Feb 2024 11:47:06 +0530, Bharath Rupireddy <[email protected]> wrote in \n> Hi,\n> \n> I noticed an assumption [1] at WALRead() call sites expecting the\n> flushed WAL page to be zero-padded after the flush LSN. I think this\n> can't always be true as the WAL can get flushed after determining the\n> flush LSN before reading it from the WAL file using WALRead(). I've\n> hacked the code up a bit to check if that's true -\n\nGood catch! The comment seems wrong also to me. The subsequent bytes\ncan be written simultaneously, and it's very normal that there are\nunflushed bytes are in OS's page buffer. The objective of the comment\nseems to be to declare that there's no need to clear out the remaining\nbytes, here. I agree that it's not a problem for now. However, I think\nwe need two fixes here.\n\n1. It's useless to copy the whole page regardless of the 'count'. It's\n enough to copy only up to the 'count'. The patch looks good in this\n regard.\n\n2. Maybe we need a comment that states the page_read callback\n functions leave garbage bytes beyond the returned count, due to\n partial copying without clearing the unused portion.\n\n> I'm wondering why the WALRead() callers are always reading XLOG_BLCKSZ\n> despite knowing exactly how much to read. Is it to tell the OS to\n> explicitly fetch the whole page from the disk? If yes, the OS will do\n> that anyway because the page transfers from disk to OS page cache are\n> always in terms of disk block sizes, no?\n\nIf I understand your question correctly, I guess that the whole-page\ncopy was expected to clear out the remaining bytes, as I mentioned\nearlier.\n\n> Although, there's no immediate problem with it right now, the\n> assumption is going to be incorrect when reading WAL from WAL buffers\n> using WALReadFromBuffers -\n> https://www.postgresql.org/message-id/CALj2ACV=C1GZT9XQRm4iN1NV1T=hLA_hsGWNx2Y5-G+mSwdhNg@mail.gmail.com.\n>\n> If we have no reason, can the WALRead() callers just read how much\n> they want like walsender for physical replication? Attached a patch\n> for the change.\n> \n> Thoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 16 Feb 2024 10:40:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do away with zero-padding assumption before WALRead()"
},
{
"msg_contents": "On Thu, Feb 15, 2024 at 3:49 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> > I'm wondering why the WALRead() callers are always reading XLOG_BLCKSZ\n> > despite knowing exactly how much to read. Is it to tell the OS to\n> > explicitly fetch the whole page from the disk? If yes, the OS will do\n> > that anyway because the page transfers from disk to OS page cache are\n> > always in terms of disk block sizes, no?\n>\n> I am curious about the same. The page size and disk block size could\n> be different,\n\nYes, they can be different, but.... (see below)\n\n> so the reason could be explicitly fetching the whole\n> page from the disk as you said.\n\nUpon OS page cache miss, the whole page (of disk block size) gets\nfetched from disk even if we just read 'count' bytes (< disk block\nsize), no? This is my understanding about page transfers between disk\nand OS page cache.\n\n> Is this the reason or are there any\n> other benefits of always reading XLOG_BLCKSZ instead of reading the\n> sufficient part? I tried to search in older threads and code comments\n> but I could not find an explanation.\n\nFWIW, walsender for physical replication will just read as much as it\nwants to read which can range from WAL of size < XLOG_BLCKSZ to\nMAX_SEND_SIZE (XLOG_BLCKSZ * 16). I mean, it does not read the whole\npage of bytes XLOG_BLCKSZ when it wants to read < XLOG_BLCKSZ.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 16 Feb 2024 19:40:00 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do away with zero-padding assumption before WALRead()"
},
{
"msg_contents": "On Fri, Feb 16, 2024 at 7:10 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> Good catch! The comment seems wrong also to me. The subsequent bytes\n> can be written simultaneously, and it's very normal that there are\n> unflushed bytes are in OS's page buffer. The objective of the comment\n> seems to be to declare that there's no need to clear out the remaining\n> bytes, here. I agree that it's not a problem for now. However, I think\n> we need two fixes here.\n>\n> 1. It's useless to copy the whole page regardless of the 'count'. It's\n> enough to copy only up to the 'count'. The patch looks good in this\n> regard.\n\nYes, it's not needed to copy the whole page. Per my understanding\nabout page transfers between disk and OS page cache - upon OS page\ncache miss, the whole page (of disk block size) gets fetched from disk\neven if we just read 'count' bytes (< disk block size).\n\n> 2. Maybe we need a comment that states the page_read callback\n> functions leave garbage bytes beyond the returned count, due to\n> partial copying without clearing the unused portion.\n\nIsn't the comment around page_read callback at\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/include/access/xlogreader.h;h=2e9e5f43eb2de1ca9ba81afe76d21357065c61aa;hb=d57b7cc3338e9d9aa1d7c5da1b25a17c5a72dcce#l78\nenough?\n\n\"The callback shall return the number of bytes read (never more than\nXLOG_BLCKSZ), or -1 on failure.\"\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 16 Feb 2024 19:50:00 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do away with zero-padding assumption before WALRead()"
},
{
"msg_contents": "At Fri, 16 Feb 2024 19:50:00 +0530, Bharath Rupireddy <[email protected]> wrote in \r\n> On Fri, Feb 16, 2024 at 7:10 AM Kyotaro Horiguchi\r\n> <[email protected]> wrote:\r\n> > 1. It's useless to copy the whole page regardless of the 'count'. It's\r\n> > enough to copy only up to the 'count'. The patch looks good in this\r\n> > regard.\r\n> \r\n> Yes, it's not needed to copy the whole page. Per my understanding\r\n> about page transfers between disk and OS page cache - upon OS page\r\n> cache miss, the whole page (of disk block size) gets fetched from disk\r\n> even if we just read 'count' bytes (< disk block size).\r\n\r\nRight, but with a possibly-different block size. Anyway that behavior\r\ndoesn't affect the result of this change. (Could affect performance\r\nhereafter if it were not the case, though..)\r\n\r\n> > 2. Maybe we need a comment that states the page_read callback\r\n> > functions leave garbage bytes beyond the returned count, due to\r\n> > partial copying without clearing the unused portion.\r\n> \r\n> Isn't the comment around page_read callback at\r\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/include/access/xlogreader.h;h=2e9e5f43eb2de1ca9ba81afe76d21357065c61aa;hb=d57b7cc3338e9d9aa1d7c5da1b25a17c5a72dcce#l78\r\n> enough?\r\n> \r\n> \"The callback shall return the number of bytes read (never more than\r\n> XLOG_BLCKSZ), or -1 on failure.\"\r\n\r\nYeah, perhaps I was overly concerned. The removed comment made me\r\nthink that someone could add code relying on the incorrect assumption\r\nthat the remaining bytes beyond the returned count are cleared out. On\r\nthe flip side, SimpleXLogPageRead always reads a whole page and\r\nreturns XLOG_BLCKSZ. However, as you know, the returned buffer doesn't\r\ncontain random garbage bytes. Therefore, it's safe as long as the\r\ncaller doesn't access beyond the returned count. As a result, the\r\ndescription you pointed out seems to be enough.\r\n\r\nAfter all, the patch looks good to me.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Mon, 19 Feb 2024 11:56:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do away with zero-padding assumption before WALRead()"
},
{
"msg_contents": "At Mon, 19 Feb 2024 11:56:22 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> Yeah, perhaps I was overly concerned. The removed comment made me\n> think that someone could add code relying on the incorrect assumption\n> that the remaining bytes beyond the returned count are cleared out. On\n> the flip side, SimpleXLogPageRead always reads a whole page and\n> returns XLOG_BLCKSZ. However, as you know, the returned buffer doesn't\n> contain random garbage bytes. Therefore, it's safe as long as the\n\nForgot to mention that there is a case involving non-initialized\npages, but it doesn't affect the correctness of the description you\npointed out.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 19 Feb 2024 12:02:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do away with zero-padding assumption before WALRead()"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 8:26 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> On\n> the flip side, SimpleXLogPageRead always reads a whole page and\n> returns XLOG_BLCKSZ. However, as you know, the returned buffer doesn't\n> contain random garbage bytes.\n\nIs this assumption true when wal_init_zero is off? I think when\nwal_init_zero is off, the last few bytes of the last page from the WAL\nfile may contain garbage bytes i.e. not zero bytes, no?\n\n> Therefore, it's safe as long as the\n> caller doesn't access beyond the returned count. As a result, the\n> description you pointed out seems to be enough.\n\nRight.\n\n> After all, the patch looks good to me.\n\nThanks. It was committed -\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=73f0a1326608ac3a7d390706fdeec59fe4dc42c0.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 19 Feb 2024 11:02:39 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do away with zero-padding assumption before WALRead()"
},
{
"msg_contents": "At Mon, 19 Feb 2024 11:02:39 +0530, Bharath Rupireddy <[email protected]> wrote in \n> > After all, the patch looks good to me.\n> \n> Thanks. It was committed -\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=73f0a1326608ac3a7d390706fdeec59fe4dc42c0.\n\nYeah. I realied that after I had already sent the mail.. No harm done:p\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 19 Feb 2024 14:47:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do away with zero-padding assumption before WALRead()"
}
]
[
{
"msg_contents": "Hi all,\n\nAttached is a patch that fixes some overflow/underflow hazards that I\ndiscovered in the interval rounding code.\n\nThe lines look a bit long, but I did run the following before committing:\n`$ curl https://buildfarm.postgresql.org/cgi-bin/typedefs.pl -o\nsrc/tools/pgindent/typedefs.list && src/tools/pgindent/pgindent\nsrc/backend/utils/adt/timestamp.c`\n\nThanks,\nJoe Koshakow",
"msg_date": "Tue, 13 Feb 2024 13:31:22 -0500",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix overflow hazard in interval rounding"
},
{
"msg_contents": "Joseph Koshakow <[email protected]> writes:\n> Attached is a patch that fixes some overflow/underflow hazards that I\n> discovered in the interval rounding code.\n\nI think you need to use ereturn not ereport here; see other error\ncases in AdjustIntervalForTypmod.\n\n(We'd need ereport in back branches, but this problem seems to\nme to probably not be worth back-patching.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Feb 2024 13:46:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix overflow hazard in interval rounding"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-13 13:31:22 -0500, Joseph Koshakow wrote:\n> Attached is a patch that fixes some overflow/underflow hazards that I\n> discovered in the interval rounding code.\n\nRandom, mildly related thought: I wonder if it's time to, again, look at\nenabling -ftrapv in assert enabled builds. I had looked at that a few years\nback, and fixed a number of instances, but not all I think. But I think we are\na lot closer to avoiding signed overflows everywhere, and it'd be nice to find\noverflow hazards more easily. Many places are broken even with -fwrapv\nsemantics (which we don't have on all compilers!). Trapping on such overflows\nmakes it far easier to find problems with tools like sqlsmith.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 13 Feb 2024 11:14:01 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix overflow hazard in interval rounding"
},
{
"msg_contents": "On Tue, Feb 13, 2024 at 1:46 PM Tom Lane <[email protected]> wrote:\n\n> I think you need to use ereturn not ereport here; see other error\n> cases in AdjustIntervalForTypmod.\n\nAttached is an updated patch that makes this adjustment.\n\n> (We'd need ereport in back branches, but this problem seems to\n> me to probably not be worth back-patching.)\n\nAgreed, this seems like a pretty rare overflow/underflow.\n\nThanks,\nJoe Koshakow",
"msg_date": "Tue, 13 Feb 2024 15:28:08 -0500",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix overflow hazard in interval rounding"
},
{
"msg_contents": "Joseph Koshakow <[email protected]> writes:\n> On Tue, Feb 13, 2024 at 1:46 PM Tom Lane <[email protected]> wrote:\n>> (We'd need ereport in back branches, but this problem seems to\n>> me to probably not be worth back-patching.)\n\n> Agreed, this seems like a pretty rare overflow/underflow.\n\nOK, pushed to HEAD only. I converted the second steps to be like\n\"a -= a%b\" instead of \"a = (a/b)*b\" to make it a little clearer\nthat they don't have their own risks of overflow. Maybe it's a\nshade faster that way too, not sure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Feb 2024 16:01:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix overflow hazard in interval rounding"
},
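A self-contained sketch of the hazard class discussed in the messages above, not the committed timestamp.c code: the helper below stands in for the overflow-checked addition that src/include/common/int.h provides (pg_add_s64_overflow, here emulated with a GCC/Clang builtin), and the function and variable names are invented for illustration. The naive rounding form "(val + unit/2) / unit * unit" can overflow on its initial addition for values near INT64_MAX; checking that addition explicitly, and writing the second step as "val -= val % unit", removes the hazard and makes the absence of further overflow obvious.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for pg_add_s64_overflow() from src/include/common/int.h. */
static bool
add_s64_overflow(int64_t a, int64_t b, int64_t *result)
{
	return __builtin_add_overflow(a, b, result);
}

/*
 * Round a non-negative microsecond count to the nearest multiple of "unit".
 * The initial addition is the overflow hazard; the final step is equivalent
 * to "(val / unit) * unit" but written so it is obvious that no further
 * overflow can occur.
 */
static bool
round_usecs(int64_t val, int64_t unit, int64_t *result)
{
	int64_t		adjusted;

	if (add_s64_overflow(val, unit / 2, &adjusted))
		return false;			/* caller reports the out-of-range error */

	adjusted -= adjusted % unit;
	*result = adjusted;
	return true;
}

int
main(void)
{
	int64_t		rounded;

	if (!round_usecs(INT64_MAX - 1, 1000000, &rounded))
		printf("overflow detected instead of silently wrapping\n");
	return 0;
}
```

In the committed fix the failure path reports the out-of-range error through ereturn rather than a boolean, per the discussion above.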
{
"msg_contents": "Hi Andres,\n\nSorry for such a late reply.\n\nOn Tue, Feb 13, 2024 at 2:14 PM Andres Freund <[email protected]> wrote:\n\n> Random, mildly related thought: I wonder if it's time to, again, look at\n> enabling -ftrapv in assert enabled builds.I had looked at that a few years\n> back, and fixed a number of instances, but not all I think. But I think\nwe are\n> a lot closer to avoiding signed overflows everywhere, and it'd be nice to\nfind\n> overflow hazards more easily.\n\nI agree that this would be very helpful.\n\n> Many places are broken even with -fwrapv\n> semantics (which we don't have on all compilers!). Trapping on such\noverflows\n> makes it far easier to find problems with tools like sqlsmith.\n\nDoes this mean that some of our existing tests will panic when compiled\nwith -ftrapv or -fwrapv? If so I'd be interested in resolving the\nremaining issues if you could point me in the right direction of how to\nset the flag.\n\nThanks,\nJoe Koshakow\n\nHi Andres,Sorry for such a late reply.On Tue, Feb 13, 2024 at 2:14 PM Andres Freund <[email protected]> wrote:> Random, mildly related thought: I wonder if it's time to, again, look at> enabling -ftrapv in assert enabled builds.I had looked at that a few years> back, and fixed a number of instances, but not all I think. But I think we are> a lot closer to avoiding signed overflows everywhere, and it'd be nice to find> overflow hazards more easily. I agree that this would be very helpful.> Many places are broken even with -fwrapv> semantics (which we don't have on all compilers!). Trapping on such overflows> makes it far easier to find problems with tools like sqlsmith.Does this mean that some of our existing tests will panic when compiledwith -ftrapv or -fwrapv? If so I'd be interested in resolving theremaining issues if you could point me in the right direction of how toset the flag.Thanks,Joe Koshakow",
"msg_date": "Sun, 2 Jun 2024 19:01:15 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix overflow hazard in interval rounding"
}
] |
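As an aside on the -fwrapv / -ftrapv exchange that closes this thread: the two flags change what a plain signed addition does at runtime, which is what makes building with -ftrapv useful for surfacing overflow bugs. A tiny standalone demonstration, nothing PostgreSQL-specific; without either flag the overflow is undefined behavior, though it typically wraps on common platforms.

```c
/*
 * How the same addition behaves under different flags (file name generic):
 *
 *   cc -O2 trapdemo.c           signed overflow is undefined behavior
 *   cc -O2 -fwrapv trapdemo.c   overflow wraps: prints INT64_MIN
 *   cc -O2 -ftrapv trapdemo.c   overflow traps: the process aborts
 */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	volatile int64_t a = INT64_MAX;	/* volatile keeps the add at runtime */
	volatile int64_t b = 1;

	printf("%lld\n", (long long) (a + b));
	return 0;
}
```

For a full PostgreSQL build the flag would typically be injected through CFLAGS (or COPT) at configure time, though the thread above does not settle on a recommended invocation.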
[
{
"msg_contents": "Hi,\n\nAttached is a patch set which refactors BitmapHeapScan such that it\ncan use the streaming read API [1]. It also resolves the long-standing\nFIXME in the BitmapHeapScan code suggesting that the skip fetch\noptimization should be pushed into the table AMs. Additionally, it\nmoves table scan initialization to after the index scan and bitmap\ninitialization.\n\npatches 0001-0002 are assorted cleanup needed later in the set.\npatches 0003 moves the table scan initialization to after bitmap creation\npatch 0004 is, I think, a bug fix. see [2].\npatches 0005-0006 push the skip fetch optimization into the table AMs\npatches 0007-0009 change the control flow of BitmapHeapNext() to match\nthat required by the streaming read API\npatch 0010 is the streaming read code not yet in master\npatch 0011 is the actual bitmapheapscan streaming read user.\n\npatches 0001-0009 apply on top of master but 0010 and 0011 must be\napplied on top of a commit before a 21d9c3ee4ef74e2 (until a rebased\nversion of the streaming read API is on the mailing list).\n\nThe caveat is that these patches introduce breaking changes to two\ntable AM functions for bitmapheapscan: table_scan_bitmap_next_block()\nand table_scan_bitmap_next_tuple().\n\nA TBMIterateResult used to be threaded through both of these functions\nand used in BitmapHeapNext(). This patch set removes all references to\nTBMIterateResults from BitmapHeapNext. Because the streaming read API\nrequires the callback to specify the next block, BitmapHeapNext() can\nno longer pass a TBMIterateResult to table_scan_bitmap_next_block().\n\nMore subtly, table_scan_bitmap_next_block() used to return false if\nthere were no more visible tuples on the page or if the block that was\nrequested was not valid. With these changes,\ntable_scan_bitmap_next_block() will only return false when the bitmap\nhas been exhausted and the scan can end. In order to use the streaming\nread API, the user must be able to request the blocks it needs without\nrequiring synchronous feedback per block. Thus, this table AM function\nmust change its meaning.\n\nI think the way the patches are split up could be improved. I will\nthink more about this. There are also probably a few mistakes with\nwhich comments are updated in which patches in the set.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAAKRu_bxrXeZ2rCnY8LyeC2Ls88KpjWrQ%2BopUrXDRXdcfwFZGA%40mail.gmail.com",
"msg_date": "Tue, 13 Feb 2024 18:11:25 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "BitmapHeapScan streaming read user and prelim refactoring"
},
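To make the "no synchronous feedback per block" point above concrete, here is a deliberately generic sketch of the callback shape such an API imposes; it is not the PgStreamingRead API itself, and all names below are invented for illustration. The producer callback must be able to hand out the next interesting block number on demand, skipping uninteresting ones internally, because the consumer may ask for several blocks ahead of the one currently being processed.

```c
#include <stdio.h>

#define INVALID_BLOCK	((unsigned) -1)
#define READ_AHEAD		4

/* Producer state: a toy "bitmap" of interesting block numbers. */
typedef struct
{
	const unsigned *blocks;
	int			nblocks;
	int			next;
} BlockSource;

/* Callback: return the next block to read, or INVALID_BLOCK when exhausted. */
static unsigned
next_block(BlockSource *src)
{
	if (src->next >= src->nblocks)
		return INVALID_BLOCK;
	return src->blocks[src->next++];
}

int
main(void)
{
	const unsigned blocks[] = {3, 7, 8, 15, 42};
	BlockSource src = {blocks, 5, 0};
	unsigned	queued[READ_AHEAD];
	int			nqueued = 0;

	/*
	 * The consumer pulls block numbers ahead of time (this is where a real
	 * implementation would issue prefetches or asynchronous reads), so the
	 * producer cannot rely on being told how the previous block turned out.
	 */
	for (;;)
	{
		while (nqueued < READ_AHEAD)
		{
			unsigned	blkno = next_block(&src);

			if (blkno == INVALID_BLOCK)
				break;
			queued[nqueued++] = blkno;
		}

		if (nqueued == 0)
			break;				/* bitmap exhausted */

		printf("processing block %u\n", queued[0]);
		for (int i = 1; i < nqueued; i++)
			queued[i - 1] = queued[i];
		nqueued--;
	}
	return 0;
}
```

In the patch set described above, the analogous producer is the per-scan TBM iterator consulted from inside the table AM, which is why a "this page had nothing visible" outcome can no longer be reported back through the next_block() return value.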
{
"msg_contents": "\n\n> On Feb 13, 2024, at 3:11 PM, Melanie Plageman <[email protected]> wrote:\n\nThanks for the patch...\n\n> Attached is a patch set which refactors BitmapHeapScan such that it\n> can use the streaming read API [1]. It also resolves the long-standing\n> FIXME in the BitmapHeapScan code suggesting that the skip fetch\n> optimization should be pushed into the table AMs. Additionally, it\n> moves table scan initialization to after the index scan and bitmap\n> initialization.\n> \n> patches 0001-0002 are assorted cleanup needed later in the set.\n> patches 0003 moves the table scan initialization to after bitmap creation\n> patch 0004 is, I think, a bug fix. see [2].\n> patches 0005-0006 push the skip fetch optimization into the table AMs\n> patches 0007-0009 change the control flow of BitmapHeapNext() to match\n> that required by the streaming read API\n> patch 0010 is the streaming read code not yet in master\n> patch 0011 is the actual bitmapheapscan streaming read user.\n> \n> patches 0001-0009 apply on top of master but 0010 and 0011 must be\n> applied on top of a commit before a 21d9c3ee4ef74e2 (until a rebased\n> version of the streaming read API is on the mailing list).\n\nI followed your lead and applied them to 6a8ffe812d194ba6f4f26791b6388a4837d17d6c. `make check` worked fine, though I expect you know that already.\n\n> The caveat is that these patches introduce breaking changes to two\n> table AM functions for bitmapheapscan: table_scan_bitmap_next_block()\n> and table_scan_bitmap_next_tuple().\n\nYou might want an independent perspective on how much of a hassle those breaking changes are, so I took a stab at that. Having written a custom proprietary TAM for postgresql 15 here at EDB, and having ported it and released it for postgresql 16, I thought I'd try porting it to the the above commit with your patches. Even without your patches, I already see breaking changes coming from commit f691f5b80a85c66d715b4340ffabb503eb19393e, which creates a similar amount of breakage for me as does your patches. Dealing with the combined breakage might amount to a day of work, including testing, half of which I think I've already finished. In other words, it doesn't seem like a big deal.\n\nWere postgresql 17 shaping up to be compatible with TAMs written for 16, your patch would change that qualitatively, but since things are already incompatible, I think you're in the clear.\n\n> A TBMIterateResult used to be threaded through both of these functions\n> and used in BitmapHeapNext(). This patch set removes all references to\n> TBMIterateResults from BitmapHeapNext. Because the streaming read API\n> requires the callback to specify the next block, BitmapHeapNext() can\n> no longer pass a TBMIterateResult to table_scan_bitmap_next_block().\n> \n> More subtly, table_scan_bitmap_next_block() used to return false if\n> there were no more visible tuples on the page or if the block that was\n> requested was not valid. With these changes,\n> table_scan_bitmap_next_block() will only return false when the bitmap\n> has been exhausted and the scan can end. In order to use the streaming\n> read API, the user must be able to request the blocks it needs without\n> requiring synchronous feedback per block. Thus, this table AM function\n> must change its meaning.\n> \n> I think the way the patches are split up could be improved. I will\n> think more about this. 
There are also probably a few mistakes with\n> which comments are updated in which patches in the set.\n\nI look forward to the next version of the patch set. Thanks again for working on this.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 13 Feb 2024 20:34:20 -0800",
"msg_from": "Mark Dilger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Tue, Feb 13, 2024 at 11:34 PM Mark Dilger\n<[email protected]> wrote:\n>\n> > On Feb 13, 2024, at 3:11 PM, Melanie Plageman <[email protected]> wrote:\n>\n> Thanks for the patch...\n>\n> > Attached is a patch set which refactors BitmapHeapScan such that it\n> > can use the streaming read API [1]. It also resolves the long-standing\n> > FIXME in the BitmapHeapScan code suggesting that the skip fetch\n> > optimization should be pushed into the table AMs. Additionally, it\n> > moves table scan initialization to after the index scan and bitmap\n> > initialization.\n> >\n> > patches 0001-0002 are assorted cleanup needed later in the set.\n> > patches 0003 moves the table scan initialization to after bitmap creation\n> > patch 0004 is, I think, a bug fix. see [2].\n> > patches 0005-0006 push the skip fetch optimization into the table AMs\n> > patches 0007-0009 change the control flow of BitmapHeapNext() to match\n> > that required by the streaming read API\n> > patch 0010 is the streaming read code not yet in master\n> > patch 0011 is the actual bitmapheapscan streaming read user.\n> >\n> > patches 0001-0009 apply on top of master but 0010 and 0011 must be\n> > applied on top of a commit before a 21d9c3ee4ef74e2 (until a rebased\n> > version of the streaming read API is on the mailing list).\n>\n> I followed your lead and applied them to 6a8ffe812d194ba6f4f26791b6388a4837d17d6c. `make check` worked fine, though I expect you know that already.\n\nThanks for taking a look!\n\n> > The caveat is that these patches introduce breaking changes to two\n> > table AM functions for bitmapheapscan: table_scan_bitmap_next_block()\n> > and table_scan_bitmap_next_tuple().\n>\n> You might want an independent perspective on how much of a hassle those breaking changes are, so I took a stab at that. Having written a custom proprietary TAM for postgresql 15 here at EDB, and having ported it and released it for postgresql 16, I thought I'd try porting it to the the above commit with your patches. Even without your patches, I already see breaking changes coming from commit f691f5b80a85c66d715b4340ffabb503eb19393e, which creates a similar amount of breakage for me as does your patches. Dealing with the combined breakage might amount to a day of work, including testing, half of which I think I've already finished. In other words, it doesn't seem like a big deal.\n>\n> Were postgresql 17 shaping up to be compatible with TAMs written for 16, your patch would change that qualitatively, but since things are already incompatible, I think you're in the clear.\n\nOh, good to know! I'm very happy to have the perspective of a table AM\nauthor. Just curious, did your table AM implement\ntable_scan_bitmap_next_block() and table_scan_bitmap_next_tuple(),\nand, if so, did you use the TBMIterateResult? Since it is not used in\nBitmapHeapNext() in my version, table AMs would have to change how\nthey use TBMIterateResults anyway. But I assume they could add it to a\ntable AM specific scan descriptor if they want access to a\nTBMIterateResult of their own making in both\ntable_san_bitmap_next_block() and next_tuple()?\n\n- Melanie\n\n\n",
"msg_date": "Wed, 14 Feb 2024 09:47:20 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
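For readers less familiar with the scan-descriptor convention alluded to here (embedding the generic descriptor as the first member of an AM-specific one, so the AM can stash extra per-scan state such as its own iterate-result equivalent), a minimal sketch of the pattern; the struct and field names below are illustrative, not PostgreSQL's.

```c
#include <stdio.h>

/* Generic, AM-agnostic part of a scan (stand-in for TableScanDescData). */
typedef struct TableScanBase
{
	int			flags;
} TableScanBase;

/* AM-specific descriptor embeds the generic part as its first member. */
typedef struct MyAmScan
{
	TableScanBase base;			/* must be first, so pointers are castable */
	int			cur_block;		/* extra per-scan state the AM wants, e.g.  */
	int			cur_offset;		/* ...a private iterate-result equivalent   */
} MyAmScan;

/* AM callbacks receive the generic pointer and cast back to their own type. */
static void
myam_next_block(TableScanBase *scan)
{
	MyAmScan   *mscan = (MyAmScan *) scan;

	mscan->cur_block++;
	mscan->cur_offset = 0;
}

int
main(void)
{
	MyAmScan	scan = {0};

	myam_next_block(&scan.base);
	printf("block=%d offset=%d\n", scan.cur_block, scan.cur_offset);
	return 0;
}
```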
{
"msg_contents": "\n\n> On Feb 14, 2024, at 6:47 AM, Melanie Plageman <[email protected]> wrote:\n> \n> Just curious, did your table AM implement\n> table_scan_bitmap_next_block() and table_scan_bitmap_next_tuple(),\n> and, if so, did you use the TBMIterateResult? Since it is not used in\n> BitmapHeapNext() in my version, table AMs would have to change how\n> they use TBMIterateResults anyway. But I assume they could add it to a\n> table AM specific scan descriptor if they want access to a\n> TBMIterateResult of their own making in both\n> table_san_bitmap_next_block() and next_tuple()?\n\nMy table AM does implement those two functions and does use the TBMIterateResult *tbmres argument, yes. I would deal with the issue in very much the same way that your patches modify heapam. I don't really have any additional comments about that.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 14 Feb 2024 08:41:20 -0800",
"msg_from": "Mark Dilger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-13 18:11:25 -0500, Melanie Plageman wrote:\n> Attached is a patch set which refactors BitmapHeapScan such that it\n> can use the streaming read API [1]. It also resolves the long-standing\n> FIXME in the BitmapHeapScan code suggesting that the skip fetch\n> optimization should be pushed into the table AMs. Additionally, it\n> moves table scan initialization to after the index scan and bitmap\n> initialization.\n\nThanks for working on this! While I have some quibbles with details, I think\nthis is quite a bit of progress in the right direction.\n\n\n> patches 0001-0002 are assorted cleanup needed later in the set.\n> patches 0003 moves the table scan initialization to after bitmap creation\n> patch 0004 is, I think, a bug fix. see [2].\n\nI'd not quite call it a bugfix, it's not like it leads to wrong\nbehaviour. Seems more like an optimization. But whatever :)\n\n\n\n> The caveat is that these patches introduce breaking changes to two\n> table AM functions for bitmapheapscan: table_scan_bitmap_next_block()\n> and table_scan_bitmap_next_tuple().\n\nThat's to be expected, I don't think it's worth worrying about. Right now a\nbunch of TAMs can't implement bitmap scans, this goes a fair bit towards\nallowing that...\n\n\n\n\n\n> From d6dd6eb21dcfbc41208f87d1d81ffe3960130889 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Mon, 12 Feb 2024 18:50:29 -0500\n> Subject: [PATCH v1 03/11] BitmapHeapScan begin scan after bitmap setup\n>\n> There is no reason for table_beginscan_bm() to begin the actual scan of\n> the underlying table in ExecInitBitmapHeapScan(). We can begin the\n> underlying table scan after the index scan has been completed and the\n> bitmap built.\n>\n> The one use of the scan descriptor during initialization was\n> ExecBitmapHeapInitializeWorker(), which set the scan descriptor snapshot\n> with one from an array in the parallel state. This overwrote the\n> snapshot set in table_beginscan_bm().\n>\n> By saving that worker snapshot as a member in the BitmapHeapScanState\n> during initialization, it can be restored in table_beginscan_bm() after\n> returning from the table AM specific begin scan function.\n\nI don't understand what the point of passing two different snapshots to\ntable_beginscan_bm() is. What does that even mean? Why can't we just use the\ncorrect snapshot initially?\n\n\n> From a3f62e4299663d418531ae61bb16ea39f0836fac Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Mon, 12 Feb 2024 19:03:24 -0500\n> Subject: [PATCH v1 04/11] BitmapPrefetch use prefetch block recheck for skip\n> fetch\n>\n> Previously BitmapPrefetch() used the recheck flag for the current block\n> to determine whether or not it could skip prefetching the proposed\n> prefetch block. It makes more sense for it to use the recheck flag from\n> the TBMIterateResult for the prefetch block instead.\n\nI'd mention the commit that introduced the current logic and link to the\nthe thread that you started about this.\n\n\n> From d56be7741765d93002649ef912ef4b8256a5b9af Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Mon, 12 Feb 2024 19:04:48 -0500\n> Subject: [PATCH v1 05/11] Update BitmapAdjustPrefetchIterator parameter type\n> to BlockNumber\n>\n> BitmapAdjustPrefetchIterator() only used the blockno member of the\n> passed in TBMIterateResult to ensure that the prefetch iterator and\n> regular iterator stay in sync. Pass it the BlockNumber only. 
This will\n> allow us to move away from using the TBMIterateResult outside of table\n> AM specific code.\n\nHm - I'm not convinced this is a good direction - doesn't that arguably\n*increase* TAM awareness? Perhaps it doesn't make much sense to use bitmap\nheap scans in a TAM without blocks, but still.\n\n\n\n> From 202b16d3a381210e8dbee69e68a8310be8ee11d2 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Mon, 12 Feb 2024 20:15:05 -0500\n> Subject: [PATCH v1 06/11] Push BitmapHeapScan skip fetch optimization into\n> table AM\n>\n> This resolves the long-standing FIXME in BitmapHeapNext() which said that\n> the optmization to skip fetching blocks of the underlying table when\n> none of the column data was needed should be pushed into the table AM\n> specific code.\n\nLong-standing? Sure, it's old enough to walk, but we have FIXMEs that are old\nenough to drink, at least in some countries. :)\n\n\n> The table AM agnostic functions for prefetching still need to know if\n> skipping fetching is permitted for this scan. However, this dependency\n> will be removed when that prefetching code is removed in favor of the\n> upcoming streaming read API.\n\n> ---\n> src/backend/access/heap/heapam.c | 10 +++\n> src/backend/access/heap/heapam_handler.c | 29 +++++++\n> src/backend/executor/nodeBitmapHeapscan.c | 100 ++++++----------------\n> src/include/access/heapam.h | 2 +\n> src/include/access/tableam.h | 17 ++--\n> src/include/nodes/execnodes.h | 6 --\n> 6 files changed, 74 insertions(+), 90 deletions(-)\n>\n> diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> index 707460a5364..7aae1ecf0a9 100644\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -955,6 +955,8 @@ heap_beginscan(Relation relation, Snapshot snapshot,\n> \tscan->rs_base.rs_flags = flags;\n> \tscan->rs_base.rs_parallel = parallel_scan;\n> \tscan->rs_strategy = NULL;\t/* set in initscan */\n> +\tscan->vmbuffer = InvalidBuffer;\n> +\tscan->empty_tuples = 0;\n\nThese don't follow the existing naming pattern for HeapScanDescData. While I\nexplicitly dislike the practice of adding prefixes to struct members, I don't\nthink mixing conventions within a single struct improves things.\n\nI also think it'd be good to note in comments that the vm buffer currently is\nonly used for bitmap heap scans, otherwise one might think they'd also be used\nfor normal scans, where we don't need them, because of the page level flag.\n\nAlso, perhaps worth renaming \"empty_tuples\" to something indicating that it's\nthe number of empty tuples to be returned later? num_empty_tuples_pending or\nsuch? 
Or the current \"return_empty_tuples\".\n\n\n> @@ -1043,6 +1045,10 @@ heap_rescan(TableScanDesc sscan, ScanKey key, bool set_params,\n> \tif (BufferIsValid(scan->rs_cbuf))\n> \t\tReleaseBuffer(scan->rs_cbuf);\n>\n> +\tif (BufferIsValid(scan->vmbuffer))\n> +\t\tReleaseBuffer(scan->vmbuffer);\n> +\tscan->vmbuffer = InvalidBuffer;\n\nIt does not matter one iota here, but personally I prefer moving the write\ninside the if, as dirtying the cacheline after we just figured out whe\n\n\n\n> diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c\n> index 9372b49bfaa..c0fb06c9688 100644\n> --- a/src/backend/executor/nodeBitmapHeapscan.c\n> +++ b/src/backend/executor/nodeBitmapHeapscan.c\n> @@ -108,6 +108,7 @@ BitmapHeapNext(BitmapHeapScanState *node)\n> \t */\n> \tif (!node->initialized)\n> \t{\n> +\t\tbool can_skip_fetch;\n> \t\t/*\n> \t\t * We can potentially skip fetching heap pages if we do not need any\n> \t\t * columns of the table, either for checking non-indexable quals or\n\nPretty sure pgindent will move this around.\n\n> +++ b/src/include/access/tableam.h\n> @@ -62,6 +62,7 @@ typedef enum ScanOptions\n>\n> \t/* unregister snapshot at scan end? */\n> \tSO_TEMP_SNAPSHOT = 1 << 9,\n> +\tSO_CAN_SKIP_FETCH = 1 << 10,\n> }\t\t\tScanOptions;\n\nWould be nice to add a comment explaining what this flag means.\n\n\n> From 500c84019b982a1e6c8b8dd40240c8510d83c287 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Tue, 13 Feb 2024 10:05:04 -0500\n> Subject: [PATCH v1 07/11] BitmapHeapScan scan desc counts lossy and exact\n> pages\n>\n> Future commits will remove the TBMIterateResult from BitmapHeapNext(),\n> pushing it into the table AM-specific code. So we will have to keep\n> track of the number of lossy and exact pages in the scan descriptor.\n> Doing this change to lossy/exact page counting in a separate commit just\n> simplifies the diff.\n\n> ---\n> src/backend/access/heap/heapam.c | 2 ++\n> src/backend/access/heap/heapam_handler.c | 9 +++++++++\n> src/backend/executor/nodeBitmapHeapscan.c | 18 +++++++++++++-----\n> src/include/access/relscan.h | 4 ++++\n> 4 files changed, 28 insertions(+), 5 deletions(-)\n>\n> diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> index 7aae1ecf0a9..88b4aad5820 100644\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -957,6 +957,8 @@ heap_beginscan(Relation relation, Snapshot snapshot,\n> \tscan->rs_strategy = NULL;\t/* set in initscan */\n> \tscan->vmbuffer = InvalidBuffer;\n> \tscan->empty_tuples = 0;\n> +\tscan->rs_base.lossy_pages = 0;\n> +\tscan->rs_base.exact_pages = 0;\n>\n> \t/*\n> \t * Disable page-at-a-time mode if it's not a MVCC-safe snapshot.\n> diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c\n> index baba09c87c0..6e85ef7a946 100644\n> --- a/src/backend/access/heap/heapam_handler.c\n> +++ b/src/backend/access/heap/heapam_handler.c\n> @@ -2242,6 +2242,15 @@ heapam_scan_bitmap_next_block(TableScanDesc scan,\n> \tAssert(ntup <= MaxHeapTuplesPerPage);\n> \thscan->rs_ntuples = ntup;\n>\n> +\t/* Only count exact and lossy pages with visible tuples */\n> +\tif (ntup > 0)\n> +\t{\n> +\t\tif (tbmres->ntuples >= 0)\n> +\t\t\tscan->exact_pages++;\n> +\t\telse\n> +\t\t\tscan->lossy_pages++;\n> +\t}\n> +\n> \treturn ntup > 0;\n> }\n>\n> diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c\n> index c0fb06c9688..19d115de06f 
100644\n> --- a/src/backend/executor/nodeBitmapHeapscan.c\n> +++ b/src/backend/executor/nodeBitmapHeapscan.c\n> @@ -53,6 +53,8 @@\n> #include \"utils/spccache.h\"\n>\n> static TupleTableSlot *BitmapHeapNext(BitmapHeapScanState *node);\n> +static inline void BitmapAccumCounters(BitmapHeapScanState *node,\n> +\t\t\t\t\t\t\t\t\t TableScanDesc scan);\n> static inline void BitmapDoneInitializingSharedState(ParallelBitmapHeapState *pstate);\n> static inline void BitmapAdjustPrefetchIterator(BitmapHeapScanState *node,\n> \t\t\t\t\t\t\t\t\t\t\t\tBlockNumber blockno);\n> @@ -234,11 +236,6 @@ BitmapHeapNext(BitmapHeapScanState *node)\n> \t\t\t\tcontinue;\n> \t\t\t}\n>\n> -\t\t\tif (tbmres->ntuples >= 0)\n> -\t\t\t\tnode->exact_pages++;\n> -\t\t\telse\n> -\t\t\t\tnode->lossy_pages++;\n> -\n> \t\t\t/* Adjust the prefetch target */\n> \t\t\tBitmapAdjustPrefetchTarget(node);\n> \t\t}\n> @@ -315,9 +312,20 @@ BitmapHeapNext(BitmapHeapScanState *node)\n> \t/*\n> \t * if we get here it means we are at the end of the scan..\n> \t */\n> +\tBitmapAccumCounters(node, scan);\n> \treturn ExecClearTuple(slot);\n> }\n>\n> +static inline void\n> +BitmapAccumCounters(BitmapHeapScanState *node,\n> +\t\t\t\t\tTableScanDesc scan)\n> +{\n> +\tnode->exact_pages += scan->exact_pages;\n> +\tscan->exact_pages = 0;\n> +\tnode->lossy_pages += scan->lossy_pages;\n> +\tscan->lossy_pages = 0;\n> +}\n> +\n\nI don't think this is quite right - you're calling BitmapAccumCounters() only\nwhen the scan doesn't return anything anymore, but there's no guarantee\nthat'll ever be reached. E.g. a bitmap heap scan below a limit node. I think\nthis needs to be in a) ExecEndBitmapHeapScan() b) ExecReScanBitmapHeapScan()\n\n\n> /*\n> *\tBitmapDoneInitializingSharedState - Shared state is initialized\n> *\n> diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h\n> index 521043304ab..b74e08dd745 100644\n> --- a/src/include/access/relscan.h\n> +++ b/src/include/access/relscan.h\n> @@ -40,6 +40,10 @@ typedef struct TableScanDescData\n> \tItemPointerData rs_mintid;\n> \tItemPointerData rs_maxtid;\n>\n> +\t/* Only used for Bitmap table scans */\n> +\tlong\t\texact_pages;\n> +\tlong\t\tlossy_pages;\n> +\n> \t/*\n> \t * Information about type and behaviour of the scan, a bitmask of members\n> \t * of the ScanOptions enum (see tableam.h).\n\nI wonder if this really is the best place for the data to be accumulated. This\nrequires the accounting to be implemented in each AM, which doesn't obviously\nseem required. Why can't the accounting continue to live in\nnodeBitmapHeapscan.c, to be done after each table_scan_bitmap_next_block()\ncall?\n\n\n> From 555743e4bc885609d20768f7f2990c6ba69b13a9 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Tue, 13 Feb 2024 10:57:07 -0500\n> Subject: [PATCH v1 09/11] Make table_scan_bitmap_next_block() async friendly\n>\n> table_scan_bitmap_next_block() previously returned false if we did not\n> wish to call table_scan_bitmap_next_tuple() on the tuples on the page.\n> This could happen when there were no visible tuples on the page or, due\n> to concurrent activity on the table, the block returned by the iterator\n> is past the known end of the table.\n\nThis sounds a bit like the block is actually past the end of the table,\nbut in reality this happens if the block is past the end of the table as it\nwas when the scan was started. 
Somehow that feels significant, but I don't\nreally know why I think that.\n\n\n\n> diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> index 88b4aad5820..d8569373987 100644\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -959,6 +959,8 @@ heap_beginscan(Relation relation, Snapshot snapshot,\n> \tscan->empty_tuples = 0;\n> \tscan->rs_base.lossy_pages = 0;\n> \tscan->rs_base.exact_pages = 0;\n> +\tscan->rs_base.shared_tbmiterator = NULL;\n> +\tscan->rs_base.tbmiterator = NULL;\n>\n> \t/*\n> \t * Disable page-at-a-time mode if it's not a MVCC-safe snapshot.\n> @@ -1051,6 +1053,18 @@ heap_rescan(TableScanDesc sscan, ScanKey key, bool set_params,\n> \t\tReleaseBuffer(scan->vmbuffer);\n> \tscan->vmbuffer = InvalidBuffer;\n>\n> +\tif (scan->rs_base.rs_flags & SO_TYPE_BITMAPSCAN)\n> +\t{\n> +\t\tif (scan->rs_base.shared_tbmiterator)\n> +\t\t\ttbm_end_shared_iterate(scan->rs_base.shared_tbmiterator);\n> +\n> +\t\tif (scan->rs_base.tbmiterator)\n> +\t\t\ttbm_end_iterate(scan->rs_base.tbmiterator);\n> +\t}\n> +\n> +\tscan->rs_base.shared_tbmiterator = NULL;\n> +\tscan->rs_base.tbmiterator = NULL;\n> +\n> \t/*\n> \t * reinitialize scan descriptor\n> \t */\n\nIf every AM would need to implement this, perhaps this shouldn't be done here,\nbut in generic code?\n\n\n> --- a/src/backend/access/heap/heapam_handler.c\n> +++ b/src/backend/access/heap/heapam_handler.c\n> @@ -2114,17 +2114,49 @@ heapam_estimate_rel_size(Relation rel, int32 *attr_widths,\n> \n> static bool\n> heapam_scan_bitmap_next_block(TableScanDesc scan,\n> -\t\t\t\t\t\t\t TBMIterateResult *tbmres)\n> +\t\t\t\t\t\t\t bool *recheck, BlockNumber *blockno)\n> {\n> \tHeapScanDesc hscan = (HeapScanDesc) scan;\n> -\tBlockNumber block = tbmres->blockno;\n> +\tBlockNumber block;\n> \tBuffer\t\tbuffer;\n> \tSnapshot\tsnapshot;\n> \tint\t\t\tntup;\n> +\tTBMIterateResult *tbmres;\n> \n> \thscan->rs_cindex = 0;\n> \thscan->rs_ntuples = 0;\n> \n> +\t*blockno = InvalidBlockNumber;\n> +\t*recheck = true;\n> +\n> +\tdo\n> +\t{\n> +\t\tif (scan->shared_tbmiterator)\n> +\t\t\ttbmres = tbm_shared_iterate(scan->shared_tbmiterator);\n> +\t\telse\n> +\t\t\ttbmres = tbm_iterate(scan->tbmiterator);\n> +\n> +\t\tif (tbmres == NULL)\n> +\t\t{\n> +\t\t\t/* no more entries in the bitmap */\n> +\t\t\tAssert(hscan->empty_tuples == 0);\n> +\t\t\treturn false;\n> +\t\t}\n> +\n> +\t\t/*\n> +\t\t * Ignore any claimed entries past what we think is the end of the\n> +\t\t * relation. It may have been extended after the start of our scan (we\n> +\t\t * only hold an AccessShareLock, and it could be inserts from this\n> +\t\t * backend). We don't take this optimization in SERIALIZABLE\n> +\t\t * isolation though, as we need to examine all invisible tuples\n> +\t\t * reachable by the index.\n> +\t\t */\n> +\t} while (!IsolationIsSerializable() && tbmres->blockno >= hscan->rs_nblocks);\n\nHm. Isn't it a problem that we have no CHECK_FOR_INTERRUPTS() in this loop?\n\n\n> @@ -2251,7 +2274,14 @@ heapam_scan_bitmap_next_block(TableScanDesc scan,\n> \t\t\tscan->lossy_pages++;\n> \t}\n>\n> -\treturn ntup > 0;\n> +\t/*\n> +\t * Return true to indicate that a valid block was found and the bitmap is\n> +\t * not exhausted. 
If there are no visible tuples on this page,\n> +\t * hscan->rs_ntuples will be 0 and heapam_scan_bitmap_next_tuple() will\n> +\t * return false returning control to this function to advance to the next\n> +\t * block in the bitmap.\n> +\t */\n> +\treturn true;\n> }\n\nWhy can't we fetch the next block immediately?\n\n\n\n> @@ -201,46 +197,23 @@ BitmapHeapNext(BitmapHeapScanState *node)\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tcan_skip_fetch);\n> \t\t}\n>\n> -\t\tnode->tbmiterator = tbmiterator;\n> -\t\tnode->shared_tbmiterator = shared_tbmiterator;\n> +\t\tscan->tbmiterator = tbmiterator;\n> +\t\tscan->shared_tbmiterator = shared_tbmiterator;\n\nIt seems a bit odd that this code modifies the scan descriptor, instead of\npassing the iterator, or perhaps better the bitmap itself, to\ntable_beginscan_bm()?\n\n\n\n> diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h\n> index b74e08dd745..bf7ee044268 100644\n> --- a/src/include/access/relscan.h\n> +++ b/src/include/access/relscan.h\n> @@ -16,6 +16,7 @@\n>\n> #include \"access/htup_details.h\"\n> #include \"access/itup.h\"\n> +#include \"nodes/tidbitmap.h\"\n\nI'd like to avoid exposing this to everything including relscan.h. I think we\ncould just forward declare the structs and use them here to avoid that?\n\n\n\n\n\n> From aac60985d6bc70bfedf77a77ee3c512da87bfcb1 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Tue, 13 Feb 2024 14:27:57 -0500\n> Subject: [PATCH v1 11/11] BitmapHeapScan uses streaming read API\n>\n> Remove all of the code to do prefetching from BitmapHeapScan code and\n> rely on the streaming read API prefetching. Heap table AM implements a\n> streaming read callback which uses the iterator to get the next valid\n> block that needs to be fetched for the streaming read API.\n> ---\n> src/backend/access/gin/ginget.c | 15 +-\n> src/backend/access/gin/ginscan.c | 7 +\n> src/backend/access/heap/heapam.c | 71 +++++\n> src/backend/access/heap/heapam_handler.c | 78 +++--\n> src/backend/executor/nodeBitmapHeapscan.c | 328 +---------------------\n> src/backend/nodes/tidbitmap.c | 80 +++---\n> src/include/access/heapam.h | 2 +\n> src/include/access/tableam.h | 14 +-\n> src/include/nodes/execnodes.h | 19 --\n> src/include/nodes/tidbitmap.h | 8 +-\n> 10 files changed, 178 insertions(+), 444 deletions(-)\n>\n> diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c\n> index 0b4f2ebadb6..3ce28078a6f 100644\n> --- a/src/backend/access/gin/ginget.c\n> +++ b/src/backend/access/gin/ginget.c\n> @@ -373,7 +373,10 @@ restartScanEntry:\n> \t\t\tif (entry->matchBitmap)\n> \t\t\t{\n> \t\t\t\tif (entry->matchIterator)\n> +\t\t\t\t{\n> \t\t\t\t\ttbm_end_iterate(entry->matchIterator);\n> +\t\t\t\t\tpfree(entry->matchResult);\n> +\t\t\t\t}\n> \t\t\t\tentry->matchIterator = NULL;\n> \t\t\t\ttbm_free(entry->matchBitmap);\n> \t\t\t\tentry->matchBitmap = NULL;\n> @@ -386,6 +389,7 @@ restartScanEntry:\n> \t\tif (entry->matchBitmap && !tbm_is_empty(entry->matchBitmap))\n> \t\t{\n> \t\t\tentry->matchIterator = tbm_begin_iterate(entry->matchBitmap);\n> +\t\t\tentry->matchResult = palloc0(TBM_ITERATE_RESULT_SIZE);\n\nDo we actually have to use palloc0? TBM_ITERATE_RESULT_SIZE ain't small, so\nzeroing all of it isn't free.\n\n\n> +static BlockNumber bitmapheap_pgsr_next_single(PgStreamingRead *pgsr, void *pgsr_private,\n> +\t\t\t\t\t\t\tvoid *per_buffer_data);\n\nIs it correct to have _single in the name here? 
Aren't we also using for\nparallel scans?\n\n\n> +static BlockNumber\n> +bitmapheap_pgsr_next_single(PgStreamingRead *pgsr, void *pgsr_private,\n> +\t\t\t\t\t\t\tvoid *per_buffer_data)\n> +{\n> +\tTBMIterateResult *tbmres = per_buffer_data;\n> +\tHeapScanDesc hdesc = (HeapScanDesc) pgsr_private;\n> +\n> +\tfor (;;)\n> +\t{\n> +\t\tif (hdesc->rs_base.shared_tbmiterator)\n> +\t\t\ttbm_shared_iterate(hdesc->rs_base.shared_tbmiterator, tbmres);\n> +\t\telse\n> +\t\t\ttbm_iterate(hdesc->rs_base.tbmiterator, tbmres);\n> +\n> +\t\t/* no more entries in the bitmap */\n> +\t\tif (!BlockNumberIsValid(tbmres->blockno))\n> +\t\t\treturn InvalidBlockNumber;\n> +\n> +\t\t/*\n> +\t\t * Ignore any claimed entries past what we think is the end of the\n> +\t\t * relation. It may have been extended after the start of our scan (we\n> +\t\t * only hold an AccessShareLock, and it could be inserts from this\n> +\t\t * backend). We don't take this optimization in SERIALIZABLE\n> +\t\t * isolation though, as we need to examine all invisible tuples\n> +\t\t * reachable by the index.\n> +\t\t */\n> +\t\tif (!IsolationIsSerializable() && tbmres->blockno >= hdesc->rs_nblocks)\n> +\t\t\tcontinue;\n> +\n> +\n> +\t\tif (hdesc->rs_base.rs_flags & SO_CAN_SKIP_FETCH &&\n> +\t\t\t!tbmres->recheck &&\n> +\t\t\tVM_ALL_VISIBLE(hdesc->rs_base.rs_rd, tbmres->blockno, &hdesc->vmbuffer))\n> +\t\t{\n> +\t\t\thdesc->empty_tuples += tbmres->ntuples;\n> +\t\t\tcontinue;\n> +\t\t}\n> +\n> +\t\treturn tbmres->blockno;\n> +\t}\n> +\n> +\t/* not reachable */\n> +\tAssert(false);\n> +}\n\nNeed to check for interrupts somewhere here.\n\n\n> @@ -124,15 +119,6 @@ BitmapHeapNext(BitmapHeapScanState *node)\n\nThere's still a comment in BitmapHeapNext talking about prefetching with two\niterators etc. That seems outdated now.\n\n\n> /*\n> * tbm_iterate - scan through next page of a TIDBitmap\n> *\n> - * Returns a TBMIterateResult representing one page, or NULL if there are\n> - * no more pages to scan. Pages are guaranteed to be delivered in numerical\n> - * order. If result->ntuples < 0, then the bitmap is \"lossy\" and failed to\n> - * remember the exact tuples to look at on this page --- the caller must\n> - * examine all tuples on the page and check if they meet the intended\n> - * condition. If result->recheck is true, only the indicated tuples need\n> - * be examined, but the condition must be rechecked anyway. (For ease of\n> - * testing, recheck is always set true when ntuples < 0.)\n> + * Caller must pass in a TBMIterateResult to be filled.\n> + *\n> + * Pages are guaranteed to be delivered in numerical order. tbmres->blockno is\n> + * set to InvalidBlockNumber when there are no more pages to scan. If\n> + * tbmres->ntuples < 0, then the bitmap is \"lossy\" and failed to remember the\n> + * exact tuples to look at on this page --- the caller must examine all tuples\n> + * on the page and check if they meet the intended condition. If\n> + * tbmres->recheck is true, only the indicated tuples need be examined, but the\n> + * condition must be rechecked anyway. (For ease of testing, recheck is always\n> + * set true when ntuples < 0.)\n> */\n> -TBMIterateResult *\n> -tbm_iterate(TBMIterator *iterator)\n> +void\n> +tbm_iterate(TBMIterator *iterator, TBMIterateResult *tbmres)\n\nHm - it seems a tad odd that we later have to find out if the scan is done\niterating by checking if blockno is valid, when tbm_iterate already knew. 
But\nI guess the code would be a bit uglier if we needed the result of\ntbm_[shared_]iterate(), due to the two functions.\n\n\nRight now ExecEndBitmapHeapScan() frees the tbm before it does table_endscan()\n- which seems problematic, as heap_endscan() will do stuff like\ntbm_end_iterate(), which imo shouldn't be called after the tbm has been freed,\neven if that works today.\n\n\nIt seems a bit confusing that your changes seem to treat\nBitmapHeapScanState->initialized as separate from ->scan, even though afaict\nscan should be NULL iff initialized is false and vice versa.\n\n\nIndependent of your patches, but brr, it's ugly that\nBitmapShouldInitializeSharedState() blocks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 14 Feb 2024 11:42:28 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
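The "bitmap heap scan below a limit node" concern raised above generalizes to any pull-based executor: code placed at the "iterator is exhausted" exit of a Next-style function simply never runs if the consumer stops pulling early. A toy illustration follows; nothing here is PostgreSQL code, and all names are invented.

```c
#include <stdio.h>

static long pages_seen = 0;		/* per-scan counter, analogous to lossy/exact */
static long pages_reported = 0; /* what instrumentation would actually show */

/* Next(): returns values 1..10, flushing the counter only at exhaustion. */
static int
scan_next(int *state)
{
	if (*state >= 10)
	{
		pages_reported += pages_seen;	/* only runs if we drain the scan */
		pages_seen = 0;
		return -1;
	}
	pages_seen++;
	return ++(*state);
}

int
main(void)
{
	int			state = 0;

	/* A LIMIT-like consumer: stops after 3 rows and never drains the scan. */
	for (int i = 0; i < 3; i++)
		scan_next(&state);

	printf("seen=%ld reported=%ld\n", pages_seen, pages_reported);
	/* Prints seen=3 reported=0: the flush at end-of-scan never ran, which is
	 * why such accounting belongs in EndScan/ReScan-style teardown paths. */
	return 0;
}
```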
{
"msg_contents": "Thank you so much for this thorough review!!!!\n\nOn Wed, Feb 14, 2024 at 2:42 PM Andres Freund <[email protected]> wrote:\n>\n>\n> On 2024-02-13 18:11:25 -0500, Melanie Plageman wrote:\n>\n> > From d6dd6eb21dcfbc41208f87d1d81ffe3960130889 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Mon, 12 Feb 2024 18:50:29 -0500\n> > Subject: [PATCH v1 03/11] BitmapHeapScan begin scan after bitmap setup\n> >\n> > There is no reason for table_beginscan_bm() to begin the actual scan of\n> > the underlying table in ExecInitBitmapHeapScan(). We can begin the\n> > underlying table scan after the index scan has been completed and the\n> > bitmap built.\n> >\n> > The one use of the scan descriptor during initialization was\n> > ExecBitmapHeapInitializeWorker(), which set the scan descriptor snapshot\n> > with one from an array in the parallel state. This overwrote the\n> > snapshot set in table_beginscan_bm().\n> >\n> > By saving that worker snapshot as a member in the BitmapHeapScanState\n> > during initialization, it can be restored in table_beginscan_bm() after\n> > returning from the table AM specific begin scan function.\n>\n> I don't understand what the point of passing two different snapshots to\n> table_beginscan_bm() is. What does that even mean? Why can't we just use the\n> correct snapshot initially?\n\nIndeed. Honestly, it was an unlabeled TODO for me. I wasn't quite sure\nhow to get the same behavior as in master. Fixed in attached v2.\n\nNow the parallel worker still restores and registers that snapshot in\nExecBitmapHeapInitializeWorker() and then saves it in the\nBitmapHeapScanState. We then pass SO_TEMP_SNAPSHOT as an extra flag\n(to set rs_flags) to table_beginscan_bm() if there is a parallel\nworker snapshot saved in the BitmapHeapScanState.\n\n> > From a3f62e4299663d418531ae61bb16ea39f0836fac Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Mon, 12 Feb 2024 19:03:24 -0500\n> > Subject: [PATCH v1 04/11] BitmapPrefetch use prefetch block recheck for skip\n> > fetch\n> >\n> > Previously BitmapPrefetch() used the recheck flag for the current block\n> > to determine whether or not it could skip prefetching the proposed\n> > prefetch block. It makes more sense for it to use the recheck flag from\n> > the TBMIterateResult for the prefetch block instead.\n>\n> I'd mention the commit that introduced the current logic and link to the\n> the thread that you started about this.\n\nDone\n\n> > From d56be7741765d93002649ef912ef4b8256a5b9af Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Mon, 12 Feb 2024 19:04:48 -0500\n> > Subject: [PATCH v1 05/11] Update BitmapAdjustPrefetchIterator parameter type\n> > to BlockNumber\n> >\n> > BitmapAdjustPrefetchIterator() only used the blockno member of the\n> > passed in TBMIterateResult to ensure that the prefetch iterator and\n> > regular iterator stay in sync. Pass it the BlockNumber only. This will\n> > allow us to move away from using the TBMIterateResult outside of table\n> > AM specific code.\n>\n> Hm - I'm not convinced this is a good direction - doesn't that arguably\n> *increase* TAM awareness? Perhaps it doesn't make much sense to use bitmap\n> heap scans in a TAM without blocks, but still.\n\nThis is removed in later commits and is an intermediate state to try\nand move the TBMIterateResult out of BitmapHeapNext(). 
I can find\nanother way to achieve this if it is important.\n\n> > From 202b16d3a381210e8dbee69e68a8310be8ee11d2 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Mon, 12 Feb 2024 20:15:05 -0500\n> > Subject: [PATCH v1 06/11] Push BitmapHeapScan skip fetch optimization into\n> > table AM\n> >\n> > This resolves the long-standing FIXME in BitmapHeapNext() which said that\n> > the optmization to skip fetching blocks of the underlying table when\n> > none of the column data was needed should be pushed into the table AM\n> > specific code.\n>\n> Long-standing? Sure, it's old enough to walk, but we have FIXMEs that are old\n> enough to drink, at least in some countries. :)\n\n;) I've updated the commit message. Though it is longstanding in that\nit predates Melanie + Postgres.\n\n> > The table AM agnostic functions for prefetching still need to know if\n> > skipping fetching is permitted for this scan. However, this dependency\n> > will be removed when that prefetching code is removed in favor of the\n> > upcoming streaming read API.\n>\n> > ---\n> > src/backend/access/heap/heapam.c | 10 +++\n> > src/backend/access/heap/heapam_handler.c | 29 +++++++\n> > src/backend/executor/nodeBitmapHeapscan.c | 100 ++++++----------------\n> > src/include/access/heapam.h | 2 +\n> > src/include/access/tableam.h | 17 ++--\n> > src/include/nodes/execnodes.h | 6 --\n> > 6 files changed, 74 insertions(+), 90 deletions(-)\n> >\n> > diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> > index 707460a5364..7aae1ecf0a9 100644\n> > --- a/src/backend/access/heap/heapam.c\n> > +++ b/src/backend/access/heap/heapam.c\n> > @@ -955,6 +955,8 @@ heap_beginscan(Relation relation, Snapshot snapshot,\n> > scan->rs_base.rs_flags = flags;\n> > scan->rs_base.rs_parallel = parallel_scan;\n> > scan->rs_strategy = NULL; /* set in initscan */\n> > + scan->vmbuffer = InvalidBuffer;\n> > + scan->empty_tuples = 0;\n>\n> These don't follow the existing naming pattern for HeapScanDescData. While I\n> explicitly dislike the practice of adding prefixes to struct members, I don't\n> think mixing conventions within a single struct improves things.\n\nI've updated the names. What does rs even stand for?\n\n> I also think it'd be good to note in comments that the vm buffer currently is\n> only used for bitmap heap scans, otherwise one might think they'd also be used\n> for normal scans, where we don't need them, because of the page level flag.\n\nDone.\n\n> Also, perhaps worth renaming \"empty_tuples\" to something indicating that it's\n> the number of empty tuples to be returned later? num_empty_tuples_pending or\n> such? 
Or the current \"return_empty_tuples\".\n\nDone.\n\n> > @@ -1043,6 +1045,10 @@ heap_rescan(TableScanDesc sscan, ScanKey key, bool set_params,\n> > if (BufferIsValid(scan->rs_cbuf))\n> > ReleaseBuffer(scan->rs_cbuf);\n> >\n> > + if (BufferIsValid(scan->vmbuffer))\n> > + ReleaseBuffer(scan->vmbuffer);\n> > + scan->vmbuffer = InvalidBuffer;\n>\n> It does not matter one iota here, but personally I prefer moving the write\n> inside the if, as dirtying the cacheline after we just figured out whe\n\nI've now followed this convention throughout my patchset in the places\nwhere I noticed it.\n\n> > diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c\n> > index 9372b49bfaa..c0fb06c9688 100644\n> > --- a/src/backend/executor/nodeBitmapHeapscan.c\n> > +++ b/src/backend/executor/nodeBitmapHeapscan.c\n> > @@ -108,6 +108,7 @@ BitmapHeapNext(BitmapHeapScanState *node)\n> > */\n> > if (!node->initialized)\n> > {\n> > + bool can_skip_fetch;\n> > /*\n> > * We can potentially skip fetching heap pages if we do not need any\n> > * columns of the table, either for checking non-indexable quals or\n>\n> Pretty sure pgindent will move this around.\n\nThis is gone now, but I have pgindented all the commits so it\nshouldn't be a problem again.\n\n> > +++ b/src/include/access/tableam.h\n> > @@ -62,6 +62,7 @@ typedef enum ScanOptions\n> >\n> > /* unregister snapshot at scan end? */\n> > SO_TEMP_SNAPSHOT = 1 << 9,\n> > + SO_CAN_SKIP_FETCH = 1 << 10,\n> > } ScanOptions;\n>\n> Would be nice to add a comment explaining what this flag means.\n\nDone.\n\n> > From 500c84019b982a1e6c8b8dd40240c8510d83c287 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Tue, 13 Feb 2024 10:05:04 -0500\n> > Subject: [PATCH v1 07/11] BitmapHeapScan scan desc counts lossy and exact\n> > pages\n> >\n> > Future commits will remove the TBMIterateResult from BitmapHeapNext(),\n> > pushing it into the table AM-specific code. 
So we will have to keep\n> > track of the number of lossy and exact pages in the scan descriptor.\n> > Doing this change to lossy/exact page counting in a separate commit just\n> > simplifies the diff.\n>\n> > ---\n> > src/backend/access/heap/heapam.c | 2 ++\n> > src/backend/access/heap/heapam_handler.c | 9 +++++++++\n> > src/backend/executor/nodeBitmapHeapscan.c | 18 +++++++++++++-----\n> > src/include/access/relscan.h | 4 ++++\n> > 4 files changed, 28 insertions(+), 5 deletions(-)\n> >\n> > diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> > index 7aae1ecf0a9..88b4aad5820 100644\n> > --- a/src/backend/access/heap/heapam.c\n> > +++ b/src/backend/access/heap/heapam.c\n> > @@ -957,6 +957,8 @@ heap_beginscan(Relation relation, Snapshot snapshot,\n> > scan->rs_strategy = NULL; /* set in initscan */\n> > scan->vmbuffer = InvalidBuffer;\n> > scan->empty_tuples = 0;\n> > + scan->rs_base.lossy_pages = 0;\n> > + scan->rs_base.exact_pages = 0;\n> >\n> > /*\n> > * Disable page-at-a-time mode if it's not a MVCC-safe snapshot.\n> > diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c\n> > index baba09c87c0..6e85ef7a946 100644\n> > --- a/src/backend/access/heap/heapam_handler.c\n> > +++ b/src/backend/access/heap/heapam_handler.c\n> > @@ -2242,6 +2242,15 @@ heapam_scan_bitmap_next_block(TableScanDesc scan,\n> > Assert(ntup <= MaxHeapTuplesPerPage);\n> > hscan->rs_ntuples = ntup;\n> >\n> > + /* Only count exact and lossy pages with visible tuples */\n> > + if (ntup > 0)\n> > + {\n> > + if (tbmres->ntuples >= 0)\n> > + scan->exact_pages++;\n> > + else\n> > + scan->lossy_pages++;\n> > + }\n> > +\n> > return ntup > 0;\n> > }\n> >\n> > diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c\n> > index c0fb06c9688..19d115de06f 100644\n> > --- a/src/backend/executor/nodeBitmapHeapscan.c\n> > +++ b/src/backend/executor/nodeBitmapHeapscan.c\n> > @@ -53,6 +53,8 @@\n> > #include \"utils/spccache.h\"\n> >\n> > static TupleTableSlot *BitmapHeapNext(BitmapHeapScanState *node);\n> > +static inline void BitmapAccumCounters(BitmapHeapScanState *node,\n> > + TableScanDesc scan);\n> > static inline void BitmapDoneInitializingSharedState(ParallelBitmapHeapState *pstate);\n> > static inline void BitmapAdjustPrefetchIterator(BitmapHeapScanState *node,\n> > BlockNumber blockno);\n> > @@ -234,11 +236,6 @@ BitmapHeapNext(BitmapHeapScanState *node)\n> > continue;\n> > }\n> >\n> > - if (tbmres->ntuples >= 0)\n> > - node->exact_pages++;\n> > - else\n> > - node->lossy_pages++;\n> > -\n> > /* Adjust the prefetch target */\n> > BitmapAdjustPrefetchTarget(node);\n> > }\n> > @@ -315,9 +312,20 @@ BitmapHeapNext(BitmapHeapScanState *node)\n> > /*\n> > * if we get here it means we are at the end of the scan..\n> > */\n> > + BitmapAccumCounters(node, scan);\n> > return ExecClearTuple(slot);\n> > }\n> >\n> > +static inline void\n> > +BitmapAccumCounters(BitmapHeapScanState *node,\n> > + TableScanDesc scan)\n> > +{\n> > + node->exact_pages += scan->exact_pages;\n> > + scan->exact_pages = 0;\n> > + node->lossy_pages += scan->lossy_pages;\n> > + scan->lossy_pages = 0;\n> > +}\n> > +\n>\n> I don't think this is quite right - you're calling BitmapAccumCounters() only\n> when the scan doesn't return anything anymore, but there's no guarantee\n> that'll ever be reached. E.g. a bitmap heap scan below a limit node. 
I think\n> this needs to be in a) ExecEndBitmapHeapScan() b) ExecReScanBitmapHeapScan()\n\nThe scan descriptor isn't available in ExecEnd/ReScanBitmapHeapScan().\nSo, if we count in the scan descriptor we can't accumulate into the\nBitmapHeapScanState there. The reason to count in the scan descriptor\nis that it is in the table AM where we know if we have a lossy or\nexact page -- and we only have the scan descriptor not the\nBitmapHeapScanState in the table AM.\n\nI added a call to BitmapAccumCounters before the tuple is returned for\ncorrectness in this version (not ideal, I realize). See below for\nthoughts about what we could do instead.\n\n> > /*\n> > * BitmapDoneInitializingSharedState - Shared state is initialized\n> > *\n> > diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h\n> > index 521043304ab..b74e08dd745 100644\n> > --- a/src/include/access/relscan.h\n> > +++ b/src/include/access/relscan.h\n> > @@ -40,6 +40,10 @@ typedef struct TableScanDescData\n> > ItemPointerData rs_mintid;\n> > ItemPointerData rs_maxtid;\n> >\n> > + /* Only used for Bitmap table scans */\n> > + long exact_pages;\n> > + long lossy_pages;\n> > +\n> > /*\n> > * Information about type and behaviour of the scan, a bitmask of members\n> > * of the ScanOptions enum (see tableam.h).\n>\n> I wonder if this really is the best place for the data to be accumulated. This\n> requires the accounting to be implemented in each AM, which doesn't obviously\n> seem required. Why can't the accounting continue to live in\n> nodeBitmapHeapscan.c, to be done after each table_scan_bitmap_next_block()\n> call?\n\nYes, I would really prefer not to do it in the table AM. But, we only\ncount exact and lossy pages for which at least one or more tuples were\nvisible (change this and you'll see tests fail). So, we need to decide\nif we are going to increment the counters somewhere where we have\naccess to that information. In the case of heap, that is really only\nonce I have the value of ntup in heapam_scan_bitmap_next_block(). To\nget that information back out to BitmapHeapNext(), I considered adding\nanother parameter to heapam_scan_bitmap_next_block() -- maybe an enum\nlike this:\n\n/*\n * BitmapHeapScans's bitmaps can choose to store per page information in a\n * lossy or exact way. Exact pages in the bitmap have the individual tuple\n * offsets that need to be visited while lossy pages in the bitmap have only the\n * block number of the page.\n */\ntypedef enum BitmapBlockResolution\n{\n BITMAP_BLOCK_NO_VISIBLE,\n BITMAP_BLOCK_LOSSY,\n BITMAP_BLOCK_EXACT,\n} BitmapBlockResolution;\n\nwhich we then use to increment the counter. 
But while I was writing\nthis code, I found myself narrating in the comment that the reason\nthis had to be set inside of the table AM is that only the table AM\nknows if it wants to count the block as lossy, exact, or not count it.\nSo, that made me question if it really should be in the\nBitmapHeapScanState.\n\nI also explored passing the table scan descriptor to\nshow_tidbitmap_info() -- but that had its own problems.\n\n> > From 555743e4bc885609d20768f7f2990c6ba69b13a9 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Tue, 13 Feb 2024 10:57:07 -0500\n> > Subject: [PATCH v1 09/11] Make table_scan_bitmap_next_block() async friendly\n> >\n> > table_scan_bitmap_next_block() previously returned false if we did not\n> > wish to call table_scan_bitmap_next_tuple() on the tuples on the page.\n> > This could happen when there were no visible tuples on the page or, due\n> > to concurrent activity on the table, the block returned by the iterator\n> > is past the known end of the table.\n>\n> This sounds a bit like the block is actually past the end of the table,\n> but in reality this happens if the block is past the end of the table as it\n> was when the scan was started. Somehow that feels significant, but I don't\n> really know why I think that.\n\nI have tried to update the commit message to make it clearer. I was\nactually wondering: now that we do table_beginscan_bm() in\nBitmapHeapNext() instead of ExecInitBitmapHeapScan(), have we reduced\nor eliminated the opportunity for this to be true? initscan() sets\nrs_nblocks and that now happens much later.\n\n> > diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> > index 88b4aad5820..d8569373987 100644\n> > --- a/src/backend/access/heap/heapam.c\n> > +++ b/src/backend/access/heap/heapam.c\n> > @@ -959,6 +959,8 @@ heap_beginscan(Relation relation, Snapshot snapshot,\n> > scan->empty_tuples = 0;\n> > scan->rs_base.lossy_pages = 0;\n> > scan->rs_base.exact_pages = 0;\n> > + scan->rs_base.shared_tbmiterator = NULL;\n> > + scan->rs_base.tbmiterator = NULL;\n> >\n> > /*\n> > * Disable page-at-a-time mode if it's not a MVCC-safe snapshot.\n> > @@ -1051,6 +1053,18 @@ heap_rescan(TableScanDesc sscan, ScanKey key, bool set_params,\n> > ReleaseBuffer(scan->vmbuffer);\n> > scan->vmbuffer = InvalidBuffer;\n> >\n> > + if (scan->rs_base.rs_flags & SO_TYPE_BITMAPSCAN)\n> > + {\n> > + if (scan->rs_base.shared_tbmiterator)\n> > + tbm_end_shared_iterate(scan->rs_base.shared_tbmiterator);\n> > +\n> > + if (scan->rs_base.tbmiterator)\n> > + tbm_end_iterate(scan->rs_base.tbmiterator);\n> > + }\n> > +\n> > + scan->rs_base.shared_tbmiterator = NULL;\n> > + scan->rs_base.tbmiterator = NULL;\n> > +\n> > /*\n> > * reinitialize scan descriptor\n> > */\n>\n> If every AM would need to implement this, perhaps this shouldn't be done here,\n> but in generic code?\n\nFixed.\n\n> > --- a/src/backend/access/heap/heapam_handler.c\n> > +++ b/src/backend/access/heap/heapam_handler.c\n> > @@ -2114,17 +2114,49 @@ heapam_estimate_rel_size(Relation rel, int32 *attr_widths,\n> >\n> > static bool\n> > heapam_scan_bitmap_next_block(TableScanDesc scan,\n> > - TBMIterateResult *tbmres)\n> > + bool *recheck, BlockNumber *blockno)\n> > {\n> > HeapScanDesc hscan = (HeapScanDesc) scan;\n> > - BlockNumber block = tbmres->blockno;\n> > + BlockNumber block;\n> > Buffer buffer;\n> > Snapshot snapshot;\n> > int ntup;\n> > + TBMIterateResult *tbmres;\n> >\n> > hscan->rs_cindex = 0;\n> > hscan->rs_ntuples = 0;\n> >\n> > + *blockno = 
InvalidBlockNumber;\n> > + *recheck = true;\n> > +\n> > + do\n> > + {\n> > + if (scan->shared_tbmiterator)\n> > + tbmres = tbm_shared_iterate(scan->shared_tbmiterator);\n> > + else\n> > + tbmres = tbm_iterate(scan->tbmiterator);\n> > +\n> > + if (tbmres == NULL)\n> > + {\n> > + /* no more entries in the bitmap */\n> > + Assert(hscan->empty_tuples == 0);\n> > + return false;\n> > + }\n> > +\n> > + /*\n> > + * Ignore any claimed entries past what we think is the end of the\n> > + * relation. It may have been extended after the start of our scan (we\n> > + * only hold an AccessShareLock, and it could be inserts from this\n> > + * backend). We don't take this optimization in SERIALIZABLE\n> > + * isolation though, as we need to examine all invisible tuples\n> > + * reachable by the index.\n> > + */\n> > + } while (!IsolationIsSerializable() && tbmres->blockno >= hscan->rs_nblocks);\n>\n> Hm. Isn't it a problem that we have no CHECK_FOR_INTERRUPTS() in this loop?\n\nYes. fixed.\n\n> > @@ -2251,7 +2274,14 @@ heapam_scan_bitmap_next_block(TableScanDesc scan,\n> > scan->lossy_pages++;\n> > }\n> >\n> > - return ntup > 0;\n> > + /*\n> > + * Return true to indicate that a valid block was found and the bitmap is\n> > + * not exhausted. If there are no visible tuples on this page,\n> > + * hscan->rs_ntuples will be 0 and heapam_scan_bitmap_next_tuple() will\n> > + * return false returning control to this function to advance to the next\n> > + * block in the bitmap.\n> > + */\n> > + return true;\n> > }\n>\n> Why can't we fetch the next block immediately?\n\nWe don't know that we want another block until we've gone through this\npage and seen there were no visible tuples, so we'd somehow have to\njump back up to the top of the function to get the next block -- which\nis basically what is happening in my revised control flow. We call\nheapam_scan_bitmap_next_tuple() and rs_ntuples is 0, so we end up\ncalling heapam_scan_bitmap_next_block() right away.\n\n> > @@ -201,46 +197,23 @@ BitmapHeapNext(BitmapHeapScanState *node)\n> > can_skip_fetch);\n> > }\n> >\n> > - node->tbmiterator = tbmiterator;\n> > - node->shared_tbmiterator = shared_tbmiterator;\n> > + scan->tbmiterator = tbmiterator;\n> > + scan->shared_tbmiterator = shared_tbmiterator;\n>\n> It seems a bit odd that this code modifies the scan descriptor, instead of\n> passing the iterator, or perhaps better the bitmap itself, to\n> table_beginscan_bm()?\n\nOn rescan we actually will have initialized = false and make new\niterators but have the old scan descriptor. So, we need to be able to\nset the iterator in the scan to the new iterator.\n\n> > diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h\n> > index b74e08dd745..bf7ee044268 100644\n> > --- a/src/include/access/relscan.h\n> > +++ b/src/include/access/relscan.h\n> > @@ -16,6 +16,7 @@\n> >\n> > #include \"access/htup_details.h\"\n> > #include \"access/itup.h\"\n> > +#include \"nodes/tidbitmap.h\"\n>\n> I'd like to avoid exposing this to everything including relscan.h. I think we\n> could just forward declare the structs and use them here to avoid that?\n\nDone\n\n> > From aac60985d6bc70bfedf77a77ee3c512da87bfcb1 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Tue, 13 Feb 2024 14:27:57 -0500\n> > Subject: [PATCH v1 11/11] BitmapHeapScan uses streaming read API\n> >\n> > Remove all of the code to do prefetching from BitmapHeapScan code and\n> > rely on the streaming read API prefetching. 
Heap table AM implements a\n> > streaming read callback which uses the iterator to get the next valid\n> > block that needs to be fetched for the streaming read API.\n> > ---\n> > src/backend/access/gin/ginget.c | 15 +-\n> > src/backend/access/gin/ginscan.c | 7 +\n> > src/backend/access/heap/heapam.c | 71 +++++\n> > src/backend/access/heap/heapam_handler.c | 78 +++--\n> > src/backend/executor/nodeBitmapHeapscan.c | 328 +---------------------\n> > src/backend/nodes/tidbitmap.c | 80 +++---\n> > src/include/access/heapam.h | 2 +\n> > src/include/access/tableam.h | 14 +-\n> > src/include/nodes/execnodes.h | 19 --\n> > src/include/nodes/tidbitmap.h | 8 +-\n> > 10 files changed, 178 insertions(+), 444 deletions(-)\n> >\n> > diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c\n> > index 0b4f2ebadb6..3ce28078a6f 100644\n> > --- a/src/backend/access/gin/ginget.c\n> > +++ b/src/backend/access/gin/ginget.c\n> > @@ -373,7 +373,10 @@ restartScanEntry:\n> > if (entry->matchBitmap)\n> > {\n> > if (entry->matchIterator)\n> > + {\n> > tbm_end_iterate(entry->matchIterator);\n> > + pfree(entry->matchResult);\n> > + }\n> > entry->matchIterator = NULL;\n> > tbm_free(entry->matchBitmap);\n> > entry->matchBitmap = NULL;\n> > @@ -386,6 +389,7 @@ restartScanEntry:\n> > if (entry->matchBitmap && !tbm_is_empty(entry->matchBitmap))\n> > {\n> > entry->matchIterator = tbm_begin_iterate(entry->matchBitmap);\n> > + entry->matchResult = palloc0(TBM_ITERATE_RESULT_SIZE);\n>\n> Do we actually have to use palloc0? TBM_ITERATE_RESULT_SIZE ain't small, so\n> zeroing all of it isn't free.\n\nTests actually did fail when I didn't use palloc0.\n\nThis code is different now though. There are a few new patches in v2\nthat 1) make the offsets array in the TBMIterateResult fixed size and\nthen this makes it possible to 2) make matchResult an inline member of\nthe GinScanEntry. I have a TODO in the code asking if setting blockno\nin the TBMIterateResult to InvalidBlockNumber is sufficient\n\"resetting\".\n\n> > +static BlockNumber bitmapheap_pgsr_next_single(PgStreamingRead *pgsr, void *pgsr_private,\n> > + void *per_buffer_data);\n>\n> Is it correct to have _single in the name here? Aren't we also using for\n> parallel scans?\n\nRight. I had a separate parallel version and then deleted it. This is now fixed.\n\n> > +static BlockNumber\n> > +bitmapheap_pgsr_next_single(PgStreamingRead *pgsr, void *pgsr_private,\n> > + void *per_buffer_data)\n> > +{\n> > + TBMIterateResult *tbmres = per_buffer_data;\n> > + HeapScanDesc hdesc = (HeapScanDesc) pgsr_private;\n> > +\n> > + for (;;)\n> > + {\n> > + if (hdesc->rs_base.shared_tbmiterator)\n> > + tbm_shared_iterate(hdesc->rs_base.shared_tbmiterator, tbmres);\n> > + else\n> > + tbm_iterate(hdesc->rs_base.tbmiterator, tbmres);\n> > +\n> > + /* no more entries in the bitmap */\n> > + if (!BlockNumberIsValid(tbmres->blockno))\n> > + return InvalidBlockNumber;\n> > +\n> > + /*\n> > + * Ignore any claimed entries past what we think is the end of the\n> > + * relation. It may have been extended after the start of our scan (we\n> > + * only hold an AccessShareLock, and it could be inserts from this\n> > + * backend). 
We don't take this optimization in SERIALIZABLE\n> > + * isolation though, as we need to examine all invisible tuples\n> > + * reachable by the index.\n> > + */\n> > + if (!IsolationIsSerializable() && tbmres->blockno >= hdesc->rs_nblocks)\n> > + continue;\n> > +\n> > +\n> > + if (hdesc->rs_base.rs_flags & SO_CAN_SKIP_FETCH &&\n> > + !tbmres->recheck &&\n> > + VM_ALL_VISIBLE(hdesc->rs_base.rs_rd, tbmres->blockno, &hdesc->vmbuffer))\n> > + {\n> > + hdesc->empty_tuples += tbmres->ntuples;\n> > + continue;\n> > + }\n> > +\n> > + return tbmres->blockno;\n> > + }\n> > +\n> > + /* not reachable */\n> > + Assert(false);\n> > +}\n>\n> Need to check for interrupts somewhere here.\n\nDone.\n\n> > @@ -124,15 +119,6 @@ BitmapHeapNext(BitmapHeapScanState *node)\n>\n> There's still a comment in BitmapHeapNext talking about prefetching with two\n> iterators etc. That seems outdated now.\n\nFixed.\n\n> > /*\n> > * tbm_iterate - scan through next page of a TIDBitmap\n> > *\n> > - * Returns a TBMIterateResult representing one page, or NULL if there are\n> > - * no more pages to scan. Pages are guaranteed to be delivered in numerical\n> > - * order. If result->ntuples < 0, then the bitmap is \"lossy\" and failed to\n> > - * remember the exact tuples to look at on this page --- the caller must\n> > - * examine all tuples on the page and check if they meet the intended\n> > - * condition. If result->recheck is true, only the indicated tuples need\n> > - * be examined, but the condition must be rechecked anyway. (For ease of\n> > - * testing, recheck is always set true when ntuples < 0.)\n> > + * Caller must pass in a TBMIterateResult to be filled.\n> > + *\n> > + * Pages are guaranteed to be delivered in numerical order. tbmres->blockno is\n> > + * set to InvalidBlockNumber when there are no more pages to scan. If\n> > + * tbmres->ntuples < 0, then the bitmap is \"lossy\" and failed to remember the\n> > + * exact tuples to look at on this page --- the caller must examine all tuples\n> > + * on the page and check if they meet the intended condition. If\n> > + * tbmres->recheck is true, only the indicated tuples need be examined, but the\n> > + * condition must be rechecked anyway. (For ease of testing, recheck is always\n> > + * set true when ntuples < 0.)\n> > */\n> > -TBMIterateResult *\n> > -tbm_iterate(TBMIterator *iterator)\n> > +void\n> > +tbm_iterate(TBMIterator *iterator, TBMIterateResult *tbmres)\n>\n> Hm - it seems a tad odd that we later have to find out if the scan is done\n> iterating by checking if blockno is valid, when tbm_iterate already knew. But\n> I guess the code would be a bit uglier if we needed the result of\n> tbm_[shared_]iterate(), due to the two functions.\n\nYes.\n\n> Right now ExecEndBitmapHeapScan() frees the tbm before it does table_endscan()\n> - which seems problematic, as heap_endscan() will do stuff like\n> tbm_end_iterate(), which imo shouldn't be called after the tbm has been freed,\n> even if that works today.\n\nI've flipped the order -- I end the scan then free the bitmap.\n\n> It seems a bit confusing that your changes seem to treat\n> BitmapHeapScanState->initialized as separate from ->scan, even though afaict\n> scan should be NULL iff initialized is false and vice versa.\n\nI thought so too, but it seems on rescan that the node->initialized is\nset to false but the scan is reused. So, we want to only make a new\nscan descriptor if it is truly the beginning of a new scan.\n\n- Melanie",
"msg_date": "Thu, 15 Feb 2024 22:31:02 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "In the attached v3, I've reordered the commits, updated some errant\ncomments, and improved the commit messages.\n\nI've also made some updates to the TIDBitmap API that seem like a\nclarity improvement to the API in general. These also reduce the diff\nfor GIN when separating the TBMIterateResult from the\nTBM[Shared]Iterator. And these TIDBitmap API changes are now all in\ntheir own commits (previously those were in the same commit as adding\nthe BitmapHeapScan streaming read user).\n\nThe three outstanding issues I see in the patch set are:\n1) the lossy and exact page counters issue described in my previous\nemail\n2) the TODO in the TIDBitmap API changes about being sure that setting\nTBMIterateResult->blockno to InvalidBlockNumber is sufficient for\nindicating an invalid TBMIterateResult (and an exhausted bitmap)\n3) the streaming read API is not committed yet, so the last two patches\nare not \"done\"\n\n- Melanie",
"msg_date": "Fri, 16 Feb 2024 12:35:59 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Fri, Feb 16, 2024 at 12:35:59PM -0500, Melanie Plageman wrote:\n> In the attached v3, I've reordered the commits, updated some errant\n> comments, and improved the commit messages.\n> \n> I've also made some updates to the TIDBitmap API that seem like a\n> clarity improvement to the API in general. These also reduce the diff\n> for GIN when separating the TBMIterateResult from the\n> TBM[Shared]Iterator. And these TIDBitmap API changes are now all in\n> their own commits (previously those were in the same commit as adding\n> the BitmapHeapScan streaming read user).\n> \n> The three outstanding issues I see in the patch set are:\n> 1) the lossy and exact page counters issue described in my previous\n> email\n\nI've resolved this. I added a new patch to the set which starts counting\neven pages with no visible tuples toward lossy and exact pages. After an\noff-list conversation with Andres, it seems that this omission in master\nmay not have been intentional.\n\nOnce we have only two types of pages to differentiate between (lossy and\nexact [no longer have to care about \"has no visible tuples\"]), it is\neasy enough to pass a \"lossy\" boolean paramater to\ntable_scan_bitmap_next_block(). I've done this in the attached v4.\n\n- Melanie",
"msg_date": "Mon, 26 Feb 2024 20:50:28 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 08:50:28PM -0500, Melanie Plageman wrote:\n> On Fri, Feb 16, 2024 at 12:35:59PM -0500, Melanie Plageman wrote:\n> > In the attached v3, I've reordered the commits, updated some errant\n> > comments, and improved the commit messages.\n> > \n> > I've also made some updates to the TIDBitmap API that seem like a\n> > clarity improvement to the API in general. These also reduce the diff\n> > for GIN when separating the TBMIterateResult from the\n> > TBM[Shared]Iterator. And these TIDBitmap API changes are now all in\n> > their own commits (previously those were in the same commit as adding\n> > the BitmapHeapScan streaming read user).\n> > \n> > The three outstanding issues I see in the patch set are:\n> > 1) the lossy and exact page counters issue described in my previous\n> > email\n> \n> I've resolved this. I added a new patch to the set which starts counting\n> even pages with no visible tuples toward lossy and exact pages. After an\n> off-list conversation with Andres, it seems that this omission in master\n> may not have been intentional.\n> \n> Once we have only two types of pages to differentiate between (lossy and\n> exact [no longer have to care about \"has no visible tuples\"]), it is\n> easy enough to pass a \"lossy\" boolean paramater to\n> table_scan_bitmap_next_block(). I've done this in the attached v4.\n\nThomas posted a new version of the Streaming Read API [1], so here is a\nrebased v5. This should make it easier to review as it can be applied on\ntop of master.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJtLyxcAEvLhVUhgD4fMQkOu3PDaj8Qb9SR_UsmzgsBpQ%40mail.gmail.com",
"msg_date": "Tue, 27 Feb 2024 09:22:30 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "Hi,\n\nI haven't looked at the code very closely yet, but I decided to do some\nbasic benchmarks to see if/how this refactoring affects behavior.\n\nAttached is a simple .sh script that\n\n1) creates a table with one of a couple basic data distributions\n(uniform, linear, ...), with an index on top\n\n2) runs a simple query with a where condition matching a known fraction\nof the table (0 - 100%), and measures duration\n\n3) the query is forced to use bitmapscan by disabling other options\n\n4) there's a couple parameters the script varies (work_mem, parallel\nworkers, ...), the script drops caches etc.\n\n5) I only have results for table with 1M rows, which is ~320MB, so not\nhuge. I'm running this for larger data set, but that will take time.\n\n\nI did this on my two \"usual\" machines - i5 and xeon. Both have flash\nstorage, although i5 is SATA and xeon has NVMe. I won't share the raw\nresults, because the CSV is like 5MB - ping me off-list if you need the\nfile, ofc.\n\nAttached is PDF summarizing the results as a pivot table, with results\nfor \"master\" and \"patched\" builds. The interesting bit is the last\ncolumn, which shows whether the patch makes it faster (green) or slower\n(red).\n\nThe results seem pretty mixed, on both machines. If you focus on the\nuncached results (pages 4 and 8-9), there's both runs that are much\nfaster (by a factor of 2-5x) and slower (similar factor).\n\nOf course, these results are with forced bitmap scans, so the question\nis if those regressions even matter - maybe we'd use a different scan\ntype, making these changes less severe. So I logged \"optimal plan\" for\neach run, tracking the scan type the optimizer would really pick without\nall the enable_* GUCs. And the -optimal.pdf shows only results for the\nruns where the optimal plan uses the bitmap scan. And yes, while the\nimpact of the changes (in either direction) is reduced, it's still very\nmuch there.\n\nWhat's a bit surprising to me is that these regressions affect runs with\neffective_io_concurrency=0 in particular, which traditionally meant to\nnot do any prefetching / async stuff. I've perceived the patch mostly as\nrefactoring, so have not really expected such massive impact on these cases.\n\nSo I wonder if the refactoring means that we're actually doing some sort\namount of prefetching even with e_i_c=0. I'm not sure that'd be great, I\nassume people have valid reasons to disable prefetching ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 28 Feb 2024 14:22:29 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 8:22 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I haven't looked at the code very closely yet, but I decided to do some\n> basic benchmarks to see if/how this refactoring affects behavior.\n>\n> Attached is a simple .sh script that\n>\n> 1) creates a table with one of a couple basic data distributions\n> (uniform, linear, ...), with an index on top\n>\n> 2) runs a simple query with a where condition matching a known fraction\n> of the table (0 - 100%), and measures duration\n>\n> 3) the query is forced to use bitmapscan by disabling other options\n>\n> 4) there's a couple parameters the script varies (work_mem, parallel\n> workers, ...), the script drops caches etc.\n>\n> 5) I only have results for table with 1M rows, which is ~320MB, so not\n> huge. I'm running this for larger data set, but that will take time.\n>\n>\n> I did this on my two \"usual\" machines - i5 and xeon. Both have flash\n> storage, although i5 is SATA and xeon has NVMe. I won't share the raw\n> results, because the CSV is like 5MB - ping me off-list if you need the\n> file, ofc.\n\nI haven't looked at your results in detail yet. I plan to dig into\nthis more later today. But, I was wondering if it was easy for you to\nrun the shorter tests on just the commits before the last\nhttps://github.com/melanieplageman/postgres/tree/bhs_pgsr\ni.e. patches 0001-0013. Patch 0014 implements the streaming read user\nand removes all of the existing prefetch code. I would be interested\nto know if the behavior with just the preliminary refactoring differs\nat all.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 28 Feb 2024 09:38:54 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 2/28/24 15:38, Melanie Plageman wrote:\n> On Wed, Feb 28, 2024 at 8:22 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> I haven't looked at the code very closely yet, but I decided to do some\n>> basic benchmarks to see if/how this refactoring affects behavior.\n>>\n>> Attached is a simple .sh script that\n>>\n>> 1) creates a table with one of a couple basic data distributions\n>> (uniform, linear, ...), with an index on top\n>>\n>> 2) runs a simple query with a where condition matching a known fraction\n>> of the table (0 - 100%), and measures duration\n>>\n>> 3) the query is forced to use bitmapscan by disabling other options\n>>\n>> 4) there's a couple parameters the script varies (work_mem, parallel\n>> workers, ...), the script drops caches etc.\n>>\n>> 5) I only have results for table with 1M rows, which is ~320MB, so not\n>> huge. I'm running this for larger data set, but that will take time.\n>>\n>>\n>> I did this on my two \"usual\" machines - i5 and xeon. Both have flash\n>> storage, although i5 is SATA and xeon has NVMe. I won't share the raw\n>> results, because the CSV is like 5MB - ping me off-list if you need the\n>> file, ofc.\n> \n> I haven't looked at your results in detail yet. I plan to dig into\n> this more later today. But, I was wondering if it was easy for you to\n> run the shorter tests on just the commits before the last\n> https://github.com/melanieplageman/postgres/tree/bhs_pgsr\n> i.e. patches 0001-0013. Patch 0014 implements the streaming read user\n> and removes all of the existing prefetch code. I would be interested\n> to know if the behavior with just the preliminary refactoring differs\n> at all.\n> \n\nSure, I can do that. It'll take a couple hours to get the results, I'll\nshare them when I have them.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 28 Feb 2024 15:56:06 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 2/28/24 15:56, Tomas Vondra wrote:\n>> ...\n> \n> Sure, I can do that. It'll take a couple hours to get the results, I'll\n> share them when I have them.\n> \n\nHere are the results with only patches 0001 - 0012 applied (i.e. without\nthe patch introducing the streaming read API, and the patch switching\nthe bitmap heap scan to use it).\n\nThe changes in performance don't disappear entirely, but the scale is\ncertainly much smaller - both in the complete results for all runs, and\nfor the \"optimal\" runs that would actually pick bitmapscan.\n\nFWIW I'm not implying the patch must 100% maintain the current behavior,\nor anything like that. At this point I'm more motivated to understand if\nthis change in behavior is expected and/or what this means for users.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 28 Feb 2024 20:23:28 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 2:23 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 2/28/24 15:56, Tomas Vondra wrote:\n> >> ...\n> >\n> > Sure, I can do that. It'll take a couple hours to get the results, I'll\n> > share them when I have them.\n> >\n>\n> Here are the results with only patches 0001 - 0012 applied (i.e. without\n> the patch introducing the streaming read API, and the patch switching\n> the bitmap heap scan to use it).\n>\n> The changes in performance don't disappear entirely, but the scale is\n> certainly much smaller - both in the complete results for all runs, and\n> for the \"optimal\" runs that would actually pick bitmapscan.\n\nHmm. I'm trying to think how my refactor could have had this impact.\nIt seems like all the most notable regressions are with 4 parallel\nworkers. What do the numeric column labels mean across the top\n(2,4,8,16...) -- are they related to \"matches\"? And if so, what does\nthat mean?\n\n- Melanie\n\n\n",
"msg_date": "Wed, 28 Feb 2024 15:06:08 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 2/28/24 21:06, Melanie Plageman wrote:\n> On Wed, Feb 28, 2024 at 2:23 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 2/28/24 15:56, Tomas Vondra wrote:\n>>>> ...\n>>>\n>>> Sure, I can do that. It'll take a couple hours to get the results, I'll\n>>> share them when I have them.\n>>>\n>>\n>> Here are the results with only patches 0001 - 0012 applied (i.e. without\n>> the patch introducing the streaming read API, and the patch switching\n>> the bitmap heap scan to use it).\n>>\n>> The changes in performance don't disappear entirely, but the scale is\n>> certainly much smaller - both in the complete results for all runs, and\n>> for the \"optimal\" runs that would actually pick bitmapscan.\n> \n> Hmm. I'm trying to think how my refactor could have had this impact.\n> It seems like all the most notable regressions are with 4 parallel\n> workers. What do the numeric column labels mean across the top\n> (2,4,8,16...) -- are they related to \"matches\"? And if so, what does\n> that mean?\n> \n\nThat's the number of distinct values matched by the query, which should\nbe an approximation of the number of matching rows. The number of\ndistinct values in the data set differs by data set, but for 1M rows\nit's roughly like this:\n\nuniform: 10k\nlinear: 10k\ncyclic: 100\n\nSo for example matches=128 means ~1% of rows for uniform/linear, and\n100% for cyclic data sets.\n\nAs for the possible cause, I think it's clear most of the difference\ncomes from the last patch that actually switches bitmap heap scan to the\nstreaming read API. That's mostly expected/understandable, although we\nprobably need to look into the regressions or cases with e_i_c=0.\n\nTo analyze the 0001-0012 patches, maybe it'd be helpful to run tests for\nindividual patches. I can try doing that tomorrow. It'll have to be a\nlimited set of tests, to reduce the time, but might tell us whether it's\ndue to a single patch or multiple patches.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 29 Feb 2024 00:17:51 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 6:17 PM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 2/28/24 21:06, Melanie Plageman wrote:\n> > On Wed, Feb 28, 2024 at 2:23 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >> On 2/28/24 15:56, Tomas Vondra wrote:\n> >>>> ...\n> >>>\n> >>> Sure, I can do that. It'll take a couple hours to get the results, I'll\n> >>> share them when I have them.\n> >>>\n> >>\n> >> Here are the results with only patches 0001 - 0012 applied (i.e. without\n> >> the patch introducing the streaming read API, and the patch switching\n> >> the bitmap heap scan to use it).\n> >>\n> >> The changes in performance don't disappear entirely, but the scale is\n> >> certainly much smaller - both in the complete results for all runs, and\n> >> for the \"optimal\" runs that would actually pick bitmapscan.\n> >\n> > Hmm. I'm trying to think how my refactor could have had this impact.\n> > It seems like all the most notable regressions are with 4 parallel\n> > workers. What do the numeric column labels mean across the top\n> > (2,4,8,16...) -- are they related to \"matches\"? And if so, what does\n> > that mean?\n> >\n>\n> That's the number of distinct values matched by the query, which should\n> be an approximation of the number of matching rows. The number of\n> distinct values in the data set differs by data set, but for 1M rows\n> it's roughly like this:\n>\n> uniform: 10k\n> linear: 10k\n> cyclic: 100\n>\n> So for example matches=128 means ~1% of rows for uniform/linear, and\n> 100% for cyclic data sets.\n\nAh, thank you for the explanation. I also looked at your script after\nhaving sent this email and saw that it is clear in your script what\n\"matches\" is.\n\n> As for the possible cause, I think it's clear most of the difference\n> comes from the last patch that actually switches bitmap heap scan to the\n> streaming read API. That's mostly expected/understandable, although we\n> probably need to look into the regressions or cases with e_i_c=0.\n\nRight, I'm mostly surprised about the regressions for patches 0001-0012.\n\nRe eic 0: Thomas Munro and I chatted off-list, and you bring up a\ngreat point about eic 0. In old bitmapheapscan code eic 0 basically\ndisabled prefetching but with the streaming read API, it will still\nissue fadvises when eic is 0. That is an easy one line fix. Thomas\nprefers to fix it by always avoiding an fadvise for the last buffer in\na range before issuing a read (since we are about to read it anyway,\nbest not fadvise it too). This will fix eic 0 and also cut one system\ncall from each invocation of the streaming read machinery.\n\n> To analyze the 0001-0012 patches, maybe it'd be helpful to run tests for\n> individual patches. I can try doing that tomorrow. It'll have to be a\n> limited set of tests, to reduce the time, but might tell us whether it's\n> due to a single patch or multiple patches.\n\nYes, tomorrow I planned to start trying to repro some of the \"red\"\ncases myself. Any one of the commits could cause a slight regression\nbut a 3.5x regression is quite surprising, so I might focus on trying\nto repro that locally and then narrow down which patch causes it.\n\nFor the non-cached regressions, perhaps the commit to use the correct\nrecheck flag (0004) when prefetching could be the culprit. And for the\ncached regressions, my money is on the commit which changes the whole\ncontrol flow of BitmapHeapNext() and the next_block() and next_tuple()\nfunctions (0010).\n\n- Melanie\n\n\n",
"msg_date": "Wed, 28 Feb 2024 18:40:47 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 2/29/24 00:40, Melanie Plageman wrote:\n> On Wed, Feb 28, 2024 at 6:17 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>>\n>>\n>> On 2/28/24 21:06, Melanie Plageman wrote:\n>>> On Wed, Feb 28, 2024 at 2:23 PM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>>\n>>>> On 2/28/24 15:56, Tomas Vondra wrote:\n>>>>>> ...\n>>>>>\n>>>>> Sure, I can do that. It'll take a couple hours to get the results, I'll\n>>>>> share them when I have them.\n>>>>>\n>>>>\n>>>> Here are the results with only patches 0001 - 0012 applied (i.e. without\n>>>> the patch introducing the streaming read API, and the patch switching\n>>>> the bitmap heap scan to use it).\n>>>>\n>>>> The changes in performance don't disappear entirely, but the scale is\n>>>> certainly much smaller - both in the complete results for all runs, and\n>>>> for the \"optimal\" runs that would actually pick bitmapscan.\n>>>\n>>> Hmm. I'm trying to think how my refactor could have had this impact.\n>>> It seems like all the most notable regressions are with 4 parallel\n>>> workers. What do the numeric column labels mean across the top\n>>> (2,4,8,16...) -- are they related to \"matches\"? And if so, what does\n>>> that mean?\n>>>\n>>\n>> That's the number of distinct values matched by the query, which should\n>> be an approximation of the number of matching rows. The number of\n>> distinct values in the data set differs by data set, but for 1M rows\n>> it's roughly like this:\n>>\n>> uniform: 10k\n>> linear: 10k\n>> cyclic: 100\n>>\n>> So for example matches=128 means ~1% of rows for uniform/linear, and\n>> 100% for cyclic data sets.\n> \n> Ah, thank you for the explanation. I also looked at your script after\n> having sent this email and saw that it is clear in your script what\n> \"matches\" is.\n> \n>> As for the possible cause, I think it's clear most of the difference\n>> comes from the last patch that actually switches bitmap heap scan to the\n>> streaming read API. That's mostly expected/understandable, although we\n>> probably need to look into the regressions or cases with e_i_c=0.\n> \n> Right, I'm mostly surprised about the regressions for patches 0001-0012.\n> \n> Re eic 0: Thomas Munro and I chatted off-list, and you bring up a\n> great point about eic 0. In old bitmapheapscan code eic 0 basically\n> disabled prefetching but with the streaming read API, it will still\n> issue fadvises when eic is 0. That is an easy one line fix. Thomas\n> prefers to fix it by always avoiding an fadvise for the last buffer in\n> a range before issuing a read (since we are about to read it anyway,\n> best not fadvise it too). This will fix eic 0 and also cut one system\n> call from each invocation of the streaming read machinery.\n> \n>> To analyze the 0001-0012 patches, maybe it'd be helpful to run tests for\n>> individual patches. I can try doing that tomorrow. It'll have to be a\n>> limited set of tests, to reduce the time, but might tell us whether it's\n>> due to a single patch or multiple patches.\n> \n> Yes, tomorrow I planned to start trying to repro some of the \"red\"\n> cases myself. Any one of the commits could cause a slight regression\n> but a 3.5x regression is quite surprising, so I might focus on trying\n> to repro that locally and then narrow down which patch causes it.\n> \n> For the non-cached regressions, perhaps the commit to use the correct\n> recheck flag (0004) when prefetching could be the culprit. 
And for the\n> cached regressions, my money is on the commit which changes the whole\n> control flow of BitmapHeapNext() and the next_block() and next_tuple()\n> functions (0010).\n> \n\nI do have some partial results, comparing the patches. I only ran one of\nthe more affected workloads (cyclic) on the xeon, attached is a PDF\ncomparing master and the 0001-0014 patches. The percentages are timing\nvs. the preceding patch (green - faster, red - slower).\n\nThis suggests only patches 0010 and 0014 affect performance, the rest is\njust noise. I'll see if I can do more runs and get data from the other\nmachine (seems it's more significant on old SATA SSDs).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 29 Feb 2024 13:54:05 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 7:54 AM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 2/29/24 00:40, Melanie Plageman wrote:\n> > On Wed, Feb 28, 2024 at 6:17 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >>\n> >>\n> >> On 2/28/24 21:06, Melanie Plageman wrote:\n> >>> On Wed, Feb 28, 2024 at 2:23 PM Tomas Vondra\n> >>> <[email protected]> wrote:\n> >>>>\n> >>>> On 2/28/24 15:56, Tomas Vondra wrote:\n> >>>>>> ...\n> >>>>>\n> >>>>> Sure, I can do that. It'll take a couple hours to get the results, I'll\n> >>>>> share them when I have them.\n> >>>>>\n> >>>>\n> >>>> Here are the results with only patches 0001 - 0012 applied (i.e. without\n> >>>> the patch introducing the streaming read API, and the patch switching\n> >>>> the bitmap heap scan to use it).\n> >>>>\n> >>>> The changes in performance don't disappear entirely, but the scale is\n> >>>> certainly much smaller - both in the complete results for all runs, and\n> >>>> for the \"optimal\" runs that would actually pick bitmapscan.\n> >>>\n> >>> Hmm. I'm trying to think how my refactor could have had this impact.\n> >>> It seems like all the most notable regressions are with 4 parallel\n> >>> workers. What do the numeric column labels mean across the top\n> >>> (2,4,8,16...) -- are they related to \"matches\"? And if so, what does\n> >>> that mean?\n> >>>\n> >>\n> >> That's the number of distinct values matched by the query, which should\n> >> be an approximation of the number of matching rows. The number of\n> >> distinct values in the data set differs by data set, but for 1M rows\n> >> it's roughly like this:\n> >>\n> >> uniform: 10k\n> >> linear: 10k\n> >> cyclic: 100\n> >>\n> >> So for example matches=128 means ~1% of rows for uniform/linear, and\n> >> 100% for cyclic data sets.\n> >\n> > Ah, thank you for the explanation. I also looked at your script after\n> > having sent this email and saw that it is clear in your script what\n> > \"matches\" is.\n> >\n> >> As for the possible cause, I think it's clear most of the difference\n> >> comes from the last patch that actually switches bitmap heap scan to the\n> >> streaming read API. That's mostly expected/understandable, although we\n> >> probably need to look into the regressions or cases with e_i_c=0.\n> >\n> > Right, I'm mostly surprised about the regressions for patches 0001-0012.\n> >\n> > Re eic 0: Thomas Munro and I chatted off-list, and you bring up a\n> > great point about eic 0. In old bitmapheapscan code eic 0 basically\n> > disabled prefetching but with the streaming read API, it will still\n> > issue fadvises when eic is 0. That is an easy one line fix. Thomas\n> > prefers to fix it by always avoiding an fadvise for the last buffer in\n> > a range before issuing a read (since we are about to read it anyway,\n> > best not fadvise it too). This will fix eic 0 and also cut one system\n> > call from each invocation of the streaming read machinery.\n> >\n> >> To analyze the 0001-0012 patches, maybe it'd be helpful to run tests for\n> >> individual patches. I can try doing that tomorrow. It'll have to be a\n> >> limited set of tests, to reduce the time, but might tell us whether it's\n> >> due to a single patch or multiple patches.\n> >\n> > Yes, tomorrow I planned to start trying to repro some of the \"red\"\n> > cases myself. 
Any one of the commits could cause a slight regression\n> > but a 3.5x regression is quite surprising, so I might focus on trying\n> > to repro that locally and then narrow down which patch causes it.\n> >\n> > For the non-cached regressions, perhaps the commit to use the correct\n> > recheck flag (0004) when prefetching could be the culprit. And for the\n> > cached regressions, my money is on the commit which changes the whole\n> > control flow of BitmapHeapNext() and the next_block() and next_tuple()\n> > functions (0010).\n> >\n>\n> I do have some partial results, comparing the patches. I only ran one of\n> the more affected workloads (cyclic) on the xeon, attached is a PDF\n> comparing master and the 0001-0014 patches. The percentages are timing\n> vs. the preceding patch (green - faster, red - slower).\n\nJust confirming: the results are for uncached?\n\n- Melanie\n\n\n",
"msg_date": "Thu, 29 Feb 2024 16:19:43 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 2/29/24 22:19, Melanie Plageman wrote:\n> On Thu, Feb 29, 2024 at 7:54 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>>\n>>\n>> On 2/29/24 00:40, Melanie Plageman wrote:\n>>> On Wed, Feb 28, 2024 at 6:17 PM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2/28/24 21:06, Melanie Plageman wrote:\n>>>>> On Wed, Feb 28, 2024 at 2:23 PM Tomas Vondra\n>>>>> <[email protected]> wrote:\n>>>>>>\n>>>>>> On 2/28/24 15:56, Tomas Vondra wrote:\n>>>>>>>> ...\n>>>>>>>\n>>>>>>> Sure, I can do that. It'll take a couple hours to get the results, I'll\n>>>>>>> share them when I have them.\n>>>>>>>\n>>>>>>\n>>>>>> Here are the results with only patches 0001 - 0012 applied (i.e. without\n>>>>>> the patch introducing the streaming read API, and the patch switching\n>>>>>> the bitmap heap scan to use it).\n>>>>>>\n>>>>>> The changes in performance don't disappear entirely, but the scale is\n>>>>>> certainly much smaller - both in the complete results for all runs, and\n>>>>>> for the \"optimal\" runs that would actually pick bitmapscan.\n>>>>>\n>>>>> Hmm. I'm trying to think how my refactor could have had this impact.\n>>>>> It seems like all the most notable regressions are with 4 parallel\n>>>>> workers. What do the numeric column labels mean across the top\n>>>>> (2,4,8,16...) -- are they related to \"matches\"? And if so, what does\n>>>>> that mean?\n>>>>>\n>>>>\n>>>> That's the number of distinct values matched by the query, which should\n>>>> be an approximation of the number of matching rows. The number of\n>>>> distinct values in the data set differs by data set, but for 1M rows\n>>>> it's roughly like this:\n>>>>\n>>>> uniform: 10k\n>>>> linear: 10k\n>>>> cyclic: 100\n>>>>\n>>>> So for example matches=128 means ~1% of rows for uniform/linear, and\n>>>> 100% for cyclic data sets.\n>>>\n>>> Ah, thank you for the explanation. I also looked at your script after\n>>> having sent this email and saw that it is clear in your script what\n>>> \"matches\" is.\n>>>\n>>>> As for the possible cause, I think it's clear most of the difference\n>>>> comes from the last patch that actually switches bitmap heap scan to the\n>>>> streaming read API. That's mostly expected/understandable, although we\n>>>> probably need to look into the regressions or cases with e_i_c=0.\n>>>\n>>> Right, I'm mostly surprised about the regressions for patches 0001-0012.\n>>>\n>>> Re eic 0: Thomas Munro and I chatted off-list, and you bring up a\n>>> great point about eic 0. In old bitmapheapscan code eic 0 basically\n>>> disabled prefetching but with the streaming read API, it will still\n>>> issue fadvises when eic is 0. That is an easy one line fix. Thomas\n>>> prefers to fix it by always avoiding an fadvise for the last buffer in\n>>> a range before issuing a read (since we are about to read it anyway,\n>>> best not fadvise it too). This will fix eic 0 and also cut one system\n>>> call from each invocation of the streaming read machinery.\n>>>\n>>>> To analyze the 0001-0012 patches, maybe it'd be helpful to run tests for\n>>>> individual patches. I can try doing that tomorrow. It'll have to be a\n>>>> limited set of tests, to reduce the time, but might tell us whether it's\n>>>> due to a single patch or multiple patches.\n>>>\n>>> Yes, tomorrow I planned to start trying to repro some of the \"red\"\n>>> cases myself. 
Any one of the commits could cause a slight regression\n>>> but a 3.5x regression is quite surprising, so I might focus on trying\n>>> to repro that locally and then narrow down which patch causes it.\n>>>\n>>> For the non-cached regressions, perhaps the commit to use the correct\n>>> recheck flag (0004) when prefetching could be the culprit. And for the\n>>> cached regressions, my money is on the commit which changes the whole\n>>> control flow of BitmapHeapNext() and the next_block() and next_tuple()\n>>> functions (0010).\n>>>\n>>\n>> I do have some partial results, comparing the patches. I only ran one of\n>> the more affected workloads (cyclic) on the xeon, attached is a PDF\n>> comparing master and the 0001-0014 patches. The percentages are timing\n>> vs. the preceding patch (green - faster, red - slower).\n> \n> Just confirming: the results are for uncached?\n> \n\nYes, cyclic data set, uncached case. I picked this because it seemed\nlike one of the most affected cases. Do you want me to test some other\ncases too?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 29 Feb 2024 23:44:27 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 2/29/24 23:44, Tomas Vondra wrote:\n>\n> ...\n> \n>>>\n>>> I do have some partial results, comparing the patches. I only ran one of\n>>> the more affected workloads (cyclic) on the xeon, attached is a PDF\n>>> comparing master and the 0001-0014 patches. The percentages are timing\n>>> vs. the preceding patch (green - faster, red - slower).\n>>\n>> Just confirming: the results are for uncached?\n>>\n> \n> Yes, cyclic data set, uncached case. I picked this because it seemed\n> like one of the most affected cases. Do you want me to test some other\n> cases too?\n> \n\nBTW I decided to look at the data from a slightly different angle and\ncompare the behavior with increasing effective_io_concurrency. Attached\nare charts for three \"uncached\" cases:\n\n * uniform, work_mem=4MB, workers_per_gather=0\n * linear-fuzz, work_mem=4MB, workers_per_gather=0\n * uniform, work_mem=4MB, workers_per_gather=4\n\nEach page has charts for master and patched build (with all patches). I\nthink there's a pretty obvious difference in how increasing e_i_c\naffects the two builds:\n\n1) On master there's clear difference between eic=0 and eic=1 cases, but\non the patched build there's literally no difference - for example the\n\"uniform\" distribution is clearly not great for prefetching, but eic=0\nregresses to eic=1 poor behavior).\n\nNote: This is where the the \"red bands\" in the charts come from.\n\n\n2) For some reason, the prefetching with eic>1 perform much better with\nthe patches, except for with very low selectivity values (close to 0%).\nNot sure why this is happening - either the overhead is much lower\n(which would matter on these \"adversarial\" data distribution, but how\ncould that be when fadvise is not free), or it ends up not doing any\nprefetching (but then what about (1)?).\n\n\n3) I'm not sure about the linear-fuzz case, the only explanation I have\nwe're able to skip almost all of the prefetches (and read-ahead likely\nworks pretty well here).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 1 Mar 2024 00:44:32 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 5:44 PM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 2/29/24 22:19, Melanie Plageman wrote:\n> > On Thu, Feb 29, 2024 at 7:54 AM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >>\n> >>\n> >> On 2/29/24 00:40, Melanie Plageman wrote:\n> >>> On Wed, Feb 28, 2024 at 6:17 PM Tomas Vondra\n> >>> <[email protected]> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2/28/24 21:06, Melanie Plageman wrote:\n> >>>>> On Wed, Feb 28, 2024 at 2:23 PM Tomas Vondra\n> >>>>> <[email protected]> wrote:\n> >>>>>>\n> >>>>>> On 2/28/24 15:56, Tomas Vondra wrote:\n> >>>>>>>> ...\n> >>>>>>>\n> >>>>>>> Sure, I can do that. It'll take a couple hours to get the results, I'll\n> >>>>>>> share them when I have them.\n> >>>>>>>\n> >>>>>>\n> >>>>>> Here are the results with only patches 0001 - 0012 applied (i.e. without\n> >>>>>> the patch introducing the streaming read API, and the patch switching\n> >>>>>> the bitmap heap scan to use it).\n> >>>>>>\n> >>>>>> The changes in performance don't disappear entirely, but the scale is\n> >>>>>> certainly much smaller - both in the complete results for all runs, and\n> >>>>>> for the \"optimal\" runs that would actually pick bitmapscan.\n> >>>>>\n> >>>>> Hmm. I'm trying to think how my refactor could have had this impact.\n> >>>>> It seems like all the most notable regressions are with 4 parallel\n> >>>>> workers. What do the numeric column labels mean across the top\n> >>>>> (2,4,8,16...) -- are they related to \"matches\"? And if so, what does\n> >>>>> that mean?\n> >>>>>\n> >>>>\n> >>>> That's the number of distinct values matched by the query, which should\n> >>>> be an approximation of the number of matching rows. The number of\n> >>>> distinct values in the data set differs by data set, but for 1M rows\n> >>>> it's roughly like this:\n> >>>>\n> >>>> uniform: 10k\n> >>>> linear: 10k\n> >>>> cyclic: 100\n> >>>>\n> >>>> So for example matches=128 means ~1% of rows for uniform/linear, and\n> >>>> 100% for cyclic data sets.\n> >>>\n> >>> Ah, thank you for the explanation. I also looked at your script after\n> >>> having sent this email and saw that it is clear in your script what\n> >>> \"matches\" is.\n> >>>\n> >>>> As for the possible cause, I think it's clear most of the difference\n> >>>> comes from the last patch that actually switches bitmap heap scan to the\n> >>>> streaming read API. That's mostly expected/understandable, although we\n> >>>> probably need to look into the regressions or cases with e_i_c=0.\n> >>>\n> >>> Right, I'm mostly surprised about the regressions for patches 0001-0012.\n> >>>\n> >>> Re eic 0: Thomas Munro and I chatted off-list, and you bring up a\n> >>> great point about eic 0. In old bitmapheapscan code eic 0 basically\n> >>> disabled prefetching but with the streaming read API, it will still\n> >>> issue fadvises when eic is 0. That is an easy one line fix. Thomas\n> >>> prefers to fix it by always avoiding an fadvise for the last buffer in\n> >>> a range before issuing a read (since we are about to read it anyway,\n> >>> best not fadvise it too). This will fix eic 0 and also cut one system\n> >>> call from each invocation of the streaming read machinery.\n> >>>\n> >>>> To analyze the 0001-0012 patches, maybe it'd be helpful to run tests for\n> >>>> individual patches. I can try doing that tomorrow. 
It'll have to be a\n> >>>> limited set of tests, to reduce the time, but might tell us whether it's\n> >>>> due to a single patch or multiple patches.\n> >>>\n> >>> Yes, tomorrow I planned to start trying to repro some of the \"red\"\n> >>> cases myself. Any one of the commits could cause a slight regression\n> >>> but a 3.5x regression is quite surprising, so I might focus on trying\n> >>> to repro that locally and then narrow down which patch causes it.\n> >>>\n> >>> For the non-cached regressions, perhaps the commit to use the correct\n> >>> recheck flag (0004) when prefetching could be the culprit. And for the\n> >>> cached regressions, my money is on the commit which changes the whole\n> >>> control flow of BitmapHeapNext() and the next_block() and next_tuple()\n> >>> functions (0010).\n> >>>\n> >>\n> >> I do have some partial results, comparing the patches. I only ran one of\n> >> the more affected workloads (cyclic) on the xeon, attached is a PDF\n> >> comparing master and the 0001-0014 patches. The percentages are timing\n> >> vs. the preceding patch (green - faster, red - slower).\n> >\n> > Just confirming: the results are for uncached?\n> >\n>\n> Yes, cyclic data set, uncached case. I picked this because it seemed\n> like one of the most affected cases. Do you want me to test some other\n> cases too?\n\nSo, I actually may have found the source of at least part of the\nregression with 0010. I was able to reproduce the regression with\npatch 0010 applied for the unached case with 4 workers and eic 8 and\n100000000 rows for the cyclic dataset. I see it for all number of\nmatches. The regression went away (for this specific example) when I\nmoved the BitmapAdjustPrefetchIterator call back up to before the call\nto table_scan_bitmap_next_block() like this:\n\ndiff --git a/src/backend/executor/nodeBitmapHeapscan.c\nb/src/backend/executor/nodeBitmapHeapscan.c\nindex f7ecc060317..268996bdeea 100644\n--- a/src/backend/executor/nodeBitmapHeapscan.c\n+++ b/src/backend/executor/nodeBitmapHeapscan.c\n@@ -279,6 +279,8 @@ BitmapHeapNext(BitmapHeapScanState *node)\n }\n\n new_page:\n+ BitmapAdjustPrefetchIterator(node, node->blockno);\n+\n if (!table_scan_bitmap_next_block(scan, &node->recheck,\n&lossy, &node->blockno))\n break;\n\n@@ -287,7 +289,6 @@ new_page:\n else\n node->exact_pages++;\n\n- BitmapAdjustPrefetchIterator(node, node->blockno);\n /* Adjust the prefetch target */\n BitmapAdjustPrefetchTarget(node);\n }\n\nIt makes sense this would fix it. I haven't tried all the combinations\nyou tried. Do you mind running your tests with the new code? I've\npushed it into this branch.\nhttps://github.com/melanieplageman/postgres/commits/bhs_pgsr/\n\nNote that this will fix none of the issues with 0014 because that has\nremoved all of the old prefetching code anyway.\n\nThank you sooo much for running these to begin with and then helping\nme figure out what is going on!\n\n- Melanie\n\n\n",
"msg_date": "Thu, 29 Feb 2024 19:29:45 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 6:44 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 2/29/24 23:44, Tomas Vondra wrote:\n> >\n> > ...\n> >\n> >>>\n> >>> I do have some partial results, comparing the patches. I only ran one of\n> >>> the more affected workloads (cyclic) on the xeon, attached is a PDF\n> >>> comparing master and the 0001-0014 patches. The percentages are timing\n> >>> vs. the preceding patch (green - faster, red - slower).\n> >>\n> >> Just confirming: the results are for uncached?\n> >>\n> >\n> > Yes, cyclic data set, uncached case. I picked this because it seemed\n> > like one of the most affected cases. Do you want me to test some other\n> > cases too?\n> >\n>\n> BTW I decided to look at the data from a slightly different angle and\n> compare the behavior with increasing effective_io_concurrency. Attached\n> are charts for three \"uncached\" cases:\n>\n> * uniform, work_mem=4MB, workers_per_gather=0\n> * linear-fuzz, work_mem=4MB, workers_per_gather=0\n> * uniform, work_mem=4MB, workers_per_gather=4\n>\n> Each page has charts for master and patched build (with all patches). I\n> think there's a pretty obvious difference in how increasing e_i_c\n> affects the two builds:\n\nWow! These visualizations make it exceptionally clear. I want to go to\nthe Vondra school of data visualizations for performance results!\n\n> 1) On master there's clear difference between eic=0 and eic=1 cases, but\n> on the patched build there's literally no difference - for example the\n> \"uniform\" distribution is clearly not great for prefetching, but eic=0\n> regresses to eic=1 poor behavior).\n\nYes, so eic=0 and eic=1 are identical with the streaming read API.\nThat is, eic 0 does not disable prefetching. Thomas is going to update\nthe streaming read API to avoid issuing an fadvise for the last block\nin a range before issuing a read -- which would mean no prefetching\nwith eic 0 and eic 1. Not doing prefetching with eic 1 actually seems\nlike the right behavior -- which would be different than what master\nis doing, right?\n\nHopefully this fixes the clear difference between master and the\npatched version at eic 0.\n\n> 2) For some reason, the prefetching with eic>1 perform much better with\n> the patches, except for with very low selectivity values (close to 0%).\n> Not sure why this is happening - either the overhead is much lower\n> (which would matter on these \"adversarial\" data distribution, but how\n> could that be when fadvise is not free), or it ends up not doing any\n> prefetching (but then what about (1)?).\n\nFor the uniform with four parallel workers, eic == 0 being worse than\nmaster makes sense for the above reason. But I'm not totally sure why\neic == 1 would be worse with the patch than with master. Both are\ndoing a (somewhat useless) prefetch.\n\nWith very low selectivity, you are less likely to get readahead\n(right?) and similarly less likely to be able to build up > 8kB IOs --\nwhich is one of the main value propositions of the streaming read\ncode. I imagine that this larger read benefit is part of why the\nperformance is better at higher selectivities with the patch. 
This\nmight be a silly experiment, but we could try decreasing\nMAX_BUFFERS_PER_TRANSFER on the patched version and see if the\nperformance gains go away.\n\n> 3) I'm not sure about the linear-fuzz case, the only explanation I have\n> we're able to skip almost all of the prefetches (and read-ahead likely\n> works pretty well here).\n\nI started looking at the data generated by linear-fuzz to understand\nexactly what effect the fuzz was having but haven't had time to really\nunderstand the characteristics of this dataset. In the original\nresults, I thought uncached linear-fuzz and linear had similar results\n(performance improvement from master). What do you expect with linear\nvs linear-fuzz?\n\n- Melanie\n\n\n",
"msg_date": "Thu, 29 Feb 2024 20:18:20 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 3/1/24 02:18, Melanie Plageman wrote:\n> On Thu, Feb 29, 2024 at 6:44 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 2/29/24 23:44, Tomas Vondra wrote:\n>>>\n>>> ...\n>>>\n>>>>>\n>>>>> I do have some partial results, comparing the patches. I only ran one of\n>>>>> the more affected workloads (cyclic) on the xeon, attached is a PDF\n>>>>> comparing master and the 0001-0014 patches. The percentages are timing\n>>>>> vs. the preceding patch (green - faster, red - slower).\n>>>>\n>>>> Just confirming: the results are for uncached?\n>>>>\n>>>\n>>> Yes, cyclic data set, uncached case. I picked this because it seemed\n>>> like one of the most affected cases. Do you want me to test some other\n>>> cases too?\n>>>\n>>\n>> BTW I decided to look at the data from a slightly different angle and\n>> compare the behavior with increasing effective_io_concurrency. Attached\n>> are charts for three \"uncached\" cases:\n>>\n>> * uniform, work_mem=4MB, workers_per_gather=0\n>> * linear-fuzz, work_mem=4MB, workers_per_gather=0\n>> * uniform, work_mem=4MB, workers_per_gather=4\n>>\n>> Each page has charts for master and patched build (with all patches). I\n>> think there's a pretty obvious difference in how increasing e_i_c\n>> affects the two builds:\n> \n> Wow! These visualizations make it exceptionally clear. I want to go to\n> the Vondra school of data visualizations for performance results!\n> \n\nWelcome to my lecture on how to visualize data. The process has about\nfour simple steps:\n\n1) collect data for a lot of potentially interesting cases\n2) load them into excel / google sheets / ...\n3) slice and dice them into charts that you understand / can explain\n4) every now and then there's something you can't understand / explain\n\nThank you for attending my lecture ;-) No homework today.\n\n>> 1) On master there's clear difference between eic=0 and eic=1 cases, but\n>> on the patched build there's literally no difference - for example the\n>> \"uniform\" distribution is clearly not great for prefetching, but eic=0\n>> regresses to eic=1 poor behavior).\n> \n> Yes, so eic=0 and eic=1 are identical with the streaming read API.\n> That is, eic 0 does not disable prefetching. Thomas is going to update\n> the streaming read API to avoid issuing an fadvise for the last block\n> in a range before issuing a read -- which would mean no prefetching\n> with eic 0 and eic 1. Not doing prefetching with eic 1 actually seems\n> like the right behavior -- which would be different than what master\n> is doing, right?\n> \n\nI don't think we should stop doing prefetching for eic=1, or at least\nnot based just on these charts. I suspect these \"uniform\" charts are not\na great example for the prefetching, because it's about distribution of\nindividual rows, and even a small fraction of rows may match most of the\npages. It's great for finding strange behaviors / corner cases, but\nprobably not a sufficient reason to change the default.\n\nI think it makes sense to issue a prefetch one page ahead, before\nreading/processing the preceding one, and it's fairly conservative\nsetting, and I assume the default was chosen for a reason / after\ndiscussion.\n\nMy suggestion would be to keep the master behavior unless not practical,\nand then maybe discuss changing the details later. 
The patch is already\ncomplicated enough, better to leave that discussion for later.\n\n> Hopefully this fixes the clear difference between master and the\n> patched version at eic 0.\n> \n>> 2) For some reason, the prefetching with eic>1 perform much better with\n>> the patches, except for with very low selectivity values (close to 0%).\n>> Not sure why this is happening - either the overhead is much lower\n>> (which would matter on these \"adversarial\" data distribution, but how\n>> could that be when fadvise is not free), or it ends up not doing any\n>> prefetching (but then what about (1)?).\n> \n> For the uniform with four parallel workers, eic == 0 being worse than\n> master makes sense for the above reason. But I'm not totally sure why\n> eic == 1 would be worse with the patch than with master. Both are\n> doing a (somewhat useless) prefetch.\n> \n\nRight.\n\n> With very low selectivity, you are less likely to get readahead\n> (right?) and similarly less likely to be able to build up > 8kB IOs --\n> which is one of the main value propositions of the streaming read\n> code. I imagine that this larger read benefit is part of why the\n> performance is better at higher selectivities with the patch. This\n> might be a silly experiment, but we could try decreasing\n> MAX_BUFFERS_PER_TRANSFER on the patched version and see if the\n> performance gains go away.\n> \n\nSure, I can do that. Do you have any particular suggestion what value to\nuse for MAX_BUFFERS_PER_TRANSFER?\n\nI'll also try to add a better version of uniform, where the selectivity\nmatches more closely to pages, not rows.\n\n>> 3) I'm not sure about the linear-fuzz case, the only explanation I have\n>> we're able to skip almost all of the prefetches (and read-ahead likely\n>> works pretty well here).\n> \n> I started looking at the data generated by linear-fuzz to understand\n> exactly what effect the fuzz was having but haven't had time to really\n> understand the characteristics of this dataset. In the original\n> results, I thought uncached linear-fuzz and linear had similar results\n> (performance improvement from master). What do you expect with linear\n> vs linear-fuzz?\n> \n\nI don't know, TBH. My intent was to have a data set with correlated\ndata, either perfectly (linear) or with some noise (linear-fuzz). But\nit's not like I spent too much thinking about it. It's more a case of\nthrowing stuff at the wall, seeing what sticks.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 1 Mar 2024 15:05:49 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Fri, Mar 1, 2024 at 9:05 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 3/1/24 02:18, Melanie Plageman wrote:\n> > On Thu, Feb 29, 2024 at 6:44 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >> On 2/29/24 23:44, Tomas Vondra wrote:\n> >> 1) On master there's clear difference between eic=0 and eic=1 cases, but\n> >> on the patched build there's literally no difference - for example the\n> >> \"uniform\" distribution is clearly not great for prefetching, but eic=0\n> >> regresses to eic=1 poor behavior).\n> >\n> > Yes, so eic=0 and eic=1 are identical with the streaming read API.\n> > That is, eic 0 does not disable prefetching. Thomas is going to update\n> > the streaming read API to avoid issuing an fadvise for the last block\n> > in a range before issuing a read -- which would mean no prefetching\n> > with eic 0 and eic 1. Not doing prefetching with eic 1 actually seems\n> > like the right behavior -- which would be different than what master\n> > is doing, right?\n>\n> I don't think we should stop doing prefetching for eic=1, or at least\n> not based just on these charts. I suspect these \"uniform\" charts are not\n> a great example for the prefetching, because it's about distribution of\n> individual rows, and even a small fraction of rows may match most of the\n> pages. It's great for finding strange behaviors / corner cases, but\n> probably not a sufficient reason to change the default.\n\nYes, I would like to see results from a data set where selectivity is\nmore correlated to pages/heap fetches. But, I'm not sure I see how\nthat is related to prefetching when eic = 1.\n\n> I think it makes sense to issue a prefetch one page ahead, before\n> reading/processing the preceding one, and it's fairly conservative\n> setting, and I assume the default was chosen for a reason / after\n> discussion.\n\nYes, I suppose the overhead of an fadvise does not compare to the IO\nlatency of synchronously reading that block. Actually, I bet the\nregression I saw by accidentally moving BitmapAdjustPrefetchIterator()\nafter table_scan_bitmap_next_block() would be similar to the\nregression introduced by making eic = 1 not prefetch.\n\nWhen you think about IO concurrency = 1, it doesn't imply prefetching\nto me. But, I think we want to do the right thing and have parity with\nmaster.\n\n> My suggestion would be to keep the master behavior unless not practical,\n> and then maybe discuss changing the details later. The patch is already\n> complicated enough, better to leave that discussion for later.\n\nAgreed. Speaking of which, we need to add back use of tablespace IO\nconcurrency for the streaming read API (which is used by\nBitmapHeapScan in master).\n\n> > With very low selectivity, you are less likely to get readahead\n> > (right?) and similarly less likely to be able to build up > 8kB IOs --\n> > which is one of the main value propositions of the streaming read\n> > code. I imagine that this larger read benefit is part of why the\n> > performance is better at higher selectivities with the patch. This\n> > might be a silly experiment, but we could try decreasing\n> > MAX_BUFFERS_PER_TRANSFER on the patched version and see if the\n> > performance gains go away.\n>\n> Sure, I can do that. Do you have any particular suggestion what value to\n> use for MAX_BUFFERS_PER_TRANSFER?\n\nI think setting it to 1 would be the same as always master -- doing\nonly 8kB reads. 
The only thing about that is that I imagine the other\nstreaming read code has some overhead which might end up being a\nregression on balance even with the prefetching if we aren't actually\nusing the ranges/vectored capabilities of the streaming read\ninterface. Maybe if you just run it for one of the very obvious\nperformance improvement cases? I can also try this locally.\n\n> I'll also try to add a better version of uniform, where the selectivity\n> matches more closely to pages, not rows.\n\nThis would be great.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 1 Mar 2024 11:51:44 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\nOn 3/1/24 17:51, Melanie Plageman wrote:\n> On Fri, Mar 1, 2024 at 9:05 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 3/1/24 02:18, Melanie Plageman wrote:\n>>> On Thu, Feb 29, 2024 at 6:44 PM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>>\n>>>> On 2/29/24 23:44, Tomas Vondra wrote:\n>>>> 1) On master there's clear difference between eic=0 and eic=1 cases, but\n>>>> on the patched build there's literally no difference - for example the\n>>>> \"uniform\" distribution is clearly not great for prefetching, but eic=0\n>>>> regresses to eic=1 poor behavior).\n>>>\n>>> Yes, so eic=0 and eic=1 are identical with the streaming read API.\n>>> That is, eic 0 does not disable prefetching. Thomas is going to update\n>>> the streaming read API to avoid issuing an fadvise for the last block\n>>> in a range before issuing a read -- which would mean no prefetching\n>>> with eic 0 and eic 1. Not doing prefetching with eic 1 actually seems\n>>> like the right behavior -- which would be different than what master\n>>> is doing, right?\n>>\n>> I don't think we should stop doing prefetching for eic=1, or at least\n>> not based just on these charts. I suspect these \"uniform\" charts are not\n>> a great example for the prefetching, because it's about distribution of\n>> individual rows, and even a small fraction of rows may match most of the\n>> pages. It's great for finding strange behaviors / corner cases, but\n>> probably not a sufficient reason to change the default.\n> \n> Yes, I would like to see results from a data set where selectivity is\n> more correlated to pages/heap fetches. But, I'm not sure I see how\n> that is related to prefetching when eic = 1.\n> \n\nOK, I'll make that happen.\n\n>> I think it makes sense to issue a prefetch one page ahead, before\n>> reading/processing the preceding one, and it's fairly conservative\n>> setting, and I assume the default was chosen for a reason / after\n>> discussion.\n> \n> Yes, I suppose the overhead of an fadvise does not compare to the IO\n> latency of synchronously reading that block. Actually, I bet the\n> regression I saw by accidentally moving BitmapAdjustPrefetchIterator()\n> after table_scan_bitmap_next_block() would be similar to the\n> regression introduced by making eic = 1 not prefetch.\n> \n> When you think about IO concurrency = 1, it doesn't imply prefetching\n> to me. But, I think we want to do the right thing and have parity with\n> master.\n> \n\nJust to be sure we're on the same page regarding what eic=1 means,\nconsider a simple sequence of pages: A, B, C, D, E, ...\n\nWith the current \"master\" code, eic=1 means we'll issue a prefetch for B\nand then read+process A. And then issue prefetch for C and read+process\nB, and so on. It's always one page ahead.\n\nYes, if the page is already in memory, the fadvise is just overhead. It\nmay happen for various reasons (say, read-ahead). But it's just this one\ncase, I'd bet in other cases eic=1 would be a win.\n\n>> My suggestion would be to keep the master behavior unless not practical,\n>> and then maybe discuss changing the details later. The patch is already\n>> complicated enough, better to leave that discussion for later.\n> \n> Agreed. Speaking of which, we need to add back use of tablespace IO\n> concurrency for the streaming read API (which is used by\n> BitmapHeapScan in master).\n> \n\n+1\n\n>>> With very low selectivity, you are less likely to get readahead\n>>> (right?) 
and similarly less likely to be able to build up > 8kB IOs --\n>>> which is one of the main value propositions of the streaming read\n>>> code. I imagine that this larger read benefit is part of why the\n>>> performance is better at higher selectivities with the patch. This\n>>> might be a silly experiment, but we could try decreasing\n>>> MAX_BUFFERS_PER_TRANSFER on the patched version and see if the\n>>> performance gains go away.\n>>\n>> Sure, I can do that. Do you have any particular suggestion what value to\n>> use for MAX_BUFFERS_PER_TRANSFER?\n> \n> I think setting it to 1 would be the same as always master -- doing\n> only 8kB reads. The only thing about that is that I imagine the other\n> streaming read code has some overhead which might end up being a\n> regression on balance even with the prefetching if we aren't actually\n> using the ranges/vectored capabilities of the streaming read\n> interface. Maybe if you just run it for one of the very obvious\n> performance improvement cases? I can also try this locally.\n> \n\nOK, I'll try with 1, and then we can adjust.\n\n>> I'll also try to add a better version of uniform, where the selectivity\n>> matches more closely to pages, not rows.\n> \n> This would be great.\n> \n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 1 Mar 2024 18:08:03 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 7:29 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Thu, Feb 29, 2024 at 5:44 PM Tomas Vondra\n> <[email protected]> wrote:\n> >\n> >\n> >\n> > On 2/29/24 22:19, Melanie Plageman wrote:\n> > > On Thu, Feb 29, 2024 at 7:54 AM Tomas Vondra\n> > > <[email protected]> wrote:\n> > >>\n> > >>\n> > >>\n> > >> On 2/29/24 00:40, Melanie Plageman wrote:\n> > >>> On Wed, Feb 28, 2024 at 6:17 PM Tomas Vondra\n> > >>> <[email protected]> wrote:\n> > >>>>\n> > >>>>\n> > >>>>\n> > >>>> On 2/28/24 21:06, Melanie Plageman wrote:\n> > >>>>> On Wed, Feb 28, 2024 at 2:23 PM Tomas Vondra\n> > >>>>> <[email protected]> wrote:\n> > >>>>>>\n> > >>>>>> On 2/28/24 15:56, Tomas Vondra wrote:\n> > >>>>>>>> ...\n> > >>>>>>>\n> > >>>>>>> Sure, I can do that. It'll take a couple hours to get the results, I'll\n> > >>>>>>> share them when I have them.\n> > >>>>>>>\n> > >>>>>>\n> > >>>>>> Here are the results with only patches 0001 - 0012 applied (i.e. without\n> > >>>>>> the patch introducing the streaming read API, and the patch switching\n> > >>>>>> the bitmap heap scan to use it).\n> > >>>>>>\n> > >>>>>> The changes in performance don't disappear entirely, but the scale is\n> > >>>>>> certainly much smaller - both in the complete results for all runs, and\n> > >>>>>> for the \"optimal\" runs that would actually pick bitmapscan.\n> > >>>>>\n> > >>>>> Hmm. I'm trying to think how my refactor could have had this impact.\n> > >>>>> It seems like all the most notable regressions are with 4 parallel\n> > >>>>> workers. What do the numeric column labels mean across the top\n> > >>>>> (2,4,8,16...) -- are they related to \"matches\"? And if so, what does\n> > >>>>> that mean?\n> > >>>>>\n> > >>>>\n> > >>>> That's the number of distinct values matched by the query, which should\n> > >>>> be an approximation of the number of matching rows. The number of\n> > >>>> distinct values in the data set differs by data set, but for 1M rows\n> > >>>> it's roughly like this:\n> > >>>>\n> > >>>> uniform: 10k\n> > >>>> linear: 10k\n> > >>>> cyclic: 100\n> > >>>>\n> > >>>> So for example matches=128 means ~1% of rows for uniform/linear, and\n> > >>>> 100% for cyclic data sets.\n> > >>>\n> > >>> Ah, thank you for the explanation. I also looked at your script after\n> > >>> having sent this email and saw that it is clear in your script what\n> > >>> \"matches\" is.\n> > >>>\n> > >>>> As for the possible cause, I think it's clear most of the difference\n> > >>>> comes from the last patch that actually switches bitmap heap scan to the\n> > >>>> streaming read API. That's mostly expected/understandable, although we\n> > >>>> probably need to look into the regressions or cases with e_i_c=0.\n> > >>>\n> > >>> Right, I'm mostly surprised about the regressions for patches 0001-0012.\n> > >>>\n> > >>> Re eic 0: Thomas Munro and I chatted off-list, and you bring up a\n> > >>> great point about eic 0. In old bitmapheapscan code eic 0 basically\n> > >>> disabled prefetching but with the streaming read API, it will still\n> > >>> issue fadvises when eic is 0. That is an easy one line fix. Thomas\n> > >>> prefers to fix it by always avoiding an fadvise for the last buffer in\n> > >>> a range before issuing a read (since we are about to read it anyway,\n> > >>> best not fadvise it too). 
This will fix eic 0 and also cut one system\n> > >>> call from each invocation of the streaming read machinery.\n> > >>>\n> > >>>> To analyze the 0001-0012 patches, maybe it'd be helpful to run tests for\n> > >>>> individual patches. I can try doing that tomorrow. It'll have to be a\n> > >>>> limited set of tests, to reduce the time, but might tell us whether it's\n> > >>>> due to a single patch or multiple patches.\n> > >>>\n> > >>> Yes, tomorrow I planned to start trying to repro some of the \"red\"\n> > >>> cases myself. Any one of the commits could cause a slight regression\n> > >>> but a 3.5x regression is quite surprising, so I might focus on trying\n> > >>> to repro that locally and then narrow down which patch causes it.\n> > >>>\n> > >>> For the non-cached regressions, perhaps the commit to use the correct\n> > >>> recheck flag (0004) when prefetching could be the culprit. And for the\n> > >>> cached regressions, my money is on the commit which changes the whole\n> > >>> control flow of BitmapHeapNext() and the next_block() and next_tuple()\n> > >>> functions (0010).\n> > >>>\n> > >>\n> > >> I do have some partial results, comparing the patches. I only ran one of\n> > >> the more affected workloads (cyclic) on the xeon, attached is a PDF\n> > >> comparing master and the 0001-0014 patches. The percentages are timing\n> > >> vs. the preceding patch (green - faster, red - slower).\n> > >\n> > > Just confirming: the results are for uncached?\n> > >\n> >\n> > Yes, cyclic data set, uncached case. I picked this because it seemed\n> > like one of the most affected cases. Do you want me to test some other\n> > cases too?\n>\n> So, I actually may have found the source of at least part of the\n> regression with 0010. I was able to reproduce the regression with\n> patch 0010 applied for the unached case with 4 workers and eic 8 and\n> 100000000 rows for the cyclic dataset. I see it for all number of\n> matches. The regression went away (for this specific example) when I\n> moved the BitmapAdjustPrefetchIterator call back up to before the call\n> to table_scan_bitmap_next_block() like this:\n>\n> diff --git a/src/backend/executor/nodeBitmapHeapscan.c\n> b/src/backend/executor/nodeBitmapHeapscan.c\n> index f7ecc060317..268996bdeea 100644\n> --- a/src/backend/executor/nodeBitmapHeapscan.c\n> +++ b/src/backend/executor/nodeBitmapHeapscan.c\n> @@ -279,6 +279,8 @@ BitmapHeapNext(BitmapHeapScanState *node)\n> }\n>\n> new_page:\n> + BitmapAdjustPrefetchIterator(node, node->blockno);\n> +\n> if (!table_scan_bitmap_next_block(scan, &node->recheck,\n> &lossy, &node->blockno))\n> break;\n>\n> @@ -287,7 +289,6 @@ new_page:\n> else\n> node->exact_pages++;\n>\n> - BitmapAdjustPrefetchIterator(node, node->blockno);\n> /* Adjust the prefetch target */\n> BitmapAdjustPrefetchTarget(node);\n> }\n>\n> It makes sense this would fix it. I haven't tried all the combinations\n> you tried. Do you mind running your tests with the new code? I've\n> pushed it into this branch.\n> https://github.com/melanieplageman/postgres/commits/bhs_pgsr/\n\nHold the phone on this one. I realized why I moved\nBitmapAdjustPrefetchIterator after table_scan_bitmap_next_block() in\nthe first place -- master calls BitmapAdjustPrefetchIterator after the\ntbm_iterate() for the current block -- otherwise with eic = 1, it\nconsiders the prefetch iterator behind the current block iterator. I'm\ngoing to go through and figure out what order this must be done in and\nfix it.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 1 Mar 2024 14:31:41 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/1/24 18:08, Tomas Vondra wrote:\n> \n> On 3/1/24 17:51, Melanie Plageman wrote:\n>> On Fri, Mar 1, 2024 at 9:05 AM Tomas Vondra\n>> <[email protected]> wrote:\n>>>\n>>> On 3/1/24 02:18, Melanie Plageman wrote:\n>>>> On Thu, Feb 29, 2024 at 6:44 PM Tomas Vondra\n>>>> <[email protected]> wrote:\n>>>>>\n>>>>> On 2/29/24 23:44, Tomas Vondra wrote:\n>>>>> 1) On master there's clear difference between eic=0 and eic=1 cases, but\n>>>>> on the patched build there's literally no difference - for example the\n>>>>> \"uniform\" distribution is clearly not great for prefetching, but eic=0\n>>>>> regresses to eic=1 poor behavior).\n>>>>\n>>>> Yes, so eic=0 and eic=1 are identical with the streaming read API.\n>>>> That is, eic 0 does not disable prefetching. Thomas is going to update\n>>>> the streaming read API to avoid issuing an fadvise for the last block\n>>>> in a range before issuing a read -- which would mean no prefetching\n>>>> with eic 0 and eic 1. Not doing prefetching with eic 1 actually seems\n>>>> like the right behavior -- which would be different than what master\n>>>> is doing, right?\n>>>\n>>> I don't think we should stop doing prefetching for eic=1, or at least\n>>> not based just on these charts. I suspect these \"uniform\" charts are not\n>>> a great example for the prefetching, because it's about distribution of\n>>> individual rows, and even a small fraction of rows may match most of the\n>>> pages. It's great for finding strange behaviors / corner cases, but\n>>> probably not a sufficient reason to change the default.\n>>\n>> Yes, I would like to see results from a data set where selectivity is\n>> more correlated to pages/heap fetches. But, I'm not sure I see how\n>> that is related to prefetching when eic = 1.\n>>\n> \n> OK, I'll make that happen.\n> \n\nHere's a PDF with charts for a dataset where the row selectivity is more\ncorrelated to selectivity of pages. I'm attaching the updated script,\nwith the SQL generating the data set. But the short story is all rows on\na single page have the same random value, so the selectivity of rows and\npages should be the same.\n\nThe first page has results for the original \"uniform\", the second page\nis the new \"uniform-pages\" data set. There are 4 charts, for\nmaster/patched and 0/4 parallel workers. Overall the behavior is the\nsame, but for the \"uniform-pages\" it's much more gradual (with respect\nto row selectivity). I think that's expected.\n\n\nAs for how this is related to eic=1 - I think my point was that these\nare \"adversary\" data sets, most likely to show regressions. This applies\nespecially to the \"uniform\" data set, because as the row selectivity\ngrows, it's more and more likely it's right after to the current one,\nand so a read-ahead would likely do the trick.\n\nAlso, this is forcing a bitmap scan plan - it's possible many of these\ncases would use some other scan type, making the regression somewhat\nirrelevant. Not entirely, because we make planning mistakes and for\nrobustness reasons it's good to keep the regression small.\n\nBut that's just how I think about it now. I don't think I have some\ngrand theory that'd dictate we have to do prefetching for eic=1.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 2 Mar 2024 16:05:07 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Fri, Mar 1, 2024 at 2:31 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Thu, Feb 29, 2024 at 7:29 PM Melanie Plageman\n> <[email protected]> wrote:\n> >\n> > On Thu, Feb 29, 2024 at 5:44 PM Tomas Vondra\n> > <[email protected]> wrote:\n> > >\n> > >\n> > >\n> > > On 2/29/24 22:19, Melanie Plageman wrote:\n> > > > On Thu, Feb 29, 2024 at 7:54 AM Tomas Vondra\n> > > > <[email protected]> wrote:\n> > > >>\n> > > >>\n> > > >>\n> > > >> On 2/29/24 00:40, Melanie Plageman wrote:\n> > > >>> On Wed, Feb 28, 2024 at 6:17 PM Tomas Vondra\n> > > >>> <[email protected]> wrote:\n> > > >>>>\n> > > >>>>\n> > > >>>>\n> > > >>>> On 2/28/24 21:06, Melanie Plageman wrote:\n> > > >>>>> On Wed, Feb 28, 2024 at 2:23 PM Tomas Vondra\n> > > >>>>> <[email protected]> wrote:\n> > > >>>>>>\n> > > >>>>>> On 2/28/24 15:56, Tomas Vondra wrote:\n> > > >>>>>>>> ...\n> > > >>>>>>>\n> > > >>>>>>> Sure, I can do that. It'll take a couple hours to get the results, I'll\n> > > >>>>>>> share them when I have them.\n> > > >>>>>>>\n> > > >>>>>>\n> > > >>>>>> Here are the results with only patches 0001 - 0012 applied (i.e. without\n> > > >>>>>> the patch introducing the streaming read API, and the patch switching\n> > > >>>>>> the bitmap heap scan to use it).\n> > > >>>>>>\n> > > >>>>>> The changes in performance don't disappear entirely, but the scale is\n> > > >>>>>> certainly much smaller - both in the complete results for all runs, and\n> > > >>>>>> for the \"optimal\" runs that would actually pick bitmapscan.\n> > > >>>>>\n> > > >>>>> Hmm. I'm trying to think how my refactor could have had this impact.\n> > > >>>>> It seems like all the most notable regressions are with 4 parallel\n> > > >>>>> workers. What do the numeric column labels mean across the top\n> > > >>>>> (2,4,8,16...) -- are they related to \"matches\"? And if so, what does\n> > > >>>>> that mean?\n> > > >>>>>\n> > > >>>>\n> > > >>>> That's the number of distinct values matched by the query, which should\n> > > >>>> be an approximation of the number of matching rows. The number of\n> > > >>>> distinct values in the data set differs by data set, but for 1M rows\n> > > >>>> it's roughly like this:\n> > > >>>>\n> > > >>>> uniform: 10k\n> > > >>>> linear: 10k\n> > > >>>> cyclic: 100\n> > > >>>>\n> > > >>>> So for example matches=128 means ~1% of rows for uniform/linear, and\n> > > >>>> 100% for cyclic data sets.\n> > > >>>\n> > > >>> Ah, thank you for the explanation. I also looked at your script after\n> > > >>> having sent this email and saw that it is clear in your script what\n> > > >>> \"matches\" is.\n> > > >>>\n> > > >>>> As for the possible cause, I think it's clear most of the difference\n> > > >>>> comes from the last patch that actually switches bitmap heap scan to the\n> > > >>>> streaming read API. That's mostly expected/understandable, although we\n> > > >>>> probably need to look into the regressions or cases with e_i_c=0.\n> > > >>>\n> > > >>> Right, I'm mostly surprised about the regressions for patches 0001-0012.\n> > > >>>\n> > > >>> Re eic 0: Thomas Munro and I chatted off-list, and you bring up a\n> > > >>> great point about eic 0. In old bitmapheapscan code eic 0 basically\n> > > >>> disabled prefetching but with the streaming read API, it will still\n> > > >>> issue fadvises when eic is 0. That is an easy one line fix. 
Thomas\n> > > >>> prefers to fix it by always avoiding an fadvise for the last buffer in\n> > > >>> a range before issuing a read (since we are about to read it anyway,\n> > > >>> best not fadvise it too). This will fix eic 0 and also cut one system\n> > > >>> call from each invocation of the streaming read machinery.\n> > > >>>\n> > > >>>> To analyze the 0001-0012 patches, maybe it'd be helpful to run tests for\n> > > >>>> individual patches. I can try doing that tomorrow. It'll have to be a\n> > > >>>> limited set of tests, to reduce the time, but might tell us whether it's\n> > > >>>> due to a single patch or multiple patches.\n> > > >>>\n> > > >>> Yes, tomorrow I planned to start trying to repro some of the \"red\"\n> > > >>> cases myself. Any one of the commits could cause a slight regression\n> > > >>> but a 3.5x regression is quite surprising, so I might focus on trying\n> > > >>> to repro that locally and then narrow down which patch causes it.\n> > > >>>\n> > > >>> For the non-cached regressions, perhaps the commit to use the correct\n> > > >>> recheck flag (0004) when prefetching could be the culprit. And for the\n> > > >>> cached regressions, my money is on the commit which changes the whole\n> > > >>> control flow of BitmapHeapNext() and the next_block() and next_tuple()\n> > > >>> functions (0010).\n> > > >>>\n> > > >>\n> > > >> I do have some partial results, comparing the patches. I only ran one of\n> > > >> the more affected workloads (cyclic) on the xeon, attached is a PDF\n> > > >> comparing master and the 0001-0014 patches. The percentages are timing\n> > > >> vs. the preceding patch (green - faster, red - slower).\n> > > >\n> > > > Just confirming: the results are for uncached?\n> > > >\n> > >\n> > > Yes, cyclic data set, uncached case. I picked this because it seemed\n> > > like one of the most affected cases. Do you want me to test some other\n> > > cases too?\n> >\n> > So, I actually may have found the source of at least part of the\n> > regression with 0010. I was able to reproduce the regression with\n> > patch 0010 applied for the unached case with 4 workers and eic 8 and\n> > 100000000 rows for the cyclic dataset. I see it for all number of\n> > matches. The regression went away (for this specific example) when I\n> > moved the BitmapAdjustPrefetchIterator call back up to before the call\n> > to table_scan_bitmap_next_block() like this:\n> >\n> > diff --git a/src/backend/executor/nodeBitmapHeapscan.c\n> > b/src/backend/executor/nodeBitmapHeapscan.c\n> > index f7ecc060317..268996bdeea 100644\n> > --- a/src/backend/executor/nodeBitmapHeapscan.c\n> > +++ b/src/backend/executor/nodeBitmapHeapscan.c\n> > @@ -279,6 +279,8 @@ BitmapHeapNext(BitmapHeapScanState *node)\n> > }\n> >\n> > new_page:\n> > + BitmapAdjustPrefetchIterator(node, node->blockno);\n> > +\n> > if (!table_scan_bitmap_next_block(scan, &node->recheck,\n> > &lossy, &node->blockno))\n> > break;\n> >\n> > @@ -287,7 +289,6 @@ new_page:\n> > else\n> > node->exact_pages++;\n> >\n> > - BitmapAdjustPrefetchIterator(node, node->blockno);\n> > /* Adjust the prefetch target */\n> > BitmapAdjustPrefetchTarget(node);\n> > }\n> >\n> > It makes sense this would fix it. I haven't tried all the combinations\n> > you tried. Do you mind running your tests with the new code? I've\n> > pushed it into this branch.\n> > https://github.com/melanieplageman/postgres/commits/bhs_pgsr/\n>\n> Hold the phone on this one. 
I realized why I moved\n> BitmapAdjustPrefetchIterator after table_scan_bitmap_next_block() in\n> the first place -- master calls BitmapAdjustPrefetchIterator after the\n> tbm_iterate() for the current block -- otherwise with eic = 1, it\n> considers the prefetch iterator behind the current block iterator. I'm\n> going to go through and figure out what order this must be done in and\n> fix it.\n\nSo, I investigated this further, and, as far as I can tell, for\nparallel bitmapheapscan the timing around when workers decrement\nprefetch_pages causes the performance differences with patch 0010\napplied. It makes very little sense to me, but some of the queries I\nborrowed from your regression examples are up to 30% slower when this\ncode from BitmapAdjustPrefetchIterator() is after\ntable_scan_bitmap_next_block() instead of before it.\n\n SpinLockAcquire(&pstate->mutex);\n if (pstate->prefetch_pages > 0)\n pstate->prefetch_pages--;\n SpinLockRelease(&pstate->mutex);\n\nI did some stracing and did see much more time spent in futex/wait\nwith this code after the call to table_scan_bitmap_next_block() vs\nbefore it. (table_scan_bitmap_next_block()) calls ReadBuffer()).\n\nIn my branch, I've now moved only the parallel prefetch_pages-- code\nto before table_scan_bitmap_next_block().\nhttps://github.com/melanieplageman/postgres/tree/bhs_pgsr\nI'd be interested to know if you see the regressions go away with 0010\napplied (commit message \"Make table_scan_bitmap_next_block() async\nfriendly\" and sha bfdcbfee7be8e2c461).\n\n- Melanie\n\n\n",
"msg_date": "Sat, 2 Mar 2024 17:11:11 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Mar 2, 2024 at 10:05 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> Here's a PDF with charts for a dataset where the row selectivity is more\n> correlated to selectivity of pages. I'm attaching the updated script,\n> with the SQL generating the data set. But the short story is all rows on\n> a single page have the same random value, so the selectivity of rows and\n> pages should be the same.\n>\n> The first page has results for the original \"uniform\", the second page\n> is the new \"uniform-pages\" data set. There are 4 charts, for\n> master/patched and 0/4 parallel workers. Overall the behavior is the\n> same, but for the \"uniform-pages\" it's much more gradual (with respect\n> to row selectivity). I think that's expected.\n\nCool! Thanks for doing this. I have convinced myself that Thomas'\nforthcoming patch which will eliminate prefetching with eic = 0 will\nfix the eic 0 blue line regressions. The eic = 1 with four parallel\nworkers is more confusing. And it seems more noticeably bad with your\nrandomized-pages dataset.\n\nRegarding your earlier question:\n\n> Just to be sure we're on the same page regarding what eic=1 means,\n> consider a simple sequence of pages: A, B, C, D, E, ...\n>\n> With the current \"master\" code, eic=1 means we'll issue a prefetch for B\n> and then read+process A. And then issue prefetch for C and read+process\n> B, and so on. It's always one page ahead.\n\nYes, that is what I mean for eic = 1\n\n> As for how this is related to eic=1 - I think my point was that these\n> are \"adversary\" data sets, most likely to show regressions. This applies\n> especially to the \"uniform\" data set, because as the row selectivity\n> grows, it's more and more likely it's right after to the current one,\n> and so a read-ahead would likely do the trick.\n\nNo, I think you're right that eic=1 should prefetch. As you say, with\nhigh selectivity, a bitmap plan is likely not the best one anyway, so\nnot prefetching in order to preserve the performance of those cases\nseems silly.\n\n- Melanie\n\n\n",
"msg_date": "Sat, 2 Mar 2024 17:28:03 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 3/2/24 23:28, Melanie Plageman wrote:\n> On Sat, Mar 2, 2024 at 10:05 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> Here's a PDF with charts for a dataset where the row selectivity is more\n>> correlated to selectivity of pages. I'm attaching the updated script,\n>> with the SQL generating the data set. But the short story is all rows on\n>> a single page have the same random value, so the selectivity of rows and\n>> pages should be the same.\n>>\n>> The first page has results for the original \"uniform\", the second page\n>> is the new \"uniform-pages\" data set. There are 4 charts, for\n>> master/patched and 0/4 parallel workers. Overall the behavior is the\n>> same, but for the \"uniform-pages\" it's much more gradual (with respect\n>> to row selectivity). I think that's expected.\n> \n> Cool! Thanks for doing this. I have convinced myself that Thomas'\n> forthcoming patch which will eliminate prefetching with eic = 0 will\n> fix the eic 0 blue line regressions. The eic = 1 with four parallel\n> workers is more confusing. And it seems more noticeably bad with your\n> randomized-pages dataset.\n> \n> Regarding your earlier question:\n> \n>> Just to be sure we're on the same page regarding what eic=1 means,\n>> consider a simple sequence of pages: A, B, C, D, E, ...\n>>\n>> With the current \"master\" code, eic=1 means we'll issue a prefetch for B\n>> and then read+process A. And then issue prefetch for C and read+process\n>> B, and so on. It's always one page ahead.\n> \n> Yes, that is what I mean for eic = 1\n> \n>> As for how this is related to eic=1 - I think my point was that these\n>> are \"adversary\" data sets, most likely to show regressions. This applies\n>> especially to the \"uniform\" data set, because as the row selectivity\n>> grows, it's more and more likely it's right after to the current one,\n>> and so a read-ahead would likely do the trick.\n> \n> No, I think you're right that eic=1 should prefetch. As you say, with\n> high selectivity, a bitmap plan is likely not the best one anyway, so\n> not prefetching in order to preserve the performance of those cases\n> seems silly.\n> \n\nI was just trying to respond do this from an earlier message:\n\n> Yes, I would like to see results from a data set where selectivity is\n> more correlated to pages/heap fetches. But, I'm not sure I see how\n> that is related to prefetching when eic = 1.\n\nAnd in that same message you also said \"Not doing prefetching with eic 1\nactually seems like the right behavior\". Hence my argument we should not\nstop prefetching for eic=1.\n\nBut maybe I'm confused - it seems agree eic=1 should prefetch, and that\nuniform data set may not be a good argument against that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 2 Mar 2024 23:41:14 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/2/24 23:11, Melanie Plageman wrote:\n> On Fri, Mar 1, 2024 at 2:31 PM Melanie Plageman\n> <[email protected]> wrote:\n>>\n>> ...\n>>\n>> Hold the phone on this one. I realized why I moved\n>> BitmapAdjustPrefetchIterator after table_scan_bitmap_next_block() in\n>> the first place -- master calls BitmapAdjustPrefetchIterator after the\n>> tbm_iterate() for the current block -- otherwise with eic = 1, it\n>> considers the prefetch iterator behind the current block iterator. I'm\n>> going to go through and figure out what order this must be done in and\n>> fix it.\n> \n> So, I investigated this further, and, as far as I can tell, for\n> parallel bitmapheapscan the timing around when workers decrement\n> prefetch_pages causes the performance differences with patch 0010\n> applied. It makes very little sense to me, but some of the queries I\n> borrowed from your regression examples are up to 30% slower when this\n> code from BitmapAdjustPrefetchIterator() is after\n> table_scan_bitmap_next_block() instead of before it.\n> \n> SpinLockAcquire(&pstate->mutex);\n> if (pstate->prefetch_pages > 0)\n> pstate->prefetch_pages--;\n> SpinLockRelease(&pstate->mutex);\n> \n> I did some stracing and did see much more time spent in futex/wait\n> with this code after the call to table_scan_bitmap_next_block() vs\n> before it. (table_scan_bitmap_next_block()) calls ReadBuffer()).\n> \n> In my branch, I've now moved only the parallel prefetch_pages-- code\n> to before table_scan_bitmap_next_block().\n> https://github.com/melanieplageman/postgres/tree/bhs_pgsr\n> I'd be interested to know if you see the regressions go away with 0010\n> applied (commit message \"Make table_scan_bitmap_next_block() async\n> friendly\" and sha bfdcbfee7be8e2c461).\n> \n\nI'll give this a try once the runs with MAX_BUFFERS_PER_TRANSFER=1\ncomplete. But it seems really bizarre that simply moving this code a\nlittle bit would cause such a regression ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 2 Mar 2024 23:51:49 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Mar 2, 2024 at 5:41 PM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 3/2/24 23:28, Melanie Plageman wrote:\n> > On Sat, Mar 2, 2024 at 10:05 AM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >> Here's a PDF with charts for a dataset where the row selectivity is more\n> >> correlated to selectivity of pages. I'm attaching the updated script,\n> >> with the SQL generating the data set. But the short story is all rows on\n> >> a single page have the same random value, so the selectivity of rows and\n> >> pages should be the same.\n> >>\n> >> The first page has results for the original \"uniform\", the second page\n> >> is the new \"uniform-pages\" data set. There are 4 charts, for\n> >> master/patched and 0/4 parallel workers. Overall the behavior is the\n> >> same, but for the \"uniform-pages\" it's much more gradual (with respect\n> >> to row selectivity). I think that's expected.\n> >\n> > Cool! Thanks for doing this. I have convinced myself that Thomas'\n> > forthcoming patch which will eliminate prefetching with eic = 0 will\n> > fix the eic 0 blue line regressions. The eic = 1 with four parallel\n> > workers is more confusing. And it seems more noticeably bad with your\n> > randomized-pages dataset.\n> >\n> > Regarding your earlier question:\n> >\n> >> Just to be sure we're on the same page regarding what eic=1 means,\n> >> consider a simple sequence of pages: A, B, C, D, E, ...\n> >>\n> >> With the current \"master\" code, eic=1 means we'll issue a prefetch for B\n> >> and then read+process A. And then issue prefetch for C and read+process\n> >> B, and so on. It's always one page ahead.\n> >\n> > Yes, that is what I mean for eic = 1\n> >\n> >> As for how this is related to eic=1 - I think my point was that these\n> >> are \"adversary\" data sets, most likely to show regressions. This applies\n> >> especially to the \"uniform\" data set, because as the row selectivity\n> >> grows, it's more and more likely it's right after to the current one,\n> >> and so a read-ahead would likely do the trick.\n> >\n> > No, I think you're right that eic=1 should prefetch. As you say, with\n> > high selectivity, a bitmap plan is likely not the best one anyway, so\n> > not prefetching in order to preserve the performance of those cases\n> > seems silly.\n> >\n>\n> I was just trying to respond do this from an earlier message:\n>\n> > Yes, I would like to see results from a data set where selectivity is\n> > more correlated to pages/heap fetches. But, I'm not sure I see how\n> > that is related to prefetching when eic = 1.\n>\n> And in that same message you also said \"Not doing prefetching with eic 1\n> actually seems like the right behavior\". Hence my argument we should not\n> stop prefetching for eic=1.\n>\n> But maybe I'm confused - it seems agree eic=1 should prefetch, and that\n> uniform data set may not be a good argument against that.\n\nYep, we agree. I was being confusing and wrong :) I just wanted to\nmake sure the thread had a clear consensus that, yes, it is the right\nthing to do to prefetch blocks for bitmap heap scans when\neffective_io_concurrency = 1.\n\n- Melanie\n\n\n",
"msg_date": "Sat, 2 Mar 2024 17:52:49 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Mar 2, 2024 at 5:51 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 3/2/24 23:11, Melanie Plageman wrote:\n> > On Fri, Mar 1, 2024 at 2:31 PM Melanie Plageman\n> > <[email protected]> wrote:\n> >>\n> >> ...\n> >>\n> >> Hold the phone on this one. I realized why I moved\n> >> BitmapAdjustPrefetchIterator after table_scan_bitmap_next_block() in\n> >> the first place -- master calls BitmapAdjustPrefetchIterator after the\n> >> tbm_iterate() for the current block -- otherwise with eic = 1, it\n> >> considers the prefetch iterator behind the current block iterator. I'm\n> >> going to go through and figure out what order this must be done in and\n> >> fix it.\n> >\n> > So, I investigated this further, and, as far as I can tell, for\n> > parallel bitmapheapscan the timing around when workers decrement\n> > prefetch_pages causes the performance differences with patch 0010\n> > applied. It makes very little sense to me, but some of the queries I\n> > borrowed from your regression examples are up to 30% slower when this\n> > code from BitmapAdjustPrefetchIterator() is after\n> > table_scan_bitmap_next_block() instead of before it.\n> >\n> > SpinLockAcquire(&pstate->mutex);\n> > if (pstate->prefetch_pages > 0)\n> > pstate->prefetch_pages--;\n> > SpinLockRelease(&pstate->mutex);\n> >\n> > I did some stracing and did see much more time spent in futex/wait\n> > with this code after the call to table_scan_bitmap_next_block() vs\n> > before it. (table_scan_bitmap_next_block()) calls ReadBuffer()).\n> >\n> > In my branch, I've now moved only the parallel prefetch_pages-- code\n> > to before table_scan_bitmap_next_block().\n> > https://github.com/melanieplageman/postgres/tree/bhs_pgsr\n> > I'd be interested to know if you see the regressions go away with 0010\n> > applied (commit message \"Make table_scan_bitmap_next_block() async\n> > friendly\" and sha bfdcbfee7be8e2c461).\n> >\n>\n> I'll give this a try once the runs with MAX_BUFFERS_PER_TRANSFER=1\n> complete. But it seems really bizarre that simply moving this code a\n> little bit would cause such a regression ...\n\nYes, it is bizarre. It also might not be a reproducible performance\ndifference on the cases besides the one I was testing (cyclic dataset,\nuncached, eic=8, matches 16+, distinct=100, rows=100000000, 4 parallel\nworkers). But even if it only affects that one case, it still had a\nmajor, reproducible performance impact to move those 5 lines before\nand after table_scan_bitmap_next_block().\n\nThe same number of reads and fadvises are being issued overall.\nHowever, I did notice that the pread calls are skewed when the those\nlines of code are after table_scan_bitmap_next_block() -- fewer of\nthe workers are doing more of the reads. Perhaps this explains what is\ntaking longer. Why those workers would end up doing more of the reads,\nI don't quite know.\n\n- Melanie\n\n\n",
"msg_date": "Sat, 2 Mar 2024 18:39:31 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/1/24 17:51, Melanie Plageman wrote:\n> On Fri, Mar 1, 2024 at 9:05 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 3/1/24 02:18, Melanie Plageman wrote:\n>>> On Thu, Feb 29, 2024 at 6:44 PM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>>\n>>>> On 2/29/24 23:44, Tomas Vondra wrote:\n>>>> 1) On master there's clear difference between eic=0 and eic=1 cases, but\n>>>> on the patched build there's literally no difference - for example the\n>>>> \"uniform\" distribution is clearly not great for prefetching, but eic=0\n>>>> regresses to eic=1 poor behavior).\n>>>\n>>> Yes, so eic=0 and eic=1 are identical with the streaming read API.\n>>> That is, eic 0 does not disable prefetching. Thomas is going to update\n>>> the streaming read API to avoid issuing an fadvise for the last block\n>>> in a range before issuing a read -- which would mean no prefetching\n>>> with eic 0 and eic 1. Not doing prefetching with eic 1 actually seems\n>>> like the right behavior -- which would be different than what master\n>>> is doing, right?\n>>\n>> I don't think we should stop doing prefetching for eic=1, or at least\n>> not based just on these charts. I suspect these \"uniform\" charts are not\n>> a great example for the prefetching, because it's about distribution of\n>> individual rows, and even a small fraction of rows may match most of the\n>> pages. It's great for finding strange behaviors / corner cases, but\n>> probably not a sufficient reason to change the default.\n> \n> Yes, I would like to see results from a data set where selectivity is\n> more correlated to pages/heap fetches. But, I'm not sure I see how\n> that is related to prefetching when eic = 1.\n> \n>> I think it makes sense to issue a prefetch one page ahead, before\n>> reading/processing the preceding one, and it's fairly conservative\n>> setting, and I assume the default was chosen for a reason / after\n>> discussion.\n> \n> Yes, I suppose the overhead of an fadvise does not compare to the IO\n> latency of synchronously reading that block. Actually, I bet the\n> regression I saw by accidentally moving BitmapAdjustPrefetchIterator()\n> after table_scan_bitmap_next_block() would be similar to the\n> regression introduced by making eic = 1 not prefetch.\n> \n> When you think about IO concurrency = 1, it doesn't imply prefetching\n> to me. But, I think we want to do the right thing and have parity with\n> master.\n> \n>> My suggestion would be to keep the master behavior unless not practical,\n>> and then maybe discuss changing the details later. The patch is already\n>> complicated enough, better to leave that discussion for later.\n> \n> Agreed. Speaking of which, we need to add back use of tablespace IO\n> concurrency for the streaming read API (which is used by\n> BitmapHeapScan in master).\n> \n>>> With very low selectivity, you are less likely to get readahead\n>>> (right?) and similarly less likely to be able to build up > 8kB IOs --\n>>> which is one of the main value propositions of the streaming read\n>>> code. I imagine that this larger read benefit is part of why the\n>>> performance is better at higher selectivities with the patch. This\n>>> might be a silly experiment, but we could try decreasing\n>>> MAX_BUFFERS_PER_TRANSFER on the patched version and see if the\n>>> performance gains go away.\n>>\n>> Sure, I can do that. 
Do you have any particular suggestion what value to\n>> use for MAX_BUFFERS_PER_TRANSFER?\n> \n> I think setting it to 1 would be the same as always master -- doing\n> only 8kB reads. The only thing about that is that I imagine the other\n> streaming read code has some overhead which might end up being a\n> regression on balance even with the prefetching if we aren't actually\n> using the ranges/vectored capabilities of the streaming read\n> interface. Maybe if you just run it for one of the very obvious\n> performance improvement cases? I can also try this locally.\n> \n\nHere's some results from a build with\n\n #define MAX_BUFFERS_PER_TRANSFER 1\n\nThere are three columns:\n\n- master\n- patched (original patches, with MAX_BUFFERS_PER_TRANSFER=128kB)\n- patched-single (MAX_BUFFERS_PER_TRANSFER=8kB)\n\nThe color scales are always branch compared to master.\n\nI think the expectation was that setting the transfer to 1 would make it\ncloser to master, reducing some of the regressions. But in practice the\neffect is the opposite.\n\n- In \"cached\" runs, this eliminates the small improvements (light\ngreen), but leaves the regressions behind.\n\n- In \"uncached\" runs, this exacerbates the regressions, particularly for\nlow selectivities (small values of matches).\n\n\nI don't have a good intuition on why this would be happening :-(\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 3 Mar 2024 00:59:19 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Mar 2, 2024 at 6:59 PM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 3/1/24 17:51, Melanie Plageman wrote:\n> > On Fri, Mar 1, 2024 at 9:05 AM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >> On 3/1/24 02:18, Melanie Plageman wrote:\n> >>> On Thu, Feb 29, 2024 at 6:44 PM Tomas Vondra\n> >>> <[email protected]> wrote:\n> >>>>\n> >>>> On 2/29/24 23:44, Tomas Vondra wrote:\n> >>>> 1) On master there's clear difference between eic=0 and eic=1 cases, but\n> >>>> on the patched build there's literally no difference - for example the\n> >>>> \"uniform\" distribution is clearly not great for prefetching, but eic=0\n> >>>> regresses to eic=1 poor behavior).\n> >>>\n> >>> Yes, so eic=0 and eic=1 are identical with the streaming read API.\n> >>> That is, eic 0 does not disable prefetching. Thomas is going to update\n> >>> the streaming read API to avoid issuing an fadvise for the last block\n> >>> in a range before issuing a read -- which would mean no prefetching\n> >>> with eic 0 and eic 1. Not doing prefetching with eic 1 actually seems\n> >>> like the right behavior -- which would be different than what master\n> >>> is doing, right?\n> >>\n> >> I don't think we should stop doing prefetching for eic=1, or at least\n> >> not based just on these charts. I suspect these \"uniform\" charts are not\n> >> a great example for the prefetching, because it's about distribution of\n> >> individual rows, and even a small fraction of rows may match most of the\n> >> pages. It's great for finding strange behaviors / corner cases, but\n> >> probably not a sufficient reason to change the default.\n> >\n> > Yes, I would like to see results from a data set where selectivity is\n> > more correlated to pages/heap fetches. But, I'm not sure I see how\n> > that is related to prefetching when eic = 1.\n> >\n> >> I think it makes sense to issue a prefetch one page ahead, before\n> >> reading/processing the preceding one, and it's fairly conservative\n> >> setting, and I assume the default was chosen for a reason / after\n> >> discussion.\n> >\n> > Yes, I suppose the overhead of an fadvise does not compare to the IO\n> > latency of synchronously reading that block. Actually, I bet the\n> > regression I saw by accidentally moving BitmapAdjustPrefetchIterator()\n> > after table_scan_bitmap_next_block() would be similar to the\n> > regression introduced by making eic = 1 not prefetch.\n> >\n> > When you think about IO concurrency = 1, it doesn't imply prefetching\n> > to me. But, I think we want to do the right thing and have parity with\n> > master.\n> >\n> >> My suggestion would be to keep the master behavior unless not practical,\n> >> and then maybe discuss changing the details later. The patch is already\n> >> complicated enough, better to leave that discussion for later.\n> >\n> > Agreed. Speaking of which, we need to add back use of tablespace IO\n> > concurrency for the streaming read API (which is used by\n> > BitmapHeapScan in master).\n> >\n> >>> With very low selectivity, you are less likely to get readahead\n> >>> (right?) and similarly less likely to be able to build up > 8kB IOs --\n> >>> which is one of the main value propositions of the streaming read\n> >>> code. I imagine that this larger read benefit is part of why the\n> >>> performance is better at higher selectivities with the patch. 
This\n> >>> might be a silly experiment, but we could try decreasing\n> >>> MAX_BUFFERS_PER_TRANSFER on the patched version and see if the\n> >>> performance gains go away.\n> >>\n> >> Sure, I can do that. Do you have any particular suggestion what value to\n> >> use for MAX_BUFFERS_PER_TRANSFER?\n> >\n> > I think setting it to 1 would be the same as always master -- doing\n> > only 8kB reads. The only thing about that is that I imagine the other\n> > streaming read code has some overhead which might end up being a\n> > regression on balance even with the prefetching if we aren't actually\n> > using the ranges/vectored capabilities of the streaming read\n> > interface. Maybe if you just run it for one of the very obvious\n> > performance improvement cases? I can also try this locally.\n> >\n>\n> Here's some results from a build with\n>\n> #define MAX_BUFFERS_PER_TRANSFER 1\n>\n> There are three columns:\n>\n> - master\n> - patched (original patches, with MAX_BUFFERS_PER_TRANSFER=128kB)\n> - patched-single (MAX_BUFFERS_PER_TRANSFER=8kB)\n>\n> The color scales are always branch compared to master.\n>\n> I think the expectation was that setting the transfer to 1 would make it\n> closer to master, reducing some of the regressions. But in practice the\n> effect is the opposite.\n>\n> - In \"cached\" runs, this eliminates the small improvements (light\n> green), but leaves the regressions behind.\n\nFor cached runs, I actually would expect that MAX_BUFFERS_PER_TRANSFER\nwould eliminate the regressions. Pinning more buffers will only hurt\nus for cached workloads. This is evidence that we may need to control\nthe number of pinned buffers differently when there has been a run of\nfully cached blocks.\n\n> - In \"uncached\" runs, this exacerbates the regressions, particularly for\n> low selectivities (small values of matches).\n\nFor the uncached runs, I actually expected it to eliminate the\nperformance gains that we saw with the patches applied. With\nMAX_BUFFERS_PER_TRANSFER=1, we don't get the benefit of larger IOs and\nfewer system calls but we still have the overhead of the streaming\nread machinery. I was hoping to prove that the performance\nimprovements we saw with all the patches applied were due to\nMAX_BUFFERS_PER_TRANSFER being > 1 causing fewer, bigger reads.\n\nIt did eliminate some performance gains, however, primarily for\ncyclic-fuzz at lower selectivities. I am a little confused by this\npart because with lower selectivities there are likely fewer\nconsecutive blocks that can be combined into one IO.\n\nAnd, on average, we still see a lot of performance improvements that\nwere not eliminated by MAX_BUFFERS_PER_TRANSFER = 1. Hmm.\n\n- Melanie\n\n\n",
"msg_date": "Sat, 2 Mar 2024 19:15:28 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/3/24 00:39, Melanie Plageman wrote:\n> On Sat, Mar 2, 2024 at 5:51 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 3/2/24 23:11, Melanie Plageman wrote:\n>>> On Fri, Mar 1, 2024 at 2:31 PM Melanie Plageman\n>>> <[email protected]> wrote:\n>>>>\n>>>> ...\n>>>>\n>>>> Hold the phone on this one. I realized why I moved\n>>>> BitmapAdjustPrefetchIterator after table_scan_bitmap_next_block() in\n>>>> the first place -- master calls BitmapAdjustPrefetchIterator after the\n>>>> tbm_iterate() for the current block -- otherwise with eic = 1, it\n>>>> considers the prefetch iterator behind the current block iterator. I'm\n>>>> going to go through and figure out what order this must be done in and\n>>>> fix it.\n>>>\n>>> So, I investigated this further, and, as far as I can tell, for\n>>> parallel bitmapheapscan the timing around when workers decrement\n>>> prefetch_pages causes the performance differences with patch 0010\n>>> applied. It makes very little sense to me, but some of the queries I\n>>> borrowed from your regression examples are up to 30% slower when this\n>>> code from BitmapAdjustPrefetchIterator() is after\n>>> table_scan_bitmap_next_block() instead of before it.\n>>>\n>>> SpinLockAcquire(&pstate->mutex);\n>>> if (pstate->prefetch_pages > 0)\n>>> pstate->prefetch_pages--;\n>>> SpinLockRelease(&pstate->mutex);\n>>>\n>>> I did some stracing and did see much more time spent in futex/wait\n>>> with this code after the call to table_scan_bitmap_next_block() vs\n>>> before it. (table_scan_bitmap_next_block()) calls ReadBuffer()).\n>>>\n>>> In my branch, I've now moved only the parallel prefetch_pages-- code\n>>> to before table_scan_bitmap_next_block().\n>>> https://github.com/melanieplageman/postgres/tree/bhs_pgsr\n>>> I'd be interested to know if you see the regressions go away with 0010\n>>> applied (commit message \"Make table_scan_bitmap_next_block() async\n>>> friendly\" and sha bfdcbfee7be8e2c461).\n>>>\n>>\n>> I'll give this a try once the runs with MAX_BUFFERS_PER_TRANSFER=1\n>> complete. But it seems really bizarre that simply moving this code a\n>> little bit would cause such a regression ...\n> \n> Yes, it is bizarre. It also might not be a reproducible performance\n> difference on the cases besides the one I was testing (cyclic dataset,\n> uncached, eic=8, matches 16+, distinct=100, rows=100000000, 4 parallel\n> workers). But even if it only affects that one case, it still had a\n> major, reproducible performance impact to move those 5 lines before\n> and after table_scan_bitmap_next_block().\n> \n> The same number of reads and fadvises are being issued overall.\n> However, I did notice that the pread calls are skewed when the those\n> lines of code are after table_scan_bitmap_next_block() -- fewer of\n> the workers are doing more of the reads. Perhaps this explains what is\n> taking longer. Why those workers would end up doing more of the reads,\n> I don't quite know.\n> \n> - Melanie\n\n\nI do have some numbers with e44505ce179e442bd50664c85a31a1805e13514a,\nand I don't see any such effect - it performs pretty much exactly like\nthe v6 patches.\n\nI used a slightly different visualization, plotting the timings on a\nscatter plot, so values on diagonal mean \"same performance\" while values\nabove/below mean speedup/slowdown.\n\nThis is a bit more compact than the tables with color scales, and it\nmakes it harder (impossible) to see patterns (e.g. changes depending on\neic). 
But for evaluating if there's a shift overall it still works, and\nit also shows clusters. So more a complementary & simpler visualization.\n\nThere are three charts\n\n1) master-patched.png - master vs. v6 patches\n2) master-locks.png - master vs. e44505ce\n3) patched-locks.png - v6 patches vs. e44505ce\n\nThere's virtually no difference between (1) and (2) - same pattern of\nregressions and speedups, almost as a copy. That's confirmed by (3)\nwhere pretty much all values are exactly on the diagonal, with only a\ncouple outliers.\n\nI'm not sure why you see a 30% difference with the change. I wonder if\nthat might be due to some issue in the environment? Are you running in a\nVM, or something like that?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 3 Mar 2024 15:36:23 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "(Adding Dilip, the original author of the parallel bitmap heap scan \npatch all those years ago, in case you remember anything about the \nsnapshot stuff below.)\n\nOn 27/02/2024 16:22, Melanie Plageman wrote:\n> On Mon, Feb 26, 2024 at 08:50:28PM -0500, Melanie Plageman wrote:\n>> On Fri, Feb 16, 2024 at 12:35:59PM -0500, Melanie Plageman wrote:\n>>> In the attached v3, I've reordered the commits, updated some errant\n>>> comments, and improved the commit messages.\n>>>\n>>> I've also made some updates to the TIDBitmap API that seem like a\n>>> clarity improvement to the API in general. These also reduce the diff\n>>> for GIN when separating the TBMIterateResult from the\n>>> TBM[Shared]Iterator. And these TIDBitmap API changes are now all in\n>>> their own commits (previously those were in the same commit as adding\n>>> the BitmapHeapScan streaming read user).\n>>>\n>>> The three outstanding issues I see in the patch set are:\n>>> 1) the lossy and exact page counters issue described in my previous\n>>> email\n>>\n>> I've resolved this. I added a new patch to the set which starts counting\n>> even pages with no visible tuples toward lossy and exact pages. After an\n>> off-list conversation with Andres, it seems that this omission in master\n>> may not have been intentional.\n>>\n>> Once we have only two types of pages to differentiate between (lossy and\n>> exact [no longer have to care about \"has no visible tuples\"]), it is\n>> easy enough to pass a \"lossy\" boolean paramater to\n>> table_scan_bitmap_next_block(). I've done this in the attached v4.\n> \n> Thomas posted a new version of the Streaming Read API [1], so here is a\n> rebased v5. This should make it easier to review as it can be applied on\n> top of master.\n\nLots of discussion happening on the performance results but it seems \nthat there is no performance impact with the preliminary patches up to \nv5-0013-Streaming-Read-API.patch. I'm focusing purely on those \npreliminary patches now, because I think they're worthwhile cleanups \nindependent of the streaming read API.\n\nAndres already commented on the snapshot stuff on an earlier patch \nversion, and that's much nicer with this version. However, I don't \nunderstand why a parallel bitmap heap scan needs to do anything at all \nwith the snapshot, even before these patches. The parallel worker \ninfrastructure already passes the active snapshot from the leader to the \nparallel worker. Why does bitmap heap scan code need to do that too?\n\nI disabled that with:\n\n> --- a/src/backend/executor/nodeBitmapHeapscan.c\n> +++ b/src/backend/executor/nodeBitmapHeapscan.c\n> @@ -874,7 +874,9 @@ ExecBitmapHeapInitializeWorker(BitmapHeapScanState *node,\n> \tpstate = shm_toc_lookup(pwcxt->toc, node->ss.ps.plan->plan_node_id, false);\n> \tnode->pstate = pstate;\n> \n> +#if 0\n> \tnode->worker_snapshot = RestoreSnapshot(pstate->phs_snapshot_data);\n> \tAssert(IsMVCCSnapshot(node->worker_snapshot));\n> \tRegisterSnapshot(node->worker_snapshot);\n> +#endif\n> }\n\nand ran \"make check-world\". All the tests passed. To be even more sure, \nI added some code there to assert that the serialized version of \nnode->ss.ps.state->es_snapshot is equal to pstate->phs_snapshot_data, \nand all the tests passed with that too.\n\nI propose that we just remove the code in BitmapHeapScan to serialize \nthe snapshot, per attached patch.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 13 Mar 2024 15:34:15 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
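A hedged sketch of the cross-check described above (serializing the worker's es_snapshot and comparing it byte-for-byte with what the leader shipped in pstate->phs_snapshot_data); the memcmp() form and the exact placement inside ExecBitmapHeapInitializeWorker() are assumptions, not the actual test code:

    /* Sketch only: verify the DSM snapshot matches the one the parallel
     * infrastructure already restored for this worker. */
    {
        Snapshot    es_snap = node->ss.ps.state->es_snapshot;
        Size        len = EstimateSnapshotSpace(es_snap);
        char       *buf = palloc(len);

        SerializeSnapshot(es_snap, buf);
        Assert(memcmp(buf, pstate->phs_snapshot_data, len) == 0);
        pfree(buf);
    }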
{
"msg_contents": "On Wed, Mar 13, 2024 at 7:04 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> (Adding Dilip, the original author of the parallel bitmap heap scan\n> patch all those years ago, in case you remember anything about the\n> snapshot stuff below.)\n>\n> On 27/02/2024 16:22, Melanie Plageman wrote:\n\n> Andres already commented on the snapshot stuff on an earlier patch\n> version, and that's much nicer with this version. However, I don't\n> understand why a parallel bitmap heap scan needs to do anything at all\n> with the snapshot, even before these patches. The parallel worker\n> infrastructure already passes the active snapshot from the leader to the\n> parallel worker. Why does bitmap heap scan code need to do that too?\n\nYeah thinking on this now it seems you are right that the parallel\ninfrastructure is already passing the active snapshot so why do we\nneed it again. Then I checked other low scan nodes like indexscan and\nseqscan and it seems we are doing the same things there as well.\nCheck for SerializeSnapshot() in table_parallelscan_initialize() and\nindex_parallelscan_initialize() which are being called from\nExecSeqScanInitializeDSM() and ExecIndexScanInitializeDSM()\nrespectively.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 13 Mar 2024 21:09:04 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
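For context, the serialize/restore shape Dilip points at looks roughly like the following hedged sketch; the shared-memory placement is simplified (real callers go through shm_toc keys and the table/index parallel-scan descriptors):

    /* Leader side (sketch): reserve space and serialize the snapshot. */
    Size        snaplen = EstimateSnapshotSpace(snapshot);
    char       *snapdata = shm_toc_allocate(pcxt->toc, snaplen);

    SerializeSnapshot(snapshot, snapdata);

    /* Worker side (sketch): restore and register what the leader wrote. */
    Snapshot    snap = RestoreSnapshot(snapdata);

    RegisterSnapshot(snap);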
{
"msg_contents": "On Wed, Mar 13, 2024 at 11:39 AM Dilip Kumar <[email protected]> wrote:\n> > Andres already commented on the snapshot stuff on an earlier patch\n> > version, and that's much nicer with this version. However, I don't\n> > understand why a parallel bitmap heap scan needs to do anything at all\n> > with the snapshot, even before these patches. The parallel worker\n> > infrastructure already passes the active snapshot from the leader to the\n> > parallel worker. Why does bitmap heap scan code need to do that too?\n>\n> Yeah thinking on this now it seems you are right that the parallel\n> infrastructure is already passing the active snapshot so why do we\n> need it again. Then I checked other low scan nodes like indexscan and\n> seqscan and it seems we are doing the same things there as well.\n> Check for SerializeSnapshot() in table_parallelscan_initialize() and\n> index_parallelscan_initialize() which are being called from\n> ExecSeqScanInitializeDSM() and ExecIndexScanInitializeDSM()\n> respectively.\n\nI remember thinking about this when I was writing very early parallel\nquery code. It seemed to me that there must be some reason why the\nEState has a snapshot, as opposed to just using the active snapshot,\nand so I took care to propagate that snapshot, which is used for the\nleader's scans, to the worker scans also. Now, if the EState doesn't\nneed to contain a snapshot, then all of that mechanism is unnecessary,\nbut I don't see how it can be right for the leader to do\ntable_beginscan() using estate->es_snapshot and the worker to use the\nactive snapshot.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 13 Mar 2024 11:55:33 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/13/24 14:34, Heikki Linnakangas wrote:\n> ...\n> \n> Lots of discussion happening on the performance results but it seems\n> that there is no performance impact with the preliminary patches up to\n> v5-0013-Streaming-Read-API.patch. I'm focusing purely on those\n> preliminary patches now, because I think they're worthwhile cleanups\n> independent of the streaming read API.\n> \n\nNot quite true - the comparison I shared on 29/2 [1] shows a serious\nregression caused by the 0010 patch. We've been investigating this with\nMelanie off list, but we don't have any clear findings yet (except that\nit's clearly due to moving BitmapAdjustPrefetchIterator() a bit down.\n\nBut if we revert this (and move the BitmapAdjustPrefetchIterator back),\nthe regression should disappear, and we can merge these preparatory\npatches. We'll have to deal with the regression (or something very\nsimilar) when merging the remaining patches.\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/91090d58-7d3f-4447-9425-f24ba66e292a%40enterprisedb.com\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 13 Mar 2024 19:14:36 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Mar 3, 2024 at 11:41 AM Tomas Vondra\n<[email protected]> wrote:\n> On 3/2/24 23:28, Melanie Plageman wrote:\n> > On Sat, Mar 2, 2024 at 10:05 AM Tomas Vondra\n> > <[email protected]> wrote:\n> >> With the current \"master\" code, eic=1 means we'll issue a prefetch for B\n> >> and then read+process A. And then issue prefetch for C and read+process\n> >> B, and so on. It's always one page ahead.\n> >\n> > Yes, that is what I mean for eic = 1\n\nI spent quite a few days thinking about the meaning of eic=0 and eic=1\nfor streaming_read.c v7[1], to make it agree with the above and with\nmaster. Here's why I was confused:\n\nBoth eic=0 and eic=1 are expected to generate at most 1 physical I/O\nat a time, or I/O queue depth 1 if you want to put it that way. But\nthis isn't just about concurrency of I/O, it's also about computation.\nDuh.\n\neic=0 means that the I/O is not concurrent with executor computation.\nSo, to annotate an excerpt from [1]'s random.txt, we have:\n\neffective_io_concurrency = 0, range size = 1\nunpatched patched\n==============================================================================\npread(43,...,8192,0x58000) = 8192 pread(82,...,8192,0x58000) = 8192\n *** executor now has page at 0x58000 to work on ***\npread(43,...,8192,0xb0000) = 8192 pread(82,...,8192,0xb0000) = 8192\n *** executor now has page at 0xb0000 to work on ***\n\neic=1 means that a single I/O is started and then control is returned\nto the executor code to do useful work concurrently with the\nbackground read that we assume is happening:\n\neffective_io_concurrency = 1, range size = 1\nunpatched patched\n==============================================================================\npread(43,...,8192,0x58000) = 8192 pread(82,...,8192,0x58000) = 8192\nposix_fadvise(43,0xb0000,0x2000,...) posix_fadvise(82,0xb0000,0x2000,...)\n *** executor now has page at 0x58000 to work on ***\npread(43,...,8192,0xb0000) = 8192 pread(82,...,8192,0xb0000) = 8192\nposix_fadvise(43,0x108000,0x2000,...) posix_fadvise(82,0x108000,0x2000,...)\n *** executor now has page at 0xb0000 to work on ***\npread(43,...,8192,0x108000) = 8192 pread(82,...,8192,0x108000) = 8192\nposix_fadvise(43,0x160000,0x2000,...) posix_fadvise(82,0x160000,0x2000,...)\n\nIn other words, 'concurrency' doesn't mean 'number of I/Os running\nconcurrently with each other', it means 'number of I/Os running\nconcurrently with computation', and when you put it that way, 0 and 1\nare different.\n\nNote that the first read is a bit special: by the time the consumer is\nready to pull a buffer out of the stream when we don't have a buffer\nready yet, it is too late to issue useful advice, so we don't bother.\nFWIW I think even in the AIO future we would have a synchronous read\nin that specific place, at least when using io_method=worker, because\nit would be stupid to ask another process to read a block for us that\nwe want right now and then wait for it wake us up when it's done.\n\nNote that even when we aren't issuing any advice because eic=0 or\nbecause we detected sequential access and we believe the kernel can do\na better job than us, we still 'look ahead' (= call the callback to\nsee which block numbers are coming down the pipe), but only as far as\nwe need to coalesce neighbouring blocks. (I deliberately avoid using\nthe word \"prefetch\" except in very general discussions because it\nmeans different things to different layers of the code, hence talk of\n\"look ahead\" and \"advice\".) 
That's how we get this change:\n\neffective_io_concurrency = 0, range size = 4\nunpatched patched\n==============================================================================\npread(43,...,8192,0x58000) = 8192 pread(82,...,8192,0x58000) = 8192\npread(43,...,8192,0x5a000) = 8192 preadv(82,...,2,0x5a000) = 16384\npread(43,...,8192,0x5c000) = 8192 pread(82,...,8192,0x5e000) = 8192\npread(43,...,8192,0x5e000) = 8192 preadv(82,...,4,0xb0000) = 32768\npread(43,...,8192,0xb0000) = 8192 preadv(82,...,4,0x108000) = 32768\npread(43,...,8192,0xb2000) = 8192 preadv(82,...,4,0x160000) = 32768\n\nAnd then once we introduce eic > 0 to the picture with neighbouring\nblocks that can be coalesced, \"patched\" starts to diverge even more\nfrom \"unpatched\" because it tracks the number of wide I/Os in\nprogress, not the number of single blocks.\n\n[1] https://www.postgresql.org/message-id/CA+hUKGLJi+c5jB3j6UvkgMYHky-qu+LPCsiNahUGSa5Z4DvyVA@mail.gmail.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 11:38:38 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
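To make the eic=0 vs. eic=1 ordering above concrete outside PostgreSQL, here is a minimal standalone C sketch of the read-then-advise pattern the traces show; the descriptor, offsets, and 8kB block size are placeholders, and this only illustrates the ordering, not the streaming read code itself:

    #include <fcntl.h>
    #include <unistd.h>

    #define BLOCK_SIZE 8192

    /* With eic == 0 we just read and process each block in turn; with
     * eic == 1 we hint the next block right after reading the current
     * one, so the kernel can fetch block i+1 while we compute on block i. */
    static void
    scan_blocks(int fd, const off_t *offsets, int nblocks, int eic)
    {
        char        buf[BLOCK_SIZE];

        for (int i = 0; i < nblocks; i++)
        {
            if (pread(fd, buf, BLOCK_SIZE, offsets[i]) != BLOCK_SIZE)
                return;             /* short read or error: bail out */

            if (eic > 0 && i + 1 < nblocks)
                (void) posix_fadvise(fd, offsets[i + 1], BLOCK_SIZE,
                                     POSIX_FADV_WILLNEED);

            /* ... "executor" work on buf happens here, overlapping the hint ... */
        }
    }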
{
"msg_contents": "On Wed, Mar 13, 2024 at 9:25 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Mar 13, 2024 at 11:39 AM Dilip Kumar <[email protected]> wrote:\n> > > Andres already commented on the snapshot stuff on an earlier patch\n> > > version, and that's much nicer with this version. However, I don't\n> > > understand why a parallel bitmap heap scan needs to do anything at all\n> > > with the snapshot, even before these patches. The parallel worker\n> > > infrastructure already passes the active snapshot from the leader to the\n> > > parallel worker. Why does bitmap heap scan code need to do that too?\n> >\n> > Yeah thinking on this now it seems you are right that the parallel\n> > infrastructure is already passing the active snapshot so why do we\n> > need it again. Then I checked other low scan nodes like indexscan and\n> > seqscan and it seems we are doing the same things there as well.\n> > Check for SerializeSnapshot() in table_parallelscan_initialize() and\n> > index_parallelscan_initialize() which are being called from\n> > ExecSeqScanInitializeDSM() and ExecIndexScanInitializeDSM()\n> > respectively.\n>\n> I remember thinking about this when I was writing very early parallel\n> query code. It seemed to me that there must be some reason why the\n> EState has a snapshot, as opposed to just using the active snapshot,\n> and so I took care to propagate that snapshot, which is used for the\n> leader's scans, to the worker scans also. Now, if the EState doesn't\n> need to contain a snapshot, then all of that mechanism is unnecessary,\n> but I don't see how it can be right for the leader to do\n> table_beginscan() using estate->es_snapshot and the worker to use the\n> active snapshot.\n\nYeah, that's a very valid point. So I think now Heikki/Melanie might\nhave got an answer to their question, about the thought process behind\nserializing the snapshot for each scan node. And the same thing is\nfollowed for BitmapHeapNode as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 10:24:14 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 14/03/2024 06:54, Dilip Kumar wrote:\n> On Wed, Mar 13, 2024 at 9:25 PM Robert Haas <[email protected]> wrote:\n>>\n>> On Wed, Mar 13, 2024 at 11:39 AM Dilip Kumar <[email protected]> wrote:\n>>>> Andres already commented on the snapshot stuff on an earlier patch\n>>>> version, and that's much nicer with this version. However, I don't\n>>>> understand why a parallel bitmap heap scan needs to do anything at all\n>>>> with the snapshot, even before these patches. The parallel worker\n>>>> infrastructure already passes the active snapshot from the leader to the\n>>>> parallel worker. Why does bitmap heap scan code need to do that too?\n>>>\n>>> Yeah thinking on this now it seems you are right that the parallel\n>>> infrastructure is already passing the active snapshot so why do we\n>>> need it again. Then I checked other low scan nodes like indexscan and\n>>> seqscan and it seems we are doing the same things there as well.\n>>> Check for SerializeSnapshot() in table_parallelscan_initialize() and\n>>> index_parallelscan_initialize() which are being called from\n>>> ExecSeqScanInitializeDSM() and ExecIndexScanInitializeDSM()\n>>> respectively.\n>>\n>> I remember thinking about this when I was writing very early parallel\n>> query code. It seemed to me that there must be some reason why the\n>> EState has a snapshot, as opposed to just using the active snapshot,\n>> and so I took care to propagate that snapshot, which is used for the\n>> leader's scans, to the worker scans also. Now, if the EState doesn't\n>> need to contain a snapshot, then all of that mechanism is unnecessary,\n>> but I don't see how it can be right for the leader to do\n>> table_beginscan() using estate->es_snapshot and the worker to use the\n>> active snapshot.\n> \n> Yeah, that's a very valid point. So I think now Heikki/Melanie might\n> have got an answer to their question, about the thought process behind\n> serializing the snapshot for each scan node. And the same thing is\n> followed for BitmapHeapNode as well.\n\nI see. Thanks, understanding the thought process helps.\n\nSo when a parallel table or index scan runs in the executor as part of a \nquery, we could just use the active snapshot. But there are some other \ncallers of parallel table scans that don't use the executor, namely \nparallel index builds. For those it makes sense to pass the snapshot for \nthe scan independent of the active snapshot.\n\nA parallel bitmap heap scan isn't really a parallel scan as far as the \ntable AM is concerned, though. It's more like an independent bitmap heap \nscan in each worker process, nodeBitmapHeapscan.c does all the \ncoordination of which blocks to scan. So I think that \ntable_parallelscan_initialize() was the wrong role model, and we should \nstill remove the snapshot serialization code from nodeBitmapHeapscan.c.\n\n\nDigging deeper into the question of whether es_snapshot == \nGetActiveSnapshot() is a valid assumption:\n\n<deep dive>\n\nes_snapshot is copied from the QueryDesc in standard_ExecutorStart(). \nLooking at the callers of ExecutorStart(), they all get the QueryDesc by \ncalling CreateQueryDesc() with GetActiveSnapshot(). And I don't see any \ncallers changing the active snapshot between the ExecutorStart() and \nExecutorRun() calls either. In pquery.c, we explicitly \nPushActiveSnapshot(queryDesc->snapshot) before calling ExecutorRun(). 
So \nno live bug here AFAICS, es_snapshot == GetActiveSnapshot() holds.\n\n_SPI_execute_plan() has code to deal with the possibility that the \nactive snapshot is not set. That seems fishy; do we really support SPI \nwithout any snapshot? I'm inclined to turn that into an error. I ran the \nregression tests with an \"Assert(ActiveSnapshotSet())\" there, and \neverything worked.\n\nIf es_snapshot was different from the active snapshot, things would get \nweird, even without parallel query. The scans would use es_snapshot for \nthe visibility checks, but any functions you execute in quals would use \nthe active snapshot.\n\nWe could double down on that assumption, and remove es_snapshot \naltogether and use GetActiveSnapshot() instead. And perhaps add \n\"PushActiveSnapshot(queryDesc->snapshot)\" to ExecutorRun().\n\n</deep dive>\n\nIn summary, this es_snapshot stuff is a bit confusing and could use some \ncleanup. But for now, I'd like to just add some assertions and \ncomments about this, and remove the snapshot serialization from the bitmap \nheap scan node, to make it consistent with other non-parallel scan nodes \n(it's not really a parallel scan as far as the table AM is concerned). \nSee attached patch, which is the same as the previous patch with some extra \nassertions.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 14 Mar 2024 12:37:02 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
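The "some assertions" mentioned above could, as a hedged sketch, look like the following; the placement (e.g. early in ExecutorRun()) mirrors the wording of the deep dive and is an assumption, not the committed patch:

    /* Sketch: pin down the invariant instead of relying on it silently. */
    Assert(ActiveSnapshotSet());
    Assert(queryDesc->estate->es_snapshot == GetActiveSnapshot());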
{
"msg_contents": "On Thu, Mar 14, 2024 at 4:07 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> > Yeah, that's a very valid point. So I think now Heikki/Melanie might\n> > have got an answer to their question, about the thought process behind\n> > serializing the snapshot for each scan node. And the same thing is\n> > followed for BitmapHeapNode as well.\n>\n> I see. Thanks, understanding the thought process helps.\n>\n> So when a parallel table or index scan runs in the executor as part of a\n> query, we could just use the active snapshot. But there are some other\n> callers of parallel table scans that don't use the executor, namely\n> parallel index builds. For those it makes sense to pass the snapshot for\n> the scan independent of the active snapshot.\n\nRight\n\n> A parallel bitmap heap scan isn't really a parallel scan as far as the\n> table AM is concerned, though. It's more like an independent bitmap heap\n> scan in each worker process, nodeBitmapHeapscan.c does all the\n> coordination of which blocks to scan. So I think that\n> table_parallelscan_initialize() was the wrong role model, and we should\n> still remove the snapshot serialization code from nodeBitmapHeapscan.c.\n\nI think that seems right.\n\n> Digging deeper into the question of whether es_snapshot ==\n> GetActiveSnapshot() is a valid assumption:\n>\n> <deep dive>\n>\n> es_snapshot is copied from the QueryDesc in standard_ExecutorStart().\n> Looking at the callers of ExecutorStart(), they all get the QueryDesc by\n> calling CreateQueryDesc() with GetActiveSnapshot(). And I don't see any\n> callers changing the active snapshot between the ExecutorStart() and\n> ExecutorRun() calls either. In pquery.c, we explicitly\n> PushActiveSnapshot(queryDesc->snapshot) before calling ExecutorRun(). So\n> no live bug here AFAICS, es_snapshot == GetActiveSnapshot() holds.\n>\n> _SPI_execute_plan() has code to deal with the possibility that the\n> active snapshot is not set. That seems fishy; do we really support SPI\n> without any snapshot? I'm inclined to turn that into an error. I ran the\n> regression tests with an \"Assert(ActiveSnapshotSet())\" there, and\n> everything worked.\n\nIMHO, we can call SPI_Connect() and SPI_Execute() from any C\nextension, so I don't think there we can guarantee that the snapshot\nmust be set, do we?\n\n> If es_snapshot was different from the active snapshot, things would get\n> weird, even without parallel query. The scans would use es_snapshot for\n> the visibility checks, but any functions you execute in quals would use\n> the active snapshot.\n>\n> We could double down on that assumption, and remove es_snapshot\n> altogether and use GetActiveSnapshot() instead. And perhaps add\n> \"PushActiveSnapshot(queryDesc->snapshot)\" to ExecutorRun().\n>\n> </deep dive>\n>\n> In summary, this es_snapshot stuff is a bit confusing and could use some\n> cleanup. But for now, I'd like to just add some assertions and a\n> comments about this, and remove the snapshot serialization from bitmap\n> heap scan node, to make it consistent with other non-parallel scan nodes\n> (it's not really a parallel scan as far as the table AM is concerned).\n> See attached patch, which is the same as previous patch with some extra\n> assertions.\n\nMaybe for now we can just handle this specific case to remove the\nsnapshot serializing for the BitmapHeapScan as you are doing in the\npatch. 
After looking into the code, your theory seems correct: we\nare just copying the ActiveSnapshot while building the query\ndescriptor, and from there we are copying it into the EState, so logically\nthere should not be any reason for these two to be different.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 16:25:34 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 6:37 AM Heikki Linnakangas <[email protected]> wrote:\n> If es_snapshot was different from the active snapshot, things would get\n> weird, even without parallel query. The scans would use es_snapshot for\n> the visibility checks, but any functions you execute in quals would use\n> the active snapshot.\n\nHmm, that's an interesting point.\n\nThe case where the query is suspended and resumed - i.e. cursors are\nused - probably needs more analysis. In that case, perhaps there's\nmore room for the snapshots to diverge.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 08:34:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 14/03/2024 14:34, Robert Haas wrote:\n> On Thu, Mar 14, 2024 at 6:37 AM Heikki Linnakangas <[email protected]> wrote:\n>> If es_snapshot was different from the active snapshot, things would get\n>> weird, even without parallel query. The scans would use es_snapshot for\n>> the visibility checks, but any functions you execute in quals would use\n>> the active snapshot.\n> \n> Hmm, that's an interesting point.\n> \n> The case where the query is suspended and resumed - i.e. cursors are\n> used - probably needs more analysis. In that case, perhaps there's\n> more room for the snapshots to diverge.\n\nThe portal code is pretty explicit about it, the ExecutorRun() call in \nPortalRunSelect() looks like this:\n\n PushActiveSnapshot(queryDesc->snapshot);\n ExecutorRun(queryDesc, direction, (uint64) count,\n portal->run_once);\n nprocessed = queryDesc->estate->es_processed;\n PopActiveSnapshot();\n\nI looked at all the callers of ExecutorRun(), and they all have the \nactive snapshot equal to queryDesc->snapshot, either because they called \nCreateQueryDesc() with the active snapshot before ExecutorRun(), or they \nset the active snapshot like above.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 14 Mar 2024 15:00:49 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 9:00 AM Heikki Linnakangas <[email protected]> wrote:\n> The portal code is pretty explicit about it, the ExecutorRun() call in\n> PortalRunSelect() looks like this:\n>\n> PushActiveSnapshot(queryDesc->snapshot);\n> ExecutorRun(queryDesc, direction, (uint64) count,\n> portal->run_once);\n> nprocessed = queryDesc->estate->es_processed;\n> PopActiveSnapshot();\n>\n> I looked at all the callers of ExecutorRun(), and they all have the\n> active snapshot equal to queryDesc->snapshot, either because they called\n> CreateQueryDesc() with the active snapshot before ExecutorRun(), or they\n> set the active snapshot like above.\n\nWell, maybe there's a bunch of code cleanup possible, then.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 09:20:56 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 14/03/2024 12:55, Dilip Kumar wrote:\n> On Thu, Mar 14, 2024 at 4:07 PM Heikki Linnakangas <[email protected]> wrote:\n>> _SPI_execute_plan() has code to deal with the possibility that the\n>> active snapshot is not set. That seems fishy; do we really support SPI\n>> without any snapshot? I'm inclined to turn that into an error. I ran the\n>> regression tests with an \"Assert(ActiveSnapshotSet())\" there, and\n>> everything worked.\n> \n> IMHO, we can call SPI_Connect() and SPI_Execute() from any C\n> extension, so I don't think there we can guarantee that the snapshot\n> must be set, do we?\n\nI suppose, although the things you could do without a snapshot would be \npretty limited. The query couldn't access any tables. Could it even look \nup functions in the parser? Not sure.\n\n> Maybe for now we can just handle this specific case to remove the\n> snapshot serializing for the BitmapHeapScan as you are doing in the\n> patch. After looking into the code your theory seems correct that we\n> are just copying the ActiveSnapshot while building the query\n> descriptor and from there we are copying into the Estate so logically\n> there should not be any reason for these two to be different.\n\nOk, committed that for now. Thanks for looking!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 14 Mar 2024 15:32:04 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 3/13/24 23:38, Thomas Munro wrote:\n> On Sun, Mar 3, 2024 at 11:41 AM Tomas Vondra\n> <[email protected]> wrote:\n>> On 3/2/24 23:28, Melanie Plageman wrote:\n>>> On Sat, Mar 2, 2024 at 10:05 AM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>> With the current \"master\" code, eic=1 means we'll issue a prefetch for B\n>>>> and then read+process A. And then issue prefetch for C and read+process\n>>>> B, and so on. It's always one page ahead.\n>>>\n>>> Yes, that is what I mean for eic = 1\n> \n> I spent quite a few days thinking about the meaning of eic=0 and eic=1\n> for streaming_read.c v7[1], to make it agree with the above and with\n> master. Here's why I was confused:\n> \n> Both eic=0 and eic=1 are expected to generate at most 1 physical I/O\n> at a time, or I/O queue depth 1 if you want to put it that way. But\n> this isn't just about concurrency of I/O, it's also about computation.\n> Duh.\n> \n> eic=0 means that the I/O is not concurrent with executor computation.\n> So, to annotate an excerpt from [1]'s random.txt, we have:\n> \n> effective_io_concurrency = 0, range size = 1\n> unpatched patched\n> ==============================================================================\n> pread(43,...,8192,0x58000) = 8192 pread(82,...,8192,0x58000) = 8192\n> *** executor now has page at 0x58000 to work on ***\n> pread(43,...,8192,0xb0000) = 8192 pread(82,...,8192,0xb0000) = 8192\n> *** executor now has page at 0xb0000 to work on ***\n> \n> eic=1 means that a single I/O is started and then control is returned\n> to the executor code to do useful work concurrently with the\n> background read that we assume is happening:\n> \n> effective_io_concurrency = 1, range size = 1\n> unpatched patched\n> ==============================================================================\n> pread(43,...,8192,0x58000) = 8192 pread(82,...,8192,0x58000) = 8192\n> posix_fadvise(43,0xb0000,0x2000,...) posix_fadvise(82,0xb0000,0x2000,...)\n> *** executor now has page at 0x58000 to work on ***\n> pread(43,...,8192,0xb0000) = 8192 pread(82,...,8192,0xb0000) = 8192\n> posix_fadvise(43,0x108000,0x2000,...) posix_fadvise(82,0x108000,0x2000,...)\n> *** executor now has page at 0xb0000 to work on ***\n> pread(43,...,8192,0x108000) = 8192 pread(82,...,8192,0x108000) = 8192\n> posix_fadvise(43,0x160000,0x2000,...) posix_fadvise(82,0x160000,0x2000,...)\n> \n> In other words, 'concurrency' doesn't mean 'number of I/Os running\n> concurrently with each other', it means 'number of I/Os running\n> concurrently with computation', and when you put it that way, 0 and 1\n> are different.\n> \n\nInteresting. For some reason I thought with eic=1 we'd issue the fadvise\nfor page #2 before pread of page #1, so that there'd be 2 IO requests in\nflight at the same time for a bit of time ... 
it'd give the fadvise more\ntime to actually get the data into page cache.\n\n> Note that the first read is a bit special: by the time the consumer is\n> ready to pull a buffer out of the stream when we don't have a buffer\n> ready yet, it is too late to issue useful advice, so we don't bother.\n> FWIW I think even in the AIO future we would have a synchronous read\n> in that specific place, at least when using io_method=worker, because\n> it would be stupid to ask another process to read a block for us that\n> we want right now and then wait for it wake us up when it's done.\n> \n> Note that even when we aren't issuing any advice because eic=0 or\n> because we detected sequential access and we believe the kernel can do\n> a better job than us, we still 'look ahead' (= call the callback to\n> see which block numbers are coming down the pipe), but only as far as\n> we need to coalesce neighbouring blocks. (I deliberately avoid using\n> the word \"prefetch\" except in very general discussions because it\n> means different things to different layers of the code, hence talk of\n> \"look ahead\" and \"advice\".) That's how we get this change:\n> \n> effective_io_concurrency = 0, range size = 4\n> unpatched patched\n> ==============================================================================\n> pread(43,...,8192,0x58000) = 8192 pread(82,...,8192,0x58000) = 8192\n> pread(43,...,8192,0x5a000) = 8192 preadv(82,...,2,0x5a000) = 16384\n> pread(43,...,8192,0x5c000) = 8192 pread(82,...,8192,0x5e000) = 8192\n> pread(43,...,8192,0x5e000) = 8192 preadv(82,...,4,0xb0000) = 32768\n> pread(43,...,8192,0xb0000) = 8192 preadv(82,...,4,0x108000) = 32768\n> pread(43,...,8192,0xb2000) = 8192 preadv(82,...,4,0x160000) = 32768\n> \n> And then once we introduce eic > 0 to the picture with neighbouring\n> blocks that can be coalesced, \"patched\" starts to diverge even more\n> from \"unpatched\" because it tracks the number of wide I/Os in\n> progress, not the number of single blocks.\n> \n\nSo, IIUC this means (1) the patched code is more aggressive wrt\nprefetching (because we prefetch more data overall, because master would\nprefetch N pages and patched prefetches N ranges, each of which may be\nmultiple pages. And (2) it's not easy to quantify how much more\naggressive it is, because it depends on how we happen to coalesce the\npages into ranges.\n\nDo I understand this correctly?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 14 Mar 2024 15:17:59 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 03:32:04PM +0200, Heikki Linnakangas wrote:\n> On 14/03/2024 12:55, Dilip Kumar wrote:\n> > On Thu, Mar 14, 2024 at 4:07 PM Heikki Linnakangas <[email protected]> wrote:\n> > > _SPI_execute_plan() has code to deal with the possibility that the\n> > > active snapshot is not set. That seems fishy; do we really support SPI\n> > > without any snapshot? I'm inclined to turn that into an error. I ran the\n> > > regression tests with an \"Assert(ActiveSnapshotSet())\" there, and\n> > > everything worked.\n> > \n> > IMHO, we can call SPI_Connect() and SPI_Execute() from any C\n> > extension, so I don't think there we can guarantee that the snapshot\n> > must be set, do we?\n> \n> I suppose, although the things you could do without a snapshot would be\n> pretty limited. The query couldn't access any tables. Could it even look up\n> functions in the parser? Not sure.\n> \n> > Maybe for now we can just handle this specific case to remove the\n> > snapshot serializing for the BitmapHeapScan as you are doing in the\n> > patch. After looking into the code your theory seems correct that we\n> > are just copying the ActiveSnapshot while building the query\n> > descriptor and from there we are copying into the Estate so logically\n> > there should not be any reason for these two to be different.\n> \n> Ok, committed that for now. Thanks for looking!\n\nAttached v6 is rebased over your new commit. It also has the \"fix\" in\n0010 which moves BitmapAdjustPrefetchIterator() back above\ntable_scan_bitmap_next_block(). I've also updated the Streaming Read API\ncommit (0013) to Thomas' v7 version from [1]. This has the update that\nwe theorize should address some of the regressions in the bitmapheapscan\nstreaming read user in 0014.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGLJi%2Bc5jB3j6UvkgMYHky-qu%2BLPCsiNahUGSa5Z4DvyVA%40mail.gmail.com",
"msg_date": "Thu, 14 Mar 2024 14:16:25 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Fri, Mar 15, 2024 at 3:18 AM Tomas Vondra\n<[email protected]> wrote:\n> So, IIUC this means (1) the patched code is more aggressive wrt\n> prefetching (because we prefetch more data overall, because master would\n> prefetch N pages and patched prefetches N ranges, each of which may be\n> multiple pages. And (2) it's not easy to quantify how much more\n> aggressive it is, because it depends on how we happen to coalesce the\n> pages into ranges.\n>\n> Do I understand this correctly?\n\nYes.\n\nParallelism must prevent coalescing here though. Any parallel aware\nexecutor node that allocates block numbers to workers without trying\nto preserve ranges will. That not only hides the opportunity to\ncoalesce reads, it also makes (globally) sequential scans look random\n(ie locally they are more random), so that our logic to avoid issuing\nadvice for sequential scan won't work, and we'll inject extra useless\nor harmful (?) fadvise calls. I don't know what to do about that yet,\nbut it seems like a subject for future research. Should we recognise\nsequential scans with a window (like Linux does), instead of strictly\nnext-block detection (like some other OSes do)? Maybe a shared\nstreaming read that all workers pull blocks from, so it can see what's\ngoing on? I think the latter would be strictly more like what the ad\nhoc BHS prefetching code in master is doing, but I don't know if it'd\nbe over-engineering, or hard to do for some reason.\n\nAnother aspect of per-backend streaming reads in one parallel query\nthat don't know about each other is that they will all have their own\neffective_io_concurrency limit. That is a version of a problem that\ncomes up again and again in parallel query, to be solved by the grand\nunified resource control system of the future.\n\n\n",
"msg_date": "Fri, 15 Mar 2024 09:58:09 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
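A hedged sketch of the two heuristics contrasted above, strict next-block detection versus a small window (BlockNumber is PostgreSQL's uint32 block number type; the window parameter and function names are invented for illustration):

    /* Strict: only a perfectly consecutive block counts as sequential. */
    static bool
    looks_sequential_strict(BlockNumber last, BlockNumber next)
    {
        return next == last + 1;
    }

    /*
     * Windowed: treat "close enough" forward jumps as sequential, so a
     * globally sequential scan chopped up between parallel workers can
     * still be recognised as such.
     */
    static bool
    looks_sequential_window(BlockNumber last, BlockNumber next, uint32 window)
    {
        return next > last && next - last <= window;
    }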
{
"msg_contents": "On 3/14/24 19:16, Melanie Plageman wrote:\n> On Thu, Mar 14, 2024 at 03:32:04PM +0200, Heikki Linnakangas wrote:\n>> ...\n>>\n>> Ok, committed that for now. Thanks for looking!\n> \n> Attached v6 is rebased over your new commit. It also has the \"fix\" in\n> 0010 which moves BitmapAdjustPrefetchIterator() back above\n> table_scan_bitmap_next_block(). I've also updated the Streaming Read API\n> commit (0013) to Thomas' v7 version from [1]. This has the update that\n> we theorize should address some of the regressions in the bitmapheapscan\n> streaming read user in 0014.\n> \n\nShould I rerun the benchmarks with these new patches, to see if it\nreally helps with the regressions?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 14 Mar 2024 22:26:28 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 5:26 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 3/14/24 19:16, Melanie Plageman wrote:\n> > On Thu, Mar 14, 2024 at 03:32:04PM +0200, Heikki Linnakangas wrote:\n> >> ...\n> >>\n> >> Ok, committed that for now. Thanks for looking!\n> >\n> > Attached v6 is rebased over your new commit. It also has the \"fix\" in\n> > 0010 which moves BitmapAdjustPrefetchIterator() back above\n> > table_scan_bitmap_next_block(). I've also updated the Streaming Read API\n> > commit (0013) to Thomas' v7 version from [1]. This has the update that\n> > we theorize should address some of the regressions in the bitmapheapscan\n> > streaming read user in 0014.\n> >\n>\n> Should I rerun the benchmarks with these new patches, to see if it\n> really helps with the regressions?\n\nThat would be awesome!\n\nI will soon send out a summary of what we investigated off-list about\n0010 (though we didn't end up concluding anything). My \"fix\" (leaving\nBitmapAdjustPrefetchIterator() above table_scan_bitmap_next_block())\neliminates the regression in 0010 on the one example that I repro'd\nupthread, but it would be good to know if it eliminates the\nregressions across some other tests.\n\nI think it would be worthwhile to run the subset of tests which seemed\nto fare the worst on 0010 against the patches 0001-0010-- cyclic\nuncached on your xeon machine with 4 parallel workers, IIRC -- even\nthe 1 million scale would do the trick, I think.\n\nAnd then separately run the subset of tests which seemed to do the\nworst on 0014. There were several groups of issues across the\ndifferent tests, but I think that the uniform pages data test would be\nrelevant to use. It showed the regressions with eic 0.\n\nAs for the other regressions showing with 0014, I think we would want\nto see at least one with fully-in-shared-buffers and one with fully\nuncached. Some of the fixes were around pinning fewer buffers when the\nblocks were already in shared buffers.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 14 Mar 2024 17:39:30 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2024-03-14 17:39:30 -0400, Melanie Plageman wrote:\n> I will soon send out a summary of what we investigated off-list about\n> 0010 (though we didn't end up concluding anything). My \"fix\" (leaving\n> BitmapAdjustPrefetchIterator() above table_scan_bitmap_next_block())\n> eliminates the regression in 0010 on the one example that I repro'd\n> upthread, but it would be good to know if it eliminates the\n> regressions across some other tests.\n\nI spent a good amount of time looking into this with Melanie. After a bunch of\nwrong paths I think I found the issue: We end up prefetching blocks we have\nalready read. Notably this happens even as-is on master - just not as\nfrequently as after moving BitmapAdjustPrefetchIterator().\n\n From what I can tell the prefetching in parallel bitmap heap scans is\nthoroughly broken. I added some tracking of the last block read, the last\nblock prefetched to ParallelBitmapHeapState and found that with a small\neffective_io_concurrency we end up with ~18% of prefetches being of blocks we\nalready read! After moving the BitmapAdjustPrefetchIterator() to rises to 86%,\nno wonder it's slower...\n\nThe race here seems fairly substantial - we're moving the two iterators\nindependently from each other, in multiple processes, without useful locking.\n\nI'm inclined to think this is a bug we ought to fix in the backbranches.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Mar 2024 14:14:49 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
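For anyone trying to reproduce the measurement, a hedged sketch of the kind of instrumentation described above (the debug fields are invented; the actual debugging patch was not posted):

    /*
     * Hypothetical debug-only additions to ParallelBitmapHeapState:
     *     BlockNumber      debug_last_read;        highest block read so far
     *     pg_atomic_uint64 debug_late_prefetches;  prefetches of already-read blocks
     */

    /* In BitmapHeapNext(), after a block has been read: */
    if (tbmres->blockno > pstate->debug_last_read)
        pstate->debug_last_read = tbmres->blockno;  /* deliberately unlocked, approximate */

    /* In BitmapPrefetch(), just before issuing advice for tbmpre->blockno: */
    if (tbmpre->blockno <= pstate->debug_last_read)
        pg_atomic_fetch_add_u64(&pstate->debug_late_prefetches, 1);
    elog(DEBUG5, "pid %d prefetch block %u (last block read %u)",
         MyProcPid, tbmpre->blockno, pstate->debug_last_read);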
{
"msg_contents": "On Fri, Mar 15, 2024 at 5:14 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-03-14 17:39:30 -0400, Melanie Plageman wrote:\n> > I will soon send out a summary of what we investigated off-list about\n> > 0010 (though we didn't end up concluding anything). My \"fix\" (leaving\n> > BitmapAdjustPrefetchIterator() above table_scan_bitmap_next_block())\n> > eliminates the regression in 0010 on the one example that I repro'd\n> > upthread, but it would be good to know if it eliminates the\n> > regressions across some other tests.\n>\n> I spent a good amount of time looking into this with Melanie. After a bunch of\n> wrong paths I think I found the issue: We end up prefetching blocks we have\n> already read. Notably this happens even as-is on master - just not as\n> frequently as after moving BitmapAdjustPrefetchIterator().\n>\n> From what I can tell the prefetching in parallel bitmap heap scans is\n> thoroughly broken. I added some tracking of the last block read, the last\n> block prefetched to ParallelBitmapHeapState and found that with a small\n> effective_io_concurrency we end up with ~18% of prefetches being of blocks we\n> already read! After moving the BitmapAdjustPrefetchIterator() to rises to 86%,\n> no wonder it's slower...\n>\n> The race here seems fairly substantial - we're moving the two iterators\n> independently from each other, in multiple processes, without useful locking.\n>\n> I'm inclined to think this is a bug we ought to fix in the backbranches.\n\nThinking about how to fix this, perhaps we could keep the current max\nblock number in the ParallelBitmapHeapState and then when prefetching,\nworkers could loop calling tbm_shared_iterate() until they've found a\nblock at least prefetch_pages ahead of the current block. They\nwouldn't need to read the current max value from the parallel state on\neach iteration. Even checking it once and storing that value in a\nlocal variable prevented prefetching blocks after reading them in my\nexample repro of the issue.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 15 Mar 2024 18:42:29 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
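A hedged sketch of that idea inside the parallel branch of BitmapPrefetch(), with a hypothetical pstate->max_read_block field holding the highest block any worker has read (the existing mutex handling around prefetch_pages is omitted to keep the sketch short):

    /* Read the shared maximum once, as described above. */
    BlockNumber max_read_block = pstate->max_read_block;

    while (pstate->prefetch_pages < pstate->prefetch_target)
    {
        TBMIterateResult *tbmpre = tbm_shared_iterate(prefetch_iterator);

        if (tbmpre == NULL)
            break;                          /* bitmap exhausted */
        if (tbmpre->blockno <= max_read_block)
            continue;                       /* already read; advice would be wasted */

        pstate->prefetch_pages++;
        PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
    }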
{
"msg_contents": "Hi,\n\nOn 2024-03-15 18:42:29 -0400, Melanie Plageman wrote:\n> On Fri, Mar 15, 2024 at 5:14 PM Andres Freund <[email protected]> wrote:\n> > On 2024-03-14 17:39:30 -0400, Melanie Plageman wrote:\n> > I spent a good amount of time looking into this with Melanie. After a bunch of\n> > wrong paths I think I found the issue: We end up prefetching blocks we have\n> > already read. Notably this happens even as-is on master - just not as\n> > frequently as after moving BitmapAdjustPrefetchIterator().\n> >\n> > From what I can tell the prefetching in parallel bitmap heap scans is\n> > thoroughly broken. I added some tracking of the last block read, the last\n> > block prefetched to ParallelBitmapHeapState and found that with a small\n> > effective_io_concurrency we end up with ~18% of prefetches being of blocks we\n> > already read! After moving the BitmapAdjustPrefetchIterator() to rises to 86%,\n> > no wonder it's slower...\n> >\n> > The race here seems fairly substantial - we're moving the two iterators\n> > independently from each other, in multiple processes, without useful locking.\n> >\n> > I'm inclined to think this is a bug we ought to fix in the backbranches.\n> \n> Thinking about how to fix this, perhaps we could keep the current max\n> block number in the ParallelBitmapHeapState and then when prefetching,\n> workers could loop calling tbm_shared_iterate() until they've found a\n> block at least prefetch_pages ahead of the current block. They\n> wouldn't need to read the current max value from the parallel state on\n> each iteration. Even checking it once and storing that value in a\n> local variable prevented prefetching blocks after reading them in my\n> example repro of the issue.\n\nThat would address some of the worst behaviour, but it doesn't really seem to\naddress the underlying problem of the two iterators being modified\nindependently. ISTM the proper fix would be to protect the state of the\niterators with a single lock, rather than pushing down the locking into the\nbitmap code. OTOH, we'll only need one lock going forward, so being economic\nin the effort of fixing this is also important.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 16 Mar 2024 12:12:26 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 3/16/24 20:12, Andres Freund wrote:\n> Hi,\n> \n> On 2024-03-15 18:42:29 -0400, Melanie Plageman wrote:\n>> On Fri, Mar 15, 2024 at 5:14 PM Andres Freund <[email protected]> wrote:\n>>> On 2024-03-14 17:39:30 -0400, Melanie Plageman wrote:\n>>> I spent a good amount of time looking into this with Melanie. After a bunch of\n>>> wrong paths I think I found the issue: We end up prefetching blocks we have\n>>> already read. Notably this happens even as-is on master - just not as\n>>> frequently as after moving BitmapAdjustPrefetchIterator().\n>>>\n>>> From what I can tell the prefetching in parallel bitmap heap scans is\n>>> thoroughly broken. I added some tracking of the last block read, the last\n>>> block prefetched to ParallelBitmapHeapState and found that with a small\n>>> effective_io_concurrency we end up with ~18% of prefetches being of blocks we\n>>> already read! After moving the BitmapAdjustPrefetchIterator() to rises to 86%,\n>>> no wonder it's slower...\n>>>\n>>> The race here seems fairly substantial - we're moving the two iterators\n>>> independently from each other, in multiple processes, without useful locking.\n>>>\n>>> I'm inclined to think this is a bug we ought to fix in the backbranches.\n>>\n>> Thinking about how to fix this, perhaps we could keep the current max\n>> block number in the ParallelBitmapHeapState and then when prefetching,\n>> workers could loop calling tbm_shared_iterate() until they've found a\n>> block at least prefetch_pages ahead of the current block. They\n>> wouldn't need to read the current max value from the parallel state on\n>> each iteration. Even checking it once and storing that value in a\n>> local variable prevented prefetching blocks after reading them in my\n>> example repro of the issue.\n> \n> That would address some of the worst behaviour, but it doesn't really seem to\n> address the underlying problem of the two iterators being modified\n> independently. ISTM the proper fix would be to protect the state of the\n> iterators with a single lock, rather than pushing down the locking into the\n> bitmap code. OTOH, we'll only need one lock going forward, so being economic\n> in the effort of fixing this is also important.\n> \n\nCan you share some details about how you identified the problem, counted\nthe prefetches that happen too late, etc? I'd like to try to reproduce\nthis to understand the issue better.\n\nIf I understand correctly, what may happen is that a worker reads blocks\nfrom the \"prefetch\" iterator, but before it manages to issue the\nposix_fadvise, some other worker already did pread. Or can the iterators\nget \"out of sync\" in a more fundamental way?\n\nIf my understanding is correct, why would a single lock solve that? Yes,\nwe'd advance the iterators at the same time, but surely we'd not issue\nthe fadvise calls while holding the lock, and the prefetch/fadvise for a\nparticular block could still happen in different workers.\n\nI suppose a dirty PoC fix should not be too difficult, and it'd allow us\nto check if it works.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 16 Mar 2024 21:25:18 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2024-03-16 21:25:18 +0100, Tomas Vondra wrote:\n> On 3/16/24 20:12, Andres Freund wrote:\n> > That would address some of the worst behaviour, but it doesn't really seem to\n> > address the underlying problem of the two iterators being modified\n> > independently. ISTM the proper fix would be to protect the state of the\n> > iterators with a single lock, rather than pushing down the locking into the\n> > bitmap code. OTOH, we'll only need one lock going forward, so being economic\n> > in the effort of fixing this is also important.\n> >\n>\n> Can you share some details about how you identified the problem, counted\n> the prefetches that happen too late, etc? I'd like to try to reproduce\n> this to understand the issue better.\n\nThere's two aspects. Originally I couldn't reliably reproduce the regression\nwith Melanie's repro on my laptop. I finally was able to do so after I\na) changed the block device's read_ahead_kb to 0\nb) used effective_io_concurrency=1\n\nThat made the difference between the BitmapAdjustPrefetchIterator() locations\nvery significant, something like 2.3s vs 12s.\n\nBesides a lot of other things, I finally added debugging fprintfs printing the\npid, (prefetch, read), block number. Even looking at tiny excerpts of the\nlarge amount of output that generates shows that two iterators were out of\nsync.\n\n\n> If I understand correctly, what may happen is that a worker reads blocks\n> from the \"prefetch\" iterator, but before it manages to issue the\n> posix_fadvise, some other worker already did pread. Or can the iterators\n> get \"out of sync\" in a more fundamental way?\n\nI agree that the current scheme of two shared iterators being used has some\nfairly fundamental raciness. But I suspect there's more than that going on\nright now.\n\nMoving BitmapAdjustPrefetchIterator() to later drastically increases the\nraciness because it means table_scan_bitmap_next_block() happens between\nincreasing the \"real\" and the \"prefetch\" iterators.\n\nAn example scenario that, I think, leads to the iterators being out of sync,\nwithout there being races between iterator advancement and completing\nprefetching:\n\nstart:\n real -> block 0\n prefetch -> block 0\n prefetch_pages = 0\n prefetch_target = 1\n\nW1: tbm_shared_iterate(real) -> block 0\nW2: tbm_shared_iterate(real) -> block 1\nW1: BitmapAdjustPrefetchIterator() -> tbm_shared_iterate(prefetch) -> 0\nW2: BitmapAdjustPrefetchIterator() -> tbm_shared_iterate(prefetch) -> 1\nW1: read block 0\nW2: read block 1\nW1: BitmapPrefetch() -> prefetch_pages++ -> 1, tbm_shared_iterate(prefetch) -> 2, prefetch block 2\nW2: BitmapPrefetch() -> nothing, as prefetch_pages == prefetch_target\n\nW1: tbm_shared_iterate(real) -> block 2\nW2: tbm_shared_iterate(real) -> block 3\n\nW2: BitmapAdjustPrefetchIterator() -> prefetch_pages--\nW2: read block 3\nW2: BitmapPrefetch() -> prefetch_pages++, tbm_shared_iterate(prefetch) -> 3, prefetch block 3\n\nSo afaict here we end up prefetching a block that the *same process* just had\nread.\n\nISTM that the idea of somehow \"catching up\" in BitmapAdjustPrefetchIterator(),\nseparately from advancing the \"real\" iterator, is pretty ugly for non-parallel\nBHS and just straight up broken in the parallel case.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 17 Mar 2024 09:38:09 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/14/24 22:39, Melanie Plageman wrote:\n> On Thu, Mar 14, 2024 at 5:26 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 3/14/24 19:16, Melanie Plageman wrote:\n>>> On Thu, Mar 14, 2024 at 03:32:04PM +0200, Heikki Linnakangas wrote:\n>>>> ...\n>>>>\n>>>> Ok, committed that for now. Thanks for looking!\n>>>\n>>> Attached v6 is rebased over your new commit. It also has the \"fix\" in\n>>> 0010 which moves BitmapAdjustPrefetchIterator() back above\n>>> table_scan_bitmap_next_block(). I've also updated the Streaming Read API\n>>> commit (0013) to Thomas' v7 version from [1]. This has the update that\n>>> we theorize should address some of the regressions in the bitmapheapscan\n>>> streaming read user in 0014.\n>>>\n>>\n>> Should I rerun the benchmarks with these new patches, to see if it\n>> really helps with the regressions?\n> \n> That would be awesome!\n> \n\nOK, here's a couple charts comparing the effect of v6 patches to master.\nThese are from 1M and 10M data sets, same as the runs presented earlier\nin this thread (the 10M is still running, but should be good enough for\nthis kind of visual comparison).\n\nI have results for individual patches, but 0001-0013 behave virtually\nthe same, so the charts show only 0012 and 0014 (vs master).\n\nInstead of a table with color scale (used before), I used simple scatter\nplots as a more compact / concise visualization. It's impossible to\nidentify patterns (e.g. serial vs. parallel runs), but for the purpose\nof this comparison that does not matter.\n\nAnd then I'll use a chart plotting \"relative\" time compared to master (I\nfind it easier to judge the relative difference than with scatter plot).\n\n1) absolute-all - all runs (scatter plot)\n\n2) absolute-optimal - runs where the planner would pick bitmapscan\n\n3) relative-all - all runs (duration relative to master)\n\n4) relative-optimal - relative, runs where bitmapscan would be picked\n\n\nThe 0012 results are a pretty clear sign the \"refactoring patches\"\nbehave exactly the same as master. There are a couple outliers (in\neither direction), but I'd attribute those to random noise and too few\nruns to smooth it out for a particular combination (especially for 10M).\n\nWhat is even more obvious is that 0014 behaves *VERY* differently. I'm\nnot sure if this is a good thing or a problem is debatable/unclear. I'm\nsure we don't want to cause regressions, but perhaps those are due to\nthe prefetch issue discussed elsewhere in this thread (identified by\nAndres and Melanie). There are also many cases that got much faster, but\nthe question is whether this is due to better efficiency or maybe the\nnew code being more aggressive in some way (not sure).\n\nIt's however interesting the differences are way more significant (both\nin terms of frequency and scale) on the older machine with SATA SSDs.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 17 Mar 2024 20:21:12 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 3/17/24 17:38, Andres Freund wrote:\n> Hi,\n> \n> On 2024-03-16 21:25:18 +0100, Tomas Vondra wrote:\n>> On 3/16/24 20:12, Andres Freund wrote:\n>>> That would address some of the worst behaviour, but it doesn't really seem to\n>>> address the underlying problem of the two iterators being modified\n>>> independently. ISTM the proper fix would be to protect the state of the\n>>> iterators with a single lock, rather than pushing down the locking into the\n>>> bitmap code. OTOH, we'll only need one lock going forward, so being economic\n>>> in the effort of fixing this is also important.\n>>>\n>>\n>> Can you share some details about how you identified the problem, counted\n>> the prefetches that happen too late, etc? I'd like to try to reproduce\n>> this to understand the issue better.\n> \n> There's two aspects. Originally I couldn't reliably reproduce the regression\n> with Melanie's repro on my laptop. I finally was able to do so after I\n> a) changed the block device's read_ahead_kb to 0\n> b) used effective_io_concurrency=1\n> \n> That made the difference between the BitmapAdjustPrefetchIterator() locations\n> very significant, something like 2.3s vs 12s.\n> \n\nInteresting. I haven't thought about read_ahead_kb, but in hindsight it\nmakes sense it affects these cases. OTOH I did not set it to 0 on either\nmachine (the 6xSATA RAID0 has it at 12288, for example) and yet that's\nhow we found the regressions.\n\nFor eic it makes perfect sense that setting it to 1 is particularly\nvulnerable to this issue - it only takes a small \"desynchronization\" of\nthe two iterators for the prefetch to \"fall behind\" and frequently\nprefetch blocks we already read.\n\n> Besides a lot of other things, I finally added debugging fprintfs printing the\n> pid, (prefetch, read), block number. Even looking at tiny excerpts of the\n> large amount of output that generates shows that two iterators were out of\n> sync.\n> \n\nThanks. I did experiment with fprintf, but it's quite cumbersome, so I\nwas hoping you came up with some smart way to trace this king of stuff.\nFor example I was wondering if ebpf would be a more convenient way.\n\n> \n>> If I understand correctly, what may happen is that a worker reads blocks\n>> from the \"prefetch\" iterator, but before it manages to issue the\n>> posix_fadvise, some other worker already did pread. Or can the iterators\n>> get \"out of sync\" in a more fundamental way?\n> \n> I agree that the current scheme of two shared iterators being used has some\n> fairly fundamental raciness. 
But I suspect there's more than that going on\n> right now.\n> \n> Moving BitmapAdjustPrefetchIterator() to later drastically increases the\n> raciness because it means table_scan_bitmap_next_block() happens between\n> increasing the \"real\" and the \"prefetch\" iterators.\n> \n> An example scenario that, I think, leads to the iterators being out of sync,\n> without there being races between iterator advancement and completing\n> prefetching:\n> \n> start:\n> real -> block 0\n> prefetch -> block 0\n> prefetch_pages = 0\n> prefetch_target = 1\n> \n> W1: tbm_shared_iterate(real) -> block 0\n> W2: tbm_shared_iterate(real) -> block 1\n> W1: BitmapAdjustPrefetchIterator() -> tbm_shared_iterate(prefetch) -> 0\n> W2: BitmapAdjustPrefetchIterator() -> tbm_shared_iterate(prefetch) -> 1\n> W1: read block 0\n> W2: read block 1\n> W1: BitmapPrefetch() -> prefetch_pages++ -> 1, tbm_shared_iterate(prefetch) -> 2, prefetch block 2\n> W2: BitmapPrefetch() -> nothing, as prefetch_pages == prefetch_target\n> \n> W1: tbm_shared_iterate(real) -> block 2\n> W2: tbm_shared_iterate(real) -> block 3\n> \n> W2: BitmapAdjustPrefetchIterator() -> prefetch_pages--\n> W2: read block 3\n> W2: BitmapPrefetch() -> prefetch_pages++, tbm_shared_iterate(prefetch) -> 3, prefetch block 3\n> \n> So afaict here we end up prefetching a block that the *same process* just had\n> read.\n> \n\nUh, that's very weird. I'd understood if there's some cross-process\nissue, but if this happens in a single process ... strange.\n\n> ISTM that the idea of somehow \"catching up\" in BitmapAdjustPrefetchIterator(),\n> separately from advancing the \"real\" iterator, is pretty ugly for non-parallel\n> BHS and just straight up broken in the parallel case.\n> \n\nYeah, I agree with the feeling it's an ugly fix. Definitely seems more\nlike fixing symptoms than the actual problem.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 17 Mar 2024 20:36:29 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/17/24 20:36, Tomas Vondra wrote:\n> \n> ...\n> \n>> Besides a lot of other things, I finally added debugging fprintfs printing the\n>> pid, (prefetch, read), block number. Even looking at tiny excerpts of the\n>> large amount of output that generates shows that two iterators were out of\n>> sync.\n>>\n> \n> Thanks. I did experiment with fprintf, but it's quite cumbersome, so I\n> was hoping you came up with some smart way to trace this king of stuff.\n> For example I was wondering if ebpf would be a more convenient way.\n> \n\nFWIW I just realized why I failed to identify this \"late prefetch\" issue\nduring my investigation. I was experimenting with instrumenting this by\nadding a LD_PRELOAD library, logging all pread/fadvise calls. But the\nFilePrefetch call is skipped in the page is already in shared buffers,\nso this case \"disappeared\" during processing which matched the two calls\nby doing an \"inner join\".\n\nThat being said, I think tracing this using LD_PRELOAD or perf may be\nmore convenient way to see what's happening. For example I ended up\ndoing this:\n\n perf record -a -e syscalls:sys_enter_fadvise64 \\\n -e syscalls:sys_exit_fadvise64 \\\n -e syscalls:sys_enter_pread64 \\\n -e syscalls:sys_exit_pread64\n\n perf script -ns\n\nAlternatively, perf-trace can be used and prints the filename too (but\ntime has ms resolution only). Processing this seems comparable to the\nfprintf approach.\n\nIt still has the issue that some of the fadvise calls may be absent if\nthe prefetch iterator gets too far behind, but I think that can be\ndetected / measured by simply counting the fadvise calls, and comparing\nthem to pread calls. We expect these to be about the same, so\n\n (#pread - #fadvise) / #fadvise\n\nis a measure of how many were \"late\" and skipped.\n\nIt also seems better than fprintf because it traces the actual syscalls,\nnot just calls to glibc wrappers. For example I saw this\n\npostgres 54769 [001] 33768.771524828:\n syscalls:sys_enter_pread64: ..., pos: 0x30d04000\n\npostgres 54769 [001] 33768.771526867:\n syscalls:sys_exit_pread64: 0x2000\n\npostgres 54820 [000] 33768.771527473:\n syscalls:sys_enter_fadvise64: ..., offset: 0x30d04000, ...\n\npostgres 54820 [000] 33768.771528320:\n syscalls:sys_exit_fadvise64: 0x0\n\nwhich is clearly a case where we issue fadvise after pread of the same\nblock already completed.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 18 Mar 2024 12:34:23 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
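A trivial way to turn the perf script output above into the (#pread - #fadvise) / #fadvise figure; this is only a sketch, and it assumes the event names appear verbatim on each line (as in the excerpt) and that nothing else on the system is generating those events:

    #include <stdio.h>
    #include <string.h>

    /* Count "enter" events from `perf script -ns` piped to stdin. */
    int
    main(void)
    {
        char    line[4096];
        long    preads = 0;
        long    fadvises = 0;

        while (fgets(line, sizeof(line), stdin))
        {
            if (strstr(line, "sys_enter_pread64"))
                preads++;
            else if (strstr(line, "sys_enter_fadvise64"))
                fadvises++;
        }

        printf("pread64: %ld  fadvise64: %ld\n", preads, fadvises);
        if (fadvises > 0)
            printf("late/skipped: %.1f%%\n",
                   100.0 * (double) (preads - fadvises) / fadvises);
        return 0;
    }

Run as e.g. "perf script -ns | ./count-late" after the perf record invocation shown above. It over-counts somewhat, since preads of blocks that were never candidates for prefetching (other relations, catalog access) are included, but as a relative measure across runs it should be good enough.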
{
"msg_contents": "On 14/02/2024 21:42, Andres Freund wrote:\n> On 2024-02-13 18:11:25 -0500, Melanie Plageman wrote:\n>> patch 0004 is, I think, a bug fix. see [2].\n> \n> I'd not quite call it a bugfix, it's not like it leads to wrong\n> behaviour. Seems more like an optimization. But whatever :)\n\nIt sure looks like bug to me, albeit a very minor one. Certainly not an \noptimization, it doesn't affect performance in any way, only what \nEXPLAIN reports. So committed and backported that to all supported branches.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 18 Mar 2024 14:10:28 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Mar 17, 2024 at 3:21 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 3/14/24 22:39, Melanie Plageman wrote:\n> > On Thu, Mar 14, 2024 at 5:26 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >> On 3/14/24 19:16, Melanie Plageman wrote:\n> >>> On Thu, Mar 14, 2024 at 03:32:04PM +0200, Heikki Linnakangas wrote:\n> >>>> ...\n> >>>>\n> >>>> Ok, committed that for now. Thanks for looking!\n> >>>\n> >>> Attached v6 is rebased over your new commit. It also has the \"fix\" in\n> >>> 0010 which moves BitmapAdjustPrefetchIterator() back above\n> >>> table_scan_bitmap_next_block(). I've also updated the Streaming Read API\n> >>> commit (0013) to Thomas' v7 version from [1]. This has the update that\n> >>> we theorize should address some of the regressions in the bitmapheapscan\n> >>> streaming read user in 0014.\n> >>>\n> >>\n> >> Should I rerun the benchmarks with these new patches, to see if it\n> >> really helps with the regressions?\n> >\n> > That would be awesome!\n> >\n>\n> OK, here's a couple charts comparing the effect of v6 patches to master.\n> These are from 1M and 10M data sets, same as the runs presented earlier\n> in this thread (the 10M is still running, but should be good enough for\n> this kind of visual comparison).\n\nThanks for doing this!\n\n> What is even more obvious is that 0014 behaves *VERY* differently. I'm\n> not sure if this is a good thing or a problem is debatable/unclear. I'm\n> sure we don't want to cause regressions, but perhaps those are due to\n> the prefetch issue discussed elsewhere in this thread (identified by\n> Andres and Melanie). There are also many cases that got much faster, but\n> the question is whether this is due to better efficiency or maybe the\n> new code being more aggressive in some way (not sure).\n\nAre these with the default effective_io_concurrency (1)? If so, the\n\"effective\" prefetch distance in many cases will be higher with the\nstreaming read code applied. With effective_io_concurrency 1,\n\"max_ios\" will always be 1, but the number of blocks prefetched may\nexceed this (up to MAX_BUFFERS_PER_TRANSFER) because the streaming\nread code is always trying to build bigger IOs. And, if prefetching,\nit will prefetch IOs not yet in shared buffers before reading them.\n\nIt's hard to tell without going into a specific repro why this would\ncause some queries to be much slower. In the forced bitmapheapscan, it\nwould make sense that more prefetching is worse -- which is why a\nbitmapheapscan plan wouldn't have been chosen. But in the optimal\ncases, it is unclear why it would be worse.\n\nI don't think there is any way it could be the issue Andres\nidentified, because there is only one iterator. Nothing to get out of\nsync. It could be that the fadvises are being issued too close to the\nreads and aren't effective enough at covering up read latency on\nslower, older hardware. But that doesn't explain why master would\nsometimes be faster.\n\nProbably the only thing we can do is get into a repro. It would, of\ncourse, be easiest to do this with a serial query. I can dig into the\nscripts you shared earlier and try to find a good repro. Because the\nregressions may have shifted with Thomas' new version, it would help\nif you shared a category (cyclic/uniform/etc, parallel or serial, eic\nvalue, work mem, etc) where you now see the most regressions.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 18 Mar 2024 10:47:01 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 02:10:28PM +0200, Heikki Linnakangas wrote:\n> On 14/02/2024 21:42, Andres Freund wrote:\n> > On 2024-02-13 18:11:25 -0500, Melanie Plageman wrote:\n> > > patch 0004 is, I think, a bug fix. see [2].\n> > \n> > I'd not quite call it a bugfix, it's not like it leads to wrong\n> > behaviour. Seems more like an optimization. But whatever :)\n> \n> It sure looks like bug to me, albeit a very minor one. Certainly not an\n> optimization, it doesn't affect performance in any way, only what EXPLAIN\n> reports. So committed and backported that to all supported branches.\n\nI've attached v7 rebased over this commit.\n\n- Melanie",
"msg_date": "Mon, 18 Mar 2024 11:19:47 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 3/18/24 15:47, Melanie Plageman wrote:\n> On Sun, Mar 17, 2024 at 3:21 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 3/14/24 22:39, Melanie Plageman wrote:\n>>> On Thu, Mar 14, 2024 at 5:26 PM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>>\n>>>> On 3/14/24 19:16, Melanie Plageman wrote:\n>>>>> On Thu, Mar 14, 2024 at 03:32:04PM +0200, Heikki Linnakangas wrote:\n>>>>>> ...\n>>>>>>\n>>>>>> Ok, committed that for now. Thanks for looking!\n>>>>>\n>>>>> Attached v6 is rebased over your new commit. It also has the \"fix\" in\n>>>>> 0010 which moves BitmapAdjustPrefetchIterator() back above\n>>>>> table_scan_bitmap_next_block(). I've also updated the Streaming Read API\n>>>>> commit (0013) to Thomas' v7 version from [1]. This has the update that\n>>>>> we theorize should address some of the regressions in the bitmapheapscan\n>>>>> streaming read user in 0014.\n>>>>>\n>>>>\n>>>> Should I rerun the benchmarks with these new patches, to see if it\n>>>> really helps with the regressions?\n>>>\n>>> That would be awesome!\n>>>\n>>\n>> OK, here's a couple charts comparing the effect of v6 patches to master.\n>> These are from 1M and 10M data sets, same as the runs presented earlier\n>> in this thread (the 10M is still running, but should be good enough for\n>> this kind of visual comparison).\n> \n> Thanks for doing this!\n> \n>> What is even more obvious is that 0014 behaves *VERY* differently. I'm\n>> not sure if this is a good thing or a problem is debatable/unclear. I'm\n>> sure we don't want to cause regressions, but perhaps those are due to\n>> the prefetch issue discussed elsewhere in this thread (identified by\n>> Andres and Melanie). There are also many cases that got much faster, but\n>> the question is whether this is due to better efficiency or maybe the\n>> new code being more aggressive in some way (not sure).\n> \n> Are these with the default effective_io_concurrency (1)? If so, the\n> \"effective\" prefetch distance in many cases will be higher with the\n> streaming read code applied. With effective_io_concurrency 1,\n> \"max_ios\" will always be 1, but the number of blocks prefetched may\n> exceed this (up to MAX_BUFFERS_PER_TRANSFER) because the streaming\n> read code is always trying to build bigger IOs. And, if prefetching,\n> it will prefetch IOs not yet in shared buffers before reading them.\n> \n\nNo, it's a mix of runs with random combinations of these parameters:\n\ndataset: uniform uniform_pages linear linear_fuzz cyclic cyclic_fuzz\nworkers: 0 4\nwork_mem: 128kB 4MB 64MB\neic: 0 1 8 16 32\nselectivity: 0-100%\n\nI can either share the data (~70MB of CSV) or generate charts for\nresults with some filter.\n\n> It's hard to tell without going into a specific repro why this would\n> cause some queries to be much slower. In the forced bitmapheapscan, it\n> would make sense that more prefetching is worse -- which is why a\n> bitmapheapscan plan wouldn't have been chosen. But in the optimal\n> cases, it is unclear why it would be worse.\n> \n\nYes, not sure about the optimal cases. I'll wait for the 10M runs to\ncomplete, and then we can look for some patterns.\n\n> I don't think there is any way it could be the issue Andres\n> identified, because there is only one iterator. Nothing to get out of\n> sync. It could be that the fadvises are being issued too close to the\n> reads and aren't effective enough at covering up read latency on\n> slower, older hardware. 
But that doesn't explain why master would\n> sometimes be faster.\n> \n\nAh, right, thanks for the clarification. I forgot the streaming read API\ndoes not use the two-iterator approach.\n\n> Probably the only thing we can do is get into a repro. It would, of\n> course, be easiest to do this with a serial query. I can dig into the\n> scripts you shared earlier and try to find a good repro. Because the\n> regressions may have shifted with Thomas' new version, it would help\n> if you shared a category (cyclic/uniform/etc, parallel or serial, eic\n> value, work mem, etc) where you now see the most regressions.\n> \n\nOK, I've restarted the tests for only 0012 and 0014 patches, and I'll\nwait for these to complete - I don't want to be looking for patterns\nuntil we have enough data to smooth this out.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 18 Mar 2024 16:55:12 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 18/03/2024 17:19, Melanie Plageman wrote:\n> I've attached v7 rebased over this commit.\n\nThanks!\n\n> v7-0001-BitmapHeapScan-begin-scan-after-bitmap-creation.patch\n\nIf we delayed table_beginscan_bm() call further, after starting the TBM \niterator, we could skip it altogether when the iterator is empty.\n\nThat's a further improvement, doesn't need to be part of this patch set. \nJust caught my eye while reading this.\n\n> v7-0003-Push-BitmapHeapScan-skip-fetch-optimization-into-.patch\n\nI suggest to avoid the double negative with SO_CAN_SKIP_FETCH, and call \nthe flag e.g. SO_NEED_TUPLE.\n\n\nAs yet another preliminary patch before the streaming read API, it would \nbe nice to move the prefetching code to heapam.c too.\n\nWhat's the point of having separate table_scan_bitmap_next_block() and \ntable_scan_bitmap_next_tuple() functions anymore? The AM owns the TBM \niterator now. The executor node updates the lossy/exact page counts, but \nthat's the only per-page thing it does now.\n\n> \t\t/*\n> \t\t * If this is the first scan of the underlying table, create the table\n> \t\t * scan descriptor and begin the scan.\n> \t\t */\n> \t\tif (!scan)\n> \t\t{\n> \t\t\tuint32\t\textra_flags = 0;\n> \n> \t\t\t/*\n> \t\t\t * We can potentially skip fetching heap pages if we do not need\n> \t\t\t * any columns of the table, either for checking non-indexable\n> \t\t\t * quals or for returning data. This test is a bit simplistic, as\n> \t\t\t * it checks the stronger condition that there's no qual or return\n> \t\t\t * tlist at all. But in most cases it's probably not worth working\n> \t\t\t * harder than that.\n> \t\t\t */\n> \t\t\tif (node->ss.ps.plan->qual == NIL && node->ss.ps.plan->targetlist == NIL)\n> \t\t\t\textra_flags |= SO_CAN_SKIP_FETCH;\n> \n> \t\t\tscan = node->ss.ss_currentScanDesc = table_beginscan_bm(\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tnode->ss.ss_currentRelation,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tnode->ss.ps.state->es_snapshot,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t0,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tNULL,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\textra_flags);\n> \t\t}\n> \n> \t\tscan->tbmiterator = tbmiterator;\n> \t\tscan->shared_tbmiterator = shared_tbmiterator;\n\nHow about passing the iterator as an argument to table_beginscan_bm()? \nYou'd then need some other function to change the iterator on rescan, \nthough. Not sure what exactly to do here, but feels that this part of \nthe API is not fully thought-out. Needs comments at least, to explain \nwho sets tbmiterator / shared_tbmiterator and when. For comparison, for \na TID scan there's a separate scan_set_tidrange() table AM function. \nMaybe follow that example and introduce scan_set_tbm_iterator().\n\nIt's bit awkward to have separate tbmiterator and shared_tbmiterator \nfields. Could regular and shared iterators be merged, or wrapped under a \ncommon interface?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 19 Mar 2024 14:33:35 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
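On the last point, one possible shape for a common interface over the two iterator kinds, purely as a sketch: the wrapper struct and function are made-up names, while TBMIterator, TBMSharedIterator, tbm_iterate() and tbm_shared_iterate() are the existing nodes/tidbitmap.h API:

    #include "nodes/tidbitmap.h"

    /* Hypothetical wrapper; exactly one of the two members is non-NULL. */
    typedef struct UnifiedTBMIterator
    {
        TBMIterator        *private_iter;   /* serial scan */
        TBMSharedIterator  *shared_iter;    /* parallel scan */
    } UnifiedTBMIterator;

    static inline TBMIterateResult *
    unified_tbm_iterate(UnifiedTBMIterator *iter)
    {
        if (iter->shared_iter != NULL)
            return tbm_shared_iterate(iter->shared_iter);
        return tbm_iterate(iter->private_iter);
    }

The price is an extra branch (or a pointer dereference, if the wrapper is allocated separately) on every iterate call, which is the overhead question that comes up again later in the thread.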
{
"msg_contents": "On 3/18/24 16:55, Tomas Vondra wrote:\n>\n> ...\n> \n> OK, I've restarted the tests for only 0012 and 0014 patches, and I'll\n> wait for these to complete - I don't want to be looking for patterns\n> until we have enough data to smooth this out.\n> \n>\n\nI now have results for 1M and 10M runs on the two builds (0012 and\n0014), attached is a chart for relative performance plotting\n\n (0014 timing) / (0012 timing)\n\nfor \"optimal' runs that would pick bitmapscan on their own. There's\nnothing special about the config - I reduced the random_page_cost to\n1.5-2.0 to reflect both machines have flash storage, etc.\n\nOverall, the chart is pretty consistent with what I shared on Sunday.\nMost of the results are fine (0014 is close to 0012 or faster), but\nthere's a bunch of cases that are much slower. Interestingly enough,\nalmost all of them are on the i5 machine, almost none of the xeon. My\nguess is this is about the SSD type (SATA vs. NVMe).\n\nAttached if table of ~50 worst regressions (by the metric above), and\nit's interesting the worst regressions are with eic=0 and eic=1.\n\nI decided to look at the first case (eic=0), and the timings are quite\nstable - there are three runs for each build, with timings close to the\naverage (see below the table).\n\nAttached is a script that reproduces this on both machines, but the\ndifference is much more significant on i5 (~5x) compared to xeon (~2x).\n\nI haven't investigated what exactly is happening and why, hopefully the\nscript will allow you to reproduce this independently. I plan to take a\nlook, but I don't know when I'll have time for this.\n\nFWIW if the script does not reproduce this on your machines, I might be\nable to give you access to the i5 machine. Let me know.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 19 Mar 2024 21:34:53 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/18/24 16:19, Melanie Plageman wrote:\n> On Mon, Mar 18, 2024 at 02:10:28PM +0200, Heikki Linnakangas wrote:\n>> On 14/02/2024 21:42, Andres Freund wrote:\n>>> On 2024-02-13 18:11:25 -0500, Melanie Plageman wrote:\n>>>> patch 0004 is, I think, a bug fix. see [2].\n>>>\n>>> I'd not quite call it a bugfix, it's not like it leads to wrong\n>>> behaviour. Seems more like an optimization. But whatever :)\n>>\n>> It sure looks like bug to me, albeit a very minor one. Certainly not an\n>> optimization, it doesn't affect performance in any way, only what EXPLAIN\n>> reports. So committed and backported that to all supported branches.\n> \n> I've attached v7 rebased over this commit.\n>\n\nI've started a new set of benchmarks with v7 (on top of f69319f2f1), but\nunfortunately that results in about 15% of the queries failing with:\n\n ERROR: prefetch and main iterators are out of sync\n\nReproducing it is pretty simple (at least on my laptop). Simply apply\n0001-0011, and then do this:\n\n======================================================================\ncreate table test_table (a bigint, b bigint, c text) with (fillfactor = 25);\n\ninsert into test_table select 10000 * random(), i, md5(random()::text)\nfrom generate_series(1, 1000000) s(i);\n\ncreate index on test_table(a);\n\nvacuum analyze ;\ncheckpoint;\n\nset work_mem = '128kB';\nset effective_io_concurrency = 8;\nset random_page_cost = 2;\nset max_parallel_workers_per_gather = 0;\n\nexplain select * from test_table where a >= 1 and a <= 512;\n\nexplain analyze select * from test_table where a >= 1 and a <= 512;\n\nERROR: prefetch and main iterators are out of sync\n======================================================================\n\nI haven't investigated this, but it seems to get broken by this patch:\n\n v7-0009-Make-table_scan_bitmap_next_block-async-friendly.patch\n\nI wonder if there are some additional changes aside from the rebase.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 20 Mar 2024 19:13:05 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/14/24 19:16, Melanie Plageman wrote:\n> ...\n> \n> Attached v6 is rebased over your new commit. It also has the \"fix\" in\n> 0010 which moves BitmapAdjustPrefetchIterator() back above\n> table_scan_bitmap_next_block(). I've also updated the Streaming Read API\n> commit (0013) to Thomas' v7 version from [1]. This has the update that\n> we theorize should address some of the regressions in the bitmapheapscan\n> streaming read user in 0014.\n> \n\nBased on the recent discussions in this thread I've been wondering how\ndoes the readahead setting for the device affect the behavior, so I've\nmodified the script to test different values for this parameter too.\n\nConsidering the bug in the v7 patch (reported yesterday elsewhere in\nthis thread), I had to use the v6 version for now. I don't think it\nmakes much difference, the important parts of the patch do not change.\n\nThe complete results are far too large to include here (multiple MBs),\nso I'll only include results for a small subset of parameters (one\ndataset on i5), and some scatter charts with all results to show the\noverall behavior. (I only have results for 1M rows so far).\n\nComplete results (including the raw CSV etc.) are available in a git\nrepo, along with jupyter notebooks that I started using for experiments\nand building the other charts:\n\n https://github.com/tvondra/jupyterlab-projects/tree/master\n\nIf you look at the attached PDF table, the first half is for serial\nexecution (no parallelism), the second half is with 4 workers. And there\nare 3 different readahead settings 0, 1536 and 12288 (and then different\neic values for each readahead value). The readahead values are chosen as\n\"disabled\", 6x128kB and the default that was set by the kernel (or\nwherever it comes from).\n\nThere are pretty clear patterns:\n\n* serial runs with disabled readahead - The patch causes fairly serious\nregressions (compared to master), if eic>0.\n\n* serial runs with enabled readahead - there's still some regression for\nlower matches values. Presumably, at higher values (which means larger\nfraction of the table matches) the readahead kicks in, leaving the lower\nvalues as if readahead was not enabled.\n\n* parallel runs - The regression is much smaller, either because the\nparallel workers issue requests almost as if there was readahead, or\nmaybe it implicitly disrupts the readahead. Not sure.\n\nThe other datasets are quite similar, feel free to check the git repo\nfor complete results.\n\nOne possible caveat is that maybe this affects only cases that would not\nactually use bitmap scans? But if you check the attached scatter charts\n(PNG), that only show results for cases where the planner would actually\npick bitmap scans on it's own, there are plenty such cases.\n\nFor the 0012 patch (chart on left), there's almost no such problem - the\nresults are very close to master. Similarly, there are regressions even\non the chart with readahead, but it's far less frequent/significant.\n\nThe question is whether readahead=0 is even worth worrying about? If\ndisabling readahead causes serious regressions even on master (clearly\nvisible in the PDF table), would anyone actually run with it disabled?\n\nBut I'm not sure that argument is very sound. Surely there are cases\nwhere readahead may not detect a pattern, or where it's not supported\nfor some arbitrary reason (e.g. I didn't have much luck with this on\nZFS, perhaps other filesystems have similar limitations). But also what\nabout direct I/O? 
Surely that won't have readahead by kernel, right?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 21 Mar 2024 15:55:14 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 02:33:35PM +0200, Heikki Linnakangas wrote:\n> On 18/03/2024 17:19, Melanie Plageman wrote:\n> > I've attached v7 rebased over this commit.\n> \n> If we delayed table_beginscan_bm() call further, after starting the TBM\n> iterator, we could skip it altogether when the iterator is empty.\n> \n> That's a further improvement, doesn't need to be part of this patch set.\n> Just caught my eye while reading this.\n\nHmm. You mean like until after the first call to tbm_[shared]_iterate()?\nAFAICT, tbm_begin_iterate() doesn't tell us anything about whether or\nnot the iterator is \"empty\". Do you mean cases when the bitmap has no\nblocks in it? It seems like we should be able to tell that from the\nTIDBitmap.\n\n> \n> > v7-0003-Push-BitmapHeapScan-skip-fetch-optimization-into-.patch\n> \n> I suggest to avoid the double negative with SO_CAN_SKIP_FETCH, and call the\n> flag e.g. SO_NEED_TUPLE.\n\nAgreed. Done in attached v8. Though I wondered if it was a bit weird\nthat the flag is set in the common case and not set in the uncommon\ncase...\n\n> As yet another preliminary patch before the streaming read API, it would be\n> nice to move the prefetching code to heapam.c too.\n\nI've done this, but I can say it is not very pretty. see 0013. I had to\nadd a bunch of stuff to TableScanDescData and HeapScanDescData which are\nonly used for bitmapheapscans. I don't know if it makes the BHS\nstreaming read user patch easier to review, but I don't think what I\nhave in 0013 is committable to Postgres. Maybe there was another way I\ncould have approached it. Let me know what you think.\n\nIn addition to bloating the table descriptors, note that it was\ndifficult to avoid one semantic change -- with 0013, we no longer\nprefetch or adjust prefetch target when emitting each empty tuple --\nthough I think this is could actually be desirable.\n\n> What's the point of having separate table_scan_bitmap_next_block() and\n> table_scan_bitmap_next_tuple() functions anymore? The AM owns the TBM\n> iterator now. The executor node updates the lossy/exact page counts, but\n> that's the only per-page thing it does now.\n\nOh, interesting. Good point. I've done this in 0015. If you like the way\nit turned out, I can probably rebase this back into an earlier point in\nthe set and end up dropping some of the other incremental changes (e.g.\n0008).\n\n> > \t\t/*\n> > \t\t * If this is the first scan of the underlying table, create the table\n> > \t\t * scan descriptor and begin the scan.\n> > \t\t */\n> > \t\tif (!scan)\n> > \t\t{\n> > \t\t\tuint32\t\textra_flags = 0;\n> > \n> > \t\t\t/*\n> > \t\t\t * We can potentially skip fetching heap pages if we do not need\n> > \t\t\t * any columns of the table, either for checking non-indexable\n> > \t\t\t * quals or for returning data. This test is a bit simplistic, as\n> > \t\t\t * it checks the stronger condition that there's no qual or return\n> > \t\t\t * tlist at all. 
But in most cases it's probably not worth working\n> > \t\t\t * harder than that.\n> > \t\t\t */\n> > \t\t\tif (node->ss.ps.plan->qual == NIL && node->ss.ps.plan->targetlist == NIL)\n> > \t\t\t\textra_flags |= SO_CAN_SKIP_FETCH;\n> > \n> > \t\t\tscan = node->ss.ss_currentScanDesc = table_beginscan_bm(\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tnode->ss.ss_currentRelation,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tnode->ss.ps.state->es_snapshot,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t0,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tNULL,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\textra_flags);\n> > \t\t}\n> > \n> > \t\tscan->tbmiterator = tbmiterator;\n> > \t\tscan->shared_tbmiterator = shared_tbmiterator;\n> \n> How about passing the iterator as an argument to table_beginscan_bm()? You'd\n> then need some other function to change the iterator on rescan, though. Not\n> sure what exactly to do here, but feels that this part of the API is not\n> fully thought-out. Needs comments at least, to explain who sets tbmiterator\n> / shared_tbmiterator and when. For comparison, for a TID scan there's a\n> separate scan_set_tidrange() table AM function. Maybe follow that example\n> and introduce scan_set_tbm_iterator().\n\nI've spent quite a bit of time playing around with the code trying to\nmake it less terrible than what I had before.\n\nOn rescan, we have to actually make the whole bitmap and iterator. And,\nwe don't have what we need to do that in table_rescan()/heap_rescan().\n From what I can tell, scan_set_tidrange() is useful because it can be\ncalled from both the beginscan and rescan functions without invoking it\ndirectly from TidNext().\n\nIn our case, any wrapper function we wrote would basically just assign\nthe iterator to the scan in BitmapHeapNext().\n\nI've reorganized this code structure a bit, so see if you like it more\nnow. I rebased the changes into some of the other patches, so you'll\njust have to look at the result and see what you think.\n\n> It's bit awkward to have separate tbmiterator and shared_tbmiterator fields.\n> Could regular and shared iterators be merged, or wrapped under a common\n> interface?\n\nThis is a good idea. I've done that in 0014. It made the code nicer, but\nI just wonder if it will add too much overhead.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 22 Mar 2024 20:22:11 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 08:22:11PM -0400, Melanie Plageman wrote:\n> On Tue, Mar 19, 2024 at 02:33:35PM +0200, Heikki Linnakangas wrote:\n> > On 18/03/2024 17:19, Melanie Plageman wrote:\n> > > I've attached v7 rebased over this commit.\n> > \n> > If we delayed table_beginscan_bm() call further, after starting the TBM\n> > iterator, we could skip it altogether when the iterator is empty.\n> > \n> > That's a further improvement, doesn't need to be part of this patch set.\n> > Just caught my eye while reading this.\n> \n> Hmm. You mean like until after the first call to tbm_[shared]_iterate()?\n> AFAICT, tbm_begin_iterate() doesn't tell us anything about whether or\n> not the iterator is \"empty\". Do you mean cases when the bitmap has no\n> blocks in it? It seems like we should be able to tell that from the\n> TIDBitmap.\n> \n> > \n> > > v7-0003-Push-BitmapHeapScan-skip-fetch-optimization-into-.patch\n> > \n> > I suggest to avoid the double negative with SO_CAN_SKIP_FETCH, and call the\n> > flag e.g. SO_NEED_TUPLE.\n> \n> Agreed. Done in attached v8. Though I wondered if it was a bit weird\n> that the flag is set in the common case and not set in the uncommon\n> case...\n\nv8 actually attached this time",
"msg_date": "Fri, 22 Mar 2024 20:26:07 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/23/24 01:26, Melanie Plageman wrote:\n> On Fri, Mar 22, 2024 at 08:22:11PM -0400, Melanie Plageman wrote:\n>> On Tue, Mar 19, 2024 at 02:33:35PM +0200, Heikki Linnakangas wrote:\n>>> On 18/03/2024 17:19, Melanie Plageman wrote:\n>>>> I've attached v7 rebased over this commit.\n>>>\n>>> If we delayed table_beginscan_bm() call further, after starting the TBM\n>>> iterator, we could skip it altogether when the iterator is empty.\n>>>\n>>> That's a further improvement, doesn't need to be part of this patch set.\n>>> Just caught my eye while reading this.\n>>\n>> Hmm. You mean like until after the first call to tbm_[shared]_iterate()?\n>> AFAICT, tbm_begin_iterate() doesn't tell us anything about whether or\n>> not the iterator is \"empty\". Do you mean cases when the bitmap has no\n>> blocks in it? It seems like we should be able to tell that from the\n>> TIDBitmap.\n>>\n>>>\n>>>> v7-0003-Push-BitmapHeapScan-skip-fetch-optimization-into-.patch\n>>>\n>>> I suggest to avoid the double negative with SO_CAN_SKIP_FETCH, and call the\n>>> flag e.g. SO_NEED_TUPLE.\n>>\n>> Agreed. Done in attached v8. Though I wondered if it was a bit weird\n>> that the flag is set in the common case and not set in the uncommon\n>> case...\n> \n> v8 actually attached this time\n\nI tried to run the benchmarks with v8, but unfortunately it crashes for\nme very quickly (I've only seen 0015 to crash, so I guess the bug is in\nthat patch).\n\nThe backtrace attached, this doesn't seem right:\n\n(gdb) p hscan->rs_cindex\n$1 = 543516018\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 24 Mar 2024 13:36:19 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Mar 24, 2024 at 01:36:19PM +0100, Tomas Vondra wrote:\n> \n> \n> On 3/23/24 01:26, Melanie Plageman wrote:\n> > On Fri, Mar 22, 2024 at 08:22:11PM -0400, Melanie Plageman wrote:\n> >> On Tue, Mar 19, 2024 at 02:33:35PM +0200, Heikki Linnakangas wrote:\n> >>> On 18/03/2024 17:19, Melanie Plageman wrote:\n> >>>> I've attached v7 rebased over this commit.\n> >>>\n> >>> If we delayed table_beginscan_bm() call further, after starting the TBM\n> >>> iterator, we could skip it altogether when the iterator is empty.\n> >>>\n> >>> That's a further improvement, doesn't need to be part of this patch set.\n> >>> Just caught my eye while reading this.\n> >>\n> >> Hmm. You mean like until after the first call to tbm_[shared]_iterate()?\n> >> AFAICT, tbm_begin_iterate() doesn't tell us anything about whether or\n> >> not the iterator is \"empty\". Do you mean cases when the bitmap has no\n> >> blocks in it? It seems like we should be able to tell that from the\n> >> TIDBitmap.\n> >>\n> >>>\n> >>>> v7-0003-Push-BitmapHeapScan-skip-fetch-optimization-into-.patch\n> >>>\n> >>> I suggest to avoid the double negative with SO_CAN_SKIP_FETCH, and call the\n> >>> flag e.g. SO_NEED_TUPLE.\n> >>\n> >> Agreed. Done in attached v8. Though I wondered if it was a bit weird\n> >> that the flag is set in the common case and not set in the uncommon\n> >> case...\n> > \n> > v8 actually attached this time\n> \n> I tried to run the benchmarks with v8, but unfortunately it crashes for\n> me very quickly (I've only seen 0015 to crash, so I guess the bug is in\n> that patch).\n> \n> The backtrace attached, this doesn't seem right:\n> \n> (gdb) p hscan->rs_cindex\n> $1 = 543516018\n\nThanks for reporting this! I hadn't seen it crash on my machine, so I\ndidn't realize that I was no longer initializing rs_cindex and\nrs_ntuples on the first call to heapam_bitmap_next_tuple() (since\nheapam_bitmap_next_block() wasn't being called first). I've done this in\nattached v9.\n\nI haven't had a chance yet to reproduce the regressions you saw in the\nstreaming read user patch or to look closely at the performance results.\nI don't anticipate the streaming read user will have any performance\ndifferences in this v9 from v6, since I haven't yet rebased in Thomas'\nlatest streaming read API changes nor addressed any other potential\nregression sources.\n\nI tried rebasing in Thomas' latest version today and something is\ncausing a crash that I have yet to figure out. v10 of this patchset will\nhave his latest version once I get that fixed. I wanted to share this\nversion with what I think is a bug fix for the crash you saw first.\n\n- Melanie",
"msg_date": "Sun, 24 Mar 2024 13:38:33 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 3/24/24 18:38, Melanie Plageman wrote:\n> On Sun, Mar 24, 2024 at 01:36:19PM +0100, Tomas Vondra wrote:\n>>\n>>\n>> On 3/23/24 01:26, Melanie Plageman wrote:\n>>> On Fri, Mar 22, 2024 at 08:22:11PM -0400, Melanie Plageman wrote:\n>>>> On Tue, Mar 19, 2024 at 02:33:35PM +0200, Heikki Linnakangas wrote:\n>>>>> On 18/03/2024 17:19, Melanie Plageman wrote:\n>>>>>> I've attached v7 rebased over this commit.\n>>>>>\n>>>>> If we delayed table_beginscan_bm() call further, after starting the TBM\n>>>>> iterator, we could skip it altogether when the iterator is empty.\n>>>>>\n>>>>> That's a further improvement, doesn't need to be part of this patch set.\n>>>>> Just caught my eye while reading this.\n>>>>\n>>>> Hmm. You mean like until after the first call to tbm_[shared]_iterate()?\n>>>> AFAICT, tbm_begin_iterate() doesn't tell us anything about whether or\n>>>> not the iterator is \"empty\". Do you mean cases when the bitmap has no\n>>>> blocks in it? It seems like we should be able to tell that from the\n>>>> TIDBitmap.\n>>>>\n>>>>>\n>>>>>> v7-0003-Push-BitmapHeapScan-skip-fetch-optimization-into-.patch\n>>>>>\n>>>>> I suggest to avoid the double negative with SO_CAN_SKIP_FETCH, and call the\n>>>>> flag e.g. SO_NEED_TUPLE.\n>>>>\n>>>> Agreed. Done in attached v8. Though I wondered if it was a bit weird\n>>>> that the flag is set in the common case and not set in the uncommon\n>>>> case...\n>>>\n>>> v8 actually attached this time\n>>\n>> I tried to run the benchmarks with v8, but unfortunately it crashes for\n>> me very quickly (I've only seen 0015 to crash, so I guess the bug is in\n>> that patch).\n>>\n>> The backtrace attached, this doesn't seem right:\n>>\n>> (gdb) p hscan->rs_cindex\n>> $1 = 543516018\n> \n> Thanks for reporting this! I hadn't seen it crash on my machine, so I\n> didn't realize that I was no longer initializing rs_cindex and\n> rs_ntuples on the first call to heapam_bitmap_next_tuple() (since\n> heapam_bitmap_next_block() wasn't being called first). I've done this in\n> attached v9.\n> \n\nOK, I've restarted the tests with v9.\n\n> I haven't had a chance yet to reproduce the regressions you saw in the\n> streaming read user patch or to look closely at the performance results.\n\nSo you tried to reproduce it and didn't hit the issue? Or didn't have\ntime to look into that yet? FWIW with v7 it failed almost immediately\n(only a couple queries until hitting one triggering the issue), but v9\nthat's not the case (hundreds of queries without an error).\n\n> I don't anticipate the streaming read user will have any performance\n> differences in this v9 from v6, since I haven't yet rebased in Thomas'\n> latest streaming read API changes nor addressed any other potential\n> regression sources.\n> \n\nOK, understood. It'll be interesting to see the behavior with the new\nversion of Thomas' patch.\n\nI however wonder what the plan with these patches is - do we still plan\nto get some of this into v17? It seems to me we're getting uncomfortably\nclose to the end of the cycle, with a fairly incomplete idea of how it\naffects performance.\n\nWhich is why I've been focusing more on the refactoring patches (up to\n0015), to make sure those don't cause regressions if committed. And I\nthink that's generally true.\n\nBut for the main StreamingRead API the situation is very different.\n\n> I tried rebasing in Thomas' latest version today and something is\n> causing a crash that I have yet to figure out. 
v10 of this patchset will\n> have his latest version once I get that fixed. I wanted to share this\n> version with what I think is a bug fix for the crash you saw first.\n> \n\nUnderstood. I'll let the tests with v9 run for now.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 24 Mar 2024 19:22:14 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Mar 24, 2024 at 2:22 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 3/24/24 18:38, Melanie Plageman wrote:\n> > I haven't had a chance yet to reproduce the regressions you saw in the\n> > streaming read user patch or to look closely at the performance results.\n>\n> So you tried to reproduce it and didn't hit the issue? Or didn't have\n> time to look into that yet? FWIW with v7 it failed almost immediately\n> (only a couple queries until hitting one triggering the issue), but v9\n> that's not the case (hundreds of queries without an error).\n\nI haven't started trying to reproduce it yet.\n\n> I however wonder what the plan with these patches is - do we still plan\n> to get some of this into v17? It seems to me we're getting uncomfortably\n> close to the end of the cycle, with a fairly incomplete idea of how it\n> affects performance.\n>\n> Which is why I've been focusing more on the refactoring patches (up to\n> 0015), to make sure those don't cause regressions if committed. And I\n> think that's generally true.\n\nThank you for testing the refactoring patches with this in mind! Out\nof the refactoring patches, I think there is a subset of them that\nhave independent value without the streaming read user. I think it is\nworth committing the first few patches because they remove a table AM\nlayering violation. IMHO, all of the patches up to \"Make\ntable_scan_bitmap_next_block() async friendly\" make the code nicer and\nbetter. And, if folks like the patch \"Remove\ntable_scan_bitmap_next_block()\", then I think I could rebase that back\nin on top of \"Make table_scan_bitmap_next_block() async friendly\".\nThis would mean table AMs would only have to implement one callback\n(table_scan_bitmap_next_tuple()) which I also think is a net\nimprovement and simplification.\n\nThe other refactoring patches may not be interesting without the\nstreaming read user.\n\n> But for the main StreamingRead API the situation is very different.\n\nMy intent for the bitmapheapscan streaming read user was to get it\ninto 17, but I'm not sure that looks likely. The main issues Thomas is\nlooking into right now are related to regressions for a fully cached\nscan (noticeable with the pg_prewarm streaming read user). With all of\nthese fixed, I anticipate we will still see enough behavioral\ndifferences with the bitmapheap scan streaming read user that it may\nnot be committable in time. Though, I have yet to work on reproducing\nthe regressions with the BHS streaming read user mostly because I was\nfocused on getting the refactoring ready and not as much because the\nstreaming read API is unstable.\n\n- Melanie\n\n\n",
"msg_date": "Sun, 24 Mar 2024 16:12:17 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/24/24 21:12, Melanie Plageman wrote:\n> On Sun, Mar 24, 2024 at 2:22 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 3/24/24 18:38, Melanie Plageman wrote:\n>>> I haven't had a chance yet to reproduce the regressions you saw in the\n>>> streaming read user patch or to look closely at the performance results.\n>>\n>> So you tried to reproduce it and didn't hit the issue? Or didn't have\n>> time to look into that yet? FWIW with v7 it failed almost immediately\n>> (only a couple queries until hitting one triggering the issue), but v9\n>> that's not the case (hundreds of queries without an error).\n> \n> I haven't started trying to reproduce it yet.\n> \n>> I however wonder what the plan with these patches is - do we still plan\n>> to get some of this into v17? It seems to me we're getting uncomfortably\n>> close to the end of the cycle, with a fairly incomplete idea of how it\n>> affects performance.\n>>\n>> Which is why I've been focusing more on the refactoring patches (up to\n>> 0015), to make sure those don't cause regressions if committed. And I\n>> think that's generally true.\n> \n> Thank you for testing the refactoring patches with this in mind! Out\n> of the refactoring patches, I think there is a subset of them that\n> have independent value without the streaming read user. I think it is\n> worth committing the first few patches because they remove a table AM\n> layering violation. IMHO, all of the patches up to \"Make\n> table_scan_bitmap_next_block() async friendly\" make the code nicer and\n> better. And, if folks like the patch \"Remove\n> table_scan_bitmap_next_block()\", then I think I could rebase that back\n> in on top of \"Make table_scan_bitmap_next_block() async friendly\".\n> This would mean table AMs would only have to implement one callback\n> (table_scan_bitmap_next_tuple()) which I also think is a net\n> improvement and simplification.\n> \n> The other refactoring patches may not be interesting without the\n> streaming read user.\n> \n\nI admit not reviewing the individual patches very closely yet, but this\nmatches how I understood them - that at least some are likely an\nimprovement on their own, not just as a refactoring preparing for the\nswitch to streaming reads.\n\nWe only have ~2 weeks left, so it's probably time to focus on getting at\nleast those improvements committed. I see Heikki was paying way more\nattention to the patches than me, though ...\n\nBTW when you say \"up to 'Make table_scan_bitmap_next_block() async\nfriendly'\" do you mean including that patch, or that this is the first\npatch that is not one of the independently useful patches.\n\n(I took a quick look at the first couple patches and I appreciate that\nyou keep separate patches with small cosmetic changes to keep the actual\npatch smaller and easier to understand.)\n\n>> But for the main StreamingRead API the situation is very different.\n> \n> My intent for the bitmapheapscan streaming read user was to get it\n> into 17, but I'm not sure that looks likely. The main issues Thomas is\n> looking into right now are related to regressions for a fully cached\n> scan (noticeable with the pg_prewarm streaming read user). With all of\n> these fixed, I anticipate we will still see enough behavioral\n> differences with the bitmapheap scan streaming read user that it may\n> not be committable in time. 
Though, I have yet to work on reproducing\n> the regressions with the BHS streaming read user mostly because I was\n> focused on getting the refactoring ready and not as much because the\n> streaming read API is unstable.\n> \n\nI don't have a very good intuition regarding the impact of the streaming API\npatch on performance. I haven't been following that thread very closely,\nbut AFAICS there wasn't much discussion about that - perhaps it happened\nofflist, not sure. So who knows, really?\n\nWhich is why I started looking at this patch instead - it seemed easier\nto benchmark with a somewhat realistic workload.\n\nBut yeah, there certainly were significant behavior changes, and it's\nunlikely that whatever Thomas did in v8 made them go away.\n\nFWIW I certainly am *not* suggesting there must be no behavior changes,\nthat's simply not possible. I'm not even suggesting no queries must get\nslower - given the dependence on storage, I think some regressions are\npretty much inevitable. But it'd still be good to know the regressions\nare reasonably rare exceptions rather than the common case, and that's\nnot what I'm seeing ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 24 Mar 2024 22:59:25 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Mar 24, 2024 at 5:59 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> BTW when you say \"up to 'Make table_scan_bitmap_next_block() async\n> friendly'\" do you mean including that patch, or that this is the first\n> patch that is not one of the independently useful patches.\n\nI think the code is easier to understand with \"Make\ntable_scan_bitmap_next_block() async friendly\". Prior to that commit,\ntable_scan_bitmap_next_block() could return false even when the bitmap\nhas more blocks and expects the caller to handle this and invoke it\nagain. I think that interface is very confusing. The downside of the\ncode in that state is that the code for prefetching is still in the\nBitmapHeapNext() code and the code for getting the current block is in\nthe heap AM-specific code. I took a stab at fixing this in v9's 0013,\nbut the outcome wasn't very attractive.\n\nWhat I will do tomorrow is reorder and group the commits such that all\nof the commits that are useful independent of streaming read are first\n(I think 0014 and 0015 are independently valuable but they are on top\nof some things that are only useful to streaming read because they are\nmore recently requested changes). I think I can actually do a bit of\nsimplification in terms of how many commits there are and what is in\neach. Just to be clear, v9 is still reviewable. I am just going to go\nback and change what is included in each commit.\n\n> (I took a quick look at the first couple patches and I appreciate that\n> you keep separate patches with small cosmetic changes to keep the actual\n> patch smaller and easier to understand.)\n\nThanks!\n\n\n",
"msg_date": "Sun, 24 Mar 2024 18:37:20 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Mar 24, 2024 at 06:37:20PM -0400, Melanie Plageman wrote:\n> On Sun, Mar 24, 2024 at 5:59 PM Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > BTW when you say \"up to 'Make table_scan_bitmap_next_block() async\n> > friendly'\" do you mean including that patch, or that this is the first\n> > patch that is not one of the independently useful patches.\n> \n> I think the code is easier to understand with \"Make\n> table_scan_bitmap_next_block() async friendly\". Prior to that commit,\n> table_scan_bitmap_next_block() could return false even when the bitmap\n> has more blocks and expects the caller to handle this and invoke it\n> again. I think that interface is very confusing. The downside of the\n> code in that state is that the code for prefetching is still in the\n> BitmapHeapNext() code and the code for getting the current block is in\n> the heap AM-specific code. I took a stab at fixing this in v9's 0013,\n> but the outcome wasn't very attractive.\n> \n> What I will do tomorrow is reorder and group the commits such that all\n> of the commits that are useful independent of streaming read are first\n> (I think 0014 and 0015 are independently valuable but they are on top\n> of some things that are only useful to streaming read because they are\n> more recently requested changes). I think I can actually do a bit of\n> simplification in terms of how many commits there are and what is in\n> each. Just to be clear, v9 is still reviewable. I am just going to go\n> back and change what is included in each commit.\n\nSo, attached v10 does not include the new version of streaming read API.\nI focused instead on the refactoring patches commit regrouping I\nmentioned here.\n\nI realized \"Remove table_scan_bitmap_next_block()\" can't easily be moved\ndown below \"Push BitmapHeapScan prefetch code into heapam.c\" because we\nhave to do BitmapAdjustPrefetchTarget() and\nBitmapAdjustPrefetchIterator() on either side of getting the next block\n(via table_scan_bitmap_next_block()).\n\n\"Push BitmapHeapScan prefetch code into heapam.c\" isn't very nice\nbecause it adds a lot of bitmapheapscan specific members to\nTableScanDescData and HeapScanDescData. I thought about wrapping all of\nthose members in some kind of BitmapHeapScanTableState struct -- but I\ndon't like that because the members are spread out across\nHeapScanDescData and TableScanDescData so not all of them would go in\nBitmapHeapScanTableState. I could move the ones I put in\nHeapScanDescData back into TableScanDescData and then wrap that in a\nBitmapHeapScanTableState. I haven't done that in this version.\n\nI did manage to move \"Unify parallel and serial BitmapHeapScan iterator\ninterfaces\" down below the line of patches which are only useful if\nthe streaming read user also goes in.\n\nIn attached v10, all patches up to and including \"Unify parallel and\nserial BitmapHeapScan iterator interfaces\" (0010) are proposed for\nmaster with or without the streaming read API.\n\n0010 does add additional indirection and thus pointer dereferencing for\naccessing the iterators, which doesn't feel good. But, it does simplify\nthe code.\n\nPerhaps it is worth renaming the existing TableScanDescData->rs_parallel\n(a ParallelTableScanDescData) to something like rs_seq_parallel. It is\nonly for sequential scans and scans of tables when building indexes but\nthe comments say it is for parallel scans in general. There is a similar\nmember in HeapScanDescData called rs_parallelworkerdata.\n\n- Melanie",
"msg_date": "Mon, 25 Mar 2024 12:07:09 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 12:07:09PM -0400, Melanie Plageman wrote:\n> On Sun, Mar 24, 2024 at 06:37:20PM -0400, Melanie Plageman wrote:\n> > On Sun, Mar 24, 2024 at 5:59 PM Tomas Vondra\n> > <[email protected]> wrote:\n> > >\n> > > BTW when you say \"up to 'Make table_scan_bitmap_next_block() async\n> > > friendly'\" do you mean including that patch, or that this is the first\n> > > patch that is not one of the independently useful patches.\n> > \n> > I think the code is easier to understand with \"Make\n> > table_scan_bitmap_next_block() async friendly\". Prior to that commit,\n> > table_scan_bitmap_next_block() could return false even when the bitmap\n> > has more blocks and expects the caller to handle this and invoke it\n> > again. I think that interface is very confusing. The downside of the\n> > code in that state is that the code for prefetching is still in the\n> > BitmapHeapNext() code and the code for getting the current block is in\n> > the heap AM-specific code. I took a stab at fixing this in v9's 0013,\n> > but the outcome wasn't very attractive.\n> > \n> > What I will do tomorrow is reorder and group the commits such that all\n> > of the commits that are useful independent of streaming read are first\n> > (I think 0014 and 0015 are independently valuable but they are on top\n> > of some things that are only useful to streaming read because they are\n> > more recently requested changes). I think I can actually do a bit of\n> > simplification in terms of how many commits there are and what is in\n> > each. Just to be clear, v9 is still reviewable. I am just going to go\n> > back and change what is included in each commit.\n> \n> So, attached v10 does not include the new version of streaming read API.\n> I focused instead on the refactoring patches commit regrouping I\n> mentioned here.\n\nAttached v11 has the updated Read Stream API Thomas sent this morning\n[1]. No other changes.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJTwrS7F%3DuJPx3SeigMiQiW%2BLJaOkjGyZdCntwyMR%3DuAw%40mail.gmail.com",
"msg_date": "Wed, 27 Mar 2024 15:37:50 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "With the unexplained but apparently somewhat systematic regression\npatterns on certain tests and settings, I wonder if they might be due\nto read_stream.c trying to form larger reads, making it a bit lazier.\nIt tries to see what the next block will be before issuing the\nfadvise. I think that means that with small I/O concurrency settings,\nthere might be contrived access patterns where it loses, and needs\neffective_io_concurrency to be set one notch higher to keep up, or\nsomething like that. One way to test that idea would be to run the\ntests with io_combine_limit = 1 (meaning 1 block). It issues advise\neagerly when io_combine_limit is reached, so I suppose it should be\nexactly as eager as master. The only difference then should be that\nit automatically suppresses sequential fadvise calls.\n\n\n",
"msg_date": "Thu, 28 Mar 2024 18:20:27 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/28/24 06:20, Thomas Munro wrote:\n> With the unexplained but apparently somewhat systematic regression\n> patterns on certain tests and settings, I wonder if they might be due\n> to read_stream.c trying to form larger reads, making it a bit lazier.\n> It tries to see what the next block will be before issuing the\n> fadvise. I think that means that with small I/O concurrency settings,\n> there might be contrived access patterns where it loses, and needs\n> effective_io_concurrency to be set one notch higher to keep up, or\n> something like that.\n\nYes, I think we've speculated this might be the root cause before, but\nIIRC we didn't manage to verify it actually is the problem.\n\nFWIW I don't think the tests use synthetic data, but I don't think it's\nparticularly contrived.\n\n> One way to test that idea would be to run the\n> tests with io_combine_limit = 1 (meaning 1 block). It issues advise\n> eagerly when io_combine_limit is reached, so I suppose it should be\n> exactly as eager as master. The only difference then should be that\n> it automatically suppresses sequential fadvise calls.\n\nSure, I'll give that a try. What are some good values to test? Perhaps\n32 and 1, i.e. the default and \"no coalescing\"?\n\nIf this turns out to be the problem, does that mean we would consider\nusing a more conservative default value? Is there some \"auto tuning\" we\ncould do? For example, could we reduce the value combine limit if we\nstart not finding buffers in memory, or something like that?\n\nI recognize this may not be possible with buffered I/O, due to not\nhaving any insight into page cache. And maybe it's misguided anyway,\nbecause how would we know if the right response is to increase or reduce\nthe combine limit?\n\nAnyway, doesn't the combine limit work against the idea that\neffective_io_concurrency is \"prefetch distance\"? With eic=32 I'd expect\nwe issue prefetch 32 pages ahead, i.e. if we prefetch page X, we should\nthen process 32 pages before we actually need X (and we expect the page\nto already be in memory, thanks to the gap). But with the combine limit\nset to 32, is this still true?\n\nI've tried going through read_stream_* to determine how this will\nbehave, but read_stream_look_ahead/read_stream_start_pending_read does\nnot make this very clear. I'll have to experiment with some tracing.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 28 Mar 2024 19:01:33 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 7:01 AM Tomas Vondra\n<[email protected]> wrote:\n> On 3/28/24 06:20, Thomas Munro wrote:\n> > With the unexplained but apparently somewhat systematic regression\n> > patterns on certain tests and settings, I wonder if they might be due\n> > to read_stream.c trying to form larger reads, making it a bit lazier.\n> > It tries to see what the next block will be before issuing the\n> > fadvise. I think that means that with small I/O concurrency settings,\n> > there might be contrived access patterns where it loses, and needs\n> > effective_io_concurrency to be set one notch higher to keep up, or\n> > something like that.\n>\n> Yes, I think we've speculated this might be the root cause before, but\n> IIRC we didn't manage to verify it actually is the problem.\n\nAnother factor could be the bug in master that allows it to get out of\nsync -- can it allow *more* concurrency than it intended to? Or fewer\nhints, but somehow that goes faster because of the\nstepping-on-kernel-toes problem?\n\n> > One way to test that idea would be to run the\n> > tests with io_combine_limit = 1 (meaning 1 block). It issues advise\n> > eagerly when io_combine_limit is reached, so I suppose it should be\n> > exactly as eager as master. The only difference then should be that\n> > it automatically suppresses sequential fadvise calls.\n>\n> Sure, I'll give that a try. What are some good values to test? Perhaps\n> 32 and 1, i.e. the default and \"no coalescing\"?\n\nThanks! Yeah. The default is actually 16, computed backwards from\n128kB. (Explanation: POSIX requires 16 as minimum IOV_MAX, ie number\nof vectors acceptable to writev/readv and related functions, though\nactual acceptable number is usually much higher, and it also seems to\nbe a conservative acceptable number for hardware scatter/gather lists\nin various protocols, ie if doing direct I/O, the transfer won't be\nchopped up into more than one physical I/O command because the disk\nand DMA engine can handle it as a single I/O in theory at least.\nActual limit on random SSDs might be more like 33, a weird number but\nthat's what I'm seeing; Mr Axboe wrote a nice short article[1] to get\nsome starting points for terminology on that topic on Linux. Also,\njust anecdotally, returns seem to diminish after that with huge\ntransfers of buffered I/O so it seems like an OK number if you have to\npick one; but IDK, YMMV, subject for future research as direct I/O\ngrows in relevance, hence GUC.)\n\n> If this turns out to be the problem, does that mean we would consider\n> using a more conservative default value? Is there some \"auto tuning\" we\n> could do? For example, could we reduce the value combine limit if we\n> start not finding buffers in memory, or something like that?\n\nHmm, not sure... I like that number for seq scans. I also like\nauto-tuning. But it seems to me that *if* the problem is that we're\nnot allowing ourselves as many concurrent I/Os as master BHS because\nwe're waiting to see if the next block is consecutive, that might\nindicate that the distance needs to be higher so that we can have a\nbetter chance to see the 'edge' (the non-contiguous next block) and\nstart the I/O, not that the io_combine_limit needs to be lower. But I\ncould be way off, given the fuzziness on this problem so far...\n\n> Anyway, doesn't the combine limit work against the idea that\n> effective_io_concurrency is \"prefetch distance\"? With eic=32 I'd expect\n> we issue prefetch 32 pages ahead, i.e. 
if we prefetch page X, we should\n> then process 32 pages before we actually need X (and we expect the page\n> to already be in memory, thanks to the gap). But with the combine limit\n> set to 32, is this still true?\n\nHmm. It's different. (1) Master BHS has prefetch_maximum, which is\nindeed directly taken from the eic setting, while read_stream.c is\nprepared to look much ahead further than that (potentially as far as\nmax_pinned_buffers) if it's been useful recently, to find\nopportunities to coalesce and start I/O. (2) Master BHS has\nprefetch_target to control the look-ahead window, which starts small\nand ramps up until it hits prefetch_maximum, while read_stream.c has\ndistance which goes up and down according to a more complex algorithm\ndescribed at the top.\n\n> I've tried going through read_stream_* to determine how this will\n> behave, but read_stream_look_ahead/read_stream_start_pending_read does\n> not make this very clear. I'll have to experiment with some tracing.\n\nI'm going to try to set up something like your experiment here too,\nand figure out some way to visualise or trace what's going on...\n\nThe differences come from (1) multi-block I/Os, requiring two separate\nnumbers: how many blocks ahead we're looking, and how many I/Os are\nrunning, and (2) being more aggressive about trying to reach the\ndesired I/O level. Let me try to describe the approach again.\n\"distance\" is the size of a window that we're searching for\nopportunities to start I/Os. read_stream_look_ahead() will keep\nlooking ahead until we already have max_ios I/Os running, or we hit\nthe end of that window. That's the two conditions in the while loop\nat the top:\n\n while (stream->ios_in_progress < stream->max_ios &&\n stream->pinned_buffers + stream->pending_read_nblocks <\nstream->distance)\n\nIf that window is not large enough, we won't be able to find enough\nI/Os to reach max_ios. So, every time we finish up starting a random\n(non-sequential) I/O, we increase the distance, widening the window\nuntil it can hopefully reach the I/O goal. (I call that behaviour C\nin the comments and code.) In other words, master BHS can only find\nopportunities to start I/Os in a smaller window, and can only reach\nthe full I/O concurrency target if they are right next to each other\nin that window, but read_stream.c will look much further ahead, but\nonly if that has recently proven to be useful.\n\nIf we find I/Os that need doing, but they're all sequential, the\nwindow size moves towards io_combine_limit, because we know that\nissuing advice won't help, so there is no point in making the window\nwider than one maximum-sized I/O. For example, sequential scans or\nbitmap heapscans with lots of consecutive page bits fall into this\npattern. (Behaviour B in the code comments.) This is a pattern that\nmaster BHS doesn't have anything like.\n\nIf we find that we don't need to do any I/O, we slowly move the window\nsize towards 1 (also the initial value) as there is no point in doing\nanything special as it can't help. In contrast, master BHS never\nshrinks its prefetch_target, it only goes up until it hits eic.\n\n[1] https://kernel.dk/when-2mb-turns-into-512k.pdf\n\n\n",
"msg_date": "Fri, 29 Mar 2024 10:19:08 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/27/24 20:37, Melanie Plageman wrote:\n> On Mon, Mar 25, 2024 at 12:07:09PM -0400, Melanie Plageman wrote:\n>> On Sun, Mar 24, 2024 at 06:37:20PM -0400, Melanie Plageman wrote:\n>>> On Sun, Mar 24, 2024 at 5:59 PM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>>\n>>>> BTW when you say \"up to 'Make table_scan_bitmap_next_block() async\n>>>> friendly'\" do you mean including that patch, or that this is the first\n>>>> patch that is not one of the independently useful patches.\n>>>\n>>> I think the code is easier to understand with \"Make\n>>> table_scan_bitmap_next_block() async friendly\". Prior to that commit,\n>>> table_scan_bitmap_next_block() could return false even when the bitmap\n>>> has more blocks and expects the caller to handle this and invoke it\n>>> again. I think that interface is very confusing. The downside of the\n>>> code in that state is that the code for prefetching is still in the\n>>> BitmapHeapNext() code and the code for getting the current block is in\n>>> the heap AM-specific code. I took a stab at fixing this in v9's 0013,\n>>> but the outcome wasn't very attractive.\n>>>\n>>> What I will do tomorrow is reorder and group the commits such that all\n>>> of the commits that are useful independent of streaming read are first\n>>> (I think 0014 and 0015 are independently valuable but they are on top\n>>> of some things that are only useful to streaming read because they are\n>>> more recently requested changes). I think I can actually do a bit of\n>>> simplification in terms of how many commits there are and what is in\n>>> each. Just to be clear, v9 is still reviewable. I am just going to go\n>>> back and change what is included in each commit.\n>>\n>> So, attached v10 does not include the new version of streaming read API.\n>> I focused instead on the refactoring patches commit regrouping I\n>> mentioned here.\n> \n> Attached v11 has the updated Read Stream API Thomas sent this morning\n> [1]. No other changes.\n> \n\nI think there's some sort of bug, triggering this assert in heapam\n\n Assert(BufferGetBlockNumber(hscan->rs_cbuf) == tbmres->blockno);\n\nI haven't looked for the root cause, and it's not exactly deterministic,\nbut try this:\n\n create table t (a int, b text);\n\n insert into t select 10000 * random(), md5(i::text)\n from generate_series(1,10000000) s(i);^C\n\n create index on t (a);\n\n explain analyze select * from t where a = 200;\n explain analyze select * from t where a < 200;\n\nand then vary the condition a bit (different values, inequalities,\netc.). For me it hits the assert in a couple tries.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 28 Mar 2024 22:43:42 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 10:43 AM Tomas Vondra\n<[email protected]> wrote:\n> I think there's some sort of bug, triggering this assert in heapam\n>\n> Assert(BufferGetBlockNumber(hscan->rs_cbuf) == tbmres->blockno);\n\nThanks for the repro. I can't seem to reproduce it (still trying) but\nI assume this is with Melanie's v11 patch set which had\nv11-0016-v10-Read-Stream-API.patch.\n\nWould you mind removing that commit and instead applying the v13\nstream_read.c patches[1]? v10 stream_read.c was a little confused\nabout random I/O combining, which I fixed with a small adjustment to\nthe conditions for the \"if\" statement right at the end of\nread_stream_look_ahead(). Sorry about that. The fixed version, with\neic=4, with your test query using WHERE a < a, ends its scan with:\n\n...\nposix_fadvise(32,0x28aee000,0x4000,POSIX_FADV_WILLNEED) = 0 (0x0)\npread(32,\"\\0\\0\\0\\0@4\\M-5:\\0\\0\\^D\\0\\M-x\\^A\"...,40960,0x28acc000) = 40960 (0xa000)\nposix_fadvise(32,0x28af4000,0x4000,POSIX_FADV_WILLNEED) = 0 (0x0)\npread(32,\"\\0\\0\\0\\0\\^XC\\M-6:\\0\\0\\^D\\0\\M-x\"...,32768,0x28ad8000) = 32768 (0x8000)\nposix_fadvise(32,0x28afc000,0x4000,POSIX_FADV_WILLNEED) = 0 (0x0)\npread(32,\"\\0\\0\\0\\0\\M-XQ\\M-7:\\0\\0\\^D\\0\\M-x\"...,24576,0x28ae4000) = 24576 (0x6000)\nposix_fadvise(32,0x28b02000,0x8000,POSIX_FADV_WILLNEED) = 0 (0x0)\npread(32,\"\\0\\0\\0\\0\\M^@3\\M-8:\\0\\0\\^D\\0\\M-x\"...,16384,0x28aee000) = 16384 (0x4000)\npread(32,\"\\0\\0\\0\\0\\M-`\\M-:\\M-8:\\0\\0\\^D\\0\"...,16384,0x28af4000) = 16384 (0x4000)\npread(32,\"\\0\\0\\0\\0po\\M-9:\\0\\0\\^D\\0\\M-x\\^A\"...,16384,0x28afc000) = 16384 (0x4000)\npread(32,\"\\0\\0\\0\\0\\M-P\\M-v\\M-9:\\0\\0\\^D\\0\"...,32768,0x28b02000) = 32768 (0x8000)\n\nIn other words it's able to coalesce, but v10 was a bit b0rked in that\nrespect and wouldn't do as well at that. Then if you set\nio_combine_limit = 1, it looks more like master, eg lots of little\nreads, but not as many fadvises as master because of sequential\naccess:\n\n...\nposix_fadvise(32,0x28af4000,0x2000,POSIX_FADV_WILLNEED) = 0 (0x0) -+\npread(32,...,8192,0x28ae8000) = 8192 (0x2000) |\npread(32,...,8192,0x28aee000) = 8192 (0x2000) |\nposix_fadvise(32,0x28afc000,0x2000,POSIX_FADV_WILLNEED) = 0 (0x0) ---+\npread(32,...,8192,0x28af0000) = 8192 (0x2000) | |\npread(32,...,8192,0x28af4000) = 8192 (0x2000) <--------------------+ |\nposix_fadvise(32,0x28b02000,0x2000,POSIX_FADV_WILLNEED) = 0 (0x0) -----+\npread(32,...,8192,0x28af6000) = 8192 (0x2000) | |\npread(32,...,8192,0x28afc000) = 8192 (0x2000) <----------------------+ |\npread(32,...,8192,0x28afe000) = 8192 (0x2000) }-- no advice |\npread(32,...,8192,0x28b02000) = 8192 (0x2000) <------------------------+\npread(32,...,8192,0x28b04000) = 8192 (0x2000) }\npread(32,...,8192,0x28b06000) = 8192 (0x2000) }-- no advice\npread(32,...,8192,0x28b08000) = 8192 (0x2000) }\n\nIt becomes slightly less eager to start I/Os as soon as\nio_combine_limit > 1, because when it has hit max_ios, if ... <thinks>\nyeah if the average block that it can combine is bigger than 4, an\narbitrary number from:\n\n max_pinned_buffers = Max(max_ios * 4, io_combine_limit);\n\n.... then it can run out of look ahead window before it can reach\nmax_ios (aka eic), so that's a kind of arbitrary/bogus I/O depth\nconstraint, which is another way of saying what I was saying earlier:\nmaybe it just needs more distance. So let's see the average combined\nI/O length in your test query... 
for me it works out to 27,169 bytes.\nBut I think there must be times when it runs out of window due to\nclustering. So you could also try increasing that 4->8 to see what\nhappens to performance.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2B5UofvseJWv6YqKmuc_%3Drguc7VqKcNEG1eawKh3MzHXQ%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 14:12:48 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 3/29/24 02:12, Thomas Munro wrote:\n> On Fri, Mar 29, 2024 at 10:43 AM Tomas Vondra\n> <[email protected]> wrote:\n>> I think there's some sort of bug, triggering this assert in heapam\n>>\n>> Assert(BufferGetBlockNumber(hscan->rs_cbuf) == tbmres->blockno);\n> \n> Thanks for the repro. I can't seem to reproduce it (still trying) but\n> I assume this is with Melanie's v11 patch set which had\n> v11-0016-v10-Read-Stream-API.patch.\n> \n> Would you mind removing that commit and instead applying the v13\n> stream_read.c patches[1]? v10 stream_read.c was a little confused\n> about random I/O combining, which I fixed with a small adjustment to\n> the conditions for the \"if\" statement right at the end of\n> read_stream_look_ahead(). Sorry about that. The fixed version, with\n> eic=4, with your test query using WHERE a < a, ends its scan with:\n> \n\nI'll give that a try. Unfortunately unfortunately the v11 still has the\nproblem I reported about a week ago:\n\n ERROR: prefetch and main iterators are out of sync\n\nSo I can't run the full benchmarks :-( but master vs. streaming read API\nshould work, I think.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 29 Mar 2024 12:05:15 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "I spent a bit of time today testing Melanie's v11, except with\nread_stream.c v13, on Linux, ext4, and 3000 IOPS cloud storage. I\nthink I now know roughly what's going on. Here are some numbers,\nusing your random table from above and a simple SELECT * FROM t WHERE\na < 100 OR a = 123456. I'll keep parallelism out of this for now.\nThese are milliseconds:\n\neic unpatched patched\n0 4172 9572\n1 30846 10376\n2 18435 5562\n4 18980 3503\n8 18980 2680\n16 18976 3233\n\nSo with eic=0, unpatched wins. The reason is that Linux readahead\nwakes up and scans the table at 150MB/s, because there are enough\nclusters to trigger it. But patched doesn't look quite so sequential\nbecause we removed the sequential accesses by I/O combining...\n\nAt eic=1, unpatched completely collapses. I'm not sure why exactly.\n\nOnce you go above eic=1, Linux seems to get out of the way and just do\nwhat we asked it to do: iostat shows exactly 3000 IOPS, exactly 8KB\navg read size, and (therefore) throughput of 24MB/sec, though you can\nsee the queue depth being exactly what we asked it to do,eg 7.9 or\nwhatever for eic=8, while patched eats it for breakfast because it\nissues wide requests, averaging around 27KB.\n\nIt seems more informative to look at the absolute numbers rather than\nthe A/B ratios, because then you can see how the numbers themselves\nare already completely nuts, sort of interference patterns from\ninteraction with kernel heuristics.\n\nOn the other hand this might be a pretty unusual data distribution.\nPeople who store random numbers or hashes or whatever probably don't\nreally search for ranges of them (unless they're trying to mine\nbitcoins in SQL). I dunno. Maybe we need more realistic tests, or\nmaybe we're just discovering all the things that are bad about the\npre-existing code.\n\n\n",
"msg_date": "Sat, 30 Mar 2024 00:17:13 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Mar 30, 2024 at 12:17 AM Thomas Munro <[email protected]> wrote:\n> eic unpatched patched\n> 0 4172 9572\n> 1 30846 10376\n> 2 18435 5562\n> 4 18980 3503\n> 8 18980 2680\n> 16 18976 3233\n\n... but the patched version gets down to a low number for eic=0 too if\nyou turn up the blockdev --setra so that it also gets Linux RA\ntreatment, making it the clear winner on all eic settings. Patched\ndoesn't improve. So, for low IOPS storage at least, when you're on\nthe borderline between random and sequential, ie bitmap with a lot of\n1s in it, it seems there are cases where patched doesn't trigger Linux\nRA but unpatched does, and you can tune your way out of that, and then\nthere are cases where the IOPS limit is reached due to small reads,\nbut patched does better because of larger I/Os that are likely under\nthe same circumstances. Does that make sense?\n\n\n",
"msg_date": "Sat, 30 Mar 2024 02:36:08 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/29/24 14:36, Thomas Munro wrote:\n> On Sat, Mar 30, 2024 at 12:17 AM Thomas Munro <[email protected]> wrote:\n>> eic unpatched patched\n>> 0 4172 9572\n>> 1 30846 10376\n>> 2 18435 5562\n>> 4 18980 3503\n>> 8 18980 2680\n>> 16 18976 3233\n> \n> ... but the patched version gets down to a low number for eic=0 too if\n> you turn up the blockdev --setra so that it also gets Linux RA\n> treatment, making it the clear winner on all eic settings. Patched\n> doesn't improve. So, for low IOPS storage at least, when you're on\n> the borderline between random and sequential, ie bitmap with a lot of\n> 1s in it, it seems there are cases where patched doesn't trigger Linux\n> RA but unpatched does, and you can tune your way out of that, and then\n> there are cases where the IOPS limit is reached due to small reads,\n> but patched does better because of larger I/Os that are likely under\n> the same circumstances. Does that make sense?\n\nI think you meant \"unpatched version gets down\" in the first sentence,\nright? Still, it seems clear this changes the interaction with readahead\ndone by the kernel.\n\nHowever, you seem to focus only on eic=0/eic=1 cases, but IIRC that was\njust an example. There are regression with higher eic values too.\n\nI do have some early results from the benchmarks - it's just from the\nNVMe machine, with 1M tables (~300MB), and it's just one incomplete run\n(so there might be some noise etc.).\n\nAttached is a PDF with charts for different subsets of the runs:\n\n- optimal (would optimizer pick bitmapscan or not)\n- readahead (yes/no)\n- workers (serial vs. 4 workers)\n- combine limit (8kB / 128kB)\n\nThe most interesting cases are first two rows, i.e. optimal plans.\nEither with readahead enabled (first row) or disabled (second row).\n\nTwo observations:\n\n* The combine limit seems to have negligible impact. There's no visible\ndifference between combine_limit=8kB and 128kB.\n\n* Parallel queries seem to work about the same as master (especially for\noptimal cases, but even for not optimal ones).\n\n\nThe optimal plans with kernel readahead (two charts in the first row)\nlook fairly good. There are a couple regressed cases, but a bunch of\nfaster ones too.\n\nThe optimal plans without kernel read ahead (two charts in the second\nrow) perform pretty poorly - there are massive regressions. But I think\nthe obvious reason is that the streaming read API skips prefetches for\nsequential access patterns, relying on kernel to do the readahead. But\nif the kernel readahead is disabled for the device, that obviously can't\nhappen ...\n\nI think the question is how much we can (want to) rely on the readahead\nto be done by the kernel. Maybe there should be some flag to force\nissuing fadvise even for sequential patterns, perhaps at the tablespace\nlevel? I don't recall seeing a system with disabled readahead, but I'm\nsure there are cases where it may not really work - it clearly can't\nwork with direct I/O, but I've also not been very successful with\nprefetching on ZFS.\n\nThe non-optimal plans (second half of the charts) shows about the same\nbehavior, but the regressions are more frequent / significant.\n\nI'm also attaching results for the 5k \"optimal\" runs, showing the timing\nfor master and patched build, sorted by (patched/master). 
The most\nsignificant regressions are with readahead=0, but if you filter that out\nyou'll see the regressions affect a mix of data sets, not just the\nuniformly random data used as example before.\n\nOn 3/29/24 12:17, Thomas Munro wrote:\n> ...\n> On the other hand this might be a pretty unusual data distribution.\n> People who store random numbers or hashes or whatever probably don't\n> really search for ranges of them (unless they're trying to mine\n> bitcoins in SQL). I dunno. Maybe we need more realistic tests, or\n> maybe we're just discovering all the things that are bad about the\n> pre-existing code.\n\nI certainly admit the data sets are synthetic and perhaps adversarial.\nMy intent was to cover a wide range of data sets, to trigger even less\ncommon cases. It's certainly up to debate how serious the regressions on\nthose data sets are in practice, I'm not suggesting \"this strange data\nset makes it slower than master, so we can't commit this\".\n\nBut I'd also point out that what matters is the access pattern, not the\nexact query generating it. I agree people probably don't do random\nnumbers or hashes with range conditions, but that's irrelevant - what\nit's all about is the page access pattern. If you have IPv4 addresses and\nquery that, that's likely going to be pretty random, for example.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 29 Mar 2024 16:52:58 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Mar 30, 2024 at 4:53 AM Tomas Vondra\n<[email protected]> wrote:\n> Two observations:\n>\n> * The combine limit seems to have negligible impact. There's no visible\n> difference between combine_limit=8kB and 128kB.\n>\n> * Parallel queries seem to work about the same as master (especially for\n> optimal cases, but even for not optimal ones).\n>\n>\n> The optimal plans with kernel readahead (two charts in the first row)\n> look fairly good. There are a couple regressed cases, but a bunch of\n> faster ones too.\n\nThanks for doing this!\n\n> The optimal plans without kernel read ahead (two charts in the second\n> row) perform pretty poorly - there are massive regressions. But I think\n> the obvious reason is that the streaming read API skips prefetches for\n> sequential access patterns, relying on kernel to do the readahead. But\n> if the kernel readahead is disabled for the device, that obviously can't\n> happen ...\n\nRight, it does seem that this whole concept is sensitive on the\n'borderline' between sequential and random, and this patch changes\nthat a bit and we lose some. It's becoming much clearer to me that\nmaster is already exposing weird kinks, and the streaming version is\nmostly better, certainly on low IOPS systems. I suspect that there\nmust be queries in the wild that would run much faster with eic=0 than\neic=1 today due to that, and while the streaming version also loses in\nsome cases, it seems that it mostly loses because of not triggering\nRA, which can at least be improved by increasing the RA window. On\nthe flip side, master is more prone to running out of IOPS and there\nis no way to tune your way out of that.\n\n> I think the question is how much we can (want to) rely on the readahead\n> to be done by the kernel. ...\n\nWe already rely on it everywhere, for basic things like sequential scan.\n\n> ... Maybe there should be some flag to force\n> issuing fadvise even for sequential patterns, perhaps at the tablespace\n> level? ...\n\nYeah, I've wondered about trying harder to \"second guess\" the Linux\nRA. At the moment, read_stream.c detects *exactly* sequential reads\n(see seq_blocknum) to suppress advice, but if we knew/guessed the RA\nwindow size, we could (1) detect it with the same window that Linux\nwill use to detect it, and (2) [new realisation from yesterday's\ntesting] we could even \"tickle\" it to wake it up in certain cases\nwhere it otherwise wouldn't, by temporarily using a smaller\nio_combine_limit if certain patterns come along. I think that sounds\nlike madness (I suspect that any place where the latter would help is\na place where you could turn RA up a bit higher for the same effect\nwithout weird kludges), or another way to put it would be to call it\n\"overfitting\" to the pre-existing quirks; but maybe it's a future\nresearch idea...\n\n> I don't recall seeing a system with disabled readahead, but I'm\n> sure there are cases where it may not really work - it clearly can't\n> work with direct I/O, ...\n\nRight, for direct I/O everything is slow right now including seq scan.\nWe need to start asynchronous reads in the background (imagine\nliterally just a bunch of background \"I/O workers\" running preadv() on\nyour behalf to get your future buffers ready for you, or equivalently\nLinux io_uring). 
That's the real goal of this project: restructuring\nso we have the information we need to do that, ie teach every part of\nPostgreSQL to predict the future in a standard and centralised way.\nShould work out better than RA heuristics, because we're not just\ndriving in a straight line, we can turn corners too.\n\n> ... but I've also not been very successful with\n> prefetching on ZFS.\n\nposix_fadvise() did not do anything in OpenZFS before 2.2, maybe you\nhave an older version?\n\n> I certainly admit the data sets are synthetic and perhaps adversarial.\n> My intent was to cover a wide range of data sets, to trigger even less\n> common cases. It's certainly up to debate how serious the regressions on\n> those data sets are in practice, I'm not suggesting \"this strange data\n> set makes it slower than master, so we can't commit this\".\n\nRight, yeah. Thanks! Your initial results seemed discouraging, but\nlooking closer I'm starting to feel a lot more positive about\nstreaming BHS.\n\n\n",
"msg_date": "Sat, 30 Mar 2024 10:39:05 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Mar 30, 2024 at 10:39 AM Thomas Munro <[email protected]> wrote:\n> On Sat, Mar 30, 2024 at 4:53 AM Tomas Vondra\n> <[email protected]> wrote:\n> > ... Maybe there should be some flag to force\n> > issuing fadvise even for sequential patterns, perhaps at the tablespace\n> > level? ...\n>\n> Yeah, I've wondered about trying harder to \"second guess\" the Linux\n> RA. At the moment, read_stream.c detects *exactly* sequential reads\n> (see seq_blocknum) to suppress advice, but if we knew/guessed the RA\n> window size, we could (1) detect it with the same window that Linux\n> will use to detect it, and (2) [new realisation from yesterday's\n> testing] we could even \"tickle\" it to wake it up in certain cases\n> where it otherwise wouldn't, by temporarily using a smaller\n> io_combine_limit if certain patterns come along. I think that sounds\n> like madness (I suspect that any place where the latter would help is\n> a place where you could turn RA up a bit higher for the same effect\n> without weird kludges), or another way to put it would be to call it\n> \"overfitting\" to the pre-existing quirks; but maybe it's a future\n> research idea...\n\nI guess I missed a step when responding that suggestion: I don't think\nwe could have an \"issue advice always\" flag, because it doesn't seem\nto work out as well as letting the kernel do it, and a global flag\nlike that would affect everything else including sequential scans\n(once the streaming seq scan patch goes in). But suppose we could do\nthat, maybe even just for BHS. In my little test yesterday had to\nissue a lot of them, patched eic=4, to beat the kernel's RA with\nunpatched eic=0:\n\neic unpatched patched\n0 4172 9572\n1 30846 10376\n2 18435 5562\n4 18980 3503\n\nSo if we forced fadvise to be issued with a GUC, it still wouldn't be\ngood enough in this case. So we might need to try to understand what\nexactly is waking the RA up for unpatched but not patched, and try to\ntickle it by doing a little less I/O combining (for example just\nsetting io_combine_limit=1 gives the same number for eic=0, a major\nclue), but that seems to be going down a weird path, and tuning such a\ncopying algorithm seems too hard.\n\n\n",
"msg_date": "Sat, 30 Mar 2024 11:03:20 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/29/24 23:03, Thomas Munro wrote:\n> On Sat, Mar 30, 2024 at 10:39 AM Thomas Munro <[email protected]> wrote:\n>> On Sat, Mar 30, 2024 at 4:53 AM Tomas Vondra\n>> <[email protected]> wrote:\n>>> ... Maybe there should be some flag to force\n>>> issuing fadvise even for sequential patterns, perhaps at the tablespace\n>>> level? ...\n>>\n>> Yeah, I've wondered about trying harder to \"second guess\" the Linux\n>> RA. At the moment, read_stream.c detects *exactly* sequential reads\n>> (see seq_blocknum) to suppress advice, but if we knew/guessed the RA\n>> window size, we could (1) detect it with the same window that Linux\n>> will use to detect it, and (2) [new realisation from yesterday's\n>> testing] we could even \"tickle\" it to wake it up in certain cases\n>> where it otherwise wouldn't, by temporarily using a smaller\n>> io_combine_limit if certain patterns come along. I think that sounds\n>> like madness (I suspect that any place where the latter would help is\n>> a place where you could turn RA up a bit higher for the same effect\n>> without weird kludges), or another way to put it would be to call it\n>> \"overfitting\" to the pre-existing quirks; but maybe it's a future\n>> research idea...\n> \n\nI don't know if I'd call this overfitting - yes, we certainly don't want\nto tailor this code to only work with the linux RA, but OTOH it's the RA\nis what most systems do. And if we plan to rely on that, we probably\nhave to \"respect\" how it works ...\n\nMoving to a \"clean\" approach that however triggers regressions does not\nseem like a great thing for users. I'm not saying the goal has to be \"no\nregressions\", that would be rather impossible. At this point I still try\nto understand what's causing this.\n\nBTW are you suggesting that increasing the RA distance could maybe fix\nthe regressions? I can give it a try, but I was assuming that 128kB\nreadahead would be enough for combine_limit=8kB.\n\n> I guess I missed a step when responding that suggestion: I don't think\n> we could have an \"issue advice always\" flag, because it doesn't seem\n> to work out as well as letting the kernel do it, and a global flag\n> like that would affect everything else including sequential scans\n> (once the streaming seq scan patch goes in). But suppose we could do\n> that, maybe even just for BHS. In my little test yesterday had to\n> issue a lot of them, patched eic=4, to beat the kernel's RA with\n> unpatched eic=0:\n> \n> eic unpatched patched\n> 0 4172 9572\n> 1 30846 10376\n> 2 18435 5562\n> 4 18980 3503\n> \n> So if we forced fadvise to be issued with a GUC, it still wouldn't be\n> good enough in this case. So we might need to try to understand what\n> exactly is waking the RA up for unpatched but not patched, and try to\n> tickle it by doing a little less I/O combining (for example just\n> setting io_combine_limit=1 gives the same number for eic=0, a major\n> clue), but that seems to be going down a weird path, and tuning such a\n> copying algorithm seems too hard.\n\nHmmm. I admit I didn't think about the \"always prefetch\" flag too much,\nbut I did imagine it'd only affect some places (e.g. BHS, but not for\nsequential scans). If it could be done by lowering the combine limit,\nthat could work too - in fact, I was wondering if we should have combine\nlimit as a tablespace parameter too.\n\nBut I think adding such knobs should be only the last resort - I myself\ndon't know how to set these parameters, how could we expect users to\npick good values? 
Better to have something that \"just works\".\n\nI admit I never 100% understood when exactly the kernel RA kicks in, but\nI always thought it's enough for the patterns to be only \"close enough\"\nto sequential. Isn't the problem that this only skips fadvise for 100%\nsequential patterns, but keeps prefetching for cases the RA would deal\non it's own? So maybe we should either relax the conditions when to skip\nfadvise, or combine even pages that are not perfectly sequential (I'm\nnot sure if that's possible only for fadvise), though.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 30 Mar 2024 00:34:46 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 3/29/24 22:39, Thomas Munro wrote:\n> ...\n> \n>> I don't recall seeing a system with disabled readahead, but I'm\n>> sure there are cases where it may not really work - it clearly can't\n>> work with direct I/O, ...\n> \n> Right, for direct I/O everything is slow right now including seq scan.\n> We need to start asynchronous reads in the background (imagine\n> literally just a bunch of background \"I/O workers\" running preadv() on\n> your behalf to get your future buffers ready for you, or equivalently\n> Linux io_uring). That's the real goal of this project: restructuring\n> so we have the information we need to do that, ie teach every part of\n> PostgreSQL to predict the future in a standard and centralised way.\n> Should work out better than RA heuristics, because we're not just\n> driving in a straight line, we can turn corners too.\n> \n>> ... but I've also not been very successful with\n>> prefetching on ZFS.\n> \n> posix_favise() did not do anything in OpenZFS before 2.2, maybe you\n> have an older version?\n> \n\nSorry, I meant the prefetch (readahead) built into ZFS. I may be wrong\nbut I don't think the regular RA (in linux kernel) works for ZFS, right?\n\nI was wondering if we could use this (posix_fadvise) to improve that,\nessentially by issuing fadvise even for sequential patterns. But now\nthat I think about that, if posix_fadvise works since 2.2, maybe RA\nworks too now?)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 30 Mar 2024 00:40:01 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Mar 30, 2024 at 12:40 PM Tomas Vondra\n<[email protected]> wrote:\n> Sorry, I meant the prefetch (readahead) built into ZFS. I may be wrong\n> but I don't think the regular RA (in linux kernel) works for ZFS, right?\n\nRight, it separate page cache (\"ARC\") and prefetch settings:\n\nhttps://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html\n\nThat's probably why Linux posix_fadvise didn't affect it, well that\nand the fact, at a wild guess, that Solaris didn't have that system\ncall...\n\n> I was wondering if we could use this (posix_fadvise) to improve that,\n> essentially by issuing fadvise even for sequential patterns. But now\n> that I think about that, if posix_fadvise works since 2.2, maybe RA\n> works too now?)\n\nIt should work fine. I am planning to look into this a bit some day\nsoon -- I think there may be some interesting interactions between\nsystems with big pages/records like ZFS/BTRFS/... and io_combine_limit\nthat might offer interesting optimisation tweak opportunities, but\nfirst things first...\n\n\n",
"msg_date": "Sat, 30 Mar 2024 12:56:39 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Mar 30, 2024 at 12:34 PM Tomas Vondra\n<[email protected]> wrote:\n> Hmmm. I admit I didn't think about the \"always prefetch\" flag too much,\n> but I did imagine it'd only affect some places (e.g. BHS, but not for\n> sequential scans). If it could be done by lowering the combine limit,\n> that could work too - in fact, I was wondering if we should have combine\n> limit as a tablespace parameter too.\n\nGood idea! Will add. Planning to commit the basic patch very soon,\nI'm just thinking about what to do with Heikki's recent optimisation\nfeedback (ie can it be done as follow-up, he thinks so, I'm thinking\nabout that today as time is running short).\n\n> But I think adding such knobs should be only the last resort - I myself\n> don't know how to set these parameters, how could we expect users to\n> pick good values? Better to have something that \"just works\".\n\nAgreed.\n\n> I admit I never 100% understood when exactly the kernel RA kicks in, but\n> I always thought it's enough for the patterns to be only \"close enough\"\n> to sequential. Isn't the problem that this only skips fadvise for 100%\n> sequential patterns, but keeps prefetching for cases the RA would deal\n> on it's own? So maybe we should either relax the conditions when to skip\n> fadvise, or combine even pages that are not perfectly sequential (I'm\n> not sure if that's possible only for fadvise), though.\n\nYes that might be worth considering, if we know/guess what the OS RA\nwindow size is for a tablespace. I will post a patch for that for\nconsideration/testing as a potential follow-up as it's super easy,\njust for experimentation. I just fear that it's getting into the\nrealms of \"hard to explain/understand\" but on the other hand I guess\nwe already have the mechanism and have to explain it.\n\n\n",
"msg_date": "Sat, 30 Mar 2024 13:09:29 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 12:05:15PM +0100, Tomas Vondra wrote:\n> \n> \n> On 3/29/24 02:12, Thomas Munro wrote:\n> > On Fri, Mar 29, 2024 at 10:43 AM Tomas Vondra\n> > <[email protected]> wrote:\n> >> I think there's some sort of bug, triggering this assert in heapam\n> >>\n> >> Assert(BufferGetBlockNumber(hscan->rs_cbuf) == tbmres->blockno);\n> > \n> > Thanks for the repro. I can't seem to reproduce it (still trying) but\n> > I assume this is with Melanie's v11 patch set which had\n> > v11-0016-v10-Read-Stream-API.patch.\n> > \n> > Would you mind removing that commit and instead applying the v13\n> > stream_read.c patches[1]? v10 stream_read.c was a little confused\n> > about random I/O combining, which I fixed with a small adjustment to\n> > the conditions for the \"if\" statement right at the end of\n> > read_stream_look_ahead(). Sorry about that. The fixed version, with\n> > eic=4, with your test query using WHERE a < a, ends its scan with:\n> > \n> \n> I'll give that a try. Unfortunately unfortunately the v11 still has the\n> problem I reported about a week ago:\n> \n> ERROR: prefetch and main iterators are out of sync\n> \n> So I can't run the full benchmarks :-( but master vs. streaming read API\n> should work, I think.\n\nOdd, I didn't notice you reporting this ERROR popping up. Now that I\ntake a look, v11 (at least, maybe also v10) had this very sill mistake:\n\n if (scan->bm_parallel == NULL &&\n scan->rs_pf_bhs_iterator &&\n hscan->pfblockno > hscan->rs_base.blockno)\n elog(ERROR, \"prefetch and main iterators are out of sync\");\n\nIt errors out if the prefetch block is ahead of the current block --\nwhich is the opposite of what we want. I've fixed this in attached v12.\n\nThis version also has v13 of the streaming read API. I noticed one\nmistake in my bitmapheap scan streaming read user -- it freed the\nstreaming read object at the wrong time. I don't know if this was\ncausing any other issues, but it at least is fixed in this version.\n\n- Melanie",
"msg_date": "Sun, 31 Mar 2024 11:45:51 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Mar 31, 2024 at 11:45:51AM -0400, Melanie Plageman wrote:\n> On Fri, Mar 29, 2024 at 12:05:15PM +0100, Tomas Vondra wrote:\n> > \n> > \n> > On 3/29/24 02:12, Thomas Munro wrote:\n> > > On Fri, Mar 29, 2024 at 10:43 AM Tomas Vondra\n> > > <[email protected]> wrote:\n> > >> I think there's some sort of bug, triggering this assert in heapam\n> > >>\n> > >> Assert(BufferGetBlockNumber(hscan->rs_cbuf) == tbmres->blockno);\n> > > \n> > > Thanks for the repro. I can't seem to reproduce it (still trying) but\n> > > I assume this is with Melanie's v11 patch set which had\n> > > v11-0016-v10-Read-Stream-API.patch.\n> > > \n> > > Would you mind removing that commit and instead applying the v13\n> > > stream_read.c patches[1]? v10 stream_read.c was a little confused\n> > > about random I/O combining, which I fixed with a small adjustment to\n> > > the conditions for the \"if\" statement right at the end of\n> > > read_stream_look_ahead(). Sorry about that. The fixed version, with\n> > > eic=4, with your test query using WHERE a < a, ends its scan with:\n> > > \n> > \n> > I'll give that a try. Unfortunately unfortunately the v11 still has the\n> > problem I reported about a week ago:\n> > \n> > ERROR: prefetch and main iterators are out of sync\n> > \n> > So I can't run the full benchmarks :-( but master vs. streaming read API\n> > should work, I think.\n> \n> Odd, I didn't notice you reporting this ERROR popping up. Now that I\n> take a look, v11 (at least, maybe also v10) had this very sill mistake:\n> \n> if (scan->bm_parallel == NULL &&\n> scan->rs_pf_bhs_iterator &&\n> hscan->pfblockno > hscan->rs_base.blockno)\n> elog(ERROR, \"prefetch and main iterators are out of sync\");\n> \n> It errors out if the prefetch block is ahead of the current block --\n> which is the opposite of what we want. I've fixed this in attached v12.\n> \n> This version also has v13 of the streaming read API. I noticed one\n> mistake in my bitmapheap scan streaming read user -- it freed the\n> streaming read object at the wrong time. I don't know if this was\n> causing any other issues, but it at least is fixed in this version.\n\nAttached v13 is rebased over master (which includes the streaming read\nAPI now). I also reset the streaming read object on rescan instead of\ncreating a new one each time.\n\nI don't know how much chance any of this has of going in to 17 now, but\nI thought I would start looking into the regression repro Tomas provided\nin [1].\n\nI'm also not sure if I should try and group the commits into fewer\ncommits now or wait until I have some idea of whether or not the\napproach in 0013 and 0014 is worth pursuing.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/5d5954ed-6f43-4f1a-8e19-ece75b2b7362%40enterprisedb.com",
"msg_date": "Wed, 3 Apr 2024 18:57:59 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 4/4/24 00:57, Melanie Plageman wrote:\n> On Sun, Mar 31, 2024 at 11:45:51AM -0400, Melanie Plageman wrote:\n>> On Fri, Mar 29, 2024 at 12:05:15PM +0100, Tomas Vondra wrote:\n>>>\n>>>\n>>> On 3/29/24 02:12, Thomas Munro wrote:\n>>>> On Fri, Mar 29, 2024 at 10:43 AM Tomas Vondra\n>>>> <[email protected]> wrote:\n>>>>> I think there's some sort of bug, triggering this assert in heapam\n>>>>>\n>>>>> Assert(BufferGetBlockNumber(hscan->rs_cbuf) == tbmres->blockno);\n>>>>\n>>>> Thanks for the repro. I can't seem to reproduce it (still trying) but\n>>>> I assume this is with Melanie's v11 patch set which had\n>>>> v11-0016-v10-Read-Stream-API.patch.\n>>>>\n>>>> Would you mind removing that commit and instead applying the v13\n>>>> stream_read.c patches[1]? v10 stream_read.c was a little confused\n>>>> about random I/O combining, which I fixed with a small adjustment to\n>>>> the conditions for the \"if\" statement right at the end of\n>>>> read_stream_look_ahead(). Sorry about that. The fixed version, with\n>>>> eic=4, with your test query using WHERE a < a, ends its scan with:\n>>>>\n>>>\n>>> I'll give that a try. Unfortunately unfortunately the v11 still has the\n>>> problem I reported about a week ago:\n>>>\n>>> ERROR: prefetch and main iterators are out of sync\n>>>\n>>> So I can't run the full benchmarks :-( but master vs. streaming read API\n>>> should work, I think.\n>>\n>> Odd, I didn't notice you reporting this ERROR popping up. Now that I\n>> take a look, v11 (at least, maybe also v10) had this very sill mistake:\n>>\n>> if (scan->bm_parallel == NULL &&\n>> scan->rs_pf_bhs_iterator &&\n>> hscan->pfblockno > hscan->rs_base.blockno)\n>> elog(ERROR, \"prefetch and main iterators are out of sync\");\n>>\n>> It errors out if the prefetch block is ahead of the current block --\n>> which is the opposite of what we want. I've fixed this in attached v12.\n>>\n>> This version also has v13 of the streaming read API. I noticed one\n>> mistake in my bitmapheap scan streaming read user -- it freed the\n>> streaming read object at the wrong time. I don't know if this was\n>> causing any other issues, but it at least is fixed in this version.\n> \n> Attached v13 is rebased over master (which includes the streaming read\n> API now). I also reset the streaming read object on rescan instead of\n> creating a new one each time.\n> \n> I don't know how much chance any of this has of going in to 17 now, but\n> I thought I would start looking into the regression repro Tomas provided\n> in [1].\n> \n\nMy personal opinion is that we should try to get in as many of the the\nrefactoring patches as possible, but I think it's probably too late for\nthe actual switch to the streaming API.\n\nIf someone else feels like committing that part, I won't stand in the\nway, but I'm not quite convinced it won't cause regressions. Maybe it's\nOK but I'd need more time to do more tests, collect data, and so on. And\nI don't think we have that, especially considering we'd still need to\ncommit the other parts first.\n\n> I'm also not sure if I should try and group the commits into fewer\n> commits now or wait until I have some idea of whether or not the\n> approach in 0013 and 0014 is worth pursuing.\n> \n\nYou mean whether to pursue the approach in general, or for v17? I think\nit looks like the right approach, but for v17 see above :-(\n\nAs for merging, I wouldn't do that. 
I looked at the commits and while\nsome of them seem somewhat \"trivial\", I really like how you organized\nthe commits, and kept those that just \"move\" code around, and those that\nactually change stuff. It's much easier to understand, IMO.\n\nI went through the first ~10 commits, and added some review - either as\na separate commit, when possible, in the code as XXX comment, and also\nin the commit message. The code tweaks are utterly trivial (whitespace\nor indentation to make the line shorter). It shouldn't take much time to\ndeal with those, I think.\n\nI think the main focus should be updating the commit messages. If it was\nonly a single patch, I'd probably try to write the messages myself, but\nwith this many patches it'd be great if you could update those and I'll\nreview that before commit.\n\nI always struggle with writing commit messages myself, and it takes me\nages to write a good one (well, I think the message is good, but who\nknows ...). But I think a good message should be concise enough to\nexplain what and why it's done. It may reference a thread for all the\ngory details, but the basic reasoning should be in the commit message.\nFor example the message for \"BitmapPrefetch use prefetch block recheck\nfor skip fetch\" now says that it \"makes more sense to do X\" but does not\nreally say why that's the case. The linked message does, but it'd be\ngood to have that in the message (because how would I know how much of\nthe thread to read?).\n\nAlso, it'd be very helpful if you could update the author & reviewed-by\nfields. I'll review those before commit, ofc, but I admit I lost track\nof who reviewed which part.\n\nI'd focus on the first ~8-9 commits or so for now, we can commit more if\nthings go reasonably well.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 4 Apr 2024 16:35:45 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, Apr 04, 2024 at 04:35:45PM +0200, Tomas Vondra wrote:\n> \n> \n> On 4/4/24 00:57, Melanie Plageman wrote:\n> > On Sun, Mar 31, 2024 at 11:45:51AM -0400, Melanie Plageman wrote:\n> >> On Fri, Mar 29, 2024 at 12:05:15PM +0100, Tomas Vondra wrote:\n> >>>\n> >>>\n> >>> On 3/29/24 02:12, Thomas Munro wrote:\n> >>>> On Fri, Mar 29, 2024 at 10:43 AM Tomas Vondra\n> >>>> <[email protected]> wrote:\n> >>>>> I think there's some sort of bug, triggering this assert in heapam\n> >>>>>\n> >>>>> Assert(BufferGetBlockNumber(hscan->rs_cbuf) == tbmres->blockno);\n> >>>>\n> >>>> Thanks for the repro. I can't seem to reproduce it (still trying) but\n> >>>> I assume this is with Melanie's v11 patch set which had\n> >>>> v11-0016-v10-Read-Stream-API.patch.\n> >>>>\n> >>>> Would you mind removing that commit and instead applying the v13\n> >>>> stream_read.c patches[1]? v10 stream_read.c was a little confused\n> >>>> about random I/O combining, which I fixed with a small adjustment to\n> >>>> the conditions for the \"if\" statement right at the end of\n> >>>> read_stream_look_ahead(). Sorry about that. The fixed version, with\n> >>>> eic=4, with your test query using WHERE a < a, ends its scan with:\n> >>>>\n> >>>\n> >>> I'll give that a try. Unfortunately unfortunately the v11 still has the\n> >>> problem I reported about a week ago:\n> >>>\n> >>> ERROR: prefetch and main iterators are out of sync\n> >>>\n> >>> So I can't run the full benchmarks :-( but master vs. streaming read API\n> >>> should work, I think.\n> >>\n> >> Odd, I didn't notice you reporting this ERROR popping up. Now that I\n> >> take a look, v11 (at least, maybe also v10) had this very sill mistake:\n> >>\n> >> if (scan->bm_parallel == NULL &&\n> >> scan->rs_pf_bhs_iterator &&\n> >> hscan->pfblockno > hscan->rs_base.blockno)\n> >> elog(ERROR, \"prefetch and main iterators are out of sync\");\n> >>\n> >> It errors out if the prefetch block is ahead of the current block --\n> >> which is the opposite of what we want. I've fixed this in attached v12.\n> >>\n> >> This version also has v13 of the streaming read API. I noticed one\n> >> mistake in my bitmapheap scan streaming read user -- it freed the\n> >> streaming read object at the wrong time. I don't know if this was\n> >> causing any other issues, but it at least is fixed in this version.\n> > \n> > Attached v13 is rebased over master (which includes the streaming read\n> > API now). I also reset the streaming read object on rescan instead of\n> > creating a new one each time.\n> > \n> > I don't know how much chance any of this has of going in to 17 now, but\n> > I thought I would start looking into the regression repro Tomas provided\n> > in [1].\n> > \n> \n> My personal opinion is that we should try to get in as many of the the\n> refactoring patches as possible, but I think it's probably too late for\n> the actual switch to the streaming API.\n\nCool. In the attached v15, I have dropped all commits that are related\nto the streaming read API and included *only* commits that are\nbeneficial to master. A few of the commits are merged or reordered as\nwell.\n\nWhile going through the commits with this new goal in mind (forget about\nthe streaming read API for now), I realized that it doesn't make much\nsense to just eliminate the layering violation for the current block and\nleave it there for the prefetch block. 
I had de-prioritized solving this\nwhen I thought we would just delete the prefetch code and replace it\nwith the streaming read.\n\nNow that we aren't doing that, I've spent the day trying to resolve the\nissues with pushing the prefetch code into heapam.c that I cited in [1].\n0010 - 0013 are the result of this. They are not very polished yet and\nneed more cleanup and review (especially 0011, which is probably too\nlarge), but I am happy with the solution I came up with.\n\nBasically, there are too many members needed for bitmap heap scan to put\nthem all in the HeapScanDescData (don't want to bloat it). So, I've made\na new BitmapHeapScanDescData and associated begin/rescan/end() functions\n\nIn the end, with all patches applied, BitmapHeapNext() loops invoking\ntable_scan_bitmap_next_tuple() and table AMs can implement that however\nthey choose.\n\n> > I'm also not sure if I should try and group the commits into fewer\n> > commits now or wait until I have some idea of whether or not the\n> > approach in 0013 and 0014 is worth pursuing.\n> > \n> \n> You mean whether to pursue the approach in general, or for v17? I think\n> it looks like the right approach, but for v17 see above :-(\n> \n> As for merging, I wouldn't do that. I looked at the commits and while\n> some of them seem somewhat \"trivial\", I really like how you organized\n> the commits, and kept those that just \"move\" code around, and those that\n> actually change stuff. It's much easier to understand, IMO.\n> \n> I went through the first ~10 commits, and added some review - either as\n> a separate commit, when possible, in the code as XXX comment, and also\n> in the commit message. The code tweaks are utterly trivial (whitespace\n> or indentation to make the line shorter). It shouldn't take much time to\n> deal with those, I think.\n\nAttached v15 incorporates your v14-0002-review.\n\nFor your v14-0008-review, I actually ended up removing that commit\nbecause once I removed everything that was for streaming read API, it\nbecame redundant with another commit.\n\nFor your v14-0010-review, we actually can't easily get rid of those\nlocal variables because we make the iterators before we make the scan\ndescriptors and the commit following that commit moves the iterators\nfrom the BitmapHeapScanState to the scan descriptor.\n\n> I think the main focus should be updating the commit messages. If it was\n> only a single patch, I'd probably try to write the messages myself, but\n> with this many patches it'd be great if you could update those and I'll\n> review that before commit.\n\nI did my best to update the commit messages to be less specific and more\nfocused on \"why should I care\". I found myself wanting to explain why I\nimplemented something the way I did and then getting back into the\nimplementation details again. I'm not sure if I suceeded in having less\ndetails and more substance.\n\n> I always struggle with writing commit messages myself, and it takes me\n> ages to write a good one (well, I think the message is good, but who\n> knows ...). But I think a good message should be concise enough to\n> explain what and why it's done. It may reference a thread for all the\n> gory details, but the basic reasoning should be in the commit message.\n> For example the message for \"BitmapPrefetch use prefetch block recheck\n> for skip fetch\" now says that it \"makes more sense to do X\" but does not\n> really say why that's the case. 
The linked message does, but it'd be\n> good to have that in the message (because how would I know how much of\n> the thread to read?).\n\nI fixed that particular one. I tried to take that feedback and apply it\nto other commit messages. I don't know how successful I was...\n\n> Also, it'd be very helpful if you could update the author & reviewed-by\n> fields. I'll review those before commit, ofc, but I admit I lost track\n> of who reviewed which part.\n\nI have updated reviewers. I didn't add reviewers on the ones that\nhaven't really been reviewed yet.\n\n> I'd focus on the first ~8-9 commits or so for now, we can commit more if\n> things go reasonably well.\n\nSounds good. I will spend cleanup time on 0010-0013 tomorrow but would\nlove to know if you agree with the direction before I spend more time.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/20240323002211.on5vb5ulk6lsdb2u%40liskov",
"msg_date": "Fri, 5 Apr 2024 04:06:34 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Fri, Apr 05, 2024 at 04:06:34AM -0400, Melanie Plageman wrote:\n> On Thu, Apr 04, 2024 at 04:35:45PM +0200, Tomas Vondra wrote:\n> > \n> > \n> > On 4/4/24 00:57, Melanie Plageman wrote:\n> > > On Sun, Mar 31, 2024 at 11:45:51AM -0400, Melanie Plageman wrote:\n> > >> On Fri, Mar 29, 2024 at 12:05:15PM +0100, Tomas Vondra wrote:\n> > >>>\n> > >>>\n> > >>> On 3/29/24 02:12, Thomas Munro wrote:\n> > >>>> On Fri, Mar 29, 2024 at 10:43 AM Tomas Vondra\n> > >>>> <[email protected]> wrote:\n> > >>>>> I think there's some sort of bug, triggering this assert in heapam\n> > >>>>>\n> > >>>>> Assert(BufferGetBlockNumber(hscan->rs_cbuf) == tbmres->blockno);\n> > >>>>\n> > >>>> Thanks for the repro. I can't seem to reproduce it (still trying) but\n> > >>>> I assume this is with Melanie's v11 patch set which had\n> > >>>> v11-0016-v10-Read-Stream-API.patch.\n> > >>>>\n> > >>>> Would you mind removing that commit and instead applying the v13\n> > >>>> stream_read.c patches[1]? v10 stream_read.c was a little confused\n> > >>>> about random I/O combining, which I fixed with a small adjustment to\n> > >>>> the conditions for the \"if\" statement right at the end of\n> > >>>> read_stream_look_ahead(). Sorry about that. The fixed version, with\n> > >>>> eic=4, with your test query using WHERE a < a, ends its scan with:\n> > >>>>\n> > >>>\n> > >>> I'll give that a try. Unfortunately unfortunately the v11 still has the\n> > >>> problem I reported about a week ago:\n> > >>>\n> > >>> ERROR: prefetch and main iterators are out of sync\n> > >>>\n> > >>> So I can't run the full benchmarks :-( but master vs. streaming read API\n> > >>> should work, I think.\n> > >>\n> > >> Odd, I didn't notice you reporting this ERROR popping up. Now that I\n> > >> take a look, v11 (at least, maybe also v10) had this very sill mistake:\n> > >>\n> > >> if (scan->bm_parallel == NULL &&\n> > >> scan->rs_pf_bhs_iterator &&\n> > >> hscan->pfblockno > hscan->rs_base.blockno)\n> > >> elog(ERROR, \"prefetch and main iterators are out of sync\");\n> > >>\n> > >> It errors out if the prefetch block is ahead of the current block --\n> > >> which is the opposite of what we want. I've fixed this in attached v12.\n> > >>\n> > >> This version also has v13 of the streaming read API. I noticed one\n> > >> mistake in my bitmapheap scan streaming read user -- it freed the\n> > >> streaming read object at the wrong time. I don't know if this was\n> > >> causing any other issues, but it at least is fixed in this version.\n> > > \n> > > Attached v13 is rebased over master (which includes the streaming read\n> > > API now). I also reset the streaming read object on rescan instead of\n> > > creating a new one each time.\n> > > \n> > > I don't know how much chance any of this has of going in to 17 now, but\n> > > I thought I would start looking into the regression repro Tomas provided\n> > > in [1].\n> > > \n> > \n> > My personal opinion is that we should try to get in as many of the the\n> > refactoring patches as possible, but I think it's probably too late for\n> > the actual switch to the streaming API.\n> \n> Cool. In the attached v15, I have dropped all commits that are related\n> to the streaming read API and included *only* commits that are\n> beneficial to master. 
A few of the commits are merged or reordered as\n> well.\n> \n> While going through the commits with this new goal in mind (forget about\n> the streaming read API for now), I realized that it doesn't make much\n> sense to just eliminate the layering violation for the current block and\n> leave it there for the prefetch block. I had de-prioritized solving this\n> when I thought we would just delete the prefetch code and replace it\n> with the streaming read.\n> \n> Now that we aren't doing that, I've spent the day trying to resolve the\n> issues with pushing the prefetch code into heapam.c that I cited in [1].\n> 0010 - 0013 are the result of this. They are not very polished yet and\n> need more cleanup and review (especially 0011, which is probably too\n> large), but I am happy with the solution I came up with.\n> \n> Basically, there are too many members needed for bitmap heap scan to put\n> them all in the HeapScanDescData (don't want to bloat it). So, I've made\n> a new BitmapHeapScanDescData and associated begin/rescan/end() functions\n> \n> In the end, with all patches applied, BitmapHeapNext() loops invoking\n> table_scan_bitmap_next_tuple() and table AMs can implement that however\n> they choose.\n> \n> > > I'm also not sure if I should try and group the commits into fewer\n> > > commits now or wait until I have some idea of whether or not the\n> > > approach in 0013 and 0014 is worth pursuing.\n> > > \n> > \n> > You mean whether to pursue the approach in general, or for v17? I think\n> > it looks like the right approach, but for v17 see above :-(\n> > \n> > As for merging, I wouldn't do that. I looked at the commits and while\n> > some of them seem somewhat \"trivial\", I really like how you organized\n> > the commits, and kept those that just \"move\" code around, and those that\n> > actually change stuff. It's much easier to understand, IMO.\n> > \n> > I went through the first ~10 commits, and added some review - either as\n> > a separate commit, when possible, in the code as XXX comment, and also\n> > in the commit message. The code tweaks are utterly trivial (whitespace\n> > or indentation to make the line shorter). It shouldn't take much time to\n> > deal with those, I think.\n> \n> Attached v15 incorporates your v14-0002-review.\n> \n> For your v14-0008-review, I actually ended up removing that commit\n> because once I removed everything that was for streaming read API, it\n> became redundant with another commit.\n> \n> For your v14-0010-review, we actually can't easily get rid of those\n> local variables because we make the iterators before we make the scan\n> descriptors and the commit following that commit moves the iterators\n> from the BitmapHeapScanState to the scan descriptor.\n> \n> > I think the main focus should be updating the commit messages. If it was\n> > only a single patch, I'd probably try to write the messages myself, but\n> > with this many patches it'd be great if you could update those and I'll\n> > review that before commit.\n> \n> I did my best to update the commit messages to be less specific and more\n> focused on \"why should I care\". I found myself wanting to explain why I\n> implemented something the way I did and then getting back into the\n> implementation details again. I'm not sure if I suceeded in having less\n> details and more substance.\n> \n> > I always struggle with writing commit messages myself, and it takes me\n> > ages to write a good one (well, I think the message is good, but who\n> > knows ...). 
But I think a good message should be concise enough to\n> > explain what and why it's done. It may reference a thread for all the\n> > gory details, but the basic reasoning should be in the commit message.\n> > For example the message for \"BitmapPrefetch use prefetch block recheck\n> > for skip fetch\" now says that it \"makes more sense to do X\" but does not\n> > really say why that's the case. The linked message does, but it'd be\n> > good to have that in the message (because how would I know how much of\n> > the thread to read?).\n> \n> I fixed that particular one. I tried to take that feedback and apply it\n> to other commit messages. I don't know how successful I was...\n> \n> > Also, it'd be very helpful if you could update the author & reviewed-by\n> > fields. I'll review those before commit, ofc, but I admit I lost track\n> > of who reviewed which part.\n> \n> I have updated reviewers. I didn't add reviewers on the ones that\n> haven't really been reviewed yet.\n> \n> > I'd focus on the first ~8-9 commits or so for now, we can commit more if\n> > things go reasonably well.\n> \n> Sounds good. I will spend cleanup time on 0010-0013 tomorrow but would\n> love to know if you agree with the direction before I spend more time.\n\nIn attached v16, I've split out 0010-0013 into 0011-0017. I think it is\nmuch easier to understand.\n\nWhile I was doing that, I realized that I should remove the call to\ntable_rescan() from ExecReScanBitmapHeapScan() and just rely on the new\ntable_rescan_bm() invoked from BitmapHeapNext(). That is done in the\nattached.\n\n0010-0018 still need comments updated but I focused on getting the split\nout, reviewable version of them ready. I'll add comments (especially to\n0011 table AM functions) tomorrow. I also have to double-check if I\nshould add any asserts for table AMs about having implemented all of the\nnew begin/re/endscan() functions.\n\n- Melanie",
"msg_date": "Fri, 5 Apr 2024 19:53:30 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 4/6/24 01:53, Melanie Plageman wrote:\n> On Fri, Apr 05, 2024 at 04:06:34AM -0400, Melanie Plageman wrote:\n>> On Thu, Apr 04, 2024 at 04:35:45PM +0200, Tomas Vondra wrote:\n>>>\n>>>\n>>> On 4/4/24 00:57, Melanie Plageman wrote:\n>>>> On Sun, Mar 31, 2024 at 11:45:51AM -0400, Melanie Plageman wrote:\n>>>>> On Fri, Mar 29, 2024 at 12:05:15PM +0100, Tomas Vondra wrote:\n>>>>>>\n>>>>>>\n>>>>>> On 3/29/24 02:12, Thomas Munro wrote:\n>>>>>>> On Fri, Mar 29, 2024 at 10:43 AM Tomas Vondra\n>>>>>>> <[email protected]> wrote:\n>>>>>>>> I think there's some sort of bug, triggering this assert in heapam\n>>>>>>>>\n>>>>>>>> Assert(BufferGetBlockNumber(hscan->rs_cbuf) == tbmres->blockno);\n>>>>>>>\n>>>>>>> Thanks for the repro. I can't seem to reproduce it (still trying) but\n>>>>>>> I assume this is with Melanie's v11 patch set which had\n>>>>>>> v11-0016-v10-Read-Stream-API.patch.\n>>>>>>>\n>>>>>>> Would you mind removing that commit and instead applying the v13\n>>>>>>> stream_read.c patches[1]? v10 stream_read.c was a little confused\n>>>>>>> about random I/O combining, which I fixed with a small adjustment to\n>>>>>>> the conditions for the \"if\" statement right at the end of\n>>>>>>> read_stream_look_ahead(). Sorry about that. The fixed version, with\n>>>>>>> eic=4, with your test query using WHERE a < a, ends its scan with:\n>>>>>>>\n>>>>>>\n>>>>>> I'll give that a try. Unfortunately unfortunately the v11 still has the\n>>>>>> problem I reported about a week ago:\n>>>>>>\n>>>>>> ERROR: prefetch and main iterators are out of sync\n>>>>>>\n>>>>>> So I can't run the full benchmarks :-( but master vs. streaming read API\n>>>>>> should work, I think.\n>>>>>\n>>>>> Odd, I didn't notice you reporting this ERROR popping up. Now that I\n>>>>> take a look, v11 (at least, maybe also v10) had this very sill mistake:\n>>>>>\n>>>>> if (scan->bm_parallel == NULL &&\n>>>>> scan->rs_pf_bhs_iterator &&\n>>>>> hscan->pfblockno > hscan->rs_base.blockno)\n>>>>> elog(ERROR, \"prefetch and main iterators are out of sync\");\n>>>>>\n>>>>> It errors out if the prefetch block is ahead of the current block --\n>>>>> which is the opposite of what we want. I've fixed this in attached v12.\n>>>>>\n>>>>> This version also has v13 of the streaming read API. I noticed one\n>>>>> mistake in my bitmapheap scan streaming read user -- it freed the\n>>>>> streaming read object at the wrong time. I don't know if this was\n>>>>> causing any other issues, but it at least is fixed in this version.\n>>>>\n>>>> Attached v13 is rebased over master (which includes the streaming read\n>>>> API now). I also reset the streaming read object on rescan instead of\n>>>> creating a new one each time.\n>>>>\n>>>> I don't know how much chance any of this has of going in to 17 now, but\n>>>> I thought I would start looking into the regression repro Tomas provided\n>>>> in [1].\n>>>>\n>>>\n>>> My personal opinion is that we should try to get in as many of the the\n>>> refactoring patches as possible, but I think it's probably too late for\n>>> the actual switch to the streaming API.\n>>\n>> Cool. In the attached v15, I have dropped all commits that are related\n>> to the streaming read API and included *only* commits that are\n>> beneficial to master. 
A few of the commits are merged or reordered as\n>> well.\n>>\n>> While going through the commits with this new goal in mind (forget about\n>> the streaming read API for now), I realized that it doesn't make much\n>> sense to just eliminate the layering violation for the current block and\n>> leave it there for the prefetch block. I had de-prioritized solving this\n>> when I thought we would just delete the prefetch code and replace it\n>> with the streaming read.\n>>\n>> Now that we aren't doing that, I've spent the day trying to resolve the\n>> issues with pushing the prefetch code into heapam.c that I cited in [1].\n>> 0010 - 0013 are the result of this. They are not very polished yet and\n>> need more cleanup and review (especially 0011, which is probably too\n>> large), but I am happy with the solution I came up with.\n>>\n>> Basically, there are too many members needed for bitmap heap scan to put\n>> them all in the HeapScanDescData (don't want to bloat it). So, I've made\n>> a new BitmapHeapScanDescData and associated begin/rescan/end() functions\n>>\n>> In the end, with all patches applied, BitmapHeapNext() loops invoking\n>> table_scan_bitmap_next_tuple() and table AMs can implement that however\n>> they choose.\n>>\n>>>> I'm also not sure if I should try and group the commits into fewer\n>>>> commits now or wait until I have some idea of whether or not the\n>>>> approach in 0013 and 0014 is worth pursuing.\n>>>>\n>>>\n>>> You mean whether to pursue the approach in general, or for v17? I think\n>>> it looks like the right approach, but for v17 see above :-(\n>>>\n>>> As for merging, I wouldn't do that. I looked at the commits and while\n>>> some of them seem somewhat \"trivial\", I really like how you organized\n>>> the commits, and kept those that just \"move\" code around, and those that\n>>> actually change stuff. It's much easier to understand, IMO.\n>>>\n>>> I went through the first ~10 commits, and added some review - either as\n>>> a separate commit, when possible, in the code as XXX comment, and also\n>>> in the commit message. The code tweaks are utterly trivial (whitespace\n>>> or indentation to make the line shorter). It shouldn't take much time to\n>>> deal with those, I think.\n>>\n>> Attached v15 incorporates your v14-0002-review.\n>>\n>> For your v14-0008-review, I actually ended up removing that commit\n>> because once I removed everything that was for streaming read API, it\n>> became redundant with another commit.\n>>\n>> For your v14-0010-review, we actually can't easily get rid of those\n>> local variables because we make the iterators before we make the scan\n>> descriptors and the commit following that commit moves the iterators\n>> from the BitmapHeapScanState to the scan descriptor.\n>>\n>>> I think the main focus should be updating the commit messages. If it was\n>>> only a single patch, I'd probably try to write the messages myself, but\n>>> with this many patches it'd be great if you could update those and I'll\n>>> review that before commit.\n>>\n>> I did my best to update the commit messages to be less specific and more\n>> focused on \"why should I care\". I found myself wanting to explain why I\n>> implemented something the way I did and then getting back into the\n>> implementation details again. I'm not sure if I suceeded in having less\n>> details and more substance.\n>>\n>>> I always struggle with writing commit messages myself, and it takes me\n>>> ages to write a good one (well, I think the message is good, but who\n>>> knows ...). 
But I think a good message should be concise enough to\n>>> explain what and why it's done. It may reference a thread for all the\n>>> gory details, but the basic reasoning should be in the commit message.\n>>> For example the message for \"BitmapPrefetch use prefetch block recheck\n>>> for skip fetch\" now says that it \"makes more sense to do X\" but does not\n>>> really say why that's the case. The linked message does, but it'd be\n>>> good to have that in the message (because how would I know how much of\n>>> the thread to read?).\n>>\n>> I fixed that particular one. I tried to take that feedback and apply it\n>> to other commit messages. I don't know how successful I was...\n>>\n>>> Also, it'd be very helpful if you could update the author & reviewed-by\n>>> fields. I'll review those before commit, ofc, but I admit I lost track\n>>> of who reviewed which part.\n>>\n>> I have updated reviewers. I didn't add reviewers on the ones that\n>> haven't really been reviewed yet.\n>>\n>>> I'd focus on the first ~8-9 commits or so for now, we can commit more if\n>>> things go reasonably well.\n>>\n>> Sounds good. I will spend cleanup time on 0010-0013 tomorrow but would\n>> love to know if you agree with the direction before I spend more time.\n> \n> In attached v16, I've split out 0010-0013 into 0011-0017. I think it is\n> much easier to understand.\n> \n\nDamn it, I went through the whole patch series, adding a couple review\ncomments and tweaks, and was just about to share my version, but you bet\nme to it ;-)\n\nAnyway, I've attached it as .tgz in order to not confuse cfbot. All the\nreview comments are marked with XXX, so grep for that in the patches.\nThere's two separate patches - the first one suggests a code change, so\nit was better to not merge that with your code. The second has just a\ncouple XXX comments, I'm not sure why I kept it separate.\n\nA couple review comments:\n\n* I think 0001-0009 are 99% ready to. I reworded some of the commit\nmessages a bit - I realize it's a bit bold, considering you're native\nspeaker and I'm not. If you could check I didn't make it worse, that\nwould be great.\n\n* I'm not sure extra_flags is the right way to pass the flag in 0003.\nThe \"extra_\" name is a bit weird, and no other table AM functions do it\nthis way and pass explicit bool flags instead. So my first \"review\"\ncommit does it like that. Do you agree it's better that way?\n\n* The one question I'm somewhat unsure about is why Tom chose to use the\n\"wrong\" recheck flag in the 2017 commit, when the correct recheck flag\nis readily available. Surely that had a reason, right? But I can't think\nof one ...\n\n> While I was doing that, I realized that I should remove the call to\n> table_rescan() from ExecReScanBitmapHeapScan() and just rely on the new\n> table_rescan_bm() invoked from BitmapHeapNext(). That is done in the\n> attached.\n> \n> 0010-0018 still need comments updated but I focused on getting the split\n> out, reviewable version of them ready. I'll add comments (especially to\n> 0011 table AM functions) tomorrow. I also have to double-check if I\n> should add any asserts for table AMs about having implemented all of the\n> new begin/re/endscan() functions.\n> \n\nI added a couple more comments for those patches (10-12). Chances are\nthe split in v16 clarifies some of my questions, but it'll have to wait\ntill the morning ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 6 Apr 2024 02:51:45 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Apr 06, 2024 at 02:51:45AM +0200, Tomas Vondra wrote:\n> \n> \n> On 4/6/24 01:53, Melanie Plageman wrote:\n> > On Fri, Apr 05, 2024 at 04:06:34AM -0400, Melanie Plageman wrote:\n> >> On Thu, Apr 04, 2024 at 04:35:45PM +0200, Tomas Vondra wrote:\n> >>> On 4/4/24 00:57, Melanie Plageman wrote:\n> >>>> On Sun, Mar 31, 2024 at 11:45:51AM -0400, Melanie Plageman wrote:\n> >>> I'd focus on the first ~8-9 commits or so for now, we can commit more if\n> >>> things go reasonably well.\n> >>\n> >> Sounds good. I will spend cleanup time on 0010-0013 tomorrow but would\n> >> love to know if you agree with the direction before I spend more time.\n> > \n> > In attached v16, I've split out 0010-0013 into 0011-0017. I think it is\n> > much easier to understand.\n> > \n> \n> Anyway, I've attached it as .tgz in order to not confuse cfbot. All the\n> review comments are marked with XXX, so grep for that in the patches.\n> There's two separate patches - the first one suggests a code change, so\n> it was better to not merge that with your code. The second has just a\n> couple XXX comments, I'm not sure why I kept it separate.\n> \n> A couple review comments:\n> \n> * I think 0001-0009 are 99% ready to. I reworded some of the commit\n> messages a bit - I realize it's a bit bold, considering you're native\n> speaker and I'm not. If you could check I didn't make it worse, that\n> would be great.\n\nAttached v17 has *only* patches 0001-0009 with these changes. I will\nwork on applying the remaining patches, addressing feedback, and adding\ncomments next.\n\nI have reviewed and incorporated all of your feedback on these patches.\nAttached v17 is your exact patches with 1 or 2 *very* slight tweaks to\ncommit messages (comma splice removal and single word adjustments) as\nwell as the changes listed below:\n\nI have changed the following:\n\n- 0003 added an assert that rs_empty_tuples_pending is 0 on rescan and\n\tendscan\n\n- 0004 (your 0005)-- I followed up with Tom, but for now I have just\n\tremoved the XXX and also reworded the message a bit\n\n- 0006 (your 0007) fixed up the variable name (you changed valid ->\n\tvalid_block but it had gotten changed back)\n\nI have open questions on the following:\n\n- 0003: should it be SO_NEED_TUPLES and need_tuples (instead of\n\tSO_NEED_TUPLE and need_tuple)?\n\n- 0009 (your 0010)\n\t- Should I mention in the commit message that we added blockno and\n\t\tpfblockno in the BitmapHeapScanState only for validation or is that\n\t\ttoo specific?\n\n\t- Should I mention that a future (imminent) commit will remove the\n\t\titerators from TableScanDescData and put them in HeapScanDescData? I\n\t\timagine folks don't want those there, but it is easier for the\n\t\tprogression of commits to put them there first and then move them\n\n\t- I'm worried this comment is vague and or potentially not totally\n\t\tcorrect. Should we remove it? I don't think we have conclusive proof\n\t\tthat this is true.\n /*\n * Adjusting the prefetch iterator before invoking\n * table_scan_bitmap_next_block() keeps prefetch distance higher across\n * the parallel workers.\n */\n\n\n> * I'm not sure extra_flags is the right way to pass the flag in 0003.\n> The \"extra_\" name is a bit weird, and no other table AM functions do it\n> this way and pass explicit bool flags instead. So my first \"review\"\n> commit does it like that. 
Do you agree it's better that way?\n\nYes.\n\n> * The one question I'm somewhat unsure about is why Tom chose to use the\n> \"wrong\" recheck flag in the 2017 commit, when the correct recheck flag\n> is readily available. Surely that had a reason, right? But I can't think\n> of one ...\n\nSee above.\n\n> > While I was doing that, I realized that I should remove the call to\n> > table_rescan() from ExecReScanBitmapHeapScan() and just rely on the new\n> > table_rescan_bm() invoked from BitmapHeapNext(). That is done in the\n> > attached.\n> > \n> > 0010-0018 still need comments updated but I focused on getting the split\n> > out, reviewable version of them ready. I'll add comments (especially to\n> > 0011 table AM functions) tomorrow. I also have to double-check if I\n> > should add any asserts for table AMs about having implemented all of the\n> > new begin/re/endscan() functions.\n> > \n> \n> I added a couple more comments for those patches (10-12). Chances are\n> the split in v16 clarifies some of my questions, but it'll have to wait\n> till the morning ...\n\nWill address this in next mail.\n\n- Melanie",
"msg_date": "Sat, 6 Apr 2024 09:40:11 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 4/6/24 02:51, Tomas Vondra wrote:\n> \n> * The one question I'm somewhat unsure about is why Tom chose to use the\n> \"wrong\" recheck flag in the 2017 commit, when the correct recheck flag\n> is readily available. Surely that had a reason, right? But I can't think\n> of one ...\n> \n\nI've been wondering about this a bit more, so I decided to experiment\nand try to construct a case for which the current code prefetches the\nwrong blocks, and the patch fixes that. But I haven't been very\nsuccessful so far :-(\n\nMy understanding was that the current code should do the wrong thing if\nI alternate all-visible and not-all-visible pages. This understanding is\nnot correct, as I learned, because the thing that needs to change is the\nrecheck flag, not visibility :-( I'm still posting what I tried, perhaps\nyou will have an idea how to alter it to demonstrate the incorrect\nbehavior with current master.\n\nThe test was very simple:\n\n create table t (a int, b int) with (fillfactor=10);\n insert into t select mod((i/22),2), (i/22)\n from generate_series(0,1000) s(i);\n create index on t (a);\n\nwhich creates a table with 46 pages, 22 rows per page, column \"a\"\nalternates between 0/1 on pages, column \"b\" increments on each page (so\n\"b\" identifies page).\n\nand then\n\n delete from t where mod(b,8) = 0;\n\nwhich deletes tuples on pages 0, 8, 16, 24, 32, 40, so these pages will\nneed to be prefetched as not-all-visible by this query\n\n explain analyze select count(1) from t where a = 0\n\nwhen forced to do bitmap heap scan. The other even-numbered pages remain\nall-visible. I added a bit of logging into BitmapPrefetch(), but even\nwith master I get this:\n\n LOG: prefetching block 8 0 current block 6 0\n LOG: prefetching block 16 0 current block 14 0\n LOG: prefetching block 24 0 current block 22 0\n LOG: prefetching block 32 0 current block 30 0\n LOG: prefetching block 40 0 current block 38 0\n\nSo it prefetches the correct pages (the other value is the recheck flag\nfor that block from the iterator result).\n\nTurns out (and I realize the comment about the assumption actually\nstates that, I just failed to understand it) the thing that would have\nto differ for the blocks is the recheck flag.\n\nBut that can't actually happen because that's set by the AM/opclass and\nthe built-in ones do essentially this:\n\n.../hash.c: scan->xs_recheck = true;\n.../nbtree.c: scan->xs_recheck = false;\n\n\ngist opclasses (e.g. btree_gist):\n\n\t/* All cases served by this function are exact */\n\t*recheck = false;\n\nspgist opclasses (e.g. geo_spgist):\n\n\t/* All tests are exact. */\n\tout->recheck = false;\n\nIf there's an opclass that alters the recheck flag, it's well hidden and\nI missed it.\n\nAnyway, after this exercise and learning more about the recheck flag, I\nthink I agree the assumption is unnecessary. It's pretty harmless\nbecause none of the built-in opclasses alters the recheck flag, but the\ncorrect recheck flag is readily available. I'm still a bit puzzled why\nthe 2017 commit even relied on this assumption, though.\n\nregards\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 6 Apr 2024 16:40:01 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
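The experiment above runs the query "when forced to do bitmap heap scan" and relies on the untouched pages staying all-visible, but neither the forcing settings nor the visibility-map setup appear in the message. A minimal sketch of that setup follows; the VACUUM, the planner/prefetch GUCs, and the pg_visibility check are assumptions added here for reproducibility, not part of the original test script:

    -- table and index exactly as in the message above
    create table t (a int, b int) with (fillfactor=10);
    insert into t select mod((i/22),2), (i/22)
      from generate_series(0,1000) s(i);
    create index on t (a);
    vacuum t;                          -- assumed: sets the all-visible bits before the delete
    delete from t where mod(b,8) = 0;  -- clears the bit on pages 0, 8, 16, 24, 32, 40

    -- assumed settings to force the bitmap heap scan path and enable prefetching
    set enable_seqscan = off;
    set enable_indexscan = off;
    set enable_indexonlyscan = off;
    set effective_io_concurrency = 4;
    explain (analyze, buffers) select count(1) from t where a = 0;

    -- optional sanity check of the alternating all-visible pattern (contrib/pg_visibility)
    create extension if not exists pg_visibility;
    select blkno, all_visible from pg_visibility_map('t');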
{
"msg_contents": "On 4/6/24 15:40, Melanie Plageman wrote:\n> On Sat, Apr 06, 2024 at 02:51:45AM +0200, Tomas Vondra wrote:\n>>\n>>\n>> On 4/6/24 01:53, Melanie Plageman wrote:\n>>> On Fri, Apr 05, 2024 at 04:06:34AM -0400, Melanie Plageman wrote:\n>>>> On Thu, Apr 04, 2024 at 04:35:45PM +0200, Tomas Vondra wrote:\n>>>>> On 4/4/24 00:57, Melanie Plageman wrote:\n>>>>>> On Sun, Mar 31, 2024 at 11:45:51AM -0400, Melanie Plageman wrote:\n>>>>> I'd focus on the first ~8-9 commits or so for now, we can commit more if\n>>>>> things go reasonably well.\n>>>>\n>>>> Sounds good. I will spend cleanup time on 0010-0013 tomorrow but would\n>>>> love to know if you agree with the direction before I spend more time.\n>>>\n>>> In attached v16, I've split out 0010-0013 into 0011-0017. I think it is\n>>> much easier to understand.\n>>>\n>>\n>> Anyway, I've attached it as .tgz in order to not confuse cfbot. All the\n>> review comments are marked with XXX, so grep for that in the patches.\n>> There's two separate patches - the first one suggests a code change, so\n>> it was better to not merge that with your code. The second has just a\n>> couple XXX comments, I'm not sure why I kept it separate.\n>>\n>> A couple review comments:\n>>\n>> * I think 0001-0009 are 99% ready to. I reworded some of the commit\n>> messages a bit - I realize it's a bit bold, considering you're native\n>> speaker and I'm not. If you could check I didn't make it worse, that\n>> would be great.\n> \n> Attached v17 has *only* patches 0001-0009 with these changes. I will\n> work on applying the remaining patches, addressing feedback, and adding\n> comments next.\n> \n> I have reviewed and incorporated all of your feedback on these patches.\n> Attached v17 is your exact patches with 1 or 2 *very* slight tweaks to\n> commit messages (comma splice removal and single word adjustments) as\n> well as the changes listed below:\n> \n> I have changed the following:\n> \n> - 0003 added an assert that rs_empty_tuples_pending is 0 on rescan and\n> \tendscan\n> \n\nOK\n\n> - 0004 (your 0005)-- I followed up with Tom, but for now I have just\n> \tremoved the XXX and also reworded the message a bit\n> \n\nAfter the exercise I described a couple minutes ago, I think I'm\nconvinced the assumption is unnecessary and we should use the correct\nrecheck. Not that it'd make any difference in practice, considering none\nof the opclasses ever changes the recheck.\n\nMaybe the most prudent thing would be to skip this commit and maybe\nleave this for later, but I'm not forcing you to do that if it would\nmean a lot of disruption for the following patches.\n\n> - 0006 (your 0007) fixed up the variable name (you changed valid ->\n> \tvalid_block but it had gotten changed back)\n> \n\nOK\n\n> I have open questions on the following:\n> \n> - 0003: should it be SO_NEED_TUPLES and need_tuples (instead of\n> \tSO_NEED_TUPLE and need_tuple)?\n> \n\nI think SO_NEED_TUPLES is more accurate, as we need all tuples from the\nblock. But either would work.\n\n> - 0009 (your 0010)\n> \t- Should I mention in the commit message that we added blockno and\n> \t\tpfblockno in the BitmapHeapScanState only for validation or is that\n> \t\ttoo specific?\n> \n\nFor the commit message I'd say it's too specific, I'd put it in the\ncomment before the struct.\n\n> \t- Should I mention that a future (imminent) commit will remove the\n> \t\titerators from TableScanDescData and put them in HeapScanDescData? 
I\n> \t\timagine folks don't want those there, but it is easier for the\n> \t\tprogression of commits to put them there first and then move them\n> \n\nI'd try not to mention future commits as justification too often, if we\ndon't know that the future commit lands shortly after.\n\n> \t- I'm worried this comment is vague and or potentially not totally\n> \t\tcorrect. Should we remove it? I don't think we have conclusive proof\n> \t\tthat this is true.\n> /*\n> * Adjusting the prefetch iterator before invoking\n> * table_scan_bitmap_next_block() keeps prefetch distance higher across\n> * the parallel workers.\n> */\n> \n\nTBH it's not clear to me what \"higher across parallel workers\" means.\nBut it sure shouldn't claim things that we think may not be correct. I\ndon't have a good idea how to reword it, though.\n\n> \n>> * I'm not sure extra_flags is the right way to pass the flag in 0003.\n>> The \"extra_\" name is a bit weird, and no other table AM functions do it\n>> this way and pass explicit bool flags instead. So my first \"review\"\n>> commit does it like that. Do you agree it's better that way?\n> \n> Yes.\n> \n\nCool\n\n>> * The one question I'm somewhat unsure about is why Tom chose to use the\n>> \"wrong\" recheck flag in the 2017 commit, when the correct recheck flag\n>> is readily available. Surely that had a reason, right? But I can't think\n>> of one ...\n> \n> See above.\n> \n>>> While I was doing that, I realized that I should remove the call to\n>>> table_rescan() from ExecReScanBitmapHeapScan() and just rely on the new\n>>> table_rescan_bm() invoked from BitmapHeapNext(). That is done in the\n>>> attached.\n>>>\n>>> 0010-0018 still need comments updated but I focused on getting the split\n>>> out, reviewable version of them ready. I'll add comments (especially to\n>>> 0011 table AM functions) tomorrow. I also have to double-check if I\n>>> should add any asserts for table AMs about having implemented all of the\n>>> new begin/re/endscan() functions.\n>>>\n>>\n>> I added a couple more comments for those patches (10-12). Chances are\n>> the split in v16 clarifies some of my questions, but it'll have to wait\n>> till the morning ...\n> \n> Will address this in next mail.\n> \n\nOK, thanks. If think 0001-0008 are ready to go, with some minor tweaks\nper above (tuple vs. tuples etc.), and the question about the recheck\nflag. If you can do these tweaks, I'll get that committed today and we\ncan try to get a couple more patches in tomorrow.\n\nSounds reasonable?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 6 Apr 2024 16:57:51 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
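Since the message above questions what "keeps prefetch distance higher across the parallel workers" actually means, here is a sketch of how one might at least observe the parallel case on the test table t from earlier in the thread. Every setting below is an assumption chosen only to coax the planner into a Parallel Bitmap Heap Scan; none of it comes from the patches under discussion:

    -- assumed settings; the goal is just to get a Gather -> Parallel Bitmap Heap Scan plan
    set max_parallel_workers_per_gather = 2;
    set parallel_setup_cost = 0;
    set parallel_tuple_cost = 0;
    set min_parallel_table_scan_size = 0;
    set effective_io_concurrency = 4;   -- the prefetch distance being debated only matters with eic > 0
    set enable_seqscan = off;
    set enable_indexscan = off;
    set enable_indexonlyscan = off;
    explain (analyze, verbose) select count(1) from t where a = 0;
    -- with luck the plan shows "Parallel Bitmap Heap Scan on public.t" under the Gather node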
{
"msg_contents": "Melanie Plageman <[email protected]> writes:\n> On Sat, Apr 06, 2024 at 02:51:45AM +0200, Tomas Vondra wrote:\n>> * The one question I'm somewhat unsure about is why Tom chose to use the\n>> \"wrong\" recheck flag in the 2017 commit, when the correct recheck flag\n>> is readily available. Surely that had a reason, right? But I can't think\n>> of one ...\n\n> See above.\n\nHi, I hadn't been paying attention to this thread, but Melanie pinged\nme off-list about this question. I think it's just a flat-out\noversight in 7c70996eb. Looking at the mailing list thread\n(particularly [1][2]), it seems that Alexander hadn't really addressed\nthe question of when to prefetch at all, but just skipped prefetch if\nthe current page was skippable:\n\n+\t\t/*\n+\t\t * If we did not need to fetch the current page,\n+\t\t * we probably will not need to fetch the next.\n+\t\t */\n+\t\treturn;\n\nIt looks like I noticed that we could check the appropriate VM bits,\nbut failed to notice that we could easily check the appropriate\nrecheck flag as well. Feel free to change it.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/a6434d5c-ed8d-b09c-a7c3-b2d1677e35b3%40postgrespro.ru\n[2] https://www.postgresql.org/message-id/5974.1509573988%40sss.pgh.pa.us\n\n\n",
"msg_date": "Sat, 06 Apr 2024 10:59:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Apr 06, 2024 at 04:57:51PM +0200, Tomas Vondra wrote:\n> On 4/6/24 15:40, Melanie Plageman wrote:\n> > On Sat, Apr 06, 2024 at 02:51:45AM +0200, Tomas Vondra wrote:\n> >>\n> >>\n> >> On 4/6/24 01:53, Melanie Plageman wrote:\n> >>> On Fri, Apr 05, 2024 at 04:06:34AM -0400, Melanie Plageman wrote:\n> >>>> On Thu, Apr 04, 2024 at 04:35:45PM +0200, Tomas Vondra wrote:\n> >>>>> On 4/4/24 00:57, Melanie Plageman wrote:\n> >>>>>> On Sun, Mar 31, 2024 at 11:45:51AM -0400, Melanie Plageman wrote:\n> >>>>> I'd focus on the first ~8-9 commits or so for now, we can commit more if\n> >>>>> things go reasonably well.\n> >>>>\n> >>>> Sounds good. I will spend cleanup time on 0010-0013 tomorrow but would\n> >>>> love to know if you agree with the direction before I spend more time.\n> >>>\n> >>> In attached v16, I've split out 0010-0013 into 0011-0017. I think it is\n> >>> much easier to understand.\n> >>>\n> >>\n> >> Anyway, I've attached it as .tgz in order to not confuse cfbot. All the\n> >> review comments are marked with XXX, so grep for that in the patches.\n> >> There's two separate patches - the first one suggests a code change, so\n> >> it was better to not merge that with your code. The second has just a\n> >> couple XXX comments, I'm not sure why I kept it separate.\n> >>\n> >> A couple review comments:\n> >>\n> >> * I think 0001-0009 are 99% ready to. I reworded some of the commit\n> >> messages a bit - I realize it's a bit bold, considering you're native\n> >> speaker and I'm not. If you could check I didn't make it worse, that\n> >> would be great.\n> > \n> > Attached v17 has *only* patches 0001-0009 with these changes. I will\n> > work on applying the remaining patches, addressing feedback, and adding\n> > comments next.\n> > \n> > I have reviewed and incorporated all of your feedback on these patches.\n> > Attached v17 is your exact patches with 1 or 2 *very* slight tweaks to\n> > commit messages (comma splice removal and single word adjustments) as\n> > well as the changes listed below:\n> > \n> > I have open questions on the following:\n> > \n> > - 0003: should it be SO_NEED_TUPLES and need_tuples (instead of\n> > \tSO_NEED_TUPLE and need_tuple)?\n> > \n> \n> I think SO_NEED_TUPLES is more accurate, as we need all tuples from the\n> block. But either would work.\n\nAttached v18 changes it to TUPLES/tuples\n\n> \n> > - 0009 (your 0010)\n> > \t- Should I mention in the commit message that we added blockno and\n> > \t\tpfblockno in the BitmapHeapScanState only for validation or is that\n> > \t\ttoo specific?\n> > \n> \n> For the commit message I'd say it's too specific, I'd put it in the\n> comment before the struct.\n\nIt is in the comment for the struct\n\n> > \t- I'm worried this comment is vague and or potentially not totally\n> > \t\tcorrect. Should we remove it? I don't think we have conclusive proof\n> > \t\tthat this is true.\n> > /*\n> > * Adjusting the prefetch iterator before invoking\n> > * table_scan_bitmap_next_block() keeps prefetch distance higher across\n> > * the parallel workers.\n> > */\n> > \n> \n> TBH it's not clear to me what \"higher across parallel workers\" means.\n> But it sure shouldn't claim things that we think may not be correct. I\n> don't have a good idea how to reword it, though.\n\nI realized it makes more sense to add a FIXME (I used XXX. I'm not when\nto use what) with a link to the message where Andres describes why he\nthinks it is a bug. If we plan on fixing it, it is good to have a record\nof that. 
And it makes it easier to put a clear and accurate comment.\nDone in 0009.\n\n> OK, thanks. If think 0001-0008 are ready to go, with some minor tweaks\n> per above (tuple vs. tuples etc.), and the question about the recheck\n> flag. If you can do these tweaks, I'll get that committed today and we\n> can try to get a couple more patches in tomorrow.\n\nSounds good.\n\n- Melanie",
"msg_date": "Sat, 6 Apr 2024 12:04:23 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, Apr 06, 2024 at 12:04:23PM -0400, Melanie Plageman wrote:\n> On Sat, Apr 06, 2024 at 04:57:51PM +0200, Tomas Vondra wrote:\n> > On 4/6/24 15:40, Melanie Plageman wrote:\n> > > On Sat, Apr 06, 2024 at 02:51:45AM +0200, Tomas Vondra wrote:\n> > >>\n> > >>\n> > >> On 4/6/24 01:53, Melanie Plageman wrote:\n> > >>> On Fri, Apr 05, 2024 at 04:06:34AM -0400, Melanie Plageman wrote:\n> > >>>> On Thu, Apr 04, 2024 at 04:35:45PM +0200, Tomas Vondra wrote:\n> > >>>>> On 4/4/24 00:57, Melanie Plageman wrote:\n> > >>>>>> On Sun, Mar 31, 2024 at 11:45:51AM -0400, Melanie Plageman wrote:\n> > >>>>> I'd focus on the first ~8-9 commits or so for now, we can commit more if\n> > >>>>> things go reasonably well.\n> > >>>>\n> > >>>> Sounds good. I will spend cleanup time on 0010-0013 tomorrow but would\n> > >>>> love to know if you agree with the direction before I spend more time.\n> > >>>\n> > >>> In attached v16, I've split out 0010-0013 into 0011-0017. I think it is\n> > >>> much easier to understand.\n> > >>>\n> > >>\n> > >> Anyway, I've attached it as .tgz in order to not confuse cfbot. All the\n> > >> review comments are marked with XXX, so grep for that in the patches.\n> > >> There's two separate patches - the first one suggests a code change, so\n> > >> it was better to not merge that with your code. The second has just a\n> > >> couple XXX comments, I'm not sure why I kept it separate.\n> > >>\n> > >> A couple review comments:\n> > >>\n> > >> * I think 0001-0009 are 99% ready to. I reworded some of the commit\n> > >> messages a bit - I realize it's a bit bold, considering you're native\n> > >> speaker and I'm not. If you could check I didn't make it worse, that\n> > >> would be great.\n> > > \n> > > Attached v17 has *only* patches 0001-0009 with these changes. I will\n> > > work on applying the remaining patches, addressing feedback, and adding\n> > > comments next.\n> > > \n> > > I have reviewed and incorporated all of your feedback on these patches.\n> > > Attached v17 is your exact patches with 1 or 2 *very* slight tweaks to\n> > > commit messages (comma splice removal and single word adjustments) as\n> > > well as the changes listed below:\n> > > \n> > > I have open questions on the following:\n> > > \n> > > - 0003: should it be SO_NEED_TUPLES and need_tuples (instead of\n> > > \tSO_NEED_TUPLE and need_tuple)?\n> > > \n> > \n> > I think SO_NEED_TUPLES is more accurate, as we need all tuples from the\n> > block. But either would work.\n> \n> Attached v18 changes it to TUPLES/tuples\n> \n> > \n> > > - 0009 (your 0010)\n> > > \t- Should I mention in the commit message that we added blockno and\n> > > \t\tpfblockno in the BitmapHeapScanState only for validation or is that\n> > > \t\ttoo specific?\n> > > \n> > \n> > For the commit message I'd say it's too specific, I'd put it in the\n> > comment before the struct.\n> \n> It is in the comment for the struct\n> \n> > > \t- I'm worried this comment is vague and or potentially not totally\n> > > \t\tcorrect. Should we remove it? I don't think we have conclusive proof\n> > > \t\tthat this is true.\n> > > /*\n> > > * Adjusting the prefetch iterator before invoking\n> > > * table_scan_bitmap_next_block() keeps prefetch distance higher across\n> > > * the parallel workers.\n> > > */\n> > > \n> > \n> > TBH it's not clear to me what \"higher across parallel workers\" means.\n> > But it sure shouldn't claim things that we think may not be correct. 
I\n> > don't have a good idea how to reword it, though.\n> \n> I realized it makes more sense to add a FIXME (I used XXX. I'm not when\n> to use what) with a link to the message where Andres describes why he\n> thinks it is a bug. If we plan on fixing it, it is good to have a record\n> of that. And it makes it easier to put a clear and accurate comment.\n> Done in 0009.\n> \n> > OK, thanks. If think 0001-0008 are ready to go, with some minor tweaks\n> > per above (tuple vs. tuples etc.), and the question about the recheck\n> > flag. If you can do these tweaks, I'll get that committed today and we\n> > can try to get a couple more patches in tomorrow.\n\nAttached v19 rebases the rest of the commits from v17 over the first\nnine patches from v18. All patches 0001-0009 are unchanged from v18. I\nhave made updates and done cleanup on 0010-0021.\n\n- Melanie",
"msg_date": "Sat, 6 Apr 2024 17:34:50 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 4/6/24 23:34, Melanie Plageman wrote:\n> ...\n>>\n>> I realized it makes more sense to add a FIXME (I used XXX. I'm not when\n>> to use what) with a link to the message where Andres describes why he\n>> thinks it is a bug. If we plan on fixing it, it is good to have a record\n>> of that. And it makes it easier to put a clear and accurate comment.\n>> Done in 0009.\n>>\n>>> OK, thanks. If think 0001-0008 are ready to go, with some minor tweaks\n>>> per above (tuple vs. tuples etc.), and the question about the recheck\n>>> flag. If you can do these tweaks, I'll get that committed today and we\n>>> can try to get a couple more patches in tomorrow.\n> \n> Attached v19 rebases the rest of the commits from v17 over the first\n> nine patches from v18. All patches 0001-0009 are unchanged from v18. I\n> have made updates and done cleanup on 0010-0021.\n> \n\nI've pushed 0001-0005, I'll get back to this tomorrow and see how much\nmore we can get in for v17.\n\nWhat bothers me on 0006-0008 is that the justification in the commit\nmessages is \"future commit will do something\". I think it's fine to have\na separate \"prepareatory\" patches (I really like how you structured the\npatches this way), but it'd be good to have them right before that\n\"future\" commit - I'd like not to have one in v17 and then the \"future\ncommit\" in v18, because that's annoying complication for backpatching,\n(and probably also when implementing the AM?) etc.\n\nAFAICS for v19, the \"future commit\" for all three patches (0006-0008) is\n0012, which introduces the unified iterator. Is that correct?\n\nAlso, for 0008 I'm not sure we could even split it between v17 and v18,\nbecause even if heapam did not use the iterator, what if some other AM\nuses it? Without 0012 it'd be a problem for the AM, no?\n\nWould it make sense to move 0009 before these three patches? That seems\nlike a meaningful change on it's own, right?\n\nFWIW I don't think it's very likely I'll commit the UnifiedTBMIterator\nstuff. I do agree with the idea in general, but I think I'd need more\ntime to think about the details. Sorry about that ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 Apr 2024 02:27:43 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Apr 07, 2024 at 02:27:43AM +0200, Tomas Vondra wrote:\n> On 4/6/24 23:34, Melanie Plageman wrote:\n> > ...\n> >>\n> >> I realized it makes more sense to add a FIXME (I used XXX. I'm not when\n> >> to use what) with a link to the message where Andres describes why he\n> >> thinks it is a bug. If we plan on fixing it, it is good to have a record\n> >> of that. And it makes it easier to put a clear and accurate comment.\n> >> Done in 0009.\n> >>\n> >>> OK, thanks. If think 0001-0008 are ready to go, with some minor tweaks\n> >>> per above (tuple vs. tuples etc.), and the question about the recheck\n> >>> flag. If you can do these tweaks, I'll get that committed today and we\n> >>> can try to get a couple more patches in tomorrow.\n> > \n> > Attached v19 rebases the rest of the commits from v17 over the first\n> > nine patches from v18. All patches 0001-0009 are unchanged from v18. I\n> > have made updates and done cleanup on 0010-0021.\n> > \n> \n> I've pushed 0001-0005, I'll get back to this tomorrow and see how much\n> more we can get in for v17.\n\nThanks! I thought about it a bit more, and I got worried about the\n\n\tAssert(scan->rs_empty_tuples_pending == 0);\n\nin heap_rescan() and heap_endscan().\n\nI was worried if we don't complete the scan it could end up tripping\nincorrectly.\n\nI tried to come up with a query which didn't end up emitting all of the\ntuples on the page (using a LIMIT clause), but I struggled to come up\nwith an example that qualified for the skip fetch optimization and also\nreturned before completing the scan.\n\nI could work a bit harder tomorrow to try and come up with something.\nHowever, I think it might be safer to just change these to:\n\n\tscan->rs_empty_tuples_pending = 0\n\n> What bothers me on 0006-0008 is that the justification in the commit\n> messages is \"future commit will do something\". I think it's fine to have\n> a separate \"prepareatory\" patches (I really like how you structured the\n> patches this way), but it'd be good to have them right before that\n> \"future\" commit - I'd like not to have one in v17 and then the \"future\n> commit\" in v18, because that's annoying complication for backpatching,\n> (and probably also when implementing the AM?) etc.\n\nYes, I was thinking about this also.\n\n> AFAICS for v19, the \"future commit\" for all three patches (0006-0008) is\n> 0012, which introduces the unified iterator. Is that correct?\n\nActually, those patches (v19 0006-0008) were required for v19 0009,\nwhich is why I put them directly before it. 0009 eliminates use of the\nTBMIterateResult for control flow in BitmapHeapNext().\n\nI've rephrased the commit messages to not mention future commits and\ninstead focus on what the changes in the commit are enabling.\n\nv19-0006 actually squashed very easily with v19-0009 and is actually\nprobably better that way. It is still easy to understand IMO.\n\nIn v20, I've attached just the functionality from v19 0006-0009 but in\nthree patches instead of four.\n\n> Also, for 0008 I'm not sure we could even split it between v17 and v18,\n> because even if heapam did not use the iterator, what if some other AM\n> uses it? Without 0012 it'd be a problem for the AM, no?\n\nThe iterators in the TableScanDescData were introduced in v19-0009. It\nis okay for other AMs to use it. In fact, they will want to use it. It\nis still initialized and set up in BitmapHeapNext(). 
They would just\nneed to call tbm_iterate()/tbm_shared_iterate() on it.\n\nAs for how table AMs will cope without the TBMIterateResult passed to\ntable_scan_bitmap_next_tuple() (which is what v19 0008 did): they can\nsave the location of the tuples to be scanned somewhere in their scan\ndescriptor. Heap AM already did this and actually didn't use the\nTBMIterateResult at all.\n\n> Would it make sense to move 0009 before these three patches? That seems\n> like a meaningful change on it's own, right?\n\nSince v19 0009 requires these patches, I don't think we could do that.\nI think up to and including 0009 would be an improvement in clarity and\nfunction.\n\nAs for all the patches after 0009, I've dropped them from this version.\nWe are out of time, and they need more thought.\n\nAfter we decided not to pursue streaming bitmapheapscan for 17, I wanted\nto make sure we removed the prefetch code table AM violation -- since we\nweren't deleting that code. So what started out as me looking for a way\nto clean up one commit ended up becoming a much larger project. Sorry\nabout that last minute code explosion! I do think there is a way to do\nit right and make it nice. Also that violation would be gone if we\nfigure out how to get streaming bitmapheapscan behaving correctly.\n\nSo, there's just more motivation to make streaming bitmapheapscan\nawesome for 18!\n\nGiven all that, I've only included the three patches I think we are\nconsidering (former v19 0006-0008). They are largely the same as you saw\nthem last except for squashing the two commits I mentioned above and\nupdating all of the commit messages.\n\n> FWIW I don't think it's very likely I'll commit the UnifiedTBMIterator\n> stuff. I do agree with the idea in general, but I think I'd need more\n> time to think about the details. Sorry about that ...\n\nYes, that makes total sense. I 100% agree.\n\nI do think the UnifiedTBMIterator (maybe the name is not good, though)\nis a good way to simplify the BitmapHeapScan code and is applicable to\nany future TIDBitmap user with both a parallel and serial\nimplementation. So, there's a nice, small patch I can register for July.\n\nThanks again for taking time to work on this!\n\n- Melanie",
"msg_date": "Sun, 7 Apr 2024 00:17:34 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 4/7/24 06:17, Melanie Plageman wrote:\n> On Sun, Apr 07, 2024 at 02:27:43AM +0200, Tomas Vondra wrote:\n>> On 4/6/24 23:34, Melanie Plageman wrote:\n>>> ...\n>>>>\n>>>> I realized it makes more sense to add a FIXME (I used XXX. I'm not when\n>>>> to use what) with a link to the message where Andres describes why he\n>>>> thinks it is a bug. If we plan on fixing it, it is good to have a record\n>>>> of that. And it makes it easier to put a clear and accurate comment.\n>>>> Done in 0009.\n>>>>\n>>>>> OK, thanks. If think 0001-0008 are ready to go, with some minor tweaks\n>>>>> per above (tuple vs. tuples etc.), and the question about the recheck\n>>>>> flag. If you can do these tweaks, I'll get that committed today and we\n>>>>> can try to get a couple more patches in tomorrow.\n>>>\n>>> Attached v19 rebases the rest of the commits from v17 over the first\n>>> nine patches from v18. All patches 0001-0009 are unchanged from v18. I\n>>> have made updates and done cleanup on 0010-0021.\n>>>\n>>\n>> I've pushed 0001-0005, I'll get back to this tomorrow and see how much\n>> more we can get in for v17.\n> \n> Thanks! I thought about it a bit more, and I got worried about the\n> \n> \tAssert(scan->rs_empty_tuples_pending == 0);\n> \n> in heap_rescan() and heap_endscan().\n> \n> I was worried if we don't complete the scan it could end up tripping\n> incorrectly.\n> \n> I tried to come up with a query which didn't end up emitting all of the\n> tuples on the page (using a LIMIT clause), but I struggled to come up\n> with an example that qualified for the skip fetch optimization and also\n> returned before completing the scan.\n> \n> I could work a bit harder tomorrow to try and come up with something.\n> However, I think it might be safer to just change these to:\n> \n> \tscan->rs_empty_tuples_pending = 0\n> \n\nHmmm, good point. I haven't tried, but wouldn't something like \"SELECT 1\nFROM t WHERE column = X LIMIT 1\" do the trick? Probably in a join, as a\ncorrelated subquery?\n\nIt seemed OK to me and the buildfarm did not turn red, so I'd leave this\nuntil after the code freeze.\n\n>> What bothers me on 0006-0008 is that the justification in the commit\n>> messages is \"future commit will do something\". I think it's fine to have\n>> a separate \"prepareatory\" patches (I really like how you structured the\n>> patches this way), but it'd be good to have them right before that\n>> \"future\" commit - I'd like not to have one in v17 and then the \"future\n>> commit\" in v18, because that's annoying complication for backpatching,\n>> (and probably also when implementing the AM?) etc.\n> \n> Yes, I was thinking about this also.\n> \n\nGood we're on the same page.\n\n>> AFAICS for v19, the \"future commit\" for all three patches (0006-0008) is\n>> 0012, which introduces the unified iterator. Is that correct?\n> \n> Actually, those patches (v19 0006-0008) were required for v19 0009,\n> which is why I put them directly before it. 0009 eliminates use of the\n> TBMIterateResult for control flow in BitmapHeapNext().\n> \n\nAh, OK. Thanks for the clarification.\n\n> I've rephrased the commit messages to not mention future commits and\n> instead focus on what the changes in the commit are enabling.\n> \n> v19-0006 actually squashed very easily with v19-0009 and is actually\n> probably better that way. It is still easy to understand IMO.\n> \n> In v20, I've attached just the functionality from v19 0006-0009 but in\n> three patches instead of four.\n> \n\nGood. 
I'll take a look today.\n\n>> Also, for 0008 I'm not sure we could even split it between v17 and v18,\n>> because even if heapam did not use the iterator, what if some other AM\n>> uses it? Without 0012 it'd be a problem for the AM, no?\n> \n> The iterators in the TableScanDescData were introduced in v19-0009. It\n> is okay for other AMs to use it. In fact, they will want to use it. It\n> is still initialized and set up in BitmapHeapNext(). They would just\n> need to call tbm_iterate()/tbm_shared_iterate() on it.\n> \n> As for how table AMs will cope without the TBMIterateResult passed to\n> table_scan_bitmap_next_tuple() (which is what v19 0008 did): they can\n> save the location of the tuples to be scanned somewhere in their scan\n> descriptor. Heap AM already did this and actually didn't use the\n> TBMIterateResult at all.\n> \n\nThe reason I feel a bit uneasy about putting this in TableScanDescData\nis I see that struct as a \"description of the scan\" (~input parameters\ndefining what the scan should do), while for runtime state we have\nScanState in execnodes.h. But maybe I'm wrong - I see we have similar\nruntime state in the other scan descriptors (like xs_itup/xs_hitup, kill\nprior tuple in index scans etc.).\n\n>> Would it make sense to move 0009 before these three patches? That seems\n>> like a meaningful change on it's own, right?\n> \n> Since v19 0009 requires these patches, I don't think we could do that.\n> I think up to and including 0009 would be an improvement in clarity and\n> function.\n\nRight, thanks for the correction.\n\n> \n> As for all the patches after 0009, I've dropped them from this version.\n> We are out of time, and they need more thought.\n> \n\n+1\n\n> After we decided not to pursue streaming bitmapheapscan for 17, I wanted\n> to make sure we removed the prefetch code table AM violation -- since we\n> weren't deleting that code. So what started out as me looking for a way\n> to clean up one commit ended up becoming a much larger project. Sorry\n> about that last minute code explosion! I do think there is a way to do\n> it right and make it nice. Also that violation would be gone if we\n> figure out how to get streaming bitmapheapscan behaving correctly.\n> \n> So, there's just more motivation to make streaming bitmapheapscan\n> awesome for 18!\n> \n> Given all that, I've only included the three patches I think we are\n> considering (former v19 0006-0008). They are largely the same as you saw\n> them last except for squashing the two commits I mentioned above and\n> updating all of the commit messages.\n> \n\nRight. I do think the patches make sensible changes in principle, but\nthe later parts need more refinement so let's not rush it.\n\n>> FWIW I don't think it's very likely I'll commit the UnifiedTBMIterator\n>> stuff. I do agree with the idea in general, but I think I'd need more\n>> time to think about the details. Sorry about that ...\n> \n> Yes, that makes total sense. I 100% agree.\n> \n> I do think the UnifiedTBMIterator (maybe the name is not good, though)\n> is a good way to simplify the BitmapHeapScan code and is applicable to\n> any future TIDBitmap user with both a parallel and serial\n> implementation. So, there's a nice, small patch I can register for July.\n> \n> Thanks again for taking time to work on this!\n> \n\nYeah, The name seems a bit awkward ... but I don't have a better idea. 
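To make the idea concrete (a rough, purely illustrative sketch -- the\nnames here are made up and this is not code from the patch): the wrapper\nwould simply hold whichever of the two existing tidbitmap iterators is in\nuse and dispatch to tbm_iterate() or tbm_shared_iterate(), e.g.\n\n    /* illustrative only; types/functions are from nodes/tidbitmap.h */\n    typedef struct UnifiedTBMIterator\n    {\n        TBMIterator       *iterator;           /* serial scan */\n        TBMSharedIterator *shared_iterator;    /* parallel scan */\n    } UnifiedTBMIterator;\n\n    static inline TBMIterateResult *\n    unified_tbm_iterate(UnifiedTBMIterator *it)\n    {\n        /* exactly one of the two is expected to be set */\n        return it->shared_iterator != NULL ?\n            tbm_shared_iterate(it->shared_iterator) :\n            tbm_iterate(it->iterator);\n    }\n\n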
I\nlike how it \"isolates\" the complexity and makes the BHS code simpler and\neasier to understand.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 Apr 2024 13:37:58 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 7:38 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 4/7/24 06:17, Melanie Plageman wrote:\n> >> What bothers me on 0006-0008 is that the justification in the commit\n> >> messages is \"future commit will do something\". I think it's fine to have\n> >> a separate \"prepareatory\" patches (I really like how you structured the\n> >> patches this way), but it'd be good to have them right before that\n> >> \"future\" commit - I'd like not to have one in v17 and then the \"future\n> >> commit\" in v18, because that's annoying complication for backpatching,\n> >> (and probably also when implementing the AM?) etc.\n> >\n> > Yes, I was thinking about this also.\n> >\n>\n> Good we're on the same page.\n\nHaving thought about this some more I think we need to stop here for\n17. v20-0001 and v20-0002 both make changes to the table AM API that\nseem bizarre and unjustifiable without the other changes. Like, here\nwe changed all your parameters because someday we are going to do\nsomething! You're welcome!\n\nAlso, the iterators in the TableScanDescData might be something I\ncould live with in the source code for a couple months before we make\nthe rest of the changes in July+. But, adding them does push the\nTableScanDescData->rs_parallel member into the second cacheline, which\nwill be true in versions of Postgres people are using for years. I\ndidn't perf test, but seems bad.\n\nSo, yes, unfortunately, I think we should pick up on the BHS saga in a\nfew months. Or, actually, we should start focusing on that parallel\nBHS + 0 readahead bug and whether or not we are going to fix it.\n\nSorry for the about-face.\n\n- Melanie\n\n\n",
"msg_date": "Sun, 7 Apr 2024 09:11:07 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 4/7/24 15:11, Melanie Plageman wrote:\n> On Sun, Apr 7, 2024 at 7:38 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 4/7/24 06:17, Melanie Plageman wrote:\n>>>> What bothers me on 0006-0008 is that the justification in the commit\n>>>> messages is \"future commit will do something\". I think it's fine to have\n>>>> a separate \"prepareatory\" patches (I really like how you structured the\n>>>> patches this way), but it'd be good to have them right before that\n>>>> \"future\" commit - I'd like not to have one in v17 and then the \"future\n>>>> commit\" in v18, because that's annoying complication for backpatching,\n>>>> (and probably also when implementing the AM?) etc.\n>>>\n>>> Yes, I was thinking about this also.\n>>>\n>>\n>> Good we're on the same page.\n> \n> Having thought about this some more I think we need to stop here for\n> 17. v20-0001 and v20-0002 both make changes to the table AM API that\n> seem bizarre and unjustifiable without the other changes. Like, here\n> we changed all your parameters because someday we are going to do\n> something! You're welcome!\n> \n\nOK, I think that's essentially the \"temporary breakage\" that should not\nspan multiple releases, I mentioned ~yesterday. I appreciate you're\ncareful about this.\n\n> Also, the iterators in the TableScanDescData might be something I\n> could live with in the source code for a couple months before we make\n> the rest of the changes in July+. But, adding them does push the\n> TableScanDescData->rs_parallel member into the second cacheline, which\n> will be true in versions of Postgres people are using for years. I\n> didn't perf test, but seems bad.\n> \n\nI haven't though about how it affects cachelines, TBH. I'd expect it to\nhave minimal impact, because while it makes this struct larger it should\nmake some other struct (used in essentially the same places) smaller. So\nI'd guess this to be a zero sum game, but perhaps I'm wrong.\n\nFor me the main question was \"Is this the right place for this, even if\nit's only temporary?\"\n\n> So, yes, unfortunately, I think we should pick up on the BHS saga in a\n> few months. Or, actually, we should start focusing on that parallel\n> BHS + 0 readahead bug and whether or not we are going to fix it.\n>\n\nYes, the July CF is a good time to focus on this, early in the cycle.\n\n> Sorry for the about-face.\n> \n\nNo problem. I very much prefer this over something that may not be quite\nready yet.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 Apr 2024 16:10:01 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 7:38 AM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 4/7/24 06:17, Melanie Plageman wrote:\n> > On Sun, Apr 07, 2024 at 02:27:43AM +0200, Tomas Vondra wrote:\n> >> On 4/6/24 23:34, Melanie Plageman wrote:\n> >>> ...\n> >>>>\n> >>>> I realized it makes more sense to add a FIXME (I used XXX. I'm not when\n> >>>> to use what) with a link to the message where Andres describes why he\n> >>>> thinks it is a bug. If we plan on fixing it, it is good to have a record\n> >>>> of that. And it makes it easier to put a clear and accurate comment.\n> >>>> Done in 0009.\n> >>>>\n> >>>>> OK, thanks. If think 0001-0008 are ready to go, with some minor tweaks\n> >>>>> per above (tuple vs. tuples etc.), and the question about the recheck\n> >>>>> flag. If you can do these tweaks, I'll get that committed today and we\n> >>>>> can try to get a couple more patches in tomorrow.\n> >>>\n> >>> Attached v19 rebases the rest of the commits from v17 over the first\n> >>> nine patches from v18. All patches 0001-0009 are unchanged from v18. I\n> >>> have made updates and done cleanup on 0010-0021.\n> >>>\n> >>\n> >> I've pushed 0001-0005, I'll get back to this tomorrow and see how much\n> >> more we can get in for v17.\n> >\n> > Thanks! I thought about it a bit more, and I got worried about the\n> >\n> > Assert(scan->rs_empty_tuples_pending == 0);\n> >\n> > in heap_rescan() and heap_endscan().\n> >\n> > I was worried if we don't complete the scan it could end up tripping\n> > incorrectly.\n> >\n> > I tried to come up with a query which didn't end up emitting all of the\n> > tuples on the page (using a LIMIT clause), but I struggled to come up\n> > with an example that qualified for the skip fetch optimization and also\n> > returned before completing the scan.\n> >\n> > I could work a bit harder tomorrow to try and come up with something.\n> > However, I think it might be safer to just change these to:\n> >\n> > scan->rs_empty_tuples_pending = 0\n> >\n>\n> Hmmm, good point. I haven't tried, but wouldn't something like \"SELECT 1\n> FROM t WHERE column = X LIMIT 1\" do the trick? Probably in a join, as a\n> correlated subquery?\n\nUnfortunately (or fortunately, I guess) that exact thing won't work\nbecause even constant values in the target list disqualify it for the\nskip fetch optimization.\n\nBeing a bit too lazy to look at planner code this morning, I removed\nthe target list requirement like this:\n\n- need_tuples = (node->ss.ps.plan->qual != NIL ||\n- node->ss.ps.plan->targetlist != NIL);\n+ need_tuples = (node->ss.ps.plan->qual != NIL);\n\nAnd can easily trip the assert with this:\n\ncreate table foo (a int);\ninsert into foo select i from generate_series(1,10)i;\ncreate index on foo(a);\nvacuum foo;\nselect 1 from (select 2 from foo limit 3);\n\nAnyway, I don't know if we could find a query that does actually hit\nthis. The only bitmap heap scan queries in the regress suite that meet\nthe\n BitmapHeapScanState->ss.ps.plan->targetlist == NIL\ncondition are aggregates (all are count(*)).\n\nI'll dig a bit more later, but do you think this is worth adding an\nopen item for? Even though I don't have a repro yet?\n\n- Melanie\n\n\n",
"msg_date": "Sun, 7 Apr 2024 10:24:38 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 10:10 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 4/7/24 15:11, Melanie Plageman wrote:\n>\n> > Also, the iterators in the TableScanDescData might be something I\n> > could live with in the source code for a couple months before we make\n> > the rest of the changes in July+. But, adding them does push the\n> > TableScanDescData->rs_parallel member into the second cacheline, which\n> > will be true in versions of Postgres people are using for years. I\n> > didn't perf test, but seems bad.\n> >\n>\n> I haven't though about how it affects cachelines, TBH. I'd expect it to\n> have minimal impact, because while it makes this struct larger it should\n> make some other struct (used in essentially the same places) smaller. So\n> I'd guess this to be a zero sum game, but perhaps I'm wrong.\n\nYea, to be honest, I didn't do extensive analysis. I just ran `pahole\n-C TableScanDescData` with the patch and on master and further\nconvinced myself the whole thing was a bad idea.\n\n> For me the main question was \"Is this the right place for this, even if\n> it's only temporary?\"\n\nYep.\n\n> > So, yes, unfortunately, I think we should pick up on the BHS saga in a\n> > few months. Or, actually, we should start focusing on that parallel\n> > BHS + 0 readahead bug and whether or not we are going to fix it.\n> >\n>\n> Yes, the July CF is a good time to focus on this, early in the cycle.\n\nI've pushed the entry to the next CF. Even though some of the patches\nwere committed, I think it makes more sense to leave the CF item open\nuntil at least the prefetch table AM violation is removed. Then we\ncould make a new CF entry for the streaming read user.\n\nWhen I get a chance, I'll post a full set of the outstanding patches\nto this thread -- including the streaming read-related refactoring and\nuser.\n\nOh, and, side note, in my previous email [1] about the\nempty_tuples_pending assert, I should mention that you need\n set enable_seqscan = off\n set enable_indexscan = off\nto force bitmpaheapscan and get the plan for my fake repro.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_Zg8Bj66OZD46Jd-ksh02OGrPR8z0JLPQqEZNEHASi6uw%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 7 Apr 2024 10:38:46 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 4/7/24 16:24, Melanie Plageman wrote:\n> On Sun, Apr 7, 2024 at 7:38 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>>\n>>\n>> On 4/7/24 06:17, Melanie Plageman wrote:\n>>> On Sun, Apr 07, 2024 at 02:27:43AM +0200, Tomas Vondra wrote:\n>>>> On 4/6/24 23:34, Melanie Plageman wrote:\n>>>>> ...\n>>>>>>\n>>>>>> I realized it makes more sense to add a FIXME (I used XXX. I'm not when\n>>>>>> to use what) with a link to the message where Andres describes why he\n>>>>>> thinks it is a bug. If we plan on fixing it, it is good to have a record\n>>>>>> of that. And it makes it easier to put a clear and accurate comment.\n>>>>>> Done in 0009.\n>>>>>>\n>>>>>>> OK, thanks. If think 0001-0008 are ready to go, with some minor tweaks\n>>>>>>> per above (tuple vs. tuples etc.), and the question about the recheck\n>>>>>>> flag. If you can do these tweaks, I'll get that committed today and we\n>>>>>>> can try to get a couple more patches in tomorrow.\n>>>>>\n>>>>> Attached v19 rebases the rest of the commits from v17 over the first\n>>>>> nine patches from v18. All patches 0001-0009 are unchanged from v18. I\n>>>>> have made updates and done cleanup on 0010-0021.\n>>>>>\n>>>>\n>>>> I've pushed 0001-0005, I'll get back to this tomorrow and see how much\n>>>> more we can get in for v17.\n>>>\n>>> Thanks! I thought about it a bit more, and I got worried about the\n>>>\n>>> Assert(scan->rs_empty_tuples_pending == 0);\n>>>\n>>> in heap_rescan() and heap_endscan().\n>>>\n>>> I was worried if we don't complete the scan it could end up tripping\n>>> incorrectly.\n>>>\n>>> I tried to come up with a query which didn't end up emitting all of the\n>>> tuples on the page (using a LIMIT clause), but I struggled to come up\n>>> with an example that qualified for the skip fetch optimization and also\n>>> returned before completing the scan.\n>>>\n>>> I could work a bit harder tomorrow to try and come up with something.\n>>> However, I think it might be safer to just change these to:\n>>>\n>>> scan->rs_empty_tuples_pending = 0\n>>>\n>>\n>> Hmmm, good point. I haven't tried, but wouldn't something like \"SELECT 1\n>> FROM t WHERE column = X LIMIT 1\" do the trick? Probably in a join, as a\n>> correlated subquery?\n> \n> Unfortunately (or fortunately, I guess) that exact thing won't work\n> because even constant values in the target list disqualify it for the\n> skip fetch optimization.\n> \n> Being a bit too lazy to look at planner code this morning, I removed\n> the target list requirement like this:\n> \n> - need_tuples = (node->ss.ps.plan->qual != NIL ||\n> - node->ss.ps.plan->targetlist != NIL);\n> + need_tuples = (node->ss.ps.plan->qual != NIL);\n> \n> And can easily trip the assert with this:\n> \n> create table foo (a int);\n> insert into foo select i from generate_series(1,10)i;\n> create index on foo(a);\n> vacuum foo;\n> select 1 from (select 2 from foo limit 3);\n> \n> Anyway, I don't know if we could find a query that does actually hit\n> this. The only bitmap heap scan queries in the regress suite that meet\n> the\n> BitmapHeapScanState->ss.ps.plan->targetlist == NIL\n> condition are aggregates (all are count(*)).\n> \n> I'll dig a bit more later, but do you think this is worth adding an\n> open item for? 
Even though I don't have a repro yet?\n> \n\nTry this:\n\ncreate table t (a int, b int) with (fillfactor=10);\ninsert into t select mod((i/22),2), (i/22) from generate_series(0,1000)\nS(i);\ncreate index on t(a);\nvacuum analyze t;\n\nset enable_indexonlyscan = off;\nset enable_seqscan = off;\nexplain (analyze, verbose) select 1 from (values (1)) s(x) where exists\n(select * from t where a = x);\n\nKABOOM!\n\n#2 0x000078a16ac5fafe in __GI_raise (sig=sig@entry=6) at\n../sysdeps/posix/raise.c:26\n#3 0x000078a16ac4887f in __GI_abort () at abort.c:79\n#4 0x0000000000bb2c5a in ExceptionalCondition (conditionName=0xc42ba8\n\"scan->rs_empty_tuples_pending == 0\", fileName=0xc429c8 \"heapam.c\",\nlineNumber=1090) at assert.c:66\n#5 0x00000000004f68bb in heap_endscan (sscan=0x19af3a0) at heapam.c:1090\n#6 0x000000000077a94c in table_endscan (scan=0x19af3a0) at\n../../../src/include/access/tableam.h:1001\n\nSo yeah, this assert is not quite correct. It's not breaking anything at\nthe moment, so we can fix it now or add it as an open item.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 Apr 2024 16:41:59 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 10:42 AM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 4/7/24 16:24, Melanie Plageman wrote:\n> >>> Thanks! I thought about it a bit more, and I got worried about the\n> >>>\n> >>> Assert(scan->rs_empty_tuples_pending == 0);\n> >>>\n> >>> in heap_rescan() and heap_endscan().\n> >>>\n> >>> I was worried if we don't complete the scan it could end up tripping\n> >>> incorrectly.\n> >>>\n> >>> I tried to come up with a query which didn't end up emitting all of the\n> >>> tuples on the page (using a LIMIT clause), but I struggled to come up\n> >>> with an example that qualified for the skip fetch optimization and also\n> >>> returned before completing the scan.\n> >>>\n> >>> I could work a bit harder tomorrow to try and come up with something.\n> >>> However, I think it might be safer to just change these to:\n> >>>\n> >>> scan->rs_empty_tuples_pending = 0\n> >>>\n> >>\n> >> Hmmm, good point. I haven't tried, but wouldn't something like \"SELECT 1\n> >> FROM t WHERE column = X LIMIT 1\" do the trick? Probably in a join, as a\n> >> correlated subquery?\n> >\n> > Unfortunately (or fortunately, I guess) that exact thing won't work\n> > because even constant values in the target list disqualify it for the\n> > skip fetch optimization.\n> >\n> > Being a bit too lazy to look at planner code this morning, I removed\n> > the target list requirement like this:\n> >\n> > - need_tuples = (node->ss.ps.plan->qual != NIL ||\n> > - node->ss.ps.plan->targetlist != NIL);\n> > + need_tuples = (node->ss.ps.plan->qual != NIL);\n> >\n> > And can easily trip the assert with this:\n> >\n> > create table foo (a int);\n> > insert into foo select i from generate_series(1,10)i;\n> > create index on foo(a);\n> > vacuum foo;\n> > select 1 from (select 2 from foo limit 3);\n> >\n> > Anyway, I don't know if we could find a query that does actually hit\n> > this. The only bitmap heap scan queries in the regress suite that meet\n> > the\n> > BitmapHeapScanState->ss.ps.plan->targetlist == NIL\n> > condition are aggregates (all are count(*)).\n> >\n> > I'll dig a bit more later, but do you think this is worth adding an\n> > open item for? Even though I don't have a repro yet?\n> >\n>\n> Try this:\n>\n> create table t (a int, b int) with (fillfactor=10);\n> insert into t select mod((i/22),2), (i/22) from generate_series(0,1000)\n> S(i);\n> create index on t(a);\n> vacuum analyze t;\n>\n> set enable_indexonlyscan = off;\n> set enable_seqscan = off;\n> explain (analyze, verbose) select 1 from (values (1)) s(x) where exists\n> (select * from t where a = x);\n>\n> KABOOM!\n\nOoo fancy query! Good job.\n\n> #2 0x000078a16ac5fafe in __GI_raise (sig=sig@entry=6) at\n> ../sysdeps/posix/raise.c:26\n> #3 0x000078a16ac4887f in __GI_abort () at abort.c:79\n> #4 0x0000000000bb2c5a in ExceptionalCondition (conditionName=0xc42ba8\n> \"scan->rs_empty_tuples_pending == 0\", fileName=0xc429c8 \"heapam.c\",\n> lineNumber=1090) at assert.c:66\n> #5 0x00000000004f68bb in heap_endscan (sscan=0x19af3a0) at heapam.c:1090\n> #6 0x000000000077a94c in table_endscan (scan=0x19af3a0) at\n> ../../../src/include/access/tableam.h:1001\n>\n> So yeah, this assert is not quite correct. It's not breaking anything at\n> the moment, so we can fix it now or add it as an open item.\n\nI've added an open item [1], because what's one open item when you can\nhave two? (me)\n\n- Melanie\n\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items#Open_Issues\n\n\n",
"msg_date": "Sun, 7 Apr 2024 10:54:56 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 10:42 PM Tomas Vondra <[email protected]>\nwrote:\n\n> create table t (a int, b int) with (fillfactor=10);\n> insert into t select mod((i/22),2), (i/22) from generate_series(0,1000)\n> S(i);\n> create index on t(a);\n> vacuum analyze t;\n>\n> set enable_indexonlyscan = off;\n> set enable_seqscan = off;\n> explain (analyze, verbose) select 1 from (values (1)) s(x) where exists\n> (select * from t where a = x);\n>\n> KABOOM!\n\n\nFWIW, it seems to me that this assert could be triggered in cases where,\nduring a join, not all inner tuples need to be scanned before skipping to\nnext outer tuple. This can happen for 'single_match' or anti-join.\n\nThe query provided by Tomas is an example of 'single_match' case. Here\nis a query for anti-join that can also trigger this assert.\n\nexplain (analyze, verbose)\nselect t1.a from t t1 left join t t2 on t2.a = 1 where t2.a is null;\nserver closed the connection unexpectedly\n\nThanks\nRichard\n\nOn Sun, Apr 7, 2024 at 10:42 PM Tomas Vondra <[email protected]> wrote:\ncreate table t (a int, b int) with (fillfactor=10);\ninsert into t select mod((i/22),2), (i/22) from generate_series(0,1000)\nS(i);\ncreate index on t(a);\nvacuum analyze t;\n\nset enable_indexonlyscan = off;\nset enable_seqscan = off;\nexplain (analyze, verbose) select 1 from (values (1)) s(x) where exists\n(select * from t where a = x);\n\nKABOOM!FWIW, it seems to me that this assert could be triggered in cases where,during a join, not all inner tuples need to be scanned before skipping tonext outer tuple. This can happen for 'single_match' or anti-join.The query provided by Tomas is an example of 'single_match' case. Hereis a query for anti-join that can also trigger this assert.explain (analyze, verbose)select t1.a from t t1 left join t t2 on t2.a = 1 where t2.a is null;server closed the connection unexpectedlyThanksRichard",
"msg_date": "Fri, 12 Apr 2024 11:18:06 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Apr 07, 2024 at 10:54:56AM -0400, Melanie Plageman wrote:\n> I've added an open item [1], because what's one open item when you can\n> have two? (me)\n\nAnd this is still an open item as of today. What's the plan to move\nforward here?\n--\nMichael",
"msg_date": "Thu, 18 Apr 2024 16:10:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 4/18/24 09:10, Michael Paquier wrote:\n> On Sun, Apr 07, 2024 at 10:54:56AM -0400, Melanie Plageman wrote:\n>> I've added an open item [1], because what's one open item when you can\n>> have two? (me)\n> \n> And this is still an open item as of today. What's the plan to move\n> forward here?\n\nAFAIK the plan is to replace the asserts with actually resetting the\nrs_empty_tuples_pending field to 0, as suggested by Melanie a week ago.\nI assume she was busy with the post-freeze AM reworks last week, so this\nwas on a back burner.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Apr 2024 11:39:22 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 5:39 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 4/18/24 09:10, Michael Paquier wrote:\n> > On Sun, Apr 07, 2024 at 10:54:56AM -0400, Melanie Plageman wrote:\n> >> I've added an open item [1], because what's one open item when you can\n> >> have two? (me)\n> >\n> > And this is still an open item as of today. What's the plan to move\n> > forward here?\n>\n> AFAIK the plan is to replace the asserts with actually resetting the\n> rs_empty_tuples_pending field to 0, as suggested by Melanie a week ago.\n> I assume she was busy with the post-freeze AM reworks last week, so this\n> was on a back burner.\n\nyep, sorry. Also I took a few days off. I'm just catching up today. I\nwant to pop in one of Richard or Tomas' examples as a test, since it\nseems like it would add some coverage. I will have a patch soon.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 22 Apr 2024 13:01:51 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Mon, Apr 22, 2024 at 1:01 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Thu, Apr 18, 2024 at 5:39 AM Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > On 4/18/24 09:10, Michael Paquier wrote:\n> > > On Sun, Apr 07, 2024 at 10:54:56AM -0400, Melanie Plageman wrote:\n> > >> I've added an open item [1], because what's one open item when you can\n> > >> have two? (me)\n> > >\n> > > And this is still an open item as of today. What's the plan to move\n> > > forward here?\n> >\n> > AFAIK the plan is to replace the asserts with actually resetting the\n> > rs_empty_tuples_pending field to 0, as suggested by Melanie a week ago.\n> > I assume she was busy with the post-freeze AM reworks last week, so this\n> > was on a back burner.\n>\n> yep, sorry. Also I took a few days off. I'm just catching up today. I\n> want to pop in one of Richard or Tomas' examples as a test, since it\n> seems like it would add some coverage. I will have a patch soon.\n\nThe patch with a fix is attached. I put the test in\nsrc/test/regress/sql/join.sql. It isn't the perfect location because\nit is testing something exercisable with a join but not directly\nrelated to the fact that it is a join. I also considered\nsrc/test/regress/sql/select.sql, but it also isn't directly related to\nthe query being a SELECT query. If there is a better place for a test\nof a bitmap heap scan edge case, let me know.\n\nOne other note: there is some concurrency effect in the parallel\nschedule group containing \"join\" where you won't trip the assert if\nall the tests in that group in the parallel schedule are run. But, if\nyou would like to verify that the test exercises the correct code,\njust reduce the group containing \"join\".\n\n- Melanie",
"msg_date": "Tue, 23 Apr 2024 12:05:05 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 4/23/24 18:05, Melanie Plageman wrote:\n> On Mon, Apr 22, 2024 at 1:01 PM Melanie Plageman\n> <[email protected]> wrote:\n>>\n>> On Thu, Apr 18, 2024 at 5:39 AM Tomas Vondra\n>> <[email protected]> wrote:\n>>>\n>>> On 4/18/24 09:10, Michael Paquier wrote:\n>>>> On Sun, Apr 07, 2024 at 10:54:56AM -0400, Melanie Plageman wrote:\n>>>>> I've added an open item [1], because what's one open item when you can\n>>>>> have two? (me)\n>>>>\n>>>> And this is still an open item as of today. What's the plan to move\n>>>> forward here?\n>>>\n>>> AFAIK the plan is to replace the asserts with actually resetting the\n>>> rs_empty_tuples_pending field to 0, as suggested by Melanie a week ago.\n>>> I assume she was busy with the post-freeze AM reworks last week, so this\n>>> was on a back burner.\n>>\n>> yep, sorry. Also I took a few days off. I'm just catching up today. I\n>> want to pop in one of Richard or Tomas' examples as a test, since it\n>> seems like it would add some coverage. I will have a patch soon.\n> \n> The patch with a fix is attached. I put the test in\n> src/test/regress/sql/join.sql. It isn't the perfect location because\n> it is testing something exercisable with a join but not directly\n> related to the fact that it is a join. I also considered\n> src/test/regress/sql/select.sql, but it also isn't directly related to\n> the query being a SELECT query. If there is a better place for a test\n> of a bitmap heap scan edge case, let me know.\n> \n\nI don't see a problem with adding this to join.sql - why wouldn't this\ncount as something related to a join? Sure, it's not like this code\nmatters only for joins, but if you look at join.sql that applies to a\nnumber of other tests (e.g. there are a couple btree tests).\n\nThat being said, it'd be good to explain in the comment why we're\ntesting this particular plan, not just what the plan looks like.\n\n> One other note: there is some concurrency effect in the parallel\n> schedule group containing \"join\" where you won't trip the assert if\n> all the tests in that group in the parallel schedule are run. But, if\n> you would like to verify that the test exercises the correct code,\n> just reduce the group containing \"join\".\n> \n\nThat is ... interesting. Doesn't that mean that most test runs won't\nactually detect the problem? That would make the test a bit useless.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Apr 2024 00:43:38 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Tue, Apr 23, 2024 at 6:43 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 4/23/24 18:05, Melanie Plageman wrote:\n> > The patch with a fix is attached. I put the test in\n> > src/test/regress/sql/join.sql. It isn't the perfect location because\n> > it is testing something exercisable with a join but not directly\n> > related to the fact that it is a join. I also considered\n> > src/test/regress/sql/select.sql, but it also isn't directly related to\n> > the query being a SELECT query. If there is a better place for a test\n> > of a bitmap heap scan edge case, let me know.\n>\n> I don't see a problem with adding this to join.sql - why wouldn't this\n> count as something related to a join? Sure, it's not like this code\n> matters only for joins, but if you look at join.sql that applies to a\n> number of other tests (e.g. there are a couple btree tests).\n\nI suppose it's true that other tests in this file use joins to test\nother code. I guess if we limited join.sql to containing tests of join\nimplementation, it would be rather small. I just imagined it would be\nnice if tests were grouped by what they were testing -- not how they\nwere testing it.\n\n> That being said, it'd be good to explain in the comment why we're\n> testing this particular plan, not just what the plan looks like.\n\nYou mean I should explain in the test comment why I included the\nEXPLAIN plan output? (because it needs to contain a bitmapheapscan to\nactually be testing anything)\n\nI do have a detailed explanation in my test comment of why this\nparticular query exercises the code we want to test.\n\n> > One other note: there is some concurrency effect in the parallel\n> > schedule group containing \"join\" where you won't trip the assert if\n> > all the tests in that group in the parallel schedule are run. But, if\n> > you would like to verify that the test exercises the correct code,\n> > just reduce the group containing \"join\".\n> >\n>\n> That is ... interesting. Doesn't that mean that most test runs won't\n> actually detect the problem? That would make the test a bit useless.\n\nYes, I should really have thought about it more. After further\ninvestigation, I found that the reason it doesn't trip the assert when\nthe join test is run concurrently with other tests is that the SELECT\nquery doesn't use the skip fetch optimization because the VACUUM\ndoesn't set the pages all visible in the VM. In this case, it's\nbecause the tuples' xmins are not before VacuumCutoffs->OldestXmin\n(which is derived from GetOldestNonRemovableTransactionId()).\n\nAfter thinking about it more, I suppose we can't add a test that\nrelies on the relation being all visible in the VM in a group in the\nparallel schedule. I'm not sure this edge case is important enough to\nmerit its own group or an isolation test. What do you think?\n\n- Melanie\n\n\n",
"msg_date": "Wed, 24 Apr 2024 16:46:19 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Wed, Apr 24, 2024 at 4:46 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Tue, Apr 23, 2024 at 6:43 PM Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > On 4/23/24 18:05, Melanie Plageman wrote:\n> > > One other note: there is some concurrency effect in the parallel\n> > > schedule group containing \"join\" where you won't trip the assert if\n> > > all the tests in that group in the parallel schedule are run. But, if\n> > > you would like to verify that the test exercises the correct code,\n> > > just reduce the group containing \"join\".\n> > >\n> >\n> > That is ... interesting. Doesn't that mean that most test runs won't\n> > actually detect the problem? That would make the test a bit useless.\n>\n> Yes, I should really have thought about it more. After further\n> investigation, I found that the reason it doesn't trip the assert when\n> the join test is run concurrently with other tests is that the SELECT\n> query doesn't use the skip fetch optimization because the VACUUM\n> doesn't set the pages all visible in the VM. In this case, it's\n> because the tuples' xmins are not before VacuumCutoffs->OldestXmin\n> (which is derived from GetOldestNonRemovableTransactionId()).\n>\n> After thinking about it more, I suppose we can't add a test that\n> relies on the relation being all visible in the VM in a group in the\n> parallel schedule. I'm not sure this edge case is important enough to\n> merit its own group or an isolation test. What do you think?\n\nAndres rightly pointed out to me off-list that if I just used a temp\ntable, the table would only be visible to the testing backend anyway.\nI've done that in the attached v2. Now the test is deterministic.\n\n- Melanie",
"msg_date": "Thu, 25 Apr 2024 19:03:57 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "Melanie Plageman <[email protected]> writes:\n> On Wed, Apr 24, 2024 at 4:46 PM Melanie Plageman\n> <[email protected]> wrote:\n>> After thinking about it more, I suppose we can't add a test that\n>> relies on the relation being all visible in the VM in a group in the\n>> parallel schedule. I'm not sure this edge case is important enough to\n>> merit its own group or an isolation test. What do you think?\n\n> Andres rightly pointed out to me off-list that if I just used a temp\n> table, the table would only be visible to the testing backend anyway.\n> I've done that in the attached v2. Now the test is deterministic.\n\nHmm, is that actually true? There's no more reason to think a tuple\nin a temp table is old enough to be visible to all other sessions\nthan one in any other table. It could be all right if we had a\nspecial-case rule for setting all-visible in temp tables. Which\nindeed I thought we had, but I can't find any evidence of that in\nvacuumlazy.c, nor did a trawl of the commit log turn up anything\npromising. Am I just looking in the wrong place?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2024 19:28:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "I wrote:\n> Hmm, is that actually true? There's no more reason to think a tuple\n> in a temp table is old enough to be visible to all other sessions\n> than one in any other table. It could be all right if we had a\n> special-case rule for setting all-visible in temp tables. Which\n> indeed I thought we had, but I can't find any evidence of that in\n> vacuumlazy.c, nor did a trawl of the commit log turn up anything\n> promising. Am I just looking in the wrong place?\n\nAh, never mind that --- I must be looking in the wrong place.\nDirect experimentation proves that VACUUM will set all-visible bits\nfor temp tables even in the presence of concurrent transactions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2024 19:57:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, Apr 25, 2024 at 7:57 PM Tom Lane <[email protected]> wrote:\n>\n> I wrote:\n> > Hmm, is that actually true? There's no more reason to think a tuple\n> > in a temp table is old enough to be visible to all other sessions\n> > than one in any other table. It could be all right if we had a\n> > special-case rule for setting all-visible in temp tables. Which\n> > indeed I thought we had, but I can't find any evidence of that in\n> > vacuumlazy.c, nor did a trawl of the commit log turn up anything\n> > promising. Am I just looking in the wrong place?\n>\n> Ah, never mind that --- I must be looking in the wrong place.\n> Direct experimentation proves that VACUUM will set all-visible bits\n> for temp tables even in the presence of concurrent transactions.\n\nIf this seems correct to you, are you okay with the rest of the fix\nand test? We could close this open item once the patch is acceptable.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 26 Apr 2024 09:04:22 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "> On 26 Apr 2024, at 15:04, Melanie Plageman <[email protected]> wrote:\n\n> If this seems correct to you, are you okay with the rest of the fix\n> and test? We could close this open item once the patch is acceptable.\n\nFrom reading the discussion and the patch this seems like the right fix to me.\nDoes the test added here aptly cover 04e72ed617be in terms its functionality?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 30 Apr 2024 14:07:43 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 4/30/24 14:07, Daniel Gustafsson wrote:\n>> On 26 Apr 2024, at 15:04, Melanie Plageman <[email protected]> wrote:\n> \n>> If this seems correct to you, are you okay with the rest of the fix\n>> and test? We could close this open item once the patch is acceptable.\n> \n> From reading the discussion and the patch this seems like the right fix to me.\n\nI agree.\n\n> Does the test added here aptly cover 04e72ed617be in terms its functionality?\n> \n\nAFAIK the test fails without the fix and works with it, so I believe it\ndoes cover the relevant functionality.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 May 2024 23:31:00 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "\n\nOn 4/24/24 22:46, Melanie Plageman wrote:\n> On Tue, Apr 23, 2024 at 6:43 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 4/23/24 18:05, Melanie Plageman wrote:\n>>> The patch with a fix is attached. I put the test in\n>>> src/test/regress/sql/join.sql. It isn't the perfect location because\n>>> it is testing something exercisable with a join but not directly\n>>> related to the fact that it is a join. I also considered\n>>> src/test/regress/sql/select.sql, but it also isn't directly related to\n>>> the query being a SELECT query. If there is a better place for a test\n>>> of a bitmap heap scan edge case, let me know.\n>>\n>> I don't see a problem with adding this to join.sql - why wouldn't this\n>> count as something related to a join? Sure, it's not like this code\n>> matters only for joins, but if you look at join.sql that applies to a\n>> number of other tests (e.g. there are a couple btree tests).\n> \n> I suppose it's true that other tests in this file use joins to test\n> other code. I guess if we limited join.sql to containing tests of join\n> implementation, it would be rather small. I just imagined it would be\n> nice if tests were grouped by what they were testing -- not how they\n> were testing it.\n> \n>> That being said, it'd be good to explain in the comment why we're\n>> testing this particular plan, not just what the plan looks like.\n> \n> You mean I should explain in the test comment why I included the\n> EXPLAIN plan output? (because it needs to contain a bitmapheapscan to\n> actually be testing anything)\n> \n\nNo, I meant that the comment before the test describes a couple\nrequirements the plan needs to meet (no need to process all inner\ntuples, bitmapscan eligible for skip_fetch on outer side, ...), but it\ndoes not explain why we're testing that plan.\n\nI could get to that by doing git-blame to see what commit added this\ncode, and then read the linked discussion. Perhaps that's enough, but\nmaybe the comment could say something like \"verify we properly discard\ntuples on rescans\" or something like that?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 May 2024 23:37:06 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Thu, May 2, 2024 at 5:37 PM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 4/24/24 22:46, Melanie Plageman wrote:\n> > On Tue, Apr 23, 2024 at 6:43 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >> On 4/23/24 18:05, Melanie Plageman wrote:\n> >>> The patch with a fix is attached. I put the test in\n> >>> src/test/regress/sql/join.sql. It isn't the perfect location because\n> >>> it is testing something exercisable with a join but not directly\n> >>> related to the fact that it is a join. I also considered\n> >>> src/test/regress/sql/select.sql, but it also isn't directly related to\n> >>> the query being a SELECT query. If there is a better place for a test\n> >>> of a bitmap heap scan edge case, let me know.\n> >>\n> >> I don't see a problem with adding this to join.sql - why wouldn't this\n> >> count as something related to a join? Sure, it's not like this code\n> >> matters only for joins, but if you look at join.sql that applies to a\n> >> number of other tests (e.g. there are a couple btree tests).\n> >\n> > I suppose it's true that other tests in this file use joins to test\n> > other code. I guess if we limited join.sql to containing tests of join\n> > implementation, it would be rather small. I just imagined it would be\n> > nice if tests were grouped by what they were testing -- not how they\n> > were testing it.\n> >\n> >> That being said, it'd be good to explain in the comment why we're\n> >> testing this particular plan, not just what the plan looks like.\n> >\n> > You mean I should explain in the test comment why I included the\n> > EXPLAIN plan output? (because it needs to contain a bitmapheapscan to\n> > actually be testing anything)\n> >\n>\n> No, I meant that the comment before the test describes a couple\n> requirements the plan needs to meet (no need to process all inner\n> tuples, bitmapscan eligible for skip_fetch on outer side, ...), but it\n> does not explain why we're testing that plan.\n>\n> I could get to that by doing git-blame to see what commit added this\n> code, and then read the linked discussion. Perhaps that's enough, but\n> maybe the comment could say something like \"verify we properly discard\n> tuples on rescans\" or something like that?\n\nAttached is v3. I didn't use your exact language because the test\nwouldn't actually verify that we properly discard the tuples. Whether\nor not the empty tuples are all emitted, it just resets the counter to\n0. I decided to go with \"exercise\" instead.\n\n- Melanie",
"msg_date": "Fri, 10 May 2024 15:48:24 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 5/10/24 21:48, Melanie Plageman wrote:\n> On Thu, May 2, 2024 at 5:37 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>>\n>>\n>> On 4/24/24 22:46, Melanie Plageman wrote:\n>>> On Tue, Apr 23, 2024 at 6:43 PM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>>\n>>>> On 4/23/24 18:05, Melanie Plageman wrote:\n>>>>> The patch with a fix is attached. I put the test in\n>>>>> src/test/regress/sql/join.sql. It isn't the perfect location because\n>>>>> it is testing something exercisable with a join but not directly\n>>>>> related to the fact that it is a join. I also considered\n>>>>> src/test/regress/sql/select.sql, but it also isn't directly related to\n>>>>> the query being a SELECT query. If there is a better place for a test\n>>>>> of a bitmap heap scan edge case, let me know.\n>>>>\n>>>> I don't see a problem with adding this to join.sql - why wouldn't this\n>>>> count as something related to a join? Sure, it's not like this code\n>>>> matters only for joins, but if you look at join.sql that applies to a\n>>>> number of other tests (e.g. there are a couple btree tests).\n>>>\n>>> I suppose it's true that other tests in this file use joins to test\n>>> other code. I guess if we limited join.sql to containing tests of join\n>>> implementation, it would be rather small. I just imagined it would be\n>>> nice if tests were grouped by what they were testing -- not how they\n>>> were testing it.\n>>>\n>>>> That being said, it'd be good to explain in the comment why we're\n>>>> testing this particular plan, not just what the plan looks like.\n>>>\n>>> You mean I should explain in the test comment why I included the\n>>> EXPLAIN plan output? (because it needs to contain a bitmapheapscan to\n>>> actually be testing anything)\n>>>\n>>\n>> No, I meant that the comment before the test describes a couple\n>> requirements the plan needs to meet (no need to process all inner\n>> tuples, bitmapscan eligible for skip_fetch on outer side, ...), but it\n>> does not explain why we're testing that plan.\n>>\n>> I could get to that by doing git-blame to see what commit added this\n>> code, and then read the linked discussion. Perhaps that's enough, but\n>> maybe the comment could say something like \"verify we properly discard\n>> tuples on rescans\" or something like that?\n> \n> Attached is v3. I didn't use your exact language because the test\n> wouldn't actually verify that we properly discard the tuples. Whether\n> or not the empty tuples are all emitted, it just resets the counter to\n> 0. I decided to go with \"exercise\" instead.\n> \n\nI did go over the v3 patch, did a bunch of tests, and I think it's fine\nand ready to go. The one thing that might need some minor tweaks is the\ncommit message.\n\n1) Isn't the subject \"Remove incorrect assert\" a bit misleading, as the\npatch does not simply remove an assert, but replaces it with a reset of\nthe field the assert used to check? (The commit message does not mention\nthis either, at least not explicitly.)\n\n2) The \"heap AM-specific bitmap heap scan code\" sounds a bit strange to\nme, isn't the first \"heap\" unnecessary?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 11 May 2024 21:18:21 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sat, May 11, 2024 at 3:18 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 5/10/24 21:48, Melanie Plageman wrote:\n> > Attached is v3. I didn't use your exact language because the test\n> > wouldn't actually verify that we properly discard the tuples. Whether\n> > or not the empty tuples are all emitted, it just resets the counter to\n> > 0. I decided to go with \"exercise\" instead.\n> >\n>\n> I did go over the v3 patch, did a bunch of tests, and I think it's fine\n> and ready to go. The one thing that might need some minor tweaks is the\n> commit message.\n>\n> 1) Isn't the subject \"Remove incorrect assert\" a bit misleading, as the\n> patch does not simply remove an assert, but replaces it with a reset of\n> the field the assert used to check? (The commit message does not mention\n> this either, at least not explicitly.)\n\nI've updated the commit message.\n\n> 2) The \"heap AM-specific bitmap heap scan code\" sounds a bit strange to\n> me, isn't the first \"heap\" unnecessary?\n\nbitmap heap scan has been used to refer to bitmap table scans, as the\nname wasn't changed from heap when the table AM API was added (e.g.\nBitmapHeapNext() is used by all table AMs doing bitmap table scans).\n04e72ed617be specifically pushed the skip fetch optimization into heap\nimplementations of bitmap table scan callbacks, so it was important to\nmake this distinction. I've changed the commit message to say heap AM\nimplementations of bitmap table scan callbacks.\n\nWhile looking at the patch again, I wondered if I should set\nenable_material=false in the test. It doesn't matter from the\nperspective of exercising the correct code; however, I wasn't sure if\ndisabling materialization would make the test more resilient against\nfuture planner changes which could cause it to incorrectly fail.\n\n- Melanie",
"msg_date": "Mon, 13 May 2024 10:05:03 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Mon, May 13, 2024 at 10:05:03AM -0400, Melanie Plageman wrote:\n> Remove the assert and reset the field on which it previously asserted to\n> avoid incorrectly emitting NULL-filled tuples from a previous scan on\n> rescan.\n\n> -\tAssert(scan->rs_empty_tuples_pending == 0);\n> +\tscan->rs_empty_tuples_pending = 0;\n\nPerhaps this should document the reason why the reset is done in these\ntwo paths rather than let the reader guess it? And this is about\navoiding emitting some tuples from a previous scan.\n\n> +SET enable_indexonlyscan = off;\n> +set enable_indexscan = off;\n> +SET enable_seqscan = off;\n\nNit: adjusting the casing of the second SET here.\n--\nMichael",
"msg_date": "Tue, 14 May 2024 15:18:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Tue, May 14, 2024 at 2:18 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, May 13, 2024 at 10:05:03AM -0400, Melanie Plageman wrote:\n> > Remove the assert and reset the field on which it previously asserted to\n> > avoid incorrectly emitting NULL-filled tuples from a previous scan on\n> > rescan.\n>\n> > - Assert(scan->rs_empty_tuples_pending == 0);\n> > + scan->rs_empty_tuples_pending = 0;\n>\n> Perhaps this should document the reason why the reset is done in these\n> two paths rather than let the reader guess it? And this is about\n> avoiding emitting some tuples from a previous scan.\n\nI've added a comment to heap_rescan() in the attached v5. Doing so\nmade me realize that we shouldn't bother resetting it in\nheap_endscan(). Doing so is perhaps more confusing, because it implies\nthat field may somehow be used later. I've removed the reset of\nrs_empty_tuples_pending from heap_endscan().\n\n> > +SET enable_indexonlyscan = off;\n> > +set enable_indexscan = off;\n> > +SET enable_seqscan = off;\n>\n> Nit: adjusting the casing of the second SET here.\n\nI've fixed this. I've also set enable_material off as I mentioned I\nmight in my earlier mail.\n\n- Melanie",
"msg_date": "Tue, 14 May 2024 13:42:09 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
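To make the bug shape discussed in the last few messages concrete, below is a minimal, self-contained C sketch of the rescan hazard: a counter of skip-fetch "empty" tuples left over from a scan that stopped early must be cleared on rescan rather than asserted to be zero. Everything here (DemoScan, demo_next_tuple, demo_rescan, the counter name) is an invented stand-in for illustration only; it is not the actual heapam.c code or the real HeapScanDescData field.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the scan state under discussion (not the real struct). */
typedef struct DemoScan
{
    unsigned int empty_tuples_pending;  /* NULL-filled tuples owed for the current page */
} DemoScan;

/* Skip-fetch path: report a tuple as long as any empty tuples are pending. */
static bool
demo_next_tuple(DemoScan *scan)
{
    if (scan->empty_tuples_pending > 0)
    {
        scan->empty_tuples_pending--;
        return true;            /* stands in for handing back a NULL-filled tuple */
    }
    return false;               /* nothing to return on this call */
}

/* The shape of the fix: clear the counter on rescan instead of asserting it is 0. */
static void
demo_rescan(DemoScan *scan)
{
    scan->empty_tuples_pending = 0;
}

int
main(void)
{
    /* Pretend the previous scan stopped early with three empty tuples still pending. */
    DemoScan scan = {.empty_tuples_pending = 3};

    demo_rescan(&scan);

    /* With the reset, no stale NULL-filled tuple leaks into the new scan. */
    printf("stale tuple emitted after rescan? %s\n",
           demo_next_tuple(&scan) ? "yes" : "no");
    return 0;
}

Run as-is, the sketch prints "no". Asserting the counter was zero at this point would abort a debug build whenever the previous scan stopped early, and skipping both the assert and the reset would hand three stale NULL-filled tuples to the new scan, which is the behavior the patch avoids.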
{
"msg_contents": "On 5/14/24 19:42, Melanie Plageman wrote:\n> On Tue, May 14, 2024 at 2:18 AM Michael Paquier <[email protected]> wrote:\n>>\n>> On Mon, May 13, 2024 at 10:05:03AM -0400, Melanie Plageman wrote:\n>>> Remove the assert and reset the field on which it previously asserted to\n>>> avoid incorrectly emitting NULL-filled tuples from a previous scan on\n>>> rescan.\n>>\n>>> - Assert(scan->rs_empty_tuples_pending == 0);\n>>> + scan->rs_empty_tuples_pending = 0;\n>>\n>> Perhaps this should document the reason why the reset is done in these\n>> two paths rather than let the reader guess it? And this is about\n>> avoiding emitting some tuples from a previous scan.\n> \n> I've added a comment to heap_rescan() in the attached v5. Doing so\n> made me realize that we shouldn't bother resetting it in\n> heap_endscan(). Doing so is perhaps more confusing, because it implies\n> that field may somehow be used later. I've removed the reset of\n> rs_empty_tuples_pending from heap_endscan().\n> \n\n+1\n\n>>> +SET enable_indexonlyscan = off;\n>>> +set enable_indexscan = off;\n>>> +SET enable_seqscan = off;\n>>\n>> Nit: adjusting the casing of the second SET here.\n> \n> I've fixed this. I've also set enable_material off as I mentioned I\n> might in my earlier mail.\n> \n\nI'm not sure this (setting more and more GUCs to prevent hypothetical\nplan changes) is a good practice. Because how do you know the plan does\nnot change for some other unexpected reason, possibly in the future?\n\nIMHO if the test requires a specific plan, it's better to do an actual\n\"explain (rows off, costs off)\" to check that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 14 May 2024 20:33:32 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Tue, May 14, 2024 at 2:33 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 5/14/24 19:42, Melanie Plageman wrote:\n> > I've fixed this. I've also set enable_material off as I mentioned I\n> > might in my earlier mail.\n> >\n>\n> I'm not sure this (setting more and more GUCs to prevent hypothetical\n> plan changes) is a good practice. Because how do you know the plan does\n> not change for some other unexpected reason, possibly in the future?\n\nSure. So, you think it is better not to have enable_material = false?\n\n> IMHO if the test requires a specific plan, it's better to do an actual\n> \"explain (rows off, costs off)\" to check that.\n\nWhen you say \"rows off\", do you mean do something to ensure that it\ndoesn't return tuples? Because I don't see a ROWS option for EXPLAIN\nin the docs.\n\n- Melanie\n\n\n",
"msg_date": "Tue, 14 May 2024 14:40:14 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 5/14/24 20:40, Melanie Plageman wrote:\n> On Tue, May 14, 2024 at 2:33 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 5/14/24 19:42, Melanie Plageman wrote:\n>>> I've fixed this. I've also set enable_material off as I mentioned I\n>>> might in my earlier mail.\n>>>\n>>\n>> I'm not sure this (setting more and more GUCs to prevent hypothetical\n>> plan changes) is a good practice. Because how do you know the plan does\n>> not change for some other unexpected reason, possibly in the future?\n> \n> Sure. So, you think it is better not to have enable_material = false?\n> \n\nRight. Unless it's actually needed to force the necessary plan.\n\n>> IMHO if the test requires a specific plan, it's better to do an actual\n>> \"explain (rows off, costs off)\" to check that.\n> \n> When you say \"rows off\", do you mean do something to ensure that it\n> doesn't return tuples? Because I don't see a ROWS option for EXPLAIN\n> in the docs.\n> \n\nSorry, I meant to hide the cardinality estimates in the explain, but I\ngot confused. \"COSTS OFF\" is enough.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 14 May 2024 20:44:45 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Tue, May 14, 2024 at 2:44 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 5/14/24 20:40, Melanie Plageman wrote:\n> > On Tue, May 14, 2024 at 2:33 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >>\n> >> On 5/14/24 19:42, Melanie Plageman wrote:\n> >>> I've fixed this. I've also set enable_material off as I mentioned I\n> >>> might in my earlier mail.\n> >>>\n> >>\n> >> I'm not sure this (setting more and more GUCs to prevent hypothetical\n> >> plan changes) is a good practice. Because how do you know the plan does\n> >> not change for some other unexpected reason, possibly in the future?\n> >\n> > Sure. So, you think it is better not to have enable_material = false?\n> >\n>\n> Right. Unless it's actually needed to force the necessary plan.\n\nAttached v6 does not use enable_material = false (as it is not needed).\n\n- Melanie",
"msg_date": "Tue, 14 May 2024 15:11:39 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 2024-May-14, Tomas Vondra wrote:\n\n> On 5/14/24 19:42, Melanie Plageman wrote:\n> \n> >>> +SET enable_indexonlyscan = off;\n> >>> +set enable_indexscan = off;\n> >>> +SET enable_seqscan = off;\n> >>\n> >> Nit: adjusting the casing of the second SET here.\n> > \n> > I've fixed this. I've also set enable_material off as I mentioned I\n> > might in my earlier mail.\n>\n> I'm not sure this (setting more and more GUCs to prevent hypothetical\n> plan changes) is a good practice. Because how do you know the plan does\n> not change for some other unexpected reason, possibly in the future?\n\nI wonder why it resets enable_indexscan at all. I see that this query\nfirst tries a seqscan, then if you disable that it tries an index only\nscan, and if you disable that you get the expected bitmap indexscan.\nBut an indexscan doesn't seem to be in the cards.\n\n> IMHO if the test requires a specific plan, it's better to do an actual\n> \"explain (rows off, costs off)\" to check that.\n\nThat's already in the patch, right?\n\nI do wonder how do we _know_ that the test is testing what it wants to\ntest:\n QUERY PLAN \n─────────────────────────────────────────────────────────\n Nested Loop Anti Join\n -> Seq Scan on skip_fetch t1\n -> Materialize\n -> Bitmap Heap Scan on skip_fetch t2\n Recheck Cond: (a = 1)\n -> Bitmap Index Scan on skip_fetch_a_idx\n Index Cond: (a = 1)\n\nIs it because of the shape of the index condition? Maybe it's worth\nexplaining in the comments for the tests.\n\nBTW, I was running the explain while desultorily enabling and disabling\nthese GUCs and hit this assertion failure:\n\n#4 0x000055e6c72afe28 in ExceptionalCondition (conditionName=conditionName@entry=0x55e6c731a928 \"scan->rs_empty_tuples_pending == 0\", \n fileName=fileName@entry=0x55e6c731a3b0 \"../../../../../../../../../pgsql/source/master/src/backend/access/heap/heapam.c\", lineNumber=lineNumber@entry=1219)\n at ../../../../../../../../../pgsql/source/master/src/backend/utils/error/assert.c:66\n#5 0x000055e6c6e2e0c7 in heap_endscan (sscan=0x55e6c7b63e28) at ../../../../../../../../../pgsql/source/master/src/backend/access/heap/heapam.c:1219\n#6 0x000055e6c6fb35a7 in ExecEndPlan (estate=0x55e6c7a7e9d0, planstate=<optimized out>) at ../../../../../../../../pgsql/source/master/src/backend/executor/execMain.c:1485\n#7 standard_ExecutorEnd (queryDesc=0x55e6c7a736b8) at ../../../../../../../../pgsql/source/master/src/backend/executor/execMain.c:501\n#8 0x000055e6c6f4d9aa in ExplainOnePlan (plannedstmt=plannedstmt@entry=0x55e6c7a735a8, into=into@entry=0x0, es=es@entry=0x55e6c7a448b8, \n queryString=queryString@entry=0x55e6c796c210 \"EXPLAIN (analyze, verbose, COSTS OFF) SELECT t1.a FROM skip_fetch t1 LEFT JOIN skip_fetch t2 ON t2.a = 1 WHERE t2.a IS NULL;\", params=params@entry=0x0, \n queryEnv=queryEnv@entry=0x0, planduration=0x7ffe8a291848, bufusage=0x0, mem_counters=0x0) at ../../../../../../../../pgsql/source/master/src/backend/commands/explain.c:770\n#9 0x000055e6c6f4e257 in standard_ExplainOneQuery (query=<optimized out>, cursorOptions=2048, into=0x0, es=0x55e6c7a448b8, \n queryString=0x55e6c796c210 \"EXPLAIN (analyze, verbose, COSTS OFF) SELECT t1.a FROM skip_fetch t1 LEFT JOIN skip_fetch t2 ON t2.a = 1 WHERE t2.a IS NULL;\", params=0x0, queryEnv=0x0)\n at ../../../../../../../../pgsql/source/master/src/backend/commands/explain.c:502\n\nI couldn't reproduce it again, though -- and for sure I don't know what\nit means. 
All three GUCs are set false in the core.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Here's a general engineering tip: if the non-fun part is too complex for you\nto figure out, that might indicate the fun part is too ambitious.\" (John Naylor)\nhttps://postgr.es/m/CAFBsxsG4OWHBbSDM%3DsSeXrQGOtkPiOEOuME4yD7Ce41NtaAD9g%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 14 May 2024 22:05:35 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 2024-May-14, Alvaro Herrera wrote:\n\n\n> BTW, I was running the explain while desultorily enabling and disabling\n> these GUCs and hit this assertion failure:\n> \n> #4 0x000055e6c72afe28 in ExceptionalCondition (conditionName=conditionName@entry=0x55e6c731a928 \"scan->rs_empty_tuples_pending == 0\", \n> fileName=fileName@entry=0x55e6c731a3b0 \"../../../../../../../../../pgsql/source/master/src/backend/access/heap/heapam.c\", lineNumber=lineNumber@entry=1219)\n> at ../../../../../../../../../pgsql/source/master/src/backend/utils/error/assert.c:66\n\nAh, I see now that this is precisely the assertion that this patch\nremoves. Nevermind ...\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php\n\n\n",
"msg_date": "Tue, 14 May 2024 22:09:39 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Tue, May 14, 2024 at 4:05 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-May-14, Tomas Vondra wrote:\n>\n> I wonder why it resets enable_indexscan at all. I see that this query\n> first tries a seqscan, then if you disable that it tries an index only\n> scan, and if you disable that you get the expected bitmap indexscan.\n> But an indexscan doesn't seem to be in the cards.\n\nAh, yes. That is true. I think I added that when, in an older version\nof the test, I had a query that did try an index scan before bitmap\nheap scan. I've removed that guc from the attached v7.\n\n> > IMHO if the test requires a specific plan, it's better to do an actual\n> > \"explain (rows off, costs off)\" to check that.\n>\n> That's already in the patch, right?\n\nYep.\n\n> I do wonder how do we _know_ that the test is testing what it wants to\n> test:\n> QUERY PLAN\n> ─────────────────────────────────────────────────────────\n> Nested Loop Anti Join\n> -> Seq Scan on skip_fetch t1\n> -> Materialize\n> -> Bitmap Heap Scan on skip_fetch t2\n> Recheck Cond: (a = 1)\n> -> Bitmap Index Scan on skip_fetch_a_idx\n> Index Cond: (a = 1)\n>\n> Is it because of the shape of the index condition? Maybe it's worth\n> explaining in the comments for the tests.\n\nThere is a comment in the test that explains what it is exercising and\nhow. We include the explain output (the plan) to ensure it is still\nusing a bitmap heap scan. The test exercises the skip fetch\noptimization in bitmap heap scan when not all of the inner tuples are\nemitted.\n\nWithout the patch, the test fails, so it is protection against someone\nadding back that assert in the future. It is not protection against\nsomeone deleting the line\nscan->rs_empty_tuples_pending = 0\nThat is, it doesn't verify that the empty unused tuples count is\ndiscarded. Do you think that is necessary?\n\n- Melanie",
"msg_date": "Tue, 14 May 2024 17:19:29 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "Procedural comment:\n\nIt's better to get this patch committed with an imperfect test case\nthan to have it miss beta1.\n\n...Robert\n\n\n",
"msg_date": "Wed, 15 May 2024 10:24:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 2024-May-14, Melanie Plageman wrote:\n\n> On Tue, May 14, 2024 at 4:05 PM Alvaro Herrera <[email protected]> wrote:\n\n> > I do wonder how do we _know_ that the test is testing what it wants to\n> > test:\n\n> We include the explain output (the plan) to ensure it is still\n> using a bitmap heap scan. The test exercises the skip fetch\n> optimization in bitmap heap scan when not all of the inner tuples are\n> emitted.\n\nI meant -- the query returns an empty resultset, so how do we know it's\nthe empty resultset that we want and not a different empty resultset\nthat happens to be identical? (This is probably not a critical point\nanyhow.)\n\n> Without the patch, the test fails, so it is protection against someone\n> adding back that assert in the future. It is not protection against\n> someone deleting the line\n> scan->rs_empty_tuples_pending = 0\n> That is, it doesn't verify that the empty unused tuples count is\n> discarded. Do you think that is necessary?\n\nI don't think that's absolutely necessary. I suspect there are\nthousands of lines that you could delete that would break things, with\nno test directly raising red flags.\n\nAt this point I would go with the RMT's recommendation of pushing this\nnow to make sure the bug is fixed for beta1, even if the test is\nimperfect. You can always polish the test afterwards if you feel like\nit ...\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No necesitamos banderas\n No reconocemos fronteras\" (Jorge González)\n\n\n",
"msg_date": "Wed, 15 May 2024 17:14:15 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 12:17 AM Melanie Plageman\n<[email protected]> wrote:\n>\n> After we decided not to pursue streaming bitmapheapscan for 17, I wanted\n> to make sure we removed the prefetch code table AM violation -- since we\n> weren't deleting that code. So what started out as me looking for a way\n> to clean up one commit ended up becoming a much larger project. Sorry\n> about that last minute code explosion! I do think there is a way to do\n> it right and make it nice. Also that violation would be gone if we\n> figure out how to get streaming bitmapheapscan behaving correctly.\n>\n> So, there's just more motivation to make streaming bitmapheapscan\n> awesome for 18!\n\nAttached v21 is the rest of the patches to make bitmap heap scan use\nthe read stream API.\nDon't be alarmed by the 20 patches in the set. I tried to keep the\nindividual patches as small and easy to review as possible.\n\nPatches 0001-0003 implement the async-friendly behavior needed both to\npush down the VM lookups for prefetching and eventually to use the\nread stream API.\n\nPatches 0004-0006 add and make use of a common interface for the\nshared and private (parallel and serial) bitmap iterator per Heikki's\nsuggestion in [1].\n\nPatches 0008 - 0012 make new scan descriptors for bitmap table scans\nand the heap AM implementation. It is not possible to remove the\nlayering violations mentioned for bitmap table scans in tableam.h\nwithout passing more bitmap-specific parameters to\ntable_begin/end/rescan(). Because bitmap heap scans use a fairly\nreduced set of the members in the HeapScanDescData, it made sense to\nenable specialized scan descriptors for bitmap table scans.\n\n0013 pushes the primary iterator setup down into heap code.\n\n0014 and 0015 push all of the prefetch code down into heap AM code as\nsuggested by Heikki in [1].\n\n0017 removes scan_bitmap_next_block() per Heikki's suggestion in [1].\n\nAfter all of these patches, we've removed the layering violations\nmentioned previously in tableam.h. There is no use of the visibility\nmap anymore in generic bitmap table scan code. Almost all\nblock-specific logic is gone. The table AMs own the iterator almost\ncompletely.\n\nThe one relic of iterator ownership is that for parallel bitmap heap\nscan, a single process must scan the index, construct the bitmap, and\ncall tbm_prepare_shared_iterate() to set up the iterator for all the\nprocesses. I didn't see a way to push this down (because to build the\nbitmap we have to scan the index and call ExecProcNode()). I wonder if\nthis creates an odd split of responsibilities. I could use some other\nsynchronization mechanism to communicate which process built the\nbitmap in the generic bitmap table scan code and thus should set up\nthe iterator in the heap implementation, but that sounds like a pretty\nbad idea. Also, I'm not sure how a non-block-based table AM would use\nTBMIterators (or even TIDBitmap) anyway.\n\nAs a side note, I've started naming all new structs in the bitmap\ntable scan code with the `BitmapTableScan` convention. It seems like\nit might be a good time to rename the executor node from\nBitmapHeapScanState to BitmapTableScanState. 
I didn't because I\nwondered if I also needed to rename the file (nodeBitmapHeapscan.c ->\nnodeBitmapTablescan.c) and update all other places using the\n`BitmapHeapScan` convention.\n\nPatches 0018 and 0019 make some changes to the TIDBitmap API to\nsupport having multiple TBMIterateResults at the same time instead of\nreusing the same one when iterating. With async execution we may have\nmore than one TBMIterateResult at a time.\n\nThere is one MTODO in the whole patch set -- in 0019 -- related to\nresetting state in the TBMIterateResult. See that patch for the full\nquestion.\n\nAnd, finally, 0020 uses the read stream API and removes all the\nbespoke prefetching code from bitmap heap scan.\n\nAssuming we all get to a happy place with the code up until 0020, the\nnext step is to go back and investigate the performance regression\nwith bitmap heap scan and the read stream API first reported by Tomas\nVondra in [2].\n\nI'd be very happy for code review on any of the patches, answers to my\nquestions above, or help investigating the regression Tomas found.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/5a172d1e-d69c-409a-b1fa-6521214c81c2%40iki.fi\n[2] https://www.postgresql.org/message-id/[email protected]",
"msg_date": "Fri, 14 Jun 2024 19:56:42 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Fri, Jun 14, 2024 at 7:56 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> Attached v21 is the rest of the patches to make bitmap heap scan use\n> the read stream API.\n---snip---\n> The one relic of iterator ownership is that for parallel bitmap heap\n> scan, a single process must scan the index, construct the bitmap, and\n> call tbm_prepare_shared_iterate() to set up the iterator for all the\n> processes. I didn't see a way to push this down (because to build the\n> bitmap we have to scan the index and call ExecProcNode()). I wonder if\n> this creates an odd split of responsibilities. I could use some other\n> synchronization mechanism to communicate which process built the\n> bitmap in the generic bitmap table scan code and thus should set up\n> the iterator in the heap implementation, but that sounds like a pretty\n> bad idea. Also, I'm not sure how a non-block-based table AM would use\n> TBMIterators (or even TIDBitmap) anyway.\n\nI tinkered around with this some more and actually came up with a\nsolution to my primary concern with the code structure. Attached is\nv22. It still needs the performance regression investigation mentioned\nin my previous email, but I feel more confident in the layering of the\niterator ownership I ended up with.\n\nBecause the patch numbers have changed, below is a summary of the\ncontents with the new patch numbers:\n\nPatches 0001-0003 implement the async-friendly behavior needed both to\npush down the VM lookups for prefetching and eventually to use the\nread stream API.\n\nPatches 0004-0006 add and make use of a common interface for the\nshared and private bitmap iterators per Heikki's\nsuggestion in [1].\n\nPatches 0008 - 0012 make new scan descriptors for bitmap table scans\nand the heap AM implementation.\n\n0013 and 0014 push all of the prefetch code down into heap AM code as\nsuggested by Heikki in [1].\n\n0016 removes scan_bitmap_next_block() per Heikki's suggestion in [1].\n\nPatches 0017 and 0018 make some changes to the TIDBitmap API to\nsupport having multiple TBMIterateResults at the same time instead of\nreusing the same one when iterating.\n\n0019 uses the read stream API and removes all the bespoke prefetching\ncode from bitmap heap scan.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/5a172d1e-d69c-409a-b1fa-6521214c81c2%40iki.fi",
"msg_date": "Mon, 17 Jun 2024 17:22:12 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "Hi,\n\nI went through v22 to remind myself of what the patches do and do some\nbasic review - I have some simple questions / comments for now, nothing\nmajor. I've kept the comments in separate 'review' patches, it does not\nseem worth copying here.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 19 Jun 2024 00:02:56 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Tue, Jun 18, 2024 at 6:02 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> I went through v22 to remind myself of what the patches do and do some\n> basic review - I have some simple questions / comments for now, nothing\n> major. I've kept the comments in separate 'review' patches, it does not\n> seem worth copying here.\n\nThanks so much for the review!\n\nI've implemented your feedback in attached v23 except for what I\nmention below. I have not gone through each patch in the new set very\ncarefully after making the changes because I think we should resolve\nthe question of adding the new table scan descriptor before I do that.\nA change there will require a big rebase. Then I can go through each\npatch very carefully.\n\n From your v22b-0005-review.patch:\n\n src/backend/executor/nodeBitmapHeapscan.c | 14 ++++++++++++++\n src/include/access/tableam.h | 2 ++\n 2 files changed, 16 insertions(+)\n\ndiff --git a/src/backend/executor/nodeBitmapHeapscan.c\nb/src/backend/executor/nodeBitmapHeapscan.c\nindex e8b4a754434..6d7ef9ced19 100644\n--- a/src/backend/executor/nodeBitmapHeapscan.c\n+++ b/src/backend/executor/nodeBitmapHeapscan.c\n@@ -270,6 +270,20 @@ new_page:\n\n BitmapAdjustPrefetchIterator(node);\n\n+ /*\n+ * XXX I'm a bit unsure if this needs to be handled using\ngoto. Wouldn't\n+ * it be simpler / easier to understand to have two nested loops?\n+ *\n+ * while (true)\n+ * if (!table_scan_bitmap_next_block(...)) { break; }\n+ * while (table_scan_bitmap_next_tuple(...)) {\n+ * ... process tuples ...\n+ * }\n+ *\n+ * But I haven't tried implementing this.\n+ */\n if (!table_scan_bitmap_next_block(scan, &node->blockno, &node->recheck,\n &node->lossy_pages,\n&node->exact_pages))\n break;\n\nWe need to call table_scan_bimtap_next_block() the first time we call\nBitmapHeapNext() on each scan but all subsequent invocations of\nBitmapHeapNext() must call table_scan_bitmap_next_tuple() first each\n-- because we return from BitmapHeapNext() to yield a tuple even when\nthere are more tuples on the page. I tried refactoring this a few\ndifferent ways and personally found the goto most clear.\n\n From your v22b-0010-review.patch:\n\n@@ -557,6 +559,8 @@ ExecReScanBitmapHeapScan(BitmapHeapScanState *node)\n table_rescan(node->ss.ss_currentScanDesc, NULL);\n\n /* release bitmaps and buffers if any */\n+ /* XXX seems it should not be right after the comment, also shouldn't\n+ * we still reset the prefetch_iterator field to NULL? */\n tbm_end_iterate(&node->prefetch_iterator);\n if (node->tbm)\n tbm_free(node->tbm);\n\nprefetch_iterator is a TBMIterator which is stored in the struct (as\nopposed to having a pointer to it stored in the struct).\ntbm_end_iterate() sets the actual private and shared iterator pointers\nto NULL.\n\n From your v22b-0017-review.patch\n\ndiff --git a/src/include/access/relscan.h b/src/include/access/relscan.h\nindex 036ef29e7d5..9c711ce0eb0 100644\n--- a/src/include/access/relscan.h\n+++ b/src/include/access/relscan.h\n@@ -52,6 +52,13 @@ typedef struct TableScanDescData\n } TableScanDescData;\n typedef struct TableScanDescData *TableScanDesc;\n\n+/*\n+ * XXX I don't understand why we should have this special node if we\n+ * don't have special nodes for other scan types.\n\nIn this case, up until the final commit (using the read stream\ninterface), there are six fields required by bitmap heap scan that are\nnot needed by any other user of HeapScanDescData. 
There are also\nseveral members of HeapScanDescData that are not needed by bitmap heap\nscans and all of the setup in initscan() for those fields is not\nrequired for bitmap heap scans.\n\nAlso, because the BitmapHeapScanDesc needs information like the\nParallelBitmapHeapState and prefetch_maximum (for use in prefetching),\nthe scan_begin() callback would have to take those as parameters. I\nthought adding so much bitmap table scan-specific information to the\ngeneric table scan callbacks was a bad idea.\n\nOnce we add the read stream API code, the number of fields required\nfor bitmap heap scan that are in the scan descriptor goes down to\nthree. So, perhaps we could justify having that many bitmap heap\nscan-specific fields in the HeapScanDescData.\n\nThough, I actually think we could start moving toward having\nspecialized scan descriptors if the requirements for that scan type\nare appreciably different. I can't think of new code that would be\nadded to the HeapScanDescData that would have to be duplicated over to\nspecialized scan descriptors.\n\nWith the final read stream state, I can see the argument for bloating\nthe HeapScanDescData with three extra members and avoiding making new\nscan descriptors. But, for the intermediate patches which have all of\nthe bitmap prefetch members pushed down into the HeapScanDescData, I\nthink it is really not okay. Six members only for bitmap heap scans\nand two bitmap-specific members to begin_scan() seems bad.\n\nWhat I thought we plan to do is commit the refactoring patches\nsometime after the branch for 18 is cut and leave the final read\nstream patch uncommitted so we can do performance testing on it. If\nyou think it is okay to have the six member bloated HeapScanDescData\nin master until we push the read stream code, I am okay with removing\nthe BitmapTableScanDesc and BitmapHeapScanDesc.\n\n+ * XXX Also, maybe this should do the naming convention with Data at\n+ * the end (mostly for consistency).\n+ */\n typedef struct BitmapTableScanDesc\n {\n Relation rs_rd; /* heap relation descriptor */\n\nI really want to move away from these Data typedefs. I find them so\nconfusing as a developer, but it's hard to justify ripping out the\nexisting ones because of code churn. If we add new scan descriptors, I\nhad really hoped to start using a different pattern.\n\n From your v22b-0025-review.patch\n\ndiff --git a/src/backend/access/table/tableamapi.c\nb/src/backend/access/table/tableamapi.c\nindex a47527d490a..379b7df619e 100644\n--- a/src/backend/access/table/tableamapi.c\n+++ b/src/backend/access/table/tableamapi.c\n@@ -91,6 +91,9 @@ GetTableAmRoutine(Oid amhandler)\n\n Assert(routine->relation_estimate_size != NULL);\n\n+ /* XXX shouldn't this check that _block is not set without _tuple?\n+ * Also, the commit message says _block is \"local helper\" but then\n+ * why would it be part of TableAmRoutine? */\n Assert(routine->scan_sample_next_block != NULL);\n Assert(routine->scan_sample_next_tuple != NULL);\n\nscan_bitmap_next_block() is removed as a table AM callback here, so we\ndon't check if it is set. We do still check if scan_bitmap_next_tuple\nis set if amgetbitmap is not NULL. heapam_scan_bitmap_next_block() is\nnow a local helper for heapam_scan_bitmap_next_tuple(). Perhaps I\nshould change the name to something like heap_scan_bitmap_next_block()\nto make it clear it is not an implementation of a table AM callback?\n\n- Melanie",
"msg_date": "Wed, 19 Jun 2024 11:55:20 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
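Since the goto question comes up again above, here is a small standalone C sketch of the constraint Melanie describes: the node hands back one tuple per call, so the first call must fetch a block, while every later call has to try the current block's remaining tuples before asking for another. The names (DemoState, demo_next, demo_next_block) and the fake three-block bitmap are invented for illustration; this is not the actual BitmapHeapNext() code, which also handles prefetching, rechecks, and parallel workers.

#include <stdbool.h>
#include <stdio.h>

/* Simplified per-call scan state; not the real BitmapHeapScanState. */
typedef struct DemoState
{
    bool started;       /* has any block been loaded yet? */
    int  block;         /* current "block" number, 0 = none */
    int  tuples_left;   /* tuples remaining on the current block */
} DemoState;

/* Pretend bitmap: three blocks, two matching tuples each. */
static bool
demo_next_block(DemoState *s)
{
    if (s->block >= 3)
        return false;   /* bitmap exhausted */
    s->block++;
    s->tuples_left = 2;
    return true;
}

/*
 * Returns at most one tuple per call, like an executor node. Because the
 * function is re-entered between tuples, every call after the first must try
 * the current block's tuples before asking for a new block; only the very
 * first call goes straight to block fetching.
 */
static bool
demo_next(DemoState *s, int *tuple_out)
{
    for (;;)
    {
        if (s->started && s->tuples_left > 0)
        {
            s->tuples_left--;
            *tuple_out = s->block * 100 + s->tuples_left;
            return true;
        }
        if (!demo_next_block(s))
            return false;
        s->started = true;
    }
}

int
main(void)
{
    DemoState s = {0};
    int       t;

    while (demo_next(&s, &t))
        printf("tuple %d\n", t);
    return 0;
}

Whether a loop like this reads better than the goto layout is a matter of taste; the point is that the resume-between-tuples requirement, not the block/tuple nesting itself, is what shapes the control flow.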
{
"msg_contents": "\n\nOn 6/19/24 17:55, Melanie Plageman wrote:\n> On Tue, Jun 18, 2024 at 6:02 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> I went through v22 to remind myself of what the patches do and do some\n>> basic review - I have some simple questions / comments for now, nothing\n>> major. I've kept the comments in separate 'review' patches, it does not\n>> seem worth copying here.\n> \n> Thanks so much for the review!\n> \n> I've implemented your feedback in attached v23 except for what I\n> mention below. I have not gone through each patch in the new set very\n> carefully after making the changes because I think we should resolve\n> the question of adding the new table scan descriptor before I do that.\n> A change there will require a big rebase. Then I can go through each\n> patch very carefully.\n> \n> From your v22b-0005-review.patch:\n> \n> src/backend/executor/nodeBitmapHeapscan.c | 14 ++++++++++++++\n> src/include/access/tableam.h | 2 ++\n> 2 files changed, 16 insertions(+)\n> \n> diff --git a/src/backend/executor/nodeBitmapHeapscan.c\n> b/src/backend/executor/nodeBitmapHeapscan.c\n> index e8b4a754434..6d7ef9ced19 100644\n> --- a/src/backend/executor/nodeBitmapHeapscan.c\n> +++ b/src/backend/executor/nodeBitmapHeapscan.c\n> @@ -270,6 +270,20 @@ new_page:\n> \n> BitmapAdjustPrefetchIterator(node);\n> \n> + /*\n> + * XXX I'm a bit unsure if this needs to be handled using\n> goto. Wouldn't\n> + * it be simpler / easier to understand to have two nested loops?\n> + *\n> + * while (true)\n> + * if (!table_scan_bitmap_next_block(...)) { break; }\n> + * while (table_scan_bitmap_next_tuple(...)) {\n> + * ... process tuples ...\n> + * }\n> + *\n> + * But I haven't tried implementing this.\n> + */\n> if (!table_scan_bitmap_next_block(scan, &node->blockno, &node->recheck,\n> &node->lossy_pages,\n> &node->exact_pages))\n> break;\n> \n> We need to call table_scan_bimtap_next_block() the first time we call\n> BitmapHeapNext() on each scan but all subsequent invocations of\n> BitmapHeapNext() must call table_scan_bitmap_next_tuple() first each\n> -- because we return from BitmapHeapNext() to yield a tuple even when\n> there are more tuples on the page. I tried refactoring this a few\n> different ways and personally found the goto most clear.\n> \n\nOK, I haven't tried refactoring this myself, so you're probably right.\n\n> From your v22b-0010-review.patch:\n> \n> @@ -557,6 +559,8 @@ ExecReScanBitmapHeapScan(BitmapHeapScanState *node)\n> table_rescan(node->ss.ss_currentScanDesc, NULL);\n> \n> /* release bitmaps and buffers if any */\n> + /* XXX seems it should not be right after the comment, also shouldn't\n> + * we still reset the prefetch_iterator field to NULL? 
*/\n> tbm_end_iterate(&node->prefetch_iterator);\n> if (node->tbm)\n> tbm_free(node->tbm);\n> \n> prefetch_iterator is a TBMIterator which is stored in the struct (as\n> opposed to having a pointer to it stored in the struct).\n> tbm_end_iterate() sets the actual private and shared iterator pointers\n> to NULL.\n> \n\nAh, right.\n\n> From your v22b-0017-review.patch\n> \n> diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h\n> index 036ef29e7d5..9c711ce0eb0 100644\n> --- a/src/include/access/relscan.h\n> +++ b/src/include/access/relscan.h\n> @@ -52,6 +52,13 @@ typedef struct TableScanDescData\n> } TableScanDescData;\n> typedef struct TableScanDescData *TableScanDesc;\n> \n> +/*\n> + * XXX I don't understand why we should have this special node if we\n> + * don't have special nodes for other scan types.\n> \n> In this case, up until the final commit (using the read stream\n> interface), there are six fields required by bitmap heap scan that are\n> not needed by any other user of HeapScanDescData. There are also\n> several members of HeapScanDescData that are not needed by bitmap heap\n> scans and all of the setup in initscan() for those fields is not\n> required for bitmap heap scans.\n> \n> Also, because the BitmapHeapScanDesc needs information like the\n> ParallelBitmapHeapState and prefetch_maximum (for use in prefetching),\n> the scan_begin() callback would have to take those as parameters. I\n> thought adding so much bitmap table scan-specific information to the\n> generic table scan callbacks was a bad idea.\n> \n> Once we add the read stream API code, the number of fields required\n> for bitmap heap scan that are in the scan descriptor goes down to\n> three. So, perhaps we could justify having that many bitmap heap\n> scan-specific fields in the HeapScanDescData.\n> \n> Though, I actually think we could start moving toward having\n> specialized scan descriptors if the requirements for that scan type\n> are appreciably different. I can't think of new code that would be\n> added to the HeapScanDescData that would have to be duplicated over to\n> specialized scan descriptors.\n> \n> With the final read stream state, I can see the argument for bloating\n> the HeapScanDescData with three extra members and avoiding making new\n> scan descriptors. But, for the intermediate patches which have all of\n> the bitmap prefetch members pushed down into the HeapScanDescData, I\n> think it is really not okay. Six members only for bitmap heap scans\n> and two bitmap-specific members to begin_scan() seems bad.\n> \n> What I thought we plan to do is commit the refactoring patches\n> sometime after the branch for 18 is cut and leave the final read\n> stream patch uncommitted so we can do performance testing on it. If\n> you think it is okay to have the six member bloated HeapScanDescData\n> in master until we push the read stream code, I am okay with removing\n> the BitmapTableScanDesc and BitmapHeapScanDesc.\n> \n\nI admit I don't have a very good idea what the ideal / desired state\nlook like. My comment is motivated solely by the feeling that it seems\nstrange to have one struct serving most scan types, and then a special\nstruct for one particular scan type ...\n\n> + * XXX Also, maybe this should do the naming convention with Data at\n> + * the end (mostly for consistency).\n> + */\n> typedef struct BitmapTableScanDesc\n> {\n> Relation rs_rd; /* heap relation descriptor */\n> \n> I really want to move away from these Data typedefs. 
I find them so\n> confusing as a developer, but it's hard to justify ripping out the\n> existing ones because of code churn. If we add new scan descriptors, I\n> had really hoped to start using a different pattern.\n> \n\nPerhaps, I understand that. I'm not a huge fan of Data structs myself,\nbut I'm not sure it's a great idea to do both things in the same area of\ncode. That's guaranteed to be confusing for everyone ...\n\nIf we want to move away from that, I'd rather rename the nearby structs\nand accept the code churn.\n\n> From your v22b-0025-review.patch\n> \n> diff --git a/src/backend/access/table/tableamapi.c\n> b/src/backend/access/table/tableamapi.c\n> index a47527d490a..379b7df619e 100644\n> --- a/src/backend/access/table/tableamapi.c\n> +++ b/src/backend/access/table/tableamapi.c\n> @@ -91,6 +91,9 @@ GetTableAmRoutine(Oid amhandler)\n> \n> Assert(routine->relation_estimate_size != NULL);\n> \n> + /* XXX shouldn't this check that _block is not set without _tuple?\n> + * Also, the commit message says _block is \"local helper\" but then\n> + * why would it be part of TableAmRoutine? */\n> Assert(routine->scan_sample_next_block != NULL);\n> Assert(routine->scan_sample_next_tuple != NULL);\n> \n> scan_bitmap_next_block() is removed as a table AM callback here, so we\n> don't check if it is set. We do still check if scan_bitmap_next_tuple\n> is set if amgetbitmap is not NULL. heapam_scan_bitmap_next_block() is\n> now a local helper for heapam_scan_bitmap_next_tuple(). Perhaps I\n> should change the name to something like heap_scan_bitmap_next_block()\n> to make it clear it is not an implementation of a table AM callback?\n> \n\nI'm quite confused by this. How could it not be am AM callback when it's\nin the AM routine?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 19 Jun 2024 18:38:35 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 2024-Jun-14, Melanie Plageman wrote:\n\n> Subject: [PATCH v21 12/20] Update variable names in bitmap scan descriptors\n>\n> The previous commit which added BitmapTableScanDesc and\n> BitmapHeapScanDesc used the existing member names from TableScanDescData\n> and HeapScanDescData for diff clarity. This commit renames the members\n> -- in many cases by removing the rs_ prefix which is not relevant or\n> needed here.\n\n*Cough* Why? It makes grepping for struct members useless. I'd rather\nkeep these prefixes, as they allow easier code exploration. (Sometimes\nwhen I need to search for uses of some field with a name that's too\ncommon, I add a prefix to the name and let the compiler guide me to\nthem. But that's a waste of time ...)\n\nThanks,\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Cuando mañana llegue pelearemos segun lo que mañana exija\" (Mowgli)\n\n\n",
"msg_date": "Wed, 19 Jun 2024 18:51:31 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Wed, Jun 19, 2024 at 12:38 PM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 6/19/24 17:55, Melanie Plageman wrote:\n> > On Tue, Jun 18, 2024 at 6:02 PM Tomas Vondra\n> > <[email protected]> wrote:\n>\n> > From your v22b-0017-review.patch\n> >\n> > diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h\n> > index 036ef29e7d5..9c711ce0eb0 100644\n> > --- a/src/include/access/relscan.h\n> > +++ b/src/include/access/relscan.h\n> > @@ -52,6 +52,13 @@ typedef struct TableScanDescData\n> > } TableScanDescData;\n> > typedef struct TableScanDescData *TableScanDesc;\n> >\n> > +/*\n> > + * XXX I don't understand why we should have this special node if we\n> > + * don't have special nodes for other scan types.\n> >\n> > In this case, up until the final commit (using the read stream\n> > interface), there are six fields required by bitmap heap scan that are\n> > not needed by any other user of HeapScanDescData. There are also\n> > several members of HeapScanDescData that are not needed by bitmap heap\n> > scans and all of the setup in initscan() for those fields is not\n> > required for bitmap heap scans.\n> >\n> > Also, because the BitmapHeapScanDesc needs information like the\n> > ParallelBitmapHeapState and prefetch_maximum (for use in prefetching),\n> > the scan_begin() callback would have to take those as parameters. I\n> > thought adding so much bitmap table scan-specific information to the\n> > generic table scan callbacks was a bad idea.\n> >\n> > Once we add the read stream API code, the number of fields required\n> > for bitmap heap scan that are in the scan descriptor goes down to\n> > three. So, perhaps we could justify having that many bitmap heap\n> > scan-specific fields in the HeapScanDescData.\n> >\n> > Though, I actually think we could start moving toward having\n> > specialized scan descriptors if the requirements for that scan type\n> > are appreciably different. I can't think of new code that would be\n> > added to the HeapScanDescData that would have to be duplicated over to\n> > specialized scan descriptors.\n> >\n> > With the final read stream state, I can see the argument for bloating\n> > the HeapScanDescData with three extra members and avoiding making new\n> > scan descriptors. But, for the intermediate patches which have all of\n> > the bitmap prefetch members pushed down into the HeapScanDescData, I\n> > think it is really not okay. Six members only for bitmap heap scans\n> > and two bitmap-specific members to begin_scan() seems bad.\n> >\n> > What I thought we plan to do is commit the refactoring patches\n> > sometime after the branch for 18 is cut and leave the final read\n> > stream patch uncommitted so we can do performance testing on it. If\n> > you think it is okay to have the six member bloated HeapScanDescData\n> > in master until we push the read stream code, I am okay with removing\n> > the BitmapTableScanDesc and BitmapHeapScanDesc.\n> >\n>\n> I admit I don't have a very good idea what the ideal / desired state\n> look like. My comment is motivated solely by the feeling that it seems\n> strange to have one struct serving most scan types, and then a special\n> struct for one particular scan type ...\n\nI see what you are saying. We could make BitmapTableScanDesc inherit\nfrom TableScanDescData which would be similar to what we do with other\nthings like the executor scan nodes themselves. 
We would waste space\nand LOC with initializing the unneeded members, but it might seem less\nweird.\n\nWhether we want the specialized scan descriptors at all is probably\nthe bigger question, though.\n\nThe top level BitmapTableScanDesc is motivated by wanting fewer bitmap\ntable scan specific members passed to scan_begin(). And the\nBitmapHeapScanDesc is motivated by this plus wanting to avoid bloating\nthe HeapScanDescData.\n\nIf you look at at HEAD~1 (with my patches applied) and think you would\nbe okay with\n1) the contents of the BitmapHeapScanDesc being in the HeapScanDescData and\n2) the extra bitmap table scan-specific parameters in scan_begin_bm()\nbeing passed to scan_begin()\n\nthen I will remove the specialized scan descriptors.\n\nThe final state (with the read stream) will still have three bitmap\nheap scan-specific members in the HeapScanDescData.\n\nWould it help if I do a version like this so you can see what it is like?\n\n> > + * XXX Also, maybe this should do the naming convention with Data at\n> > + * the end (mostly for consistency).\n> > + */\n> > typedef struct BitmapTableScanDesc\n> > {\n> > Relation rs_rd; /* heap relation descriptor */\n> >\n> > I really want to move away from these Data typedefs. I find them so\n> > confusing as a developer, but it's hard to justify ripping out the\n> > existing ones because of code churn. If we add new scan descriptors, I\n> > had really hoped to start using a different pattern.\n> >\n>\n> Perhaps, I understand that. I'm not a huge fan of Data structs myself,\n> but I'm not sure it's a great idea to do both things in the same area of\n> code. That's guaranteed to be confusing for everyone ...\n>\n> If we want to move away from that, I'd rather rename the nearby structs\n> and accept the code churn.\n\nMakes sense. I'll do a patch to get rid of the typedefs and rename\nTableScanDescData -> TableScanDesc (also for the heap scan desc) if we\nend up keeping the specialized scan descriptors.\n\n> > From your v22b-0025-review.patch\n> >\n> > diff --git a/src/backend/access/table/tableamapi.c\n> > b/src/backend/access/table/tableamapi.c\n> > index a47527d490a..379b7df619e 100644\n> > --- a/src/backend/access/table/tableamapi.c\n> > +++ b/src/backend/access/table/tableamapi.c\n> > @@ -91,6 +91,9 @@ GetTableAmRoutine(Oid amhandler)\n> >\n> > Assert(routine->relation_estimate_size != NULL);\n> >\n> > + /* XXX shouldn't this check that _block is not set without _tuple?\n> > + * Also, the commit message says _block is \"local helper\" but then\n> > + * why would it be part of TableAmRoutine? */\n> > Assert(routine->scan_sample_next_block != NULL);\n> > Assert(routine->scan_sample_next_tuple != NULL);\n> >\n> > scan_bitmap_next_block() is removed as a table AM callback here, so we\n> > don't check if it is set. We do still check if scan_bitmap_next_tuple\n> > is set if amgetbitmap is not NULL. heapam_scan_bitmap_next_block() is\n> > now a local helper for heapam_scan_bitmap_next_tuple(). Perhaps I\n> > should change the name to something like heap_scan_bitmap_next_block()\n> > to make it clear it is not an implementation of a table AM callback?\n> >\n>\n> I'm quite confused by this. How could it not be am AM callback when it's\n> in the AM routine?\n\nI removed the callback for getting the next block. Now,\nBitmapHeapNext() just calls table_scan_bitmap_next_tuple() and the\ntable AM is responsible for advancing to the next block when the\ncurrent block is out of tuples. 
In the new code structure, there isn't\nmuch point in having a separate table_scan_bitmap_next_block()\ncallback. Advancing the iterator to get the next block assignment is\nalready pushed into the table AM itself. So, getting the next block\ncan be an internal detail of how the table AM gets the next tuple.\nHeikki actually originally suggested it, and I thought it made sense.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 19 Jun 2024 14:13:59 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
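For readers unfamiliar with how "inherit from TableScanDescData" is usually done in this code base, the conventional C pattern is to embed the generic struct as the first member of the specialized one. The sketch below uses invented Demo* names and hypothetical fields purely to show the layout idea; it is not the real TableScanDescData, HeapScanDescData, or BitmapTableScanDesc definition.

#include <stdio.h>

/* Simplified stand-in for the generic scan descriptor (not TableScanDescData). */
typedef struct DemoTableScanDesc
{
    const char *rel_name;   /* fields every scan type needs */
    unsigned    flags;
} DemoTableScanDesc;

/*
 * "Inheritance" by embedding: the generic descriptor is the first member, so
 * &bscan.base can be handed to any code expecting the generic type, and the
 * AM can cast back to the specialized type it allocated.
 */
typedef struct DemoBitmapTableScanDesc
{
    DemoTableScanDesc base;             /* must remain the first member */
    int               prefetch_maximum; /* hypothetical bitmap-scan-only state */
    long              exact_pages;
    long              lossy_pages;
} DemoBitmapTableScanDesc;

/* Generic code: only knows about the common part. */
static void
demo_report(const DemoTableScanDesc *scan)
{
    printf("scan on %s, flags=%u\n", scan->rel_name, scan->flags);
}

int
main(void)
{
    DemoBitmapTableScanDesc bscan = {
        .base = {.rel_name = "skip_fetch", .flags = 0},
        .prefetch_maximum = 64,
    };

    demo_report(&bscan.base);
    printf("prefetch_maximum=%d\n", bscan.prefetch_maximum);
    return 0;
}

The trade-off Melanie mentions still applies: a descriptor built this way carries, and must initialize, all of the generic members whether or not the scan type actually uses them.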
{
"msg_contents": "Thanks for taking a look at my patches, Álvaro!\n\nOn Wed, Jun 19, 2024 at 12:51 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jun-14, Melanie Plageman wrote:\n>\n> > Subject: [PATCH v21 12/20] Update variable names in bitmap scan descriptors\n> >\n> > The previous commit which added BitmapTableScanDesc and\n> > BitmapHeapScanDesc used the existing member names from TableScanDescData\n> > and HeapScanDescData for diff clarity. This commit renames the members\n> > -- in many cases by removing the rs_ prefix which is not relevant or\n> > needed here.\n>\n> *Cough* Why? It makes grepping for struct members useless. I'd rather\n> keep these prefixes, as they allow easier code exploration. (Sometimes\n> when I need to search for uses of some field with a name that's too\n> common, I add a prefix to the name and let the compiler guide me to\n> them. But that's a waste of time ...)\n\nIf we want to make it possible to use no tools and only manually grep\nfor struct members, that means we can never reuse struct member names.\nAcross a project of our size, that seems like a very serious\nrestriction. Adding prefixes in struct members makes it harder to read\ncode -- both because it makes the names longer and because people are\nmore prone to abbreviate the meaningful parts of the struct member\nname to make the whole name shorter.\n\nWhile I understand we as a project want to make it possible to hack on\nPostgres without an IDE or a set of vim plugins a mile long, I also\nthink we have to make some compromises for readability. Most commonly\nused text editors have LSP (language server protocol) support and\nshould allow for meaningful identification of the usages of a struct\nmember even if it has the same name as a member of another struct.\n\nThat being said, I'm not unreasonable. If we have decided we can not\nreuse struct member names, I will change my patch.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 19 Jun 2024 14:20:59 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On 19/06/2024 18:55, Melanie Plageman wrote:\n> On Tue, Jun 18, 2024 at 6:02 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> I went through v22 to remind myself of what the patches do and do some\n>> basic review - I have some simple questions / comments for now, nothing\n>> major. I've kept the comments in separate 'review' patches, it does not\n>> seem worth copying here.\n> \n> Thanks so much for the review!\n> \n> I've implemented your feedback in attached v23 except for what I\n> mention below. I have not gone through each patch in the new set very\n> carefully after making the changes because I think we should resolve\n> the question of adding the new table scan descriptor before I do that.\n> A change there will require a big rebase. Then I can go through each\n> patch very carefully.\nHad a quick look at this after a long pause. I only looked at the first \nfew, hoping that reviewing them would allow you to commit at least some \nof them, making the rest easier.\n\nv23-0001-table_scan_bitmap_next_block-counts-lossy-and-ex.patch\n\nLooks good to me. (I'm not sure if this would be a net positive change \non its own, but it's needed by the later patch so OK)\n\nv23-0002-Remove-table_scan_bitmap_next_tuple-parameter-tb.patch\n\nLGTM\n\nv23-0003-Make-table_scan_bitmap_next_block-async-friendly.patch\n\n> @@ -1955,19 +1954,26 @@ table_relation_estimate_size(Relation rel, int32 *attr_widths,\n> */\n> \n> /*\n> - * Prepare to fetch / check / return tuples from `tbmres->blockno` as part of a\n> - * bitmap table scan. `scan` needs to have been started via\n> - * table_beginscan_bm(). Returns false if there are no tuples to be found on\n> - * the page, true otherwise. `lossy_pages` is incremented if the block's\n> - * representation in the bitmap is lossy; otherwise, `exact_pages` is\n> - * incremented.\n> + * Prepare to fetch / check / return tuples as part of a bitmap table scan.\n> + * `scan` needs to have been started via table_beginscan_bm(). Returns false if\n> + * there are no more blocks in the bitmap, true otherwise. `lossy_pages` is\n> + * incremented if the block's representation in the bitmap is lossy; otherwise,\n> + * `exact_pages` is incremented.\n> + *\n> + * `recheck` is set by the table AM to indicate whether or not the tuples\n> + * from this block should be rechecked. Tuples from lossy pages will always\n> + * need to be rechecked, but some non-lossy pages' tuples may also require\n> + * recheck.\n> + *\n> + * `blockno` is only used in bitmap table scan code to validate that the\n> + * prefetch block is staying ahead of the current block.\n> *\n> * Note, this is an optionally implemented function, therefore should only be\n> * used after verifying the presence (at plan time or such).\n> */\n> static inline bool\n> table_scan_bitmap_next_block(TableScanDesc scan,\n> -\t\t\t\t\t\t\t struct TBMIterateResult *tbmres,\n> +\t\t\t\t\t\t\t BlockNumber *blockno, bool *recheck,\n> \t\t\t\t\t\t\t long *lossy_pages,\n> \t\t\t\t\t\t\t long *exact_pages)\n> {\n\nThe new comment doesn't really explain what *blockno means. Is it an \ninput or output parameter? How is it used with the prefetching?\n\nv23-0004-Add-common-interface-for-TBMIterators.patch\n\n> +/*\n> + * Start iteration on a shared or private bitmap iterator. Note that tbm will\n> + * only be provided by private BitmapHeapScan callers. dsa and dsp will only be\n> + * provided by parallel BitmapHeapScan callers. 
For shared callers, one process\n> + * must already have called tbm_prepare_shared_iterate() to create and set up\n> + * the TBMSharedIteratorState. The TBMIterator is passed by reference to\n> + * accommodate callers who would like to allocate it inside an existing struct.\n> + */\n> +void\n> +tbm_begin_iterate(TBMIterator *iterator, TIDBitmap *tbm,\n> +\t\t\t\t dsa_area *dsa, dsa_pointer dsp)\n> +{\n> +\tAssert(iterator);\n> +\n> +\titerator->private_iterator = NULL;\n> +\titerator->shared_iterator = NULL;\n> +\titerator->exhausted = false;\n> +\n> +\t/* Allocate a private iterator and attach the shared state to it */\n> +\tif (DsaPointerIsValid(dsp))\n> +\t\titerator->shared_iterator = tbm_attach_shared_iterate(dsa, dsp);\n> +\telse\n> +\t\titerator->private_iterator = tbm_begin_private_iterate(tbm);\n> +}\n\nHmm, I haven't looked at how this is used the later patches, but a \nfunction signature where some parameters are used or not depending on \nthe situation seems a bit awkward. Perhaps it would be better to let the \ncaller call tbm_attach_shared_iterate(dsa, dsp) or \ntbm_begin_private_iterate(tbm), and provide a function to turn that into \na TBMIterator? Something like:\n\nTBMIterator *tbm_iterator_from_shared_iterator(TBMSharedIterator *);\nTBMIterator *tbm_iterator_from_private_iterator(TBMPrivateIterator *);\n\nDoes tbm_iterator() really need the 'exhausted' flag? The private and \nshared variants work without one.\n\n+1 on this patch in general, and I have no objections to its current \nform either, the above are just suggestions to consider.\n\nv23-0006-BitmapHeapScan-uses-unified-iterator.patch\n\nMakes sense. (Might be better to squash this with the previous patch)\n\nv23-0007-BitmapHeapScan-Make-prefetch-sync-error-more-det.patch\n\nLGTM\n\nv23-0008-Push-current-scan-descriptor-into-specialized-sc.patch\nv23-0009-Remove-ss_current-prefix-from-ss_currentScanDesc.patch\n\nLGTM. I would squash these together.\n\nv23-0010-Add-scan_in_progress-to-BitmapHeapScanState.patch\n\n> --- a/src/include/nodes/execnodes.h\n> +++ b/src/include/nodes/execnodes.h\n> @@ -1804,6 +1804,7 @@ typedef struct ParallelBitmapHeapState\n> *\t\tprefetch_target current target prefetch distance\n> *\t\tprefetch_maximum maximum value for prefetch_target\n> *\t\tinitialized\t\t is node is ready to iterate\n> + *\t\tscan_in_progress is this a rescan\n> *\t\tpstate\t\t\t shared state for parallel bitmap scan\n> *\t\trecheck\t\t\t do current page's tuples need recheck\n> *\t\tblockno\t\t\t used to validate pf and current block in sync\n> @@ -1824,6 +1825,7 @@ typedef struct BitmapHeapScanState\n> \tint\t\t\tprefetch_target;\n> \tint\t\t\tprefetch_maximum;\n> \tbool\t\tinitialized;\n> +\tbool\t\tscan_in_progress;\n> \tParallelBitmapHeapState *pstate;\n> \tbool\t\trecheck;\n> \tBlockNumber blockno;\n\nHmm, the \"is this a rescan\" comment sounds inaccurate, because it is set \nas soon as the scan is started, not only when rescanning. Other than \nthat, LGTM.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 16:37:08 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Wed, Jun 19, 2024 at 2:21 PM Melanie Plageman\n<[email protected]> wrote:\n> If we want to make it possible to use no tools and only manually grep\n> for struct members, that means we can never reuse struct member names.\n> Across a project of our size, that seems like a very serious\n> restriction. Adding prefixes in struct members makes it harder to read\n> code -- both because it makes the names longer and because people are\n> more prone to abbreviate the meaningful parts of the struct member\n> name to make the whole name shorter.\n\nI don't think we should go so far as to never reuse a structure member\nname. But I also do use 'git grep' a lot to find stuff, and I don't\nappreciate it when somebody names a key piece of machinery 'x' or 'n'\nor something, especially when references to that thing could\nreasonably occur almost anywhere in the source code. So if somebody is\ncreating a struct whose names are fairly generic and reasonably short,\nI like the idea of using a prefix for those names. If the structure\nmembers are things like that_thing_i_stored_behind_the_fridge (which\nis long) or cytokine (which is non-generic) then they're greppable\nanyway and it doesn't really matter. But surely changing something\nlike rs_flags to just flags is just making everyone's life harder:\n\n[robert.haas pgsql]$ git grep rs_flags | wc -l\n 38\n[robert.haas pgsql]$ git grep flags | wc -l\n 6348\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Aug 2024 10:24:57 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Jun 19, 2024 at 2:21 PM Melanie Plageman\n> <[email protected]> wrote:\n>> If we want to make it possible to use no tools and only manually grep\n>> for struct members, that means we can never reuse struct member names.\n>> Across a project of our size, that seems like a very serious\n>> restriction. Adding prefixes in struct members makes it harder to read\n>> code -- both because it makes the names longer and because people are\n>> more prone to abbreviate the meaningful parts of the struct member\n>> name to make the whole name shorter.\n\n> I don't think we should go so far as to never reuse a structure member\n> name. But I also do use 'git grep' a lot to find stuff, and I don't\n> appreciate it when somebody names a key piece of machinery 'x' or 'n'\n> or something, especially when references to that thing could\n> reasonably occur almost anywhere in the source code. So if somebody is\n> creating a struct whose names are fairly generic and reasonably short,\n> I like the idea of using a prefix for those names. If the structure\n> members are things like that_thing_i_stored_behind_the_fridge (which\n> is long) or cytokine (which is non-generic) then they're greppable\n> anyway and it doesn't really matter. But surely changing something\n> like rs_flags to just flags is just making everyone's life harder:\n\nI'm with Robert here: I care quite a lot about the greppability of\nfield names. I'm not arguing for prefixes everywhere, but I don't\nthink we should strip out prefixes we've already created, especially\nif the result will be to have extremely generic field names.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Aug 2024 10:49:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 10:49 AM Tom Lane <[email protected]> wrote:\n>\n> Robert Haas <[email protected]> writes:\n> > On Wed, Jun 19, 2024 at 2:21 PM Melanie Plageman\n> > <[email protected]> wrote:\n> >> If we want to make it possible to use no tools and only manually grep\n> >> for struct members, that means we can never reuse struct member names.\n> >> Across a project of our size, that seems like a very serious\n> >> restriction. Adding prefixes in struct members makes it harder to read\n> >> code -- both because it makes the names longer and because people are\n> >> more prone to abbreviate the meaningful parts of the struct member\n> >> name to make the whole name shorter.\n>\n> > I don't think we should go so far as to never reuse a structure member\n> > name. But I also do use 'git grep' a lot to find stuff, and I don't\n> > appreciate it when somebody names a key piece of machinery 'x' or 'n'\n> > or something, especially when references to that thing could\n> > reasonably occur almost anywhere in the source code. So if somebody is\n> > creating a struct whose names are fairly generic and reasonably short,\n> > I like the idea of using a prefix for those names. If the structure\n> > members are things like that_thing_i_stored_behind_the_fridge (which\n> > is long) or cytokine (which is non-generic) then they're greppable\n> > anyway and it doesn't really matter. But surely changing something\n> > like rs_flags to just flags is just making everyone's life harder:\n>\n> I'm with Robert here: I care quite a lot about the greppability of\n> field names. I'm not arguing for prefixes everywhere, but I don't\n> think we should strip out prefixes we've already created, especially\n> if the result will be to have extremely generic field names.\n\nOkay, got it -- folks like the prefixes.\nI'm picking this patch set back up again after a long pause and I will\nrestore all prefixes.\n\nWhat does the rs_* in the HeapScanDescData stand for, though?\n\n- Melanie\n\n\n",
"msg_date": "Fri, 27 Sep 2024 14:58:31 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
},
{
"msg_contents": "On Wed, Jun 19, 2024 at 2:13 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Wed, Jun 19, 2024 at 12:38 PM Tomas Vondra\n> <[email protected]> wrote:\n>\n> > > + * XXX I don't understand why we should have this special node if we\n> > > + * don't have special nodes for other scan types.\n> > >\n> > > In this case, up until the final commit (using the read stream\n> > > interface), there are six fields required by bitmap heap scan that are\n> > > not needed by any other user of HeapScanDescData. There are also\n> > > several members of HeapScanDescData that are not needed by bitmap heap\n> > > scans and all of the setup in initscan() for those fields is not\n> > > required for bitmap heap scans.\n> > >\n> > > Also, because the BitmapHeapScanDesc needs information like the\n> > > ParallelBitmapHeapState and prefetch_maximum (for use in prefetching),\n> > > the scan_begin() callback would have to take those as parameters. I\n> > > thought adding so much bitmap table scan-specific information to the\n> > > generic table scan callbacks was a bad idea.\n> > >\n> > > Once we add the read stream API code, the number of fields required\n> > > for bitmap heap scan that are in the scan descriptor goes down to\n> > > three. So, perhaps we could justify having that many bitmap heap\n> > > scan-specific fields in the HeapScanDescData.\n> > >\n> > > Though, I actually think we could start moving toward having\n> > > specialized scan descriptors if the requirements for that scan type\n> > > are appreciably different. I can't think of new code that would be\n> > > added to the HeapScanDescData that would have to be duplicated over to\n> > > specialized scan descriptors.\n> > >\n> > > With the final read stream state, I can see the argument for bloating\n> > > the HeapScanDescData with three extra members and avoiding making new\n> > > scan descriptors. But, for the intermediate patches which have all of\n> > > the bitmap prefetch members pushed down into the HeapScanDescData, I\n> > > think it is really not okay. Six members only for bitmap heap scans\n> > > and two bitmap-specific members to begin_scan() seems bad.\n> > >\n> > > What I thought we plan to do is commit the refactoring patches\n> > > sometime after the branch for 18 is cut and leave the final read\n> > > stream patch uncommitted so we can do performance testing on it. If\n> > > you think it is okay to have the six member bloated HeapScanDescData\n> > > in master until we push the read stream code, I am okay with removing\n> > > the BitmapTableScanDesc and BitmapHeapScanDesc.\n> > >\n> >\n> > I admit I don't have a very good idea what the ideal / desired state\n> > look like. My comment is motivated solely by the feeling that it seems\n> > strange to have one struct serving most scan types, and then a special\n> > struct for one particular scan type ...\n>\n> I see what you are saying. We could make BitmapTableScanDesc inherit\n> from TableScanDescData which would be similar to what we do with other\n> things like the executor scan nodes themselves. We would waste space\n> and LOC with initializing the unneeded members, but it might seem less\n> weird.\n>\n> Whether we want the specialized scan descriptors at all is probably\n> the bigger question, though.\n>\n> The top level BitmapTableScanDesc is motivated by wanting fewer bitmap\n> table scan specific members passed to scan_begin(). 
And the\n> BitmapHeapScanDesc is motivated by this plus wanting to avoid bloating\n> the HeapScanDescData.\n>\n> If you look at at HEAD~1 (with my patches applied) and think you would\n> be okay with\n> 1) the contents of the BitmapHeapScanDesc being in the HeapScanDescData and\n> 2) the extra bitmap table scan-specific parameters in scan_begin_bm()\n> being passed to scan_begin()\n>\n> then I will remove the specialized scan descriptors.\n>\n> The final state (with the read stream) will still have three bitmap\n> heap scan-specific members in the HeapScanDescData.\n>\n> Would it help if I do a version like this so you can see what it is like?\n\nI revisited this issue (how to keep from bloating the Heap and Table\nscan descriptors and adding many parameters to the scan_begin() table\nAM callback) and am trying to find a less noisy way to address it than\nmy previous proposal.\n\nI've attached a prototype of what I think might work applied on top of\nmaster instead of on top of my patchset.\n\nFor the top-level TableScanDescData, I suggest we use a union with the\nmembers of each scan type in it in anonymous structs (see 0001). This\nwill avoid too much bloat because there are other scan types (like TID\nRange scans) whose members we can move into the union. It isn't great,\nbut it avoids a new top-level scan descriptor and changes to the\ngeneric scanstate node.\n\nWe will still have to pass the parameters needed to set up the\nparallel bitmap iterators to scan_begin() in the intermediate patches,\nbut if we think that we can actually get the streaming read version of\nbitmapheapscan in in the same release, then I think it should be okay\nbecause the final version of these table AM callbacks do not need any\nbitmap-specific members.\n\nTo address the bloat in the HeapScanDescData, I've kept the\nBitmapHeapScanDesc but made it inherit from the HeapScanDescData with\na \"suffix\" of bitmap scan-specific members which were moved out of the\nHeapScanDescData and into the BitmapHeapScanDesc (in 0002).\n\nIt's probably better to temporarily increase the parameters to\nscan_begin() than to introduce new table AM callbacks and then rip\nthem out in a later commit.\n\n- Melanie",
"msg_date": "Fri, 27 Sep 2024 16:13:35 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitmapHeapScan streaming read user and prelim refactoring"
}
] |
[
{
"msg_contents": "A recent commit added the following message:\n\n> \"wal_level\" must be >= logical.\n\nThe use of the term \"logical\" here is a bit confusing, as it's unclear\nwhether it's meant to be a natural language word or a token. (I\nbelieve it to be a token.)\n\nOn the contrary, we already have the following message:\n\n> wal_level must be set to \"replica\" or \"logical\" at server start.\n\nThis has the conflicting policy about quotation of variable names and\nenum values.\n\nI suggest making the quoting policy consistent.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 14 Feb 2024 16:26:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "About a recently-added message"
},
{
"msg_contents": "At Wed, 14 Feb 2024 16:26:52 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> > \"wal_level\" must be >= logical.\n..\n> > wal_level must be set to \"replica\" or \"logical\" at server start.\n..\n> I suggest making the quoting policy consistent.\n\nJust after this, I found another inconsistency regarding quotation.\n\n> 'dbname' must be specified in \"%s\".\n\nThe use of single quotes doesn't seem to comply with our standard.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 14 Feb 2024 16:34:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 1:04 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Wed, 14 Feb 2024 16:26:52 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in\n> > > \"wal_level\" must be >= logical.\n> ..\n> > > wal_level must be set to \"replica\" or \"logical\" at server start.\n> ..\n> > I suggest making the quoting policy consistent.\n>\n> Just after this, I found another inconsistency regarding quotation.\n>\n> > 'dbname' must be specified in \"%s\".\n>\n> The use of single quotes doesn't seem to comply with our standard.\n>\n\nThanks for the report. I'll look into it after analyzing the BF\nfailure caused by the same commit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 14 Feb 2024 14:16:29 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 1:04 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> Just after this, I found another inconsistency regarding quotation.\n>\n> > 'dbname' must be specified in \"%s\".\n>\n> The use of single quotes doesn't seem to comply with our standard.\n>\n\nAgreed, I think we have two choices here one is to use dbname without\nany quotes (\"dbname must be specified in \\\"%s\\\".\", ...)) or use double\nquotes (\"\\\"%s\\\" must be specified in \\\"%s\\\".\", \"dbname\" ...)). I see\nmessages like: \"host name must be specified\", so if we want to follow\nthat earlier makes more sense. Any suggestions?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 14 Feb 2024 16:43:32 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 12:57 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> A recent commit added the following message:\n>\n> > \"wal_level\" must be >= logical.\n>\n> The use of the term \"logical\" here is a bit confusing, as it's unclear\n> whether it's meant to be a natural language word or a token. (I\n> believe it to be a token.)\n>\n> On the contrary, we already have the following message:\n>\n> > wal_level must be set to \"replica\" or \"logical\" at server start.\n>\n> This has the conflicting policy about quotation of variable names and\n> enum values.\n>\n> I suggest making the quoting policy consistent.\n>\n\nAs per docs [1] (In messages containing configuration variable names,\ndo not include quotes when the names are visibly not natural English\nwords, such as when they have underscores, are all-uppercase, or have\nmixed case. Otherwise, quotes must be added. Do include quotes in a\nmessage where an arbitrary variable name is to be expanded.), I think\nin this case GUC's name shouldn't have been quoted. I think the patch\ndid this because it's developed parallel to a thread where we were\ndiscussing whether to quote GUC names or not [2]. I think it is better\nnot to do as per docs till we get any further clarification.\n\nNow, I am less clear about whether to quote \"logical\" or not in the\nabove message. Do you have any suggestions?\n\n[1] - https://www.postgresql.org/docs/devel/error-style-guide.html\n[2] - https://www.postgresql.org/message-id/CAHut+Psf3NewXbsFKY88Qn1ON1_dMD6343MuWdMiiM2Ds9a_wA@mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 14 Feb 2024 17:15:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Wed, Feb 14, 2024, at 8:45 AM, Amit Kapila wrote:\n> Now, I am less clear about whether to quote \"logical\" or not in the\n> above message. Do you have any suggestions?\n\nThe possible confusion comes from the fact that the sentence contains \"must be\"\nin the middle of a comparison expression. For pg_createsubscriber, we are using\n\n publisher requires wal_level >= logical\n\nI suggest to use something like\n\n slot synchronization requires wal_level >= logical\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Feb 14, 2024, at 8:45 AM, Amit Kapila wrote:Now, I am less clear about whether to quote \"logical\" or not in theabove message. Do you have any suggestions?The possible confusion comes from the fact that the sentence contains \"must be\"in the middle of a comparison expression. For pg_createsubscriber, we are using publisher requires wal_level >= logicalI suggest to use something like slot synchronization requires wal_level >= logical--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 14 Feb 2024 11:20:46 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 7:51 PM Euler Taveira <[email protected]> wrote:\n>\n> On Wed, Feb 14, 2024, at 8:45 AM, Amit Kapila wrote:\n>\n> Now, I am less clear about whether to quote \"logical\" or not in the\n> above message. Do you have any suggestions?\n>\n>\n> The possible confusion comes from the fact that the sentence contains \"must be\"\n> in the middle of a comparison expression. For pg_createsubscriber, we are using\n>\n> publisher requires wal_level >= logical\n>\n> I suggest to use something like\n>\n> slot synchronization requires wal_level >= logical\n>\n\nThis sounds like a better idea but shouldn't we directly use this in\n'errmsg' instead of a separate 'errhint'? Also, if change this, then\nwe should make a similar change for other messages in the same\nfunction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 15 Feb 2024 08:26:18 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Thu, Feb 15, 2024 at 8:26 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Feb 14, 2024 at 7:51 PM Euler Taveira <[email protected]> wrote:\n> >\n> > On Wed, Feb 14, 2024, at 8:45 AM, Amit Kapila wrote:\n> >\n> > Now, I am less clear about whether to quote \"logical\" or not in the\n> > above message. Do you have any suggestions?\n> >\n> >\n> > The possible confusion comes from the fact that the sentence contains \"must be\"\n> > in the middle of a comparison expression. For pg_createsubscriber, we are using\n> >\n> > publisher requires wal_level >= logical\n> >\n> > I suggest to use something like\n> >\n> > slot synchronization requires wal_level >= logical\n> >\n>\n> This sounds like a better idea but shouldn't we directly use this in\n> 'errmsg' instead of a separate 'errhint'? Also, if change this, then\n> we should make a similar change for other messages in the same\n> function.\n\n+1 on changing the msg(s) suggested way. Please find the patch for the\nsame. It also removes double quotes around the variable names\n\nthanks\nShveta",
"msg_date": "Thu, 15 Feb 2024 09:22:23 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "At Thu, 15 Feb 2024 09:22:23 +0530, shveta malik <[email protected]> wrote in \r\n> On Thu, Feb 15, 2024 at 8:26 AM Amit Kapila <[email protected]> wrote:\r\n> >\r\n> > On Wed, Feb 14, 2024 at 7:51 PM Euler Taveira <[email protected]> wrote:\r\n> > >\r\n> > > On Wed, Feb 14, 2024, at 8:45 AM, Amit Kapila wrote:\r\n> > >\r\n> > > Now, I am less clear about whether to quote \"logical\" or not in the\r\n> > > above message. Do you have any suggestions?\r\n> > >\r\n> > >\r\n> > > The possible confusion comes from the fact that the sentence contains \"must be\"\r\n> > > in the middle of a comparison expression. For pg_createsubscriber, we are using\r\n> > >\r\n> > > publisher requires wal_level >= logical\r\n> > >\r\n> > > I suggest to use something like\r\n> > >\r\n> > > slot synchronization requires wal_level >= logical\r\n> > >\r\n> >\r\n> > This sounds like a better idea but shouldn't we directly use this in\r\n> > 'errmsg' instead of a separate 'errhint'? Also, if change this, then\r\n> > we should make a similar change for other messages in the same\r\n> > function.\r\n> \r\n> +1 on changing the msg(s) suggested way. Please find the patch for the\r\n> same. It also removes double quotes around the variable names\r\n\r\nThanks for the discussion.\r\n\r\nWith a translator hat on, I would be happy if I could determine\r\nwhether a word requires translation with minimal background\r\ninformation. In this case, a translator needs to know which values\r\nwal_level can take. It's relatively easy in this case, but I'm not\r\nsure if this is always the case. Therefore, I would be slightly\r\nhappier if \"logical\" were double-quoted.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Thu, 15 Feb 2024 15:19:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Thu, Feb 15, 2024 at 11:49 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Thu, 15 Feb 2024 09:22:23 +0530, shveta malik <[email protected]> wrote in\n> >\n> > +1 on changing the msg(s) suggested way. Please find the patch for the\n> > same. It also removes double quotes around the variable names\n>\n> Thanks for the discussion.\n>\n> With a translator hat on, I would be happy if I could determine\n> whether a word requires translation with minimal background\n> information. In this case, a translator needs to know which values\n> wal_level can take. It's relatively easy in this case, but I'm not\n> sure if this is always the case. Therefore, I would be slightly\n> happier if \"logical\" were double-quoted.\n>\n\nI see that we use \"logical\" in double quotes in various error\nmessages. For example: \"wal_level must be set to \\\"replica\\\" or\n\\\"logical\\\" at server start\". So following that we can use the double\nquotes here as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Feb 2024 11:10:38 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 11:10 AM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Feb 15, 2024 at 11:49 AM Kyotaro Horiguchi\n> <[email protected]> wrote:\n> >\n> > At Thu, 15 Feb 2024 09:22:23 +0530, shveta malik <[email protected]> wrote in\n> > >\n> > > +1 on changing the msg(s) suggested way. Please find the patch for the\n> > > same. It also removes double quotes around the variable names\n> >\n> > Thanks for the discussion.\n> >\n> > With a translator hat on, I would be happy if I could determine\n> > whether a word requires translation with minimal background\n> > information. In this case, a translator needs to know which values\n> > wal_level can take. It's relatively easy in this case, but I'm not\n> > sure if this is always the case. Therefore, I would be slightly\n> > happier if \"logical\" were double-quoted.\n> >\n>\n> I see that we use \"logical\" in double quotes in various error\n> messages. For example: \"wal_level must be set to \\\"replica\\\" or\n> \\\"logical\\\" at server start\". So following that we can use the double\n> quotes here as well.\n\nOkay, now since we will have double quotes for logical. So do you\nprefer the existing way of giving error msg or the changed one.\n\nExisting:\nerrmsg(\"bad configuration for slot synchronization\"),\nerrhint(\"wal_level must be >= logical.\"));\n\nerrmsg(\"bad configuration for slot synchronization\"),\nerrhint(\"%s must be defined.\", \"primary_conninfo\"));\n\nThe changed one:\nerrmsg(\"slot synchronization requires wal_level >= logical\"));\n\nerrmsg(\"slot synchronization requires %s to be defined\",\n \"primary_conninfo\"));\n\nthanks\nShveta\n\n\n",
"msg_date": "Mon, 19 Feb 2024 11:26:44 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 11:26 AM shveta malik <[email protected]> wrote:\n>\n> On Mon, Feb 19, 2024 at 11:10 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Feb 15, 2024 at 11:49 AM Kyotaro Horiguchi\n> > <[email protected]> wrote:\n> > >\n> > > At Thu, 15 Feb 2024 09:22:23 +0530, shveta malik <[email protected]> wrote in\n> > > >\n> > > > +1 on changing the msg(s) suggested way. Please find the patch for the\n> > > > same. It also removes double quotes around the variable names\n> > >\n> > > Thanks for the discussion.\n> > >\n> > > With a translator hat on, I would be happy if I could determine\n> > > whether a word requires translation with minimal background\n> > > information. In this case, a translator needs to know which values\n> > > wal_level can take. It's relatively easy in this case, but I'm not\n> > > sure if this is always the case. Therefore, I would be slightly\n> > > happier if \"logical\" were double-quoted.\n> > >\n> >\n> > I see that we use \"logical\" in double quotes in various error\n> > messages. For example: \"wal_level must be set to \\\"replica\\\" or\n> > \\\"logical\\\" at server start\". So following that we can use the double\n> > quotes here as well.\n>\n> Okay, now since we will have double quotes for logical. So do you\n> prefer the existing way of giving error msg or the changed one.\n>\n> Existing:\n> errmsg(\"bad configuration for slot synchronization\"),\n> errhint(\"wal_level must be >= logical.\"));\n>\n> errmsg(\"bad configuration for slot synchronization\"),\n> errhint(\"%s must be defined.\", \"primary_conninfo\"));\n>\n> The changed one:\n> errmsg(\"slot synchronization requires wal_level >= logical\"));\n>\n> errmsg(\"slot synchronization requires %s to be defined\",\n> \"primary_conninfo\"));\n>\n\nI would prefer the changed ones as those clearly explain the problem\nwithout additional information.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Feb 2024 14:13:41 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 2:13 PM Amit Kapila <[email protected]> wrote:\n>\n> I would prefer the changed ones as those clearly explain the problem\n> without additional information.\n\nokay, attached v2 patch with changed error msgs and double quotes\naround logical.\n\nthanks\nShveta",
"msg_date": "Tue, 20 Feb 2024 15:20:48 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 3:21 PM shveta malik <[email protected]> wrote:\n>\n> okay, attached v2 patch with changed error msgs and double quotes\n> around logical.\n>\n\nHoriguchi-San, does this address all your concerns related to\ntranslation with these new messages?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 21 Feb 2024 14:57:42 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "At Wed, 21 Feb 2024 14:57:42 +0530, Amit Kapila <[email protected]> wrote in \r\n> On Tue, Feb 20, 2024 at 3:21 PM shveta malik <[email protected]> wrote:\r\n> >\r\n> > okay, attached v2 patch with changed error msgs and double quotes\r\n> > around logical.\r\n> >\r\n> \r\n> Horiguchi-San, does this address all your concerns related to\r\n> translation with these new messages?\r\n\r\nYes, I'm happy with all of the changes. The proposed patch appears to\r\ncover all instances related to slotsync.c, and it looks fine to\r\nme. Thanks!\r\n\r\nI found that logica.c is also using the policy that I complained\r\nabout, but it is a separate issue.\r\n\r\n./logical.c\u0000122:\terrmsg(\"logical decoding requires wal_level >= logical\")));\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Thu, 22 Feb 2024 09:36:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "At Thu, 22 Feb 2024 09:36:43 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> Yes, I'm happy with all of the changes. The proposed patch appears to\n> cover all instances related to slotsync.c, and it looks fine to\n> me. Thanks!\n\nI'd like to raise another potential issue outside the patch. The patch\nneeded to change only one test item even though it changed nine\nmessages. This means eigh out of nine messages that the patch changed\nare not covered by our test. I doubt all of them are worth additional\ntest items; however, I think we want to increase coverage.\n\nDo you think some additional tests for the rest of the messages are\nworth the trouble?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 22 Feb 2024 09:46:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 6:16 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Thu, 22 Feb 2024 09:36:43 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in\n> > Yes, I'm happy with all of the changes. The proposed patch appears to\n> > cover all instances related to slotsync.c, and it looks fine to\n> > me. Thanks!\n>\n> I'd like to raise another potential issue outside the patch. The patch\n> needed to change only one test item even though it changed nine\n> messages. This means eigh out of nine messages that the patch changed\n> are not covered by our test. I doubt all of them are worth additional\n> test items; however, I think we want to increase coverage.\n>\n> Do you think some additional tests for the rest of the messages are\n> worth the trouble?\n>\n\nWe have discussed this during development and didn't find it worth\nadding tests for all misconfigured parameters. However, in the next\npatch where we are planning to add a slot sync worker that will\nautomatically sync slots, we are adding a test for one more parameter.\nI am not against adding tests for all the parameters but it didn't\nappeal to add more test cycles for this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 22 Feb 2024 10:51:07 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "At Thu, 22 Feb 2024 10:51:07 +0530, Amit Kapila <[email protected]> wrote in \n> > Do you think some additional tests for the rest of the messages are\n> > worth the trouble?\n> >\n> \n> We have discussed this during development and didn't find it worth\n> adding tests for all misconfigured parameters. However, in the next\n> patch where we are planning to add a slot sync worker that will\n> automatically sync slots, we are adding a test for one more parameter.\n> I am not against adding tests for all the parameters but it didn't\n> appeal to add more test cycles for this.\n\nThanks for the explanation. I'm fine with that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 22 Feb 2024 14:40:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 11:10 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Thu, 22 Feb 2024 10:51:07 +0530, Amit Kapila <[email protected]> wrote in\n> > > Do you think some additional tests for the rest of the messages are\n> > > worth the trouble?\n> > >\n> >\n> > We have discussed this during development and didn't find it worth\n> > adding tests for all misconfigured parameters. However, in the next\n> > patch where we are planning to add a slot sync worker that will\n> > automatically sync slots, we are adding a test for one more parameter.\n> > I am not against adding tests for all the parameters but it didn't\n> > appeal to add more test cycles for this.\n>\n> Thanks for the explanation. I'm fine with that.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 22 Feb 2024 16:22:55 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 11:36 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Wed, 21 Feb 2024 14:57:42 +0530, Amit Kapila <[email protected]> wrote in\n> > On Tue, Feb 20, 2024 at 3:21 PM shveta malik <[email protected]> wrote:\n> > >\n> > > okay, attached v2 patch with changed error msgs and double quotes\n> > > around logical.\n> > >\n> >\n> > Horiguchi-San, does this address all your concerns related to\n> > translation with these new messages?\n>\n> Yes, I'm happy with all of the changes. The proposed patch appears to\n> cover all instances related to slotsync.c, and it looks fine to\n> me. Thanks!\n>\n> I found that logica.c is also using the policy that I complained\n> about, but it is a separate issue.\n>\n> ./logical.c 122: errmsg(\"logical decoding requires wal_level >= logical\")));\n>\n\nHmm. I have a currently stalled patch-set to simplify the quoting of\nall the GUC names by using one rule. The consensus is to *always*\nquote them. See [1]. And those patches already are addressing that\nlogical.c code mentioned above.\n\nIMO it would be good if we could try to get that patch set approved to\nfix everything in one go, instead of ad-hoc hunting for and fixing\ncases one at a time.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPsf3NewXbsFKY88Qn1ON1_dMD6343MuWdMiiM2Ds9a_wA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 23 Feb 2024 15:46:38 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added message"
}
] |
[
{
"msg_contents": "Hi,\n\nI recently observed an assertion failure twice in t/001_rep_changes.pl\non HEAD with the backtrace [1] on my dev EC2 c5.4xlarge instance [2].\nUnfortunately I'm not observing it again. I haven't got a chance to\ndive deep into it. However, I'm posting it here just for the records,\nand in case something can be derived out of the backtrace.\n\n[1] t/001_rep_changes.pl\n\n2024-01-31 12:24:38.474 UTC [840166]\npg_16435_sync_16393_7330237333761601891 STATEMENT:\nDROP_REPLICATION_SLOT pg_16435_sync_16393_7330237333761601891 WAIT\nTRAP: failed Assert(\"list->head != INVALID_PGPROCNO\"), File:\n\"../../../../src/include/storage/proclist.h\", Line: 101, PID: 840166\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(ExceptionalCondition+0xbb)[0x55c8edf6b8f9]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x6637de)[0x55c8edd517de]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(ConditionVariablePrepareToSleep+0x85)[0x55c8edd51b91]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(ReplicationSlotAcquire+0x142)[0x55c8edcead6b]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(ReplicationSlotDrop+0x51)[0x55c8edceb47f]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x60da71)[0x55c8edcfba71]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(exec_replication_command+0x47e)[0x55c8edcfc96a]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(PostgresMain+0x7df)[0x55c8edd7d644]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x5ab50c)[0x55c8edc9950c]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x5aab21)[0x55c8edc98b21]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x5a70de)[0x55c8edc950de]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(PostmasterMain+0x1534)[0x55c8edc949db]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x459c47)[0x55c8edb47c47]\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f19fe629d90]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f19fe629e40]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(_start+0x25)[0x55c8ed7c4565]\n2024-01-31 12:24:38.476 UTC [840168]\npg_16435_sync_16390_7330237333761601891 LOG: statement: SELECT\na.attnum, a.attname, a.atttypid, a.attnum =\nANY(i.indkey) FROM pg_catalog.pg_attribute a LEFT JOIN\npg_catalog.pg_index i ON (i.indexrelid =\npg_get_replica_identity_index(16391)) WHERE a.attnum >\n0::pg_catalog.int2 AND NOT a.attisdropped AND a.attgenerated = ''\nAND a.attrelid = 16391 ORDER BY a.attnum\n\n[2] Linux ip-000-00-0-000 6.2.0-1018-aws #18~22.04.1-Ubuntu SMP Wed\nJan 10 22:54:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Feb 2024 13:19:00 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "A failure in t/001_rep_changes.pl"
},
{
"msg_contents": "On Wed, 14 Feb 2024 at 13:19, Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I recently observed an assertion failure twice in t/001_rep_changes.pl\n> on HEAD with the backtrace [1] on my dev EC2 c5.4xlarge instance [2].\n> Unfortunately I'm not observing it again. I haven't got a chance to\n> dive deep into it. However, I'm posting it here just for the records,\n> and in case something can be derived out of the backtrace.\n>\n> [1] t/001_rep_changes.pl\n>\n> 2024-01-31 12:24:38.474 UTC [840166]\n> pg_16435_sync_16393_7330237333761601891 STATEMENT:\n> DROP_REPLICATION_SLOT pg_16435_sync_16393_7330237333761601891 WAIT\n> TRAP: failed Assert(\"list->head != INVALID_PGPROCNO\"), File:\n> \"../../../../src/include/storage/proclist.h\", Line: 101, PID: 840166\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(ExceptionalCondition+0xbb)[0x55c8edf6b8f9]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(+0x6637de)[0x55c8edd517de]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(ConditionVariablePrepareToSleep+0x85)[0x55c8edd51b91]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(ReplicationSlotAcquire+0x142)[0x55c8edcead6b]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(ReplicationSlotDrop+0x51)[0x55c8edceb47f]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(+0x60da71)[0x55c8edcfba71]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(exec_replication_command+0x47e)[0x55c8edcfc96a]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(PostgresMain+0x7df)[0x55c8edd7d644]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(+0x5ab50c)[0x55c8edc9950c]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(+0x5aab21)[0x55c8edc98b21]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(+0x5a70de)[0x55c8edc950de]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(PostmasterMain+0x1534)[0x55c8edc949db]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(+0x459c47)[0x55c8edb47c47]\n> /lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f19fe629d90]\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f19fe629e40]\n> postgres: publisher: walsender ubuntu postgres [local]\n> DROP_REPLICATION_SLOT(_start+0x25)[0x55c8ed7c4565]\n> 2024-01-31 12:24:38.476 UTC [840168]\n> pg_16435_sync_16390_7330237333761601891 LOG: statement: SELECT\n> a.attnum, a.attname, a.atttypid, a.attnum =\n> ANY(i.indkey) FROM pg_catalog.pg_attribute a LEFT JOIN\n> pg_catalog.pg_index i ON (i.indexrelid =\n> pg_get_replica_identity_index(16391)) WHERE a.attnum >\n> 0::pg_catalog.int2 AND NOT a.attisdropped AND a.attgenerated = ''\n> AND a.attrelid = 16391 ORDER BY a.attnum\n>\n> [2] Linux ip-000-00-0-000 6.2.0-1018-aws #18~22.04.1-Ubuntu SMP Wed\n> Jan 10 22:54:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux\n\nBy any chance do you have the log files when this failure occurred, if\nso please share it.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 23 Feb 2024 15:50:21 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A failure in t/001_rep_changes.pl"
},
{
"msg_contents": "At Fri, 23 Feb 2024 15:50:21 +0530, vignesh C <[email protected]> wrote in \n> By any chance do you have the log files when this failure occurred, if\n> so please share it.\n\nIn my understanding, within a single instance, no two proclists can\nsimultaneously share the same waitlink member of PGPROC.\n\nOn the other hand, a publisher uses two condition variables for slots\nand WAL waiting, which work on the same PGPROC member cvWaitLink. I\nsuspect this issue arises from the configuration. However, although it\nis unlikly related to this specific issue, a similar problem can arise\nin instances that function both as logical publisher and physical\nprimary.\n\nRegardless of this issue, I think we should provide separate waitlink\nmembers for condition variables that can possibly be used\nsimultaneously.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 27 Feb 2024 15:07:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A failure in t/001_rep_changes.pl"
}
] |
[
{
"msg_contents": "Hi,\n\nRecently there have been few upgrade tap test failures in buildfarm\nlike in [1] & [2]. Analysing these failures requires the log files\nthat are getting generated from src/bin/pg_upgrade at the following\nlocations:\ntmp_check/*/pgdata/pg_upgrade_output.d/*/*.txt - e.g.\ntmp_check/t_004_subscription_new_sub1_data/pgdata/pg_upgrade_output.d/20240214T052229.045/subs_invalid.txt\ntmp_check/*/pgdata/pg_upgrade_output.d/*/*/*.log - e.g.\ntmp_check/t_004_subscription_new_sub1_data/pgdata/pg_upgrade_output.d/20240214T052229.045/log/pg_upgrade_server.log\n\nFirst regex is the testname_clusterinstance_data, second regex is the\ntimestamp used for pg_upgrade, third regex is for the text files\ngenerated by pg_upgrade and fourth regex is for the log files\ngenerated by pg_upgrade.\n\nCan we include these log files also in the buildfarm?\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-02-10%2007%3A03%3A10\n[2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-12-07%2003%3A56%3A20\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 14 Feb 2024 15:51:08 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can we include capturing logs of pgdata/pg_upgrade_output.d/*/log in\n buildfarm"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 03:51:08PM +0530, vignesh C wrote:\n> First regex is the testname_clusterinstance_data, second regex is the\n> timestamp used for pg_upgrade, third regex is for the text files\n> generated by pg_upgrade and fourth regex is for the log files\n> generated by pg_upgrade.\n> \n> Can we include these log files also in the buildfarm?\n> \n> [1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-02-10%2007%3A03%3A10\n> [2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-12-07%2003%3A56%3A20\n\nIndeed, these lack some patterns. Why not sending a pull request\naround [1] to get more patterns covered?\n[1]: https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/Modules/TestUpgrade.pm\n--\nMichael",
"msg_date": "Thu, 15 Feb 2024 10:54:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can we include capturing logs of\n pgdata/pg_upgrade_output.d/*/log in buildfarm"
},
{
"msg_contents": "On Thu, 15 Feb 2024 at 07:24, Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Feb 14, 2024 at 03:51:08PM +0530, vignesh C wrote:\n> > First regex is the testname_clusterinstance_data, second regex is the\n> > timestamp used for pg_upgrade, third regex is for the text files\n> > generated by pg_upgrade and fourth regex is for the log files\n> > generated by pg_upgrade.\n> >\n> > Can we include these log files also in the buildfarm?\n> >\n> > [1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-02-10%2007%3A03%3A10\n> > [2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-12-07%2003%3A56%3A20\n>\n> Indeed, these lack some patterns. Why not sending a pull request\n> around [1] to get more patterns covered?\n\nI have added a few more patterns to include the pg_upgrade generated\nfiles. The attached patch has the changes for the same.\nAdding Andrew also to get his thoughts on this.\n\nRegards,\nVignesh",
"msg_date": "Thu, 15 Feb 2024 08:36:39 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can we include capturing logs of pgdata/pg_upgrade_output.d/*/log\n in buildfarm"
},
{
"msg_contents": "On Thu, 15 Feb 2024 at 08:36, vignesh C <[email protected]> wrote:\n>\n> On Thu, 15 Feb 2024 at 07:24, Michael Paquier <[email protected]> wrote:\n> >\n> > On Wed, Feb 14, 2024 at 03:51:08PM +0530, vignesh C wrote:\n> > > First regex is the testname_clusterinstance_data, second regex is the\n> > > timestamp used for pg_upgrade, third regex is for the text files\n> > > generated by pg_upgrade and fourth regex is for the log files\n> > > generated by pg_upgrade.\n> > >\n> > > Can we include these log files also in the buildfarm?\n> > >\n> > > [1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-02-10%2007%3A03%3A10\n> > > [2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-12-07%2003%3A56%3A20\n> >\n> > Indeed, these lack some patterns. Why not sending a pull request\n> > around [1] to get more patterns covered?\n>\n> I have added a few more patterns to include the pg_upgrade generated\n> files. The attached patch has the changes for the same.\n> Adding Andrew also to get his thoughts on this.\n\nI have added the following commitfest entry for this:\nhttps://commitfest.postgresql.org/47/4850/\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 25 Feb 2024 21:48:20 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can we include capturing logs of pgdata/pg_upgrade_output.d/*/log\n in buildfarm"
},
{
"msg_contents": "\nOn 2024-02-25 Su 11:18, vignesh C wrote:\n> On Thu, 15 Feb 2024 at 08:36, vignesh C <[email protected]> wrote:\n>> On Thu, 15 Feb 2024 at 07:24, Michael Paquier <[email protected]> wrote:\n>>> On Wed, Feb 14, 2024 at 03:51:08PM +0530, vignesh C wrote:\n>>>> First regex is the testname_clusterinstance_data, second regex is the\n>>>> timestamp used for pg_upgrade, third regex is for the text files\n>>>> generated by pg_upgrade and fourth regex is for the log files\n>>>> generated by pg_upgrade.\n>>>>\n>>>> Can we include these log files also in the buildfarm?\n>>>>\n>>>> [1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-02-10%2007%3A03%3A10\n>>>> [2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-12-07%2003%3A56%3A20\n>>> Indeed, these lack some patterns. Why not sending a pull request\n>>> around [1] to get more patterns covered?\n>> I have added a few more patterns to include the pg_upgrade generated\n>> files. The attached patch has the changes for the same.\n>> Adding Andrew also to get his thoughts on this.\n> I have added the following commitfest entry for this:\n> https://commitfest.postgresql.org/47/4850/\n>\n\nBuildfarm code patches do not belong in the Commitfest, I have marked \nthe item as rejected. You can send me patches directly or add a PR to \nthe buildfarm's github repo.\n\nIn this case the issue on drongo was a typo, the fix for which I had \nforgotten to propagate back in December. Note that the buildfarm's \nTestUpgrade.pm module is only used for branches < 15. For branches >= 15 \nwe run the standard TAP test and this module does nothing.\n\nMore generally, the collection of logs etc. for pg_upgrade will improve \nwith the next release, which will be soon after I return from a vacation \nin about 2 weeks - experience shows that making releases just before a \nvacation is not a good idea :-)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 26 Feb 2024 00:27:11 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can we include capturing logs of pgdata/pg_upgrade_output.d/*/log\n in buildfarm"
},
{
"msg_contents": "On Mon, 26 Feb 2024 at 10:57, Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 2024-02-25 Su 11:18, vignesh C wrote:\n> > On Thu, 15 Feb 2024 at 08:36, vignesh C <[email protected]> wrote:\n> >> On Thu, 15 Feb 2024 at 07:24, Michael Paquier <[email protected]> wrote:\n> >>> On Wed, Feb 14, 2024 at 03:51:08PM +0530, vignesh C wrote:\n> >>>> First regex is the testname_clusterinstance_data, second regex is the\n> >>>> timestamp used for pg_upgrade, third regex is for the text files\n> >>>> generated by pg_upgrade and fourth regex is for the log files\n> >>>> generated by pg_upgrade.\n> >>>>\n> >>>> Can we include these log files also in the buildfarm?\n> >>>>\n> >>>> [1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-02-10%2007%3A03%3A10\n> >>>> [2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-12-07%2003%3A56%3A20\n> >>> Indeed, these lack some patterns. Why not sending a pull request\n> >>> around [1] to get more patterns covered?\n> >> I have added a few more patterns to include the pg_upgrade generated\n> >> files. The attached patch has the changes for the same.\n> >> Adding Andrew also to get his thoughts on this.\n> > I have added the following commitfest entry for this:\n> > https://commitfest.postgresql.org/47/4850/\n> >\n>\n> Buildfarm code patches do not belong in the Commitfest, I have marked\n> the item as rejected. You can send me patches directly or add a PR to\n> the buildfarm's github repo.\n\nOk, I will send over the patch directly for the required things.\n\n>\n> In this case the issue on drongo was a typo, the fix for which I had\n> forgotten to propagate back in December. Note that the buildfarm's\n> TestUpgrade.pm module is only used for branches < 15. For branches >= 15\n> we run the standard TAP test and this module does nothing.\n>\n> More generally, the collection of logs etc. for pg_upgrade will improve\n> with the next release, which will be soon after I return from a vacation\n> in about 2 weeks - experience shows that making releases just before a\n> vacation is not a good idea :-)\n\nThanks, that will be helpful.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 27 Feb 2024 19:56:55 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can we include capturing logs of pgdata/pg_upgrade_output.d/*/log\n in buildfarm"
}
] |
[
{
"msg_contents": "This brings our .gitattributes and .editorconfig files more in line. I\nhad the problem that \"git add\" would complain often about trailing\nwhitespaces when I was changing sgml files specifically.",
"msg_date": "Wed, 14 Feb 2024 17:35:13 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 11:35 AM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> This brings our .gitattributes and .editorconfig files more in line. I\n> had the problem that \"git add\" would complain often about trailing\n> whitespaces when I was changing sgml files specifically.\n\n+1 from me. But when do we want it to be false? That is, why not\ndeclare it true for all file types?\n\n- Melanie\n\n\n",
"msg_date": "Wed, 14 Feb 2024 17:06:35 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "> On 14 Feb 2024, at 23:06, Melanie Plageman <[email protected]> wrote:\n> \n> On Wed, Feb 14, 2024 at 11:35 AM Jelte Fennema-Nio <[email protected]> wrote:\n>> \n>> This brings our .gitattributes and .editorconfig files more in line. I\n>> had the problem that \"git add\" would complain often about trailing\n>> whitespaces when I was changing sgml files specifically.\n> \n> +1 from me. But when do we want it to be false? That is, why not\n> declare it true for all file types?\n\nRegression test .out files commonly have spaces at the end of the line. (Not\nto mention the ECPG .c files but they probably really shouldn't have.)\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 14 Feb 2024 23:19:55 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On Wed, 14 Feb 2024 at 23:19, Daniel Gustafsson <[email protected]> wrote:\n> > +1 from me. But when do we want it to be false? That is, why not\n> > declare it true for all file types?\n>\n> Regression test .out files commonly have spaces at the end of the line. (Not\n> to mention the ECPG .c files but they probably really shouldn't have.)\n\nAttached is v2, which now makes the rules between gitattributes and\neditorconfig completely identical. As well as improving two minor\nthings about .gitattributes before doing that.",
"msg_date": "Thu, 15 Feb 2024 10:26:34 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On 15.02.24 10:26, Jelte Fennema-Nio wrote:\n> On Wed, 14 Feb 2024 at 23:19, Daniel Gustafsson <[email protected]> wrote:\n>>> +1 from me. But when do we want it to be false? That is, why not\n>>> declare it true for all file types?\n>>\n>> Regression test .out files commonly have spaces at the end of the line. (Not\n>> to mention the ECPG .c files but they probably really shouldn't have.)\n> \n> Attached is v2, which now makes the rules between gitattributes and\n> editorconfig completely identical. As well as improving two minor\n> things about .gitattributes before doing that.\n\nIs there a command-line tool to verify the syntax of .editorconfig and \ncheck compliance of existing files?\n\nI'm worried that expanding .editorconfig with detailed per-file rules \nwill lead to a lot of mistakes and blind editing, if we don't have \nverification tooling.\n\n\n\n",
"msg_date": "Thu, 15 Feb 2024 16:57:05 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On Thu, 15 Feb 2024 at 16:57, Peter Eisentraut <[email protected]> wrote:\n> Is there a command-line tool to verify the syntax of .editorconfig and\n> check compliance of existing files?\n>\n> I'm worried that expanding .editorconfig with detailed per-file rules\n> will lead to a lot of mistakes and blind editing, if we don't have\n> verification tooling.\n\nI tried this one just now:\nhttps://github.com/editorconfig-checker/editorconfig-checker.javascript\n\nI fixed all the issues by updating my patchset to use \"unset\" for\ninsert_final_newline instead of \"false\".\n\nAll other files were already clean, which makes sense because the new\neditorconfig rules are exactly the same as gitattributes (which I'm\nguessing we are checking in CI/buildfarm). So I don't think it makes\nsense to introduce another tool to check the same thing again.",
"msg_date": "Thu, 15 Feb 2024 18:47:45 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "v3-0001-Remove-non-existing-file-from-.gitattributes.patch\n\nI have committed that one.\n\nv3-0002-Require-final-newline-in-.po-files.patch\n\nThe .po files are imported from elsewhere, so I'm not sure this is going \nto have the desired effect. Perhaps it's worth cleaning up, but it \nwould require more steps.\n\nv3-0003-Bring-editorconfig-in-line-with-gitattributes.patch\n\nI question whether we need to add rules to .editorconfig about files \nthat are generated or imported from elsewhere, since those are not meant \nto be edited.\n\n\n\n",
"msg_date": "Fri, 16 Feb 2024 11:44:56 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On Fri, 16 Feb 2024 at 11:45, Peter Eisentraut <[email protected]> wrote:\n> I have committed that one.\n\nThanks :)\n\n> v3-0002-Require-final-newline-in-.po-files.patch\n>\n> The .po files are imported from elsewhere, so I'm not sure this is going\n> to have the desired effect. Perhaps it's worth cleaning up, but it\n> would require more steps.\n\nOkay, yeah that would need to be changed at the source then. Removed\nthis change from the newly attached patchset, as well as updating\neditorconfig to have \"insert_final_newline = unset\" for .po files.\n\n> v3-0003-Bring-editorconfig-in-line-with-gitattributes.patch\n>\n> I question whether we need to add rules to .editorconfig about files\n> that are generated or imported from elsewhere, since those are not meant\n> to be edited.\n\nI agree that it's not strictly necessary to have .editorconfig match\n.gitattributes for files that are not meant to be edited by hand. But\nI don't really see a huge downside either, apart from having a few\nextra lines it .editorconfig. And adding these lines does have a few\nbenefits:\n1. It makes it easy to ensure that .editorconfig and .gitattributes stay in sync\n2. If someone opens a file that they are not supposed to edit by hand,\nand then saves it. Then no changes are made. As opposed to suddenly\nmaking some whitespace changes\n\nAttached is a new patchset with the first commit split in three\nseparate commits, which configure:\n1. Files meant to be edited by hand)\n2. Output test files (maybe edited by hand)\n3. Imported/autogenerated files\n\nThe first one is definitely the most useful to me personally.",
"msg_date": "Mon, 19 Feb 2024 16:21:27 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On 19.02.24 16:21, Jelte Fennema-Nio wrote:\n>> v3-0003-Bring-editorconfig-in-line-with-gitattributes.patch\n>>\n>> I question whether we need to add rules to .editorconfig about files\n>> that are generated or imported from elsewhere, since those are not meant\n>> to be edited.\n> I agree that it's not strictly necessary to have .editorconfig match\n> .gitattributes for files that are not meant to be edited by hand. But\n> I don't really see a huge downside either, apart from having a few\n> extra lines it .editorconfig. And adding these lines does have a few\n> benefits:\n> 1. It makes it easy to ensure that .editorconfig and .gitattributes stay in sync\n> 2. If someone opens a file that they are not supposed to edit by hand,\n> and then saves it. Then no changes are made. As opposed to suddenly\n> making some whitespace changes\n> \n> Attached is a new patchset with the first commit split in three\n> separate commits, which configure:\n> 1. Files meant to be edited by hand)\n> 2. Output test files (maybe edited by hand)\n> 3. Imported/autogenerated files\n\n > diff --git a/.gitattributes b/.gitattributes\n > index e9ff4a56bd..7923fc3387 100644\n > --- a/.gitattributes\n > +++ b/.gitattributes\n > @@ -1,3 +1,4 @@\n > +# IMPORTANT: When updating this file, also update .editorconfig to \nmatch.\n\nEverybody has git. Everybody who edits .gitattributes can use git to \ncheck what they did. Not everybody has editorconfig-related tools. I \ntried the editorconfig-checker that you had mentioned (I tried the Go \nversion, not the JavaScript one, because the former is packaged for \nHomebrew and Debian), but it was terrible and unusable. Maybe I'm \nholding it wrong. But I don't want users of a common tool to bear the \nburden of blindly updating files for a much-less-common tool. This is \nhow we got years of blindly updating Windows build files. The result \nwill be to that people will instead avoid updating .gitattributes.\n\nISTM that with a small shell script, .editorconfig could be generated \nfrom .gitattributes?\n\n\n",
"msg_date": "Thu, 4 Apr 2024 15:25:21 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 15:25, Peter Eisentraut <[email protected]> wrote:\n> Everybody has git. Everybody who edits .gitattributes can use git to\n> check what they did.\n\nWhat CLI command do you use to fix/ gitattributes on all existing\nfiles? Afaict there's no command to actually remove the trailing\nwhitespace that git add complains about. If you don't have such a\ncommand, then afaict updating gitattributes is also essentially\nblind-updating.\n\n> But I don't want users of a common tool to bear the\n> burden of blindly updating files for a much-less-common tool.\n\nIt's used quite a bit. Many editors/IDEs have built in support (Vim,\nVisual Studio, IntelliJ), and the ones that don't have an easy to\ninstall plugin. It's not meant to be used as a command line tool, but\nas the name suggests it's meant as editor integration.\n\n> ISTM that with a small shell script, .editorconfig could be generated\n> from .gitattributes?\n\nHonestly, I don't think building such automation is worth the effort.\nChanging the .editorconfig file to be the same is pretty trivial if\nyou look at the existing examples, honestly editorconfig syntax is\nmuch more straightforward to me than the gitattributes one. Also\ngitattributes is only changed very rarely, only 15 times in the 10\nyears since its creation in our repo, which makes any automation\naround it probably not worth the investement.\n\nThis whole comment really seems to only really be about 0004. We\nalready have an outdated editorconfig file in the repo, and it's\nseverely annoying me whenever I'm writing any docs for postgres\nbecause it doesn't trim my trailing spaces. If we wouldn't have this\neditorconfig file in the repo at all, it would actually be better for\nme, because I could maintain my own file locally myself. But now\nbecause there's an incorrect file, I'd have to git stash/pop all the\ntime. Is there any chance the other commits can be at least merged.\n\n\n",
"msg_date": "Thu, 4 Apr 2024 16:58:25 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On 04.04.24 16:58, Jelte Fennema-Nio wrote:\n> On Thu, 4 Apr 2024 at 15:25, Peter Eisentraut<[email protected]> wrote:\n>> Everybody has git. Everybody who edits .gitattributes can use git to\n>> check what they did.\n> What CLI command do you use to fix/ gitattributes on all existing\n> files? Afaict there's no command to actually remove the trailing\n> whitespace that git add complains about. If you don't have such a\n> command, then afaict updating gitattributes is also essentially\n> blind-updating.\n\nI don't have a command to fix files automatically, but I have a command \nto check them:\n\n git diff-tree --check $(git hash-object -t tree /dev/null) HEAD\n\nThat's what I was hoping for for editorconfig-check, but as I said, the \nexperience wasn't good.\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 17:23:28 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 17:23, Peter Eisentraut <[email protected]> wrote:\n> git diff-tree --check $(git hash-object -t tree /dev/null) HEAD\n>\n> That's what I was hoping for for editorconfig-check, but as I said, the\n> experience wasn't good.\n\nAh, I wasn't able to find that git incantation. I definitely think it\nwould be good if there was an official cli tool like that for\neditorconfig, but the Javascript one was the closest I could find. The\nGo one I haven't tried.\n\nOn Thu, 4 Apr 2024 at 17:23, Peter Eisentraut <[email protected]> wrote:\n>\n> On 04.04.24 16:58, Jelte Fennema-Nio wrote:\n> > On Thu, 4 Apr 2024 at 15:25, Peter Eisentraut<[email protected]> wrote:\n> >> Everybody has git. Everybody who edits .gitattributes can use git to\n> >> check what they did.\n> > What CLI command do you use to fix/ gitattributes on all existing\n> > files? Afaict there's no command to actually remove the trailing\n> > whitespace that git add complains about. If you don't have such a\n> > command, then afaict updating gitattributes is also essentially\n> > blind-updating.\n>\n> I don't have a command to fix files automatically, but I have a command\n> to check them:\n>\n> git diff-tree --check $(git hash-object -t tree /dev/null) HEAD\n>\n> That's what I was hoping for for editorconfig-check, but as I said, the\n> experience wasn't good.\n>\n\n\n",
"msg_date": "Thu, 4 Apr 2024 17:28:25 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 16:58, Jelte Fennema-Nio <[email protected]> wrote:\n> > ISTM that with a small shell script, .editorconfig could be generated\n> > from .gitattributes?\n>\n> Honestly, I don't think building such automation is worth the effort.\n\nOkay, I spent the time to add a script to generate the editorconfig\nbased on .gitattributes after all. So attached is a patch that adds\nthat.",
"msg_date": "Tue, 9 Apr 2024 12:42:22 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On Tue, 9 Apr 2024 at 12:42, Jelte Fennema-Nio <[email protected]> wrote:\n> Okay, I spent the time to add a script to generate the editorconfig\n> based on .gitattributes after all. So attached is a patch that adds\n> that.\n\nI would love to see this patch merged (or at least some feedback on\nthe latest version). I think it's pretty trivial and really low risk\nof breaking anyone's workflow, and it would *significantly* improve my\nown workflow.\n\nMatthias mentioned on Discord that our vendored in pg_bsd_indent uses\na tabwidth of 8 and that was showing up ugly in his editor. I updated\nthe patch to include a fix for that too.",
"msg_date": "Wed, 7 Aug 2024 19:09:59 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "\nOn 2024-08-07 We 1:09 PM, Jelte Fennema-Nio wrote:\n> On Tue, 9 Apr 2024 at 12:42, Jelte Fennema-Nio <[email protected]> wrote:\n>> Okay, I spent the time to add a script to generate the editorconfig\n>> based on .gitattributes after all. So attached is a patch that adds\n>> that.\n> I would love to see this patch merged (or at least some feedback on\n> the latest version). I think it's pretty trivial and really low risk\n> of breaking anyone's workflow, and it would *significantly* improve my\n> own workflow.\n>\n> Matthias mentioned on Discord that our vendored in pg_bsd_indent uses\n> a tabwidth of 8 and that was showing up ugly in his editor. I updated\n> the patch to include a fix for that too.\n\n\nYou're not meant to use our pg_bsd_indent on its own without the \nappropriate flags, namely (from src/tools/pgindent/pgindent):\n\n\"-bad -bap -bbb -bc -bl -cli1 -cp33 -cdb -nce -d0 -di12 -nfc1 -i4 -l79 \n-lp -lpl -nip -npro -sac -tpg -ts4\"\n\nIf that's inconvenient you can create a .indent.pro with the settings.\n\n\nAlso, why are you proposing to undet indent-style for .pl and .pm files? \nThat's not in accordance with our perltidy settings \n(src/tools/pgindent/perltidyrc), unless I'm misunderstanding.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 7 Aug 2024 15:09:51 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On Wed, 7 Aug 2024 at 21:09, Andrew Dunstan <[email protected]> wrote:\n> You're not meant to use our pg_bsd_indent on its own without the\n> appropriate flags, namely (from src/tools/pgindent/pgindent):\n\nAh sorry, I wasn't clear in what I meant then. I meant that if you\nlook at the sources of pg_bsd_indent (such as\nsrc/tools/pg_bsd_indent/io.c) then you'll realize that comments are\nalligned using tabs of width 8, not tabs of width 4. And right now\n.editorconfig configures editors to show all .c files with a tab_width\nof 4, because we use that for Postgres source files. The bottom\n.gitattributes line now results in an editorconfig rule that sets a\ntab_width of 8 for just the c and h files in src/tools/pg_bsd_indent\ndirectory.\n\n> Also, why are you proposing to undet indent-style for .pl and .pm files?\n> That's not in accordance with our perltidy settings\n> (src/tools/pgindent/perltidyrc), unless I'm misunderstanding.\n\nAll the way at the bottom of the .editorconfig file those \"ident_style\n= unset\" lines are overridden to be \"tab\" for .pl and .pm files.\nThere's a comment there explaining why it's done that way.\n\n# We want editors to use tabs for indenting Perl files, but we cannot add it\n# such a rule to .gitattributes, because certain lines are still indented with\n# spaces (e.g. SYNOPSIS blocks).\n[*.{pl,pm}]\nindent_style = tab\n\nBut now thinking about this again after your comment, I realize it's\njust as easy and effective to change the script slightly to hardcode\nthe indent_style for \"*.pl\" and \"*.pm\" so that the resulting\n.editorconfig looks less confusing. Attached is a patch that does\nthat.\n\nI also added a .gitattributes rule for .py files, and changed the\ndefault tab_width to unset. Because I realized the resulting\n.editorconfig was using tab_width 8 for python files when editing\nsrc/tools/generate_editorconfig.py",
"msg_date": "Wed, 7 Aug 2024 22:42:08 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On 2024-08-07 We 4:42 PM, Jelte Fennema-Nio wrote:\n> On Wed, 7 Aug 2024 at 21:09, Andrew Dunstan<[email protected]> wrote:\n>> You're not meant to use our pg_bsd_indent on its own without the\n>> appropriate flags, namely (from src/tools/pgindent/pgindent):\n> Ah sorry, I wasn't clear in what I meant then. I meant that if you\n> look at the sources of pg_bsd_indent (such as\n> src/tools/pg_bsd_indent/io.c) then you'll realize that comments are\n> alligned using tabs of width 8, not tabs of width 4. And right now\n> .editorconfig configures editors to show all .c files with a tab_width\n> of 4, because we use that for Postgres source files. The bottom\n> .gitattributes line now results in an editorconfig rule that sets a\n> tab_width of 8 for just the c and h files in src/tools/pg_bsd_indent\n> directory.\n\n\nAh, OK. Yeah, that makes sense.\n\n\n>\n>> Also, why are you proposing to undet indent-style for .pl and .pm files?\n>> That's not in accordance with our perltidy settings\n>> (src/tools/pgindent/perltidyrc), unless I'm misunderstanding.\n> All the way at the bottom of the .editorconfig file those \"ident_style\n> = unset\" lines are overridden to be \"tab\" for .pl and .pm files.\n> There's a comment there explaining why it's done that way.\n>\n> # We want editors to use tabs for indenting Perl files, but we cannot add it\n> # such a rule to .gitattributes, because certain lines are still indented with\n> # spaces (e.g. SYNOPSIS blocks).\n> [*.{pl,pm}]\n> indent_style = tab\n>\n> But now thinking about this again after your comment, I realize it's\n> just as easy and effective to change the script slightly to hardcode\n> the indent_style for \"*.pl\" and \"*.pm\" so that the resulting\n> .editorconfig looks less confusing. Attached is a patch that does\n> that.\n\n\nOK, good, thanks.\n\n\n>\n> I also added a .gitattributes rule for .py files, and changed the\n> default tab_width to unset. Because I realized the resulting\n> .editorconfig was using tab_width 8 for python files when editing\n> src/tools/generate_editorconfig.py\n\n\nsounds good.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-08-07 We 4:42 PM, Jelte\n Fennema-Nio wrote:\n\n\nOn Wed, 7 Aug 2024 at 21:09, Andrew Dunstan <[email protected]> wrote:\n\n\nYou're not meant to use our pg_bsd_indent on its own without the\nappropriate flags, namely (from src/tools/pgindent/pgindent):\n\n\n\nAh sorry, I wasn't clear in what I meant then. I meant that if you\nlook at the sources of pg_bsd_indent (such as\nsrc/tools/pg_bsd_indent/io.c) then you'll realize that comments are\nalligned using tabs of width 8, not tabs of width 4. And right now\n.editorconfig configures editors to show all .c files with a tab_width\nof 4, because we use that for Postgres source files. The bottom\n.gitattributes line now results in an editorconfig rule that sets a\ntab_width of 8 for just the c and h files in src/tools/pg_bsd_indent\ndirectory.\n\n\n\nAh, OK. 
Yeah, that makes sense.\n\n\n\n\n\n\n\nAlso, why are you proposing to undet indent-style for .pl and .pm files?\nThat's not in accordance with our perltidy settings\n(src/tools/pgindent/perltidyrc), unless I'm misunderstanding.\n\n\n\nAll the way at the bottom of the .editorconfig file those \"ident_style\n= unset\" lines are overridden to be \"tab\" for .pl and .pm files.\nThere's a comment there explaining why it's done that way.\n\n# We want editors to use tabs for indenting Perl files, but we cannot add it\n# such a rule to .gitattributes, because certain lines are still indented with\n# spaces (e.g. SYNOPSIS blocks).\n[*.{pl,pm}]\nindent_style = tab\n\nBut now thinking about this again after your comment, I realize it's\njust as easy and effective to change the script slightly to hardcode\nthe indent_style for \"*.pl\" and \"*.pm\" so that the resulting\n.editorconfig looks less confusing. Attached is a patch that does\nthat.\n\n\n\nOK, good, thanks.\n\n\n\n\n\n\nI also added a .gitattributes rule for .py files, and changed the\ndefault tab_width to unset. Because I realized the resulting\n.editorconfig was using tab_width 8 for python files when editing\nsrc/tools/generate_editorconfig.py\n\n\n\nsounds good.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 8 Aug 2024 07:27:51 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On 07.08.24 22:42, Jelte Fennema-Nio wrote:\n> I also added a .gitattributes rule for .py files, and changed the\n> default tab_width to unset. Because I realized the resulting\n> .editorconfig was using tab_width 8 for python files when editing\n> src/tools/generate_editorconfig.py\n\nThis looks kind of weird:\n\n-*.sgml\t\twhitespace=space-before-tab,trailing-space,tab-in-indent\n-*.x[ms]l\twhitespace=space-before-tab,trailing-space,tab-in-indent\n+*.py\t\twhitespace=space-before-tab,trailing-space,tab-in-indent,tabwidth=4\n+*.sgml\t\twhitespace=space-before-tab,trailing-space,tab-in-indent,tabwidth=1\n+*.xml\t\twhitespace=space-before-tab,trailing-space,tab-in-indent,tabwidth=1\n+*.xsl\t\twhitespace=space-before-tab,trailing-space,tab-in-indent,tabwidth=2\n\nWhy add tabwidth settings to files that are not supposed to contain tabs?\n\n\n",
"msg_date": "Fri, 9 Aug 2024 15:16:11 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On Fri, 9 Aug 2024 at 15:16, Peter Eisentraut <[email protected]> wrote:\n> -*.sgml whitespace=space-before-tab,trailing-space,tab-in-indent\n> -*.x[ms]l whitespace=space-before-tab,trailing-space,tab-in-indent\n> +*.py whitespace=space-before-tab,trailing-space,tab-in-indent,tabwidth=4\n> +*.sgml whitespace=space-before-tab,trailing-space,tab-in-indent,tabwidth=1\n> +*.xml whitespace=space-before-tab,trailing-space,tab-in-indent,tabwidth=1\n> +*.xsl whitespace=space-before-tab,trailing-space,tab-in-indent,tabwidth=2\n>\n> Why add tabwidth settings to files that are not supposed to contain tabs?\n\nThat's there so that the generated .editorconfig file the correct\nindent_size. I guess another approach would be to change the\ngenerate_editorconfig.py script to include hardcoded values for these\n4 filetypes.\n\n\n",
"msg_date": "Fri, 9 Aug 2024 16:09:59 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
},
{
"msg_contents": "On Fri, 9 Aug 2024 at 16:09, Jelte Fennema-Nio <[email protected]> wrote:\n> That's there so that the generated .editorconfig file the correct\n> indent_size. I guess another approach would be to change the\n> generate_editorconfig.py script to include hardcoded values for these\n> 4 filetypes.\n\nOkay, I've done this now. Any chance this can \"just\" be committed now?\nHaving to remove trailing spaces by hand whenever I edit sgml files is a\ncontinuous annoyance to me when working on other patches.",
"msg_date": "Thu, 5 Sep 2024 23:28:21 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add trim_trailing_whitespace to editorconfig file"
}
] |
[
{
"msg_contents": "The MAINTAIN privilege was reverted during the 16 cycle because of the\npotential for someone to play tricks with search_path.\n\nFor instance, if user foo does:\n\n CREATE FUNCTION mod7(INT) RETURNS INT IMMUTABLE\n LANGUAGE plpgsql AS $$ BEGIN RETURN mod($1, 7); END; $$;\n CREATE TABLE x(i INT);\n CREATE UNIQUE INDEX x_mod7_idx ON x (mod7(i));\n GRANT MAINTAIN ON x TO bar;\n\nThen user bar can create their own function named \"bar.mod(int, int)\",\nand \"SET search_path = bar, pg_catalog\", and then issue a \"REINDEX x\"\nand cause problems.\n\nThere are several factors required for that to be a problem:\n\n 1. foo hasn't used a \"SET search_path\" clause on their function\n 2. bar must have the privileges to create a function somewhere\n 3. bar must have privileges on table x\n\nThere's an argument that we should blame factor #1. Robert stated[1]\nthat users should use SET search_path clauses on their functions, even\nSECURITY INVOKER functions. And I've added a search_path cache which\nimproves the performance enough to make that more reasonable to do\ngenerally.\n\nThere's also an argument that #2 is to blame. Given the realities of\nour system, best practice is that users shouldn't have the privileges\nto create objects, even in their own schema, unless required. (Joe made\nthis suggestion in an offline discussion.)\n\nThere's also an arugment that #3 is not specific to the MAINTAIN\nprivilege. Clearly similar risks exist for other privileges, like\nTRIGGER. And even the INSERT privilege, in the above example, would\nallow bar to violate the unique constraint and corrupt the index[2].\n\nIf those arguments are still unconvincing, then the next idea is to fix\nthe search_path for all maintenance commands[3]. I tried this during\nthe 16 cycle, but due to timing issues it was also reverted. I can\nproceed with this approach again, but I'd like a clear endorsement, in\ncase there were other reasons to doubt the approach.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/CA+TgmoYEP40iBW-A9nPfDp8AhGoekPp3aPDFzTgBUrqmfCwZzQ@mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/[email protected]\n\n[3]\nhttps://www.postgresql.org/message-id/[email protected]\n\n\n\n",
"msg_date": "Wed, 14 Feb 2024 10:20:28 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "MAINTAIN privilege -- what do we need to un-revert it?"
},
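A minimal sketch of the attack sequence described in the message above, run as user bar; the schema name bar and the trivial function body are illustrative assumptions, not details taken from the thread:

    -- assumes bar may create objects somewhere (factor #2); the schema is hypothetical
    CREATE SCHEMA bar;
    CREATE FUNCTION bar.mod(INT, INT) RETURNS INT IMMUTABLE
        LANGUAGE sql AS 'SELECT 0';  -- shadows pg_catalog.mod() inside mod7()
    SET search_path = bar, pg_catalog;
    REINDEX TABLE x;                 -- mod7() now resolves mod($1, 7) to bar.mod()

Because nothing in foo's mod7() pins the schema of mod(), the REINDEX re-evaluates the index expression under bar's search_path.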
{
"msg_contents": "On Wed, Feb 14, 2024 at 10:20:28AM -0800, Jeff Davis wrote:\n> If those arguments are still unconvincing, then the next idea is to fix\n> the search_path for all maintenance commands[3]. I tried this during\n> the 16 cycle, but due to timing issues it was also reverted. I can\n> proceed with this approach again, but I'd like a clear endorsement, in\n> case there were other reasons to doubt the approach.\n\nThis seemed like the approach folks were most in favor of at the developer\nmeeting a couple weeks ago [0]. At least, that was my interpretation of\nthe discussion.\n\nBTW I have been testing reverting commit 151c22d (i.e., un-reverting\nMAINTAIN) every month or two, and last I checked, it still applies pretty\ncleanly. The only changes I've needed to make are to the catversion and to\na hard-coded version in a test (16 -> 17).\n\n[0] https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2024_Developer_Meeting#The_Path_to_un-reverting_the_MAINTAIN_privilege\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Feb 2024 13:02:26 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 01:02:26PM -0600, Nathan Bossart wrote:\n> BTW I have been testing reverting commit 151c22d (i.e., un-reverting\n> MAINTAIN) every month or two, and last I checked, it still applies pretty\n> cleanly. The only changes I've needed to make are to the catversion and to\n> a hard-coded version in a test (16 -> 17).\n\nPosting to get some cfbot coverage.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 15 Feb 2024 10:14:35 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Wed, 2024-02-14 at 13:02 -0600, Nathan Bossart wrote:\n> This seemed like the approach folks were most in favor of at the\n> developer\n> meeting a couple weeks ago [0]. At least, that was my interpretation\n> of\n> the discussion.\n\nAttached rebased version.\n\nNote the changes in amcheck. It's creating functions and calling those\nfunctions from the comparators, and so the comparators need to set the\nsearch_path. I don't think that's terribly common, but does represent a\nbehavior change and could break something.\n\nRegards,\n\tJef Davis",
"msg_date": "Fri, 16 Feb 2024 16:03:55 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "(Apologies in advance for anything I'm bringing up that we've already\ncovered somewhere else.)\n\nOn Fri, Feb 16, 2024 at 04:03:55PM -0800, Jeff Davis wrote:\n> Note the changes in amcheck. It's creating functions and calling those\n> functions from the comparators, and so the comparators need to set the\n> search_path. I don't think that's terribly common, but does represent a\n> behavior change and could break something.\n\nWhy is this change needed? Is the idea to make amcheck follow the same\nrules as maintenance commands to encourage folks to set up index functions\ncorrectly? Or is amcheck similarly affected by search_path tricks?\n\n> void\n> InitializeSearchPath(void)\n> {\n> +\t/* Make the context we'll keep search path cache hashtable in */\n> +\tSearchPathCacheContext = AllocSetContextCreate(TopMemoryContext,\n> +\t\t\t\t\t\t\t\t\t\t\t\t \"search_path processing cache\",\n> +\t\t\t\t\t\t\t\t\t\t\t\t ALLOCSET_DEFAULT_SIZES);\n> +\n> \tif (IsBootstrapProcessingMode())\n> \t{\n> \t\t/*\n> @@ -4739,11 +4744,6 @@ InitializeSearchPath(void)\n> \t}\n> \telse\n> \t{\n> -\t\t/* Make the context we'll keep search path cache hashtable in */\n> -\t\tSearchPathCacheContext = AllocSetContextCreate(TopMemoryContext,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t \"search_path processing cache\",\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t ALLOCSET_DEFAULT_SIZES);\n> -\n\nWhat is the purpose of this change?\n\n> +\tSetConfigOption(\"search_path\", GUC_SAFE_SEARCH_PATH, PGC_USERSET,\n> +\t\t\t\t\tPGC_S_SESSION);\n\nI wonder if it's worth using PGC_S_INTERACTIVE or introducing a new value\nfor these.\n\n> +/*\n> + * Safe search path when executing code as the table owner, such as during\n> + * maintenance operations.\n> + */\n> +#define GUC_SAFE_SEARCH_PATH \"pg_catalog, pg_temp\"\n\nIs including pg_temp actually safe? I worry that a user could use their\ntemporary schema to inject objects that would take the place of\nnon-schema-qualified stuff in functions.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 23 Feb 2024 15:30:55 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "New version attached.\n\nDo we need a documentation update here? If so, where would be a good\nplace?\n\nOn Fri, 2024-02-23 at 15:30 -0600, Nathan Bossart wrote:\n> Why is this change needed? Is the idea to make amcheck follow the\n> same\n> rules as maintenance commands to encourage folks to set up index\n> functions\n> correctly?\n\namcheck is calling functions it defined, so in order to find those\nfunctions it needs to set the right search path.\n\n\n> \n> What is the purpose of this [bootstrap-related] change?\n\nDefineIndex() is called during bootstrap, and it's also a maintenance\ncommand. I tried to handle the bootstrapping case, but I think it's\nbest to just guard it with a conditional. Done.\n\nI also added Assert(!IsBootstrapProcessingMode()) in\nassign_search_path().\n\n> > + SetConfigOption(\"search_path\", GUC_SAFE_SEARCH_PATH,\n> > PGC_USERSET,\n> > + PGC_S_SESSION);\n> \n> I wonder if it's worth using PGC_S_INTERACTIVE or introducing a new\n> value\n> for these.\n> > \n\nDid you have a particular concern about PGC_S_SESSION?\n\nIf it's less than PGC_S_SESSION, it won't work, because the caller's\nSET command will override it, and the same manipulation is possible.\n\nAnd I don't think we want it higher than PGC_S_SESSION, otherwise the\nfunction can't set its own search_path, if needed.\n\n> > +#define GUC_SAFE_SEARCH_PATH \"pg_catalog, pg_temp\"\n> \n> Is including pg_temp actually safe? I worry that a user could use\n> their\n> temporary schema to inject objects that would take the place of\n> non-schema-qualified stuff in functions.\n\npg_temp cannot (currently) be excluded. If it is omitted from the\nstring, it will be placed *first* in the search_path, which is more\ndangerous.\n\npg_temp does not take part in function or operator resolution, which\nmakes it safer than it first appears. There are potentially some risks\naround tables, but it's not typical to access a table in a function\ncalled as part of an index expression.\n\nIf we determine that pg_temp is actually unsafe to include, we need to\ndo something like what I proposed here:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nbefore this change.\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 27 Feb 2024 16:22:34 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 04:22:34PM -0800, Jeff Davis wrote:\n> Do we need a documentation update here? If so, where would be a good\n> place?\n\nI'm afraid I don't have a better idea than adding a short note in each\naffected commands's page.\n\n> On Fri, 2024-02-23 at 15:30 -0600, Nathan Bossart wrote:\n>> I wonder if it's worth using PGC_S_INTERACTIVE or introducing a new\n>> value\n>> for these.\n> \n> Did you have a particular concern about PGC_S_SESSION?\n\nMy only concern is that it could obscure the source of the search_path\nchange, which in turn might cause confusion when things fail.\n\n> If it's less than PGC_S_SESSION, it won't work, because the caller's\n> SET command will override it, and the same manipulation is possible.\n> \n> And I don't think we want it higher than PGC_S_SESSION, otherwise the\n> function can't set its own search_path, if needed.\n\nYeah, we would have to make it equivalent in priority to PGC_S_SESSION,\nwhich would likely require a bunch of special logic. I don't know if this\nis worth it, and this seems like something that could pretty easily be\nadded in the future if it became necessary.\n\n>> > +#define GUC_SAFE_SEARCH_PATH \"pg_catalog, pg_temp\"\n>> \n>> Is including pg_temp actually safe?� I worry that a user could use\n>> their\n>> temporary schema to inject objects that would take the place of\n>> non-schema-qualified stuff in functions.\n> \n> pg_temp cannot (currently) be excluded. If it is omitted from the\n> string, it will be placed *first* in the search_path, which is more\n> dangerous.\n> \n> pg_temp does not take part in function or operator resolution, which\n> makes it safer than it first appears. There are potentially some risks\n> around tables, but it's not typical to access a table in a function\n> called as part of an index expression.\n> \n> If we determine that pg_temp is actually unsafe to include, we need to\n> do something like what I proposed here:\n> \n> https://www.postgresql.org/message-id/[email protected]\n\nI don't doubt anything you've said, but I can't help but think that we\nmight as well handle the pg_temp risk, too.\n\nFurthermore, I see that we use \"\" as a safe search_path for autovacuum and\nfe_utils/connect.h. Is there any way to unite these? IIUC it might be\npossible to combine the autovacuum and maintenance command cases (i.e.,\n\"!pg_temp\"), but we might need to keep pg_temp for the frontend case. I\nthink it's worth trying to add comments about why this setting is safe for\nsome cases but not others, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Feb 2024 10:55:23 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Wed, 2024-02-28 at 10:55 -0600, Nathan Bossart wrote:\n> I'm afraid I don't have a better idea than adding a short note in\n> each\n> affected commands's page.\n\nOK, that works for now.\n\nLater we should also document that the functions are run as the table\nowner.\n\n> > On Fri, 2024-02-23 at 15:30 -0600, Nathan Bossart wrote:\n> > > I wonder if it's worth using PGC_S_INTERACTIVE or introducing a\n> > > new\n> > > value\n> > > for these.\n> > \n> > Did you have a particular concern about PGC_S_SESSION?\n> \n> My only concern is that it could obscure the source of the\n> search_path\n> change, which in turn might cause confusion when things fail.\n\nThat's a good point. AutoVacWorkerMain uses PGC_S_OVERRIDE, but it\ndoesn't have to worry about SET, because there's no real session.\n\nThe function SET clause uses PGC_S_SESSION. It's arguable whether\nthat's really the same source as a SET command, but it's definitely\ncloser.\n\n> \n> Yeah, we would have to make it equivalent in priority to\n> PGC_S_SESSION,\n> which would likely require a bunch of special logic.\n\nI'm not clear on what problem that would solve.\n\n> I don't doubt anything you've said, but I can't help but think that\n> we\n> might as well handle the pg_temp risk, too.\n\nThat sounds good to me, but I didn't get many replies in that last\nthread. And although it solves the problem, it is a bit awkward.\n\nCan we get some closure on whether that !pg_temp patch is the right\napproach? That was just my first idea, and it would be good to hear\nwhat others think.\n\n> Furthermore, I see that we use \"\" as a safe search_path for\n> autovacuum and\n> fe_utils/connect.h. Is there any way to unite these?\n\nWe could have a single function like RestrictSearchPath(), which I\nbelieve I had in some previous iteration. That would use the safest\nsearch path (either excluding pg_temp or putting it at the end) and\nPGC_S_SESSION, and then use it everywhere.\n\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 09:29:04 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Wed, 2024-02-28 at 09:29 -0800, Jeff Davis wrote:\n> On Wed, 2024-02-28 at 10:55 -0600, Nathan Bossart wrote:\n> > I'm afraid I don't have a better idea than adding a short note in\n> > each\n> > affected commands's page.\n> \n> OK, that works for now.\n\nCommitted.\n\nThe only changes are documentation and test updates.\n\nThis is a behavior change, so it still carries some risk, though we've\nhad a lot of discussion and generally it seems to be worth it. If it\nturns out worse than expected during beta, of course we can re-revert\nit.\n\nI will restate the risks here, which come basically from two places:\n\n(1) Functions called from index expressions which rely on search_path\n(and don't have a SET clause).\n\nSuch a function would have already been fairly broken before my commit,\nbecause anyone accessing the table without the right search_path would\nhave seen an error or wrong results. And there is no means to set the\n\"right\" search_path for autoanalyze or logical replication, so those\nwould not have worked with such a broken function before my commit, no\nmatter what.\n\nThat being said, surely some users did have such broken functions, and\nwith this commit, they will have to remedy them with a SET clause.\nFortunately, the performance impact of doing so has been greatly\nreduced.\n\n(2) Matierialized views which call functions that rely on search_path\n(and don't have a SET clause).\n\nThis is arguably a worse kind of breakage because materialized views\nare often refreshed only by the table owner, and it's easier to control\nsearch_path when running REFRESH. Additionally, functions called from\nmaterialized views are more likely to be \"interesting\" than functions\ncalled from an index expression. However, the remedy is\nstraightforward: use a SET clause.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 04 Mar 2024 19:52:05 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
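A minimal sketch of the SET clause remedy mentioned in the message above, reusing the hypothetical mod7() example from the start of this thread; the exact search_path value shown is an assumption based on the safe path discussed earlier:

    CREATE OR REPLACE FUNCTION mod7(INT) RETURNS INT IMMUTABLE
        LANGUAGE plpgsql
        SET search_path = pg_catalog, pg_temp  -- pin name resolution for index and matview use
        AS $$ BEGIN RETURN mod($1, 7); END; $$;

With search_path attached to the function itself, REINDEX, autoanalyze, logical replication, and REFRESH MATERIALIZED VIEW all resolve mod() the same way, regardless of the calling session's settings.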
{
"msg_contents": "On Mon, Mar 04, 2024 at 07:52:05PM -0800, Jeff Davis wrote:\n> Committed.\n\nCommit 2af07e2 wrote:\n> --- a/src/backend/access/brin/brin.c\n> +++ b/src/backend/access/brin/brin.c\n> @@ -1412,6 +1412,8 @@ brin_summarize_range(PG_FUNCTION_ARGS)\n> \t\tSetUserIdAndSecContext(heapRel->rd_rel->relowner,\n> \t\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> \t\tsave_nestlevel = NewGUCNestLevel();\n> +\t\tSetConfigOption(\"search_path\", GUC_SAFE_SEARCH_PATH, PGC_USERSET,\n> +\t\t\t\t\t\tPGC_S_SESSION);\n\nI've audited NewGUCNestLevel() calls that didn't get this addition. Among\nthose, these need the addition:\n\n- Each in ComputeIndexAttrs() -- they arise when the caller is DefineIndex()\n- In DefineIndex(), after comment \"changed a behavior-affecting GUC\"\n\nWhile \"not necessary for security\", ExecCreateTableAs() should do it for the\nsame reason it calls NewGUCNestLevel().\n\n\n",
"msg_date": "Sun, 30 Jun 2024 15:23:44 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Sun, Jun 30, 2024 at 03:23:44PM -0700, Noah Misch wrote:\n> I've audited NewGUCNestLevel() calls that didn't get this addition. Among\n> those, these need the addition:\n> \n> - Each in ComputeIndexAttrs() -- they arise when the caller is DefineIndex()\n> - In DefineIndex(), after comment \"changed a behavior-affecting GUC\"\n\nHmm. Is RestrictSearchPath() something that we should advertise more\nstrongly, thinking here about extensions that call NewGUCNestLevel()?\nThat would be really easy to miss, and it could have bad consequences.\nI know that this is not something that's published in the release\nnotes, but it looks like something sensible to have, though.\n\n> While \"not necessary for security\", ExecCreateTableAs() should do it for the\n> same reason it calls NewGUCNestLevel().\n\n+1.\n--\nMichael",
"msg_date": "Tue, 9 Jul 2024 15:20:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Tue, 2024-07-09 at 15:20 +0900, Michael Paquier wrote:\n> On Sun, Jun 30, 2024 at 03:23:44PM -0700, Noah Misch wrote:\n> > I've audited NewGUCNestLevel() calls that didn't get this\n> > addition. Among\n> > those, these need the addition:\n> > \n> > - Each in ComputeIndexAttrs() -- they arise when the caller is\n> > DefineIndex()\n> > - In DefineIndex(), after comment \"changed a behavior-affecting\n> > GUC\"\n\nThank you for the report. Patch attached to address these missing call\nsites.\n\n> Hmm. Is RestrictSearchPath() something that we should advertise more\n> strongly, thinking here about extensions that call NewGUCNestLevel()?\n> That would be really easy to miss, and it could have bad\n> consequences.\n> I know that this is not something that's published in the release\n> notes, but it looks like something sensible to have, though.\n\nThe pattern also involves SetUserIdAndSecContext(). Perhaps we could\ncome up with a wrapper function to better encapsulate the general\npattern?\n\n> > While \"not necessary for security\", ExecCreateTableAs() should do\n> > it for the\n> > same reason it calls NewGUCNestLevel().\n> \n> +1.\n\nDo you have a suggestion about how that should be done?\n\nIt's not trivial, because the both creates the table and populates it\nin ExecutorRun. For table creation, we need to use the original\nsearch_path, but we need to use the restricted search_path when\npopulating it.\n\nI could try to refactor it into two statements and execute them\nseparately, or I could try to rewrite the statement to use a fully-\nqualified destination table before execution. Thoughts?\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 09 Jul 2024 17:47:36 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Tue, Jul 09, 2024 at 05:47:36PM -0700, Jeff Davis wrote:\n> On Tue, 2024-07-09 at 15:20 +0900, Michael Paquier wrote:\n> > On Sun, Jun 30, 2024 at 03:23:44PM -0700, Noah Misch wrote:\n\n> > Hmm.� Is RestrictSearchPath() something that we should advertise more\n> > strongly, thinking here about extensions that call NewGUCNestLevel()?\n> > That would be really easy to miss, and it could have bad\n> > consequences.\n> > I know that this is not something that's published in the release\n> > notes, but it looks like something sensible to have, though.\n> \n> The pattern also involves SetUserIdAndSecContext(). Perhaps we could\n> come up with a wrapper function to better encapsulate the general\n> pattern?\n\nWorth a look. usercontext.c has an existing wrapper for a superuser process\nswitching to an untrusted user. It could become the home for another wrapper\ntargeting MAINTAIN-relevant callers.\n\n> > > While \"not necessary for security\", ExecCreateTableAs() should do\n> > > it for the\n> > > same reason it calls NewGUCNestLevel().\n> > \n> > +1.\n> \n> Do you have a suggestion about how that should be done?\n> \n> It's not trivial, because the both creates the table and populates it\n> in ExecutorRun. For table creation, we need to use the original\n> search_path, but we need to use the restricted search_path when\n> populating it.\n> \n> I could try to refactor it into two statements and execute them\n> separately, or I could try to rewrite the statement to use a fully-\n> qualified destination table before execution. Thoughts?\n\nThose sound fine. Also fine: just adding a comment on why creation namespace\nconsiderations led to not doing it there.\n\n\n",
"msg_date": "Thu, 11 Jul 2024 05:52:07 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Thu, 2024-07-11 at 05:52 -0700, Noah Misch wrote:\n> > I could try to refactor it into two statements and execute them\n> > separately, or I could try to rewrite the statement to use a fully-\n> > qualified destination table before execution. Thoughts?\n> \n> Those sound fine. Also fine: just adding a comment on why creation\n> namespace\n> considerations led to not doing it there.\n\nAttached. 0002 separates the CREATE MATERIALIZED VIEW ... WITH DATA\ninto (effectively):\n\n CREATE MATERIALIZED VIEW ... WITH NO DATA;\n REFRESH MATERIALIZED VIEW ...;\n\nUsing refresh also achieves the stated goal more directly: to (mostly)\nensure that a subsequent REFRESH will succeed.\n\nNote: the creation itself no longer executes in a security-restricted\ncontext, but I don't think that's a problem. The only reason it's using\nthe security restricted context is so the following REFRESH will\nsucceed, right?\n\nRegards,\n\tJeff Davis",
"msg_date": "Fri, 12 Jul 2024 14:50:52 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Fri, Jul 12, 2024 at 02:50:52PM -0700, Jeff Davis wrote:\n> On Thu, 2024-07-11 at 05:52 -0700, Noah Misch wrote:\n> > > I could try to refactor it into two statements and execute them\n> > > separately, or I could try to rewrite the statement to use a fully-\n> > > qualified destination table before execution. Thoughts?\n> > \n> > Those sound fine.� Also fine: just adding a comment on why creation\n> > namespace\n> > considerations led to not doing it there.\n> \n> Attached. 0002 separates the CREATE MATERIALIZED VIEW ... WITH DATA\n> into (effectively):\n> \n> CREATE MATERIALIZED VIEW ... WITH NO DATA;\n> REFRESH MATERIALIZED VIEW ...;\n> \n> Using refresh also achieves the stated goal more directly: to (mostly)\n> ensure that a subsequent REFRESH will succeed.\n> \n> Note: the creation itself no longer executes in a security-restricted\n> context, but I don't think that's a problem. The only reason it's using\n> the security restricted context is so the following REFRESH will\n> succeed, right?\n\nRight, that's the only reason.\n\n> @@ -346,13 +339,21 @@ ExecCreateTableAs(ParseState *pstate, CreateTableAsStmt *stmt,\n> \t\tPopActiveSnapshot();\n> \t}\n> \n> -\tif (is_matview)\n> +\t/*\n> +\t * For materialized views, use REFRESH, which locks down\n> +\t * security-restricted operations and restricts the search_path.\n> +\t * Otherwise, one could create a materialized view not possible to\n> +\t * refresh.\n> +\t */\n> +\tif (do_refresh)\n> \t{\n> -\t\t/* Roll back any GUC changes */\n> -\t\tAtEOXact_GUC(false, save_nestlevel);\n> +\t\tRefreshMatViewStmt *refresh = makeNode(RefreshMatViewStmt);\n> \n> -\t\t/* Restore userid and security context */\n> -\t\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\t\trefresh->relation = into->rel;\n> +\t\tExecRefreshMatView(refresh, pstate->p_sourcetext, NULL, qc);\n> +\n> +\t\tif (qc)\n> +\t\t\tqc->commandTag = CMDTAG_SELECT;\n> \t}\n\nSince refresh->relation is a RangeVar, this departs from the standard against\nrepeated name lookups, from CVE-2014-0062 (commit 5f17304).\n\n\n",
"msg_date": "Fri, 12 Jul 2024 16:11:49 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Fri, 2024-07-12 at 16:11 -0700, Noah Misch wrote:\n> Since refresh->relation is a RangeVar, this departs from the standard\n> against\n> repeated name lookups, from CVE-2014-0062 (commit 5f17304).\n\nInteresting, thank you.\n\nI did a rough refactor and attached v3. Aside from cleanup issues, is\nthis what you had in mind?\n\nRegards,\n\tJeff Davis",
"msg_date": "Fri, 12 Jul 2024 16:50:17 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Fri, Jul 12, 2024 at 04:50:17PM -0700, Jeff Davis wrote:\n> On Fri, 2024-07-12 at 16:11 -0700, Noah Misch wrote:\n> > Since refresh->relation is a RangeVar, this departs from the standard\n> > against\n> > repeated name lookups, from CVE-2014-0062 (commit 5f17304).\n> \n> Interesting, thank you.\n> \n> I did a rough refactor and attached v3. Aside from cleanup issues, is\n> this what you had in mind?\n\n> +extern ObjectAddress RefreshMatViewByOid(Oid matviewOid, bool skipData, bool concurrent,\n> +\t\t\t\t\t\t\t\t\t\t const char *queryString, ParamListInfo params,\n> +\t\t\t\t\t\t\t\t\t\t QueryCompletion *qc);\n> \n\nYes, that's an API design that avoids repeated name lookups.\n\n\n",
"msg_date": "Sat, 13 Jul 2024 14:47:48 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "Hi,\n\nOn Sat, 13 Jul 2024 14:47:48 -0700\nNoah Misch <[email protected]> wrote:\n\n> On Fri, Jul 12, 2024 at 04:50:17PM -0700, Jeff Davis wrote:\n> > On Fri, 2024-07-12 at 16:11 -0700, Noah Misch wrote:\n> > > Since refresh->relation is a RangeVar, this departs from the standard\n> > > against\n> > > repeated name lookups, from CVE-2014-0062 (commit 5f17304).\n> > \n> > Interesting, thank you.\n> > \n> > I did a rough refactor and attached v3. Aside from cleanup issues, is\n> > this what you had in mind?\n> \n> > +extern ObjectAddress RefreshMatViewByOid(Oid matviewOid, bool skipData, bool concurrent,\n> > +\t\t\t\t\t\t\t\t\t\t const char *queryString, ParamListInfo params,\n> > +\t\t\t\t\t\t\t\t\t\t QueryCompletion *qc);\n> > \n> \n> Yes, that's an API design that avoids repeated name lookups.\n\nSince this commit, matviews are no longer handled in ExecCreateTableAs, so the\nfollowing error message has not to consider materialized view cases, and can be made simple.\n\n /* SELECT should never rewrite to more or less than one SELECT query */\n if (list_length(rewritten) != 1)\n elog(ERROR, \"unexpected rewrite result for %s\",\n is_matview ? \"CREATE MATERIALIZED VIEW\" :\n \"CREATE TABLE AS SELECT\");\n\nRefreshMatViewByOid has REFRESH specific error messages in spite of its use\nin CREATE MATERIALIZED VIEW, but these errors seem not to occur in CREATE MATERIALIZED\nVIEW case, so I don't think it would be a problem.\n\n\nAnother my question is why RefreshMatViewByOid has a ParamListInfo parameter.\nI don't understand why ExecRefreshMatView has one, either, because currently\nmaterialized views may not be defined using bound parameters, which is checked\nin transformCreateTableAsStmt, and the param argument is not used at all. It might\nbe unsafe to change the interface of ExecRefreshMatView since this is public for a\nlong time, but I don't think the new interface RefreshMatViewByOid has to have this\nunused argument.\n\nI attaehd patches for fixing them respectedly.\n\nWhat do you think about it?\n\nRegards,\nYugo Nagata\n\n\n\n-- \nYugo Nagata <[email protected]>",
"msg_date": "Fri, 26 Jul 2024 12:26:30 +0900",
"msg_from": "Yugo Nagata <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "Hello,\n\nThank you for looking.\n\nOn Fri, 2024-07-26 at 12:26 +0900, Yugo Nagata wrote:\n> Since this commit, matviews are no longer handled in\n> ExecCreateTableAs, so the\n> following error message has not to consider materialized view cases,\n> and can be made simple.\n> \n> /* SELECT should never rewrite to more or less than one\n> SELECT query */\n> if (list_length(rewritten) != 1)\n> elog(ERROR, \"unexpected rewrite result for %s\",\n> is_matview ? \"CREATE MATERIALIZED VIEW\" :\n> \"CREATE TABLE AS SELECT\");\n\nThere's a similar error in refresh_matview_datafill(), and I suppose\nthat should account for the CREATE MATERIALIZED VIEW case. We could\npass an additional flag to RefreshMatViewByOid to indicate whether it's\na CREATE or REFRESH, but it's an internal error, so perhaps it's not\nimportant.\n\n> Another my question is why RefreshMatViewByOid has a ParamListInfo\n> parameter.\n\nI just passed the params through, but you're right, they aren't\nreferenced at all.\n\nI looked at the history, and it appears to go all the way back to the\nfunction's introduction in commit 3bf3ab8c56.\n\n> I don't understand why ExecRefreshMatView has one, either, because\n> currently\n> materialized views may not be defined using bound parameters, which\n> is checked\n> in transformCreateTableAsStmt, and the param argument is not used at\n> all. It might\n> be unsafe to change the interface of ExecRefreshMatView since this is\n> public for a\n> long time, but I don't think the new interface RefreshMatViewByOid\n> has to have this\n> unused argument.\n\nExtensions should be prepared for reasonable changes in these kinds of\nfunctions between releases. Even if the signatures remain the same, the\nparse structures may change, which creates similar incompatibilities.\nSo let's just get rid of the 'params' argument from both functions.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 26 Jul 2024 16:47:23 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "Hi,\n\nOn Fri, 26 Jul 2024 16:47:23 -0700\nJeff Davis <[email protected]> wrote:\n\n> Hello,\n> \n> Thank you for looking.\n> \n> On Fri, 2024-07-26 at 12:26 +0900, Yugo Nagata wrote:\n> > Since this commit, matviews are no longer handled in\n> > ExecCreateTableAs, so the\n> > following error message has not to consider materialized view cases,\n> > and can be made simple.\n> > \n> > /* SELECT should never rewrite to more or less than one\n> > SELECT query */\n> > if (list_length(rewritten) != 1)\n> > elog(ERROR, \"unexpected rewrite result for %s\",\n> > is_matview ? \"CREATE MATERIALIZED VIEW\" :\n> > \"CREATE TABLE AS SELECT\");\n> \n> There's a similar error in refresh_matview_datafill(), and I suppose\n> that should account for the CREATE MATERIALIZED VIEW case. We could\n> pass an additional flag to RefreshMatViewByOid to indicate whether it's\n> a CREATE or REFRESH, but it's an internal error, so perhaps it's not\n> important.\n\nThank you for looking into the pach.\n\nI agree that it might not be important, but I think adding the flag would be\nalso helpful for improving code-readability because it clarify the function\nis used in the two cases. I attached patch for this fix (patch 0003).\n\n> > Another my question is why RefreshMatViewByOid has a ParamListInfo\n> > parameter.\n> \n> I just passed the params through, but you're right, they aren't\n> referenced at all.\n> \n> I looked at the history, and it appears to go all the way back to the\n> function's introduction in commit 3bf3ab8c56.\n> \n> > I don't understand why ExecRefreshMatView has one, either, because\n> > currently\n> > materialized views may not be defined using bound parameters, which\n> > is checked\n> > in transformCreateTableAsStmt, and the param argument is not used at\n> > all. It might\n> > be unsafe to change the interface of ExecRefreshMatView since this is\n> > public for a\n> > long time, but I don't think the new interface RefreshMatViewByOid\n> > has to have this\n> > unused argument.\n> \n> Extensions should be prepared for reasonable changes in these kinds of\n> functions between releases. Even if the signatures remain the same, the\n> parse structures may change, which creates similar incompatibilities.\n> So let's just get rid of the 'params' argument from both functions.\n\nSure. I fixed the patch to remove 'param' from both functions. (patch 0002)\n\nI also add the small refactoring around ExecCreateTableAs(). (patch 0001)\n\n- Remove matview-related codes from intorel_startup.\n Materialized views are no longer handled in this function.\n\n- RefreshMatViewByOid is moved to just after create_ctas_nodata\n call to improve code readability.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Wed, 31 Jul 2024 18:20:12 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Wed, 2024-07-31 at 18:20 +0900, Yugo NAGATA wrote:\n> I agree that it might not be important, but I think adding the flag\n> would be\n> also helpful for improving code-readability because it clarify the\n> function\n> is used in the two cases. I attached patch for this fix (patch 0003).\n\nCommitted with one minor modification: I moved the boolean flag to be\nnear the other booleans rather than at the end. Thank you.\n\n> Sure. I fixed the patch to remove 'param' from both functions. (patch\n> 0002)\n\nCommitted, thank you.\n\n> I also add the small refactoring around ExecCreateTableAs(). (patch\n> 0001)\n> \n> - Remove matview-related codes from intorel_startup.\n> Materialized views are no longer handled in this function.\n> \n> - RefreshMatViewByOid is moved to just after create_ctas_nodata\n> call to improve code readability.\n> \n\nI'm not sure the changes in intorel_startup() are correct. I tried\nadding an Assert(into->viewQuery == NULL), and it fails because there's\nanother path I did not consider: \"EXPLAIN ANALYZE CREATE MATERIALIZED\nVIEW ...\", which does not go through ExecCreateTableAs() but does go\nthrough CreateIntoRelDestReceiver().\n\nSee:\n\nhttps://postgr.es/m/[email protected]\n\nShould we refactor a bit and try to make EXPLAIN use the same code\npaths?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 01 Aug 2024 11:31:53 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Thu, 01 Aug 2024 11:31:53 -0700\nJeff Davis <[email protected]> wrote:\n\n> On Wed, 2024-07-31 at 18:20 +0900, Yugo NAGATA wrote:\n> > I agree that it might not be important, but I think adding the flag\n> > would be\n> > also helpful for improving code-readability because it clarify the\n> > function\n> > is used in the two cases. I attached patch for this fix (patch 0003).\n> \n> Committed with one minor modification: I moved the boolean flag to be\n> near the other booleans rather than at the end. Thank you.\n> \n> > Sure. I fixed the patch to remove 'param' from both functions. (patch\n> > 0002)\n> \n> Committed, thank you.\n\nThank you for committing them.\nShould not they be backported to REL_17_STABLE?\n\n> \n> > I also add the small refactoring around ExecCreateTableAs(). (patch\n> > 0001)\n> > \n> > - Remove matview-related codes from intorel_startup.\n> > Materialized views are no longer handled in this function.\n> > \n> > - RefreshMatViewByOid is moved to just after create_ctas_nodata\n> > call to improve code readability.\n> > \n> \n> I'm not sure the changes in intorel_startup() are correct. I tried\n> adding an Assert(into->viewQuery == NULL), and it fails because there's\n> another path I did not consider: \"EXPLAIN ANALYZE CREATE MATERIALIZED\n> VIEW ...\", which does not go through ExecCreateTableAs() but does go\n> through CreateIntoRelDestReceiver().\n> \n> See:\n> \n> https://postgr.es/m/[email protected]\n> \n> Should we refactor a bit and try to make EXPLAIN use the same code\n> paths?\n\nI overlooked that CreateIntoRelDestReceiver() is used from EXPLAIN. I saw the\nthread above and I agree that we should refactor it to make EXPLAIN consistent\nCREATE MATERIALIZED VIEW, but I suppose this should be discussed the other thread.\n\nI attached a updated patch removed the intorel_startup() part from.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Fri, 2 Aug 2024 16:13:01 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Fri, 2 Aug 2024 16:13:01 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> On Thu, 01 Aug 2024 11:31:53 -0700\n> Jeff Davis <[email protected]> wrote:\n> \n> > On Wed, 2024-07-31 at 18:20 +0900, Yugo NAGATA wrote:\n> > > I agree that it might not be important, but I think adding the flag\n> > > would be\n> > > also helpful for improving code-readability because it clarify the\n> > > function\n> > > is used in the two cases. I attached patch for this fix (patch 0003).\n> > \n> > Committed with one minor modification: I moved the boolean flag to be\n> > near the other booleans rather than at the end. Thank you.\n> > \n> > > Sure. I fixed the patch to remove 'param' from both functions. (patch\n> > > 0002)\n> > \n> > Committed, thank you.\n> \n> Thank you for committing them.\n> Should not they be backported to REL_17_STABLE?\n> \n> > \n> > > I also add the small refactoring around ExecCreateTableAs(). (patch\n> > > 0001)\n> > > \n> > > - Remove matview-related codes from intorel_startup.\n> > > Materialized views are no longer handled in this function.\n> > > \n> > > - RefreshMatViewByOid is moved to just after create_ctas_nodata\n> > > call to improve code readability.\n> > > \n> > \n> > I'm not sure the changes in intorel_startup() are correct. I tried\n> > adding an Assert(into->viewQuery == NULL), and it fails because there's\n> > another path I did not consider: \"EXPLAIN ANALYZE CREATE MATERIALIZED\n> > VIEW ...\", which does not go through ExecCreateTableAs() but does go\n> > through CreateIntoRelDestReceiver().\n> > \n> > See:\n> > \n> > https://postgr.es/m/[email protected]\n> > \n> > Should we refactor a bit and try to make EXPLAIN use the same code\n> > paths?\n> \n> I overlooked that CreateIntoRelDestReceiver() is used from EXPLAIN. I saw the\n> thread above and I agree that we should refactor it to make EXPLAIN consistent\n> CREATE MATERIALIZED VIEW, but I suppose this should be discussed the other thread.\n> \n> I attached a updated patch removed the intorel_startup() part from.\n\nI confirmed that this has been committed to the master branch.\nThank you!\n\nI also noticed that the documentation of CREATE MATERIALIZED VIEW doesn't mention\nsearch_path while it also changes search_path since it uses the REFRESH logic.\nI attached a trivial patch to fix this.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo Nagata <[email protected]>",
"msg_date": "Mon, 5 Aug 2024 16:05:02 +0900",
"msg_from": "Yugo Nagata <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Mon, Aug 05, 2024 at 04:05:02PM +0900, Yugo Nagata wrote:\n> + <para>\n> + While <command>CREATE MATERIALIZED VIEW</command> is running, the <xref\n> + linkend=\"guc-search-path\"/> is temporarily changed to <literal>pg_catalog,\n> + pg_temp</literal>.\n> + </para>\n\nI think we should mention that this is not true when WITH NO DATA is used.\nMaybe something like:\n\n\tUnless WITH NO DATA is used, the search_path is temporarily changed to\n\tpg_catalog, pg_temp while CREATE MATERIALIZED VIEW is running.\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 26 Sep 2024 16:33:06 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Thu, 26 Sep 2024 16:33:06 -0500\nNathan Bossart <[email protected]> wrote:\n\n> On Mon, Aug 05, 2024 at 04:05:02PM +0900, Yugo Nagata wrote:\n> > + <para>\n> > + While <command>CREATE MATERIALIZED VIEW</command> is running, the <xref\n> > + linkend=\"guc-search-path\"/> is temporarily changed to <literal>pg_catalog,\n> > + pg_temp</literal>.\n> > + </para>\n> \n> I think we should mention that this is not true when WITH NO DATA is used.\n> Maybe something like:\n> \n> \tUnless WITH NO DATA is used, the search_path is temporarily changed to\n> \tpg_catalog, pg_temp while CREATE MATERIALIZED VIEW is running.\n> \n\nI agree with you. I overlooked WITH NO DATA.\nI attached a updated patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Fri, 27 Sep 2024 12:42:34 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Fri, Sep 27, 2024 at 12:42:34PM +0900, Yugo NAGATA wrote:\n> I agree with you. I overlooked WITH NO DATA.\n> I attached a updated patch.\n\nThanks. Unless someone objects, I plan to commit this shortly.\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 27 Sep 2024 10:34:42 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Fri, 2024-09-27 at 10:34 -0500, Nathan Bossart wrote:\n> On Fri, Sep 27, 2024 at 12:42:34PM +0900, Yugo NAGATA wrote:\n> > I agree with you. I overlooked WITH NO DATA.\n> > I attached a updated patch.\n> \n> Thanks. Unless someone objects, I plan to commit this shortly.\n\nThe command is run effectively in two parts: the CREATE part and the\nREFRESH part. The former just uses the session search path, while the\nlatter uses the safe search path.\n\nI suggest that we add the wording to the\n<replaceable>query</replaceable> portion of the doc, near \"security-\nrestricted operation\".\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 27 Sep 2024 09:22:48 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Fri, Sep 27, 2024 at 09:22:48AM -0700, Jeff Davis wrote:\n> I suggest that we add the wording to the\n> <replaceable>query</replaceable> portion of the doc, near \"security-\n> restricted operation\".\n\nHow does this look?\n\ndiff --git a/doc/src/sgml/ref/create_materialized_view.sgml b/doc/src/sgml/ref/create_materialized_view.sgml\nindex 0d2fea2b97..62d897931c 100644\n--- a/doc/src/sgml/ref/create_materialized_view.sgml\n+++ b/doc/src/sgml/ref/create_materialized_view.sgml\n@@ -143,7 +143,9 @@ CREATE MATERIALIZED VIEW [ IF NOT EXISTS ] <replaceable>table_name</replaceable>\n A <link linkend=\"sql-select\"><command>SELECT</command></link>, <link linkend=\"sql-table\"><command>TABLE</command></link>,\n or <link linkend=\"sql-values\"><command>VALUES</command></link> command. This query will run within a\n security-restricted operation; in particular, calls to functions that\n- themselves create temporary tables will fail.\n+ themselves create temporary tables will fail. Also, while the query is\n+ running, the <xref linkend=\"guc-search-path\"/> is temporarily changed to\n+ <literal>pg_catalog, pg_temp</literal>.\n </para>\n </listitem>\n </varlistentry>\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 27 Sep 2024 15:04:46 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Fri, 2024-09-27 at 15:04 -0500, Nathan Bossart wrote:\n> On Fri, Sep 27, 2024 at 09:22:48AM -0700, Jeff Davis wrote:\n> > I suggest that we add the wording to the\n> > <replaceable>query</replaceable> portion of the doc, near\n> > \"security-\n> > restricted operation\".\n> \n> How does this look?\n\nLooks good to me.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 27 Sep 2024 13:27:38 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
},
{
"msg_contents": "On Fri, Sep 27, 2024 at 01:27:38PM -0700, Jeff Davis wrote:\n> Looks good to me.\n\nThanks, committed.\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 27 Sep 2024 16:24:50 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MAINTAIN privilege -- what do we need to un-revert it?"
}
] |
[
{
"msg_contents": "Greetings, everyone!\n\nWhile analyzing output of Svace static analyzer [1] I've found a bug.\n\nIn function pgxmlNodeSetToText there is a call of xmlBufferCreate that \ndoesn't\nhave its return value checked. In all four other calls of \nxmlBufferCreate there\nis a try...catch that checks the return value inside.\n\nI suggest to add the same checks here that are used in other four calls \nof\nxmlBufferCreate.\n\nThe proposed patch is attached.\n\n[1] - https://svace.pages.ispras.ru/svace-website/en/\n\nOleg Tselebrovskiy, Postgres Pro",
"msg_date": "Thu, 15 Feb 2024 12:18:33 +0700",
"msg_from": "Oleg Tselebrovskiy <[email protected]>",
"msg_from_op": true,
"msg_subject": "xmlBufferCreate return value not checked in pgxmlNodeSetToText"
}
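To make the report concrete, a minimal sketch of the kind of check being proposed for pgxmlNodeSetToText follows; the error path shown here is an assumption, since the actual patch would presumably mirror the try/catch structure already used around the other xmlBufferCreate call sites:

    xmlBufferPtr buf;

    buf = xmlBufferCreate();        /* libxml2 returns NULL on allocation failure */
    if (buf == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_OUT_OF_MEMORY),
                 errmsg("could not allocate xmlBuffer")));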
] |
[
{
"msg_contents": "Hi\nThis is Shibagaki.\n\nWhen FIPS mode is enabled, some encryption algorithms cannot be used.\nSince PostgreSQL15, pgcrypto requires OpenSSL[1], digest() and other functions\nalso follow this policy.\n\nHowever, crypt() and gen_salt() do not use OpenSSL as mentioned in [2].\nTherefore, if we run crypt() and gen_salt() on a machine with FIPS mode enabled,\nthey are not affected by FIPS mode. This means we can use encryption algorithms \ndisallowed in FIPS.\n\nI would like to change the proprietary implementations of crypt() and gen_salt()\nto use OpenSSL API.\nIf it's not a problem, I am going to create a patch, but if you have a better \napproach, please let me know.\n\nThank you\n\n\n[1] https://github.com/postgres/postgres/commit/db7d1a7b0530e8cbd045744e1c75b0e63fb6916f\n[2] https://peter.eisentraut.org/blog/2023/12/05/postgresql-and-fips-mode\n\ncrypt() and gen_salt() are performed on in example below.\n\n/////\n\n-- OS RHEL8.6\n\n$openssl version\nOpenSSL 1.1.1k FIPS 25 Mar 2021\n\n$fips-mode-setup --check\nFIPS mode is enabled.\n\n$./pgsql17/bin/psql\npsql (17devel)\nType \"help\" for help.\n\npostgres=# SHOW server_version;\n server_version \n----------------\n 17devel\n(1 row)\n\npostgres=# SELECT digest('data','md5');\nERROR: Cannot use \"md5\": Cipher cannot be initialized\n\npostgres=# SELECT crypt('new password',gen_salt('md5')); -- md5 is not available when fips mode is turned on. This is a normal behavior\nERROR: crypt(3) returned NULL\n\npostgres=# SELECT crypt('new password',gen_salt('des')); -- however, des is avalable. This may break a FIPS rule\n crypt \n---------------\n 32REGk7H6dSnE\n(1 row)\n\n/////\n\nFYI - OpenSSL itself cannot use DES algorithm while encrypting files. This is an expected behavior.\n\n-----------------------------------------------\nFujitsu Limited\nShibagaki Koshi\[email protected]\n\n\n\n\n",
"msg_date": "Thu, 15 Feb 2024 12:42:26 +0000",
"msg_from": "\"Koshi Shibagaki (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replace current implementations in crypt() and gen_salt() to OpenSSL"
},
{
"msg_contents": "On 15.02.24 13:42, Koshi Shibagaki (Fujitsu) wrote:\n> However, crypt() and gen_salt() do not use OpenSSL as mentioned in [2].\n> Therefore, if we run crypt() and gen_salt() on a machine with FIPS mode enabled,\n> they are not affected by FIPS mode. This means we can use encryption algorithms\n> disallowed in FIPS.\n> \n> I would like to change the proprietary implementations of crypt() and gen_salt()\n> to use OpenSSL API.\n> If it's not a problem, I am going to create a patch, but if you have a better\n> approach, please let me know.\n\nThe problems are:\n\n1. All the block ciphers currently supported by crypt() and gen_salt() \nare not FIPS-compliant.\n\n2. The crypt() and gen_salt() methods built on top of them (modes of \noperation, kind of) are not FIPS-compliant.\n\n3. The implementations (crypt-blowfish.c, crypt-des.c, etc.) are not \nstructured in a way that OpenSSL calls can easily be patched in.\n\nSo if you want FIPS-compliant cryptography, these interfaces look like a \ndead end. I don't know if there are any modern equivalents of these \nfunctions that we should be supplying instead.\n\n\n\n",
"msg_date": "Thu, 15 Feb 2024 16:49:45 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "> On 15 Feb 2024, at 16:49, Peter Eisentraut <[email protected]> wrote:\n\n> 1. All the block ciphers currently supported by crypt() and gen_salt() are not FIPS-compliant.\n> \n> 2. The crypt() and gen_salt() methods built on top of them (modes of operation, kind of) are not FIPS-compliant.\n\nI wonder if it's worth trying to make pgcrypto disallow non-FIPS compliant\nciphers when the compiled against OpenSSL is running with FIPS mode enabled, or\nraise a WARNING when used? It seems rather unlikely that someone running\nOpenSSL with FIPS=yes want to use our DES cipher without there being an error\nor misconfiguration somewhere.\n\nSomething like the below untested pseudocode.\n\ndiff --git a/contrib/pgcrypto/pgcrypto.c b/contrib/pgcrypto/pgcrypto.c\nindex 96447c5757..3d4391ebe1 100644\n--- a/contrib/pgcrypto/pgcrypto.c\n+++ b/contrib/pgcrypto/pgcrypto.c\n@@ -187,6 +187,14 @@ pg_crypt(PG_FUNCTION_ARGS)\n \t\t\t *resbuf;\n \ttext\t *res;\n \n+#if defined FIPS_mode\n+\tif (FIPS_mode())\n+#else\n+\tif (EVP_default_properties_is_fips_enabled(OSSL_LIB_CTX_get0_global_default()))\n+#endif\n+\t\tereport(ERROR,\n+\t\t\t\t(errmsg(\"not available when using OpenSSL in FIPS mode\")));\n+\n \tbuf0 = text_to_cstring(arg0);\n \tbuf1 = text_to_cstring(arg1);\n\nGreenplum implemented similar functionality but with a GUC, fips_mode=<bool>.\nThe problem with that is that it gives the illusion that enabling such a GUC\ngives any guarantees about FIPS which isn't really the case since postgres\nisn't FIPS certified.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 16 Feb 2024 10:16:37 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "Dear Peter\r\n\r\nThanks for the replying\r\n\r\n> 1. All the block ciphers currently supported by crypt() and gen_salt() are not\r\n> FIPS-compliant.\r\n>\r\n> 2. The crypt() and gen_salt() methods built on top of them (modes of operation,\r\n> kind of) are not FIPS-compliant.\r\n> \r\n> 3. The implementations (crypt-blowfish.c, crypt-des.c, etc.) are not structured\r\n> in a way that OpenSSL calls can easily be patched in.\r\n\r\nIndeed, all the algorithm could not be used in FIPS and huge engineering might \r\nbe needed for the replacement. If the benefit is smaller than the cost, we \r\nshould consider another way - e.g., prohibit to call these functions in FIPS \r\nmode as in the pseudocode Daniel sent. Replacing OpenSSL is a way, the objective\r\nis to eliminate the user's error in choosing an encryption algorithm.\r\n\r\n\r\n-----------------------------------------------\r\nFujitsu Limited\r\nShibagaki Koshi\r\[email protected]\r\n\r\n\r\n\r\n",
"msg_date": "Fri, 16 Feb 2024 11:32:44 +0000",
"msg_from": "\"Koshi Shibagaki (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "Dear Daniel\n\nThanks for your reply.\n\n> I wonder if it's worth trying to make pgcrypto disallow non-FIPS compliant\n> ciphers when the compiled against OpenSSL is running with FIPS mode\n> enabled, or raise a WARNING when used? It seems rather unlikely that\n> someone running OpenSSL with FIPS=yes want to use our DES cipher without\n> there being an error or misconfiguration somewhere.\n\nIndeed, users do not use non-FIPS compliant ciphers in crypt() and gen_salt() \nsuch as DES with FIPS mode enabled.\nHowever, can we reduce human error by having these functions make the judgment \nas to whether ciphers can or cannot be used?\n\nIf pgcrypto checks if FIPS enabled or not as in the pseudocode, it is easier to \nachieve than replacing to OpenSSL.\nCurrently, OpenSSL internally determines if it is in FIPS mode or not, but would\nit be a problem to have PostgreSQL take on that role?\n\n-----------------------------------------------\nFujitsu Limited\nShibagaki Koshi\[email protected]\n\n\n\n\n",
"msg_date": "Fri, 16 Feb 2024 11:35:41 +0000",
"msg_from": "\"Koshi Shibagaki (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "On 2/16/24 04:16, Daniel Gustafsson wrote:\n>> On 15 Feb 2024, at 16:49, Peter Eisentraut <[email protected]> wrote:\n> \n>> 1. All the block ciphers currently supported by crypt() and gen_salt() are not FIPS-compliant.\n>> \n>> 2. The crypt() and gen_salt() methods built on top of them (modes of operation, kind of) are not FIPS-compliant.\n> \n> I wonder if it's worth trying to make pgcrypto disallow non-FIPS compliant\n> ciphers when the compiled against OpenSSL is running with FIPS mode enabled, or\n> raise a WARNING when used? It seems rather unlikely that someone running\n> OpenSSL with FIPS=yes want to use our DES cipher without there being an error\n> or misconfiguration somewhere.\n> \n> Something like the below untested pseudocode.\n> \n> diff --git a/contrib/pgcrypto/pgcrypto.c b/contrib/pgcrypto/pgcrypto.c\n> index 96447c5757..3d4391ebe1 100644\n> --- a/contrib/pgcrypto/pgcrypto.c\n> +++ b/contrib/pgcrypto/pgcrypto.c\n> @@ -187,6 +187,14 @@ pg_crypt(PG_FUNCTION_ARGS)\n> \t\t\t *resbuf;\n> \ttext\t *res;\n> \n> +#if defined FIPS_mode\n> +\tif (FIPS_mode())\n> +#else\n> +\tif (EVP_default_properties_is_fips_enabled(OSSL_LIB_CTX_get0_global_default()))\n> +#endif\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errmsg(\"not available when using OpenSSL in FIPS mode\")));\n\nMakes sense +1\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Fri, 16 Feb 2024 06:56:13 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "On 16.02.24 10:16, Daniel Gustafsson wrote:\n>> 2. The crypt() and gen_salt() methods built on top of them (modes of operation, kind of) are not FIPS-compliant.\n> I wonder if it's worth trying to make pgcrypto disallow non-FIPS compliant\n> ciphers when the compiled against OpenSSL is running with FIPS mode enabled, or\n> raise a WARNING when used? It seems rather unlikely that someone running\n> OpenSSL with FIPS=yes want to use our DES cipher without there being an error\n> or misconfiguration somewhere.\n\nI wonder on what level this kind of check would be done. For example, \nthe password hashing done for SCRAM is not FIPS-compliant either, but \nsurely we don't want to disallow that. Maybe this should be done on the \nlevel of block ciphers. So if someone wanted to add a \"crypt-aes\" \nmodule, that would then continue to work.\n\n\n",
"msg_date": "Fri, 16 Feb 2024 13:57:56 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "> On 16 Feb 2024, at 13:57, Peter Eisentraut <[email protected]> wrote:\n> \n> On 16.02.24 10:16, Daniel Gustafsson wrote:\n>>> 2. The crypt() and gen_salt() methods built on top of them (modes of operation, kind of) are not FIPS-compliant.\n>> I wonder if it's worth trying to make pgcrypto disallow non-FIPS compliant\n>> ciphers when the compiled against OpenSSL is running with FIPS mode enabled, or\n>> raise a WARNING when used? It seems rather unlikely that someone running\n>> OpenSSL with FIPS=yes want to use our DES cipher without there being an error\n>> or misconfiguration somewhere.\n> \n> I wonder on what level this kind of check would be done. For example, the password hashing done for SCRAM is not FIPS-compliant either, but surely we don't want to disallow that.\n\nCan you elaborate? When building with OpenSSL all SCRAM hashing will use the\nOpenSSL implementation of pg_hmac and pg_cryptohash, so it would be subject to\nOpenSSL FIPS configuration no?\n\n> Maybe this should be done on the level of block ciphers. So if someone wanted to add a \"crypt-aes\" module, that would then continue to work.\n\nThat's a fair point, we can check individual ciphers. I'll hack up a version\ndoing this.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 16 Feb 2024 14:30:38 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
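As a rough illustration of the per-cipher check idea discussed above (a sketch only, with an invented helper name, not code from this thread): the legacy algorithms could be refused up front when the linked OpenSSL is running in FIPS mode, using FIPS_mode() on OpenSSL 1.x and EVP_default_properties_is_fips_enabled() on 3.x.

    #include <openssl/evp.h>     /* EVP_default_properties_is_fips_enabled (3.x) */
    #include <openssl/crypto.h>  /* FIPS_mode (1.x) */

    /* Hypothetical helper, sketched for discussion only; assumes the usual
     * pgcrypto/backend headers (postgres.h etc.) are already included. */
    static void
    check_fips_allows_cipher(const char *cipher_name)
    {
    #if OPENSSL_VERSION_NUMBER >= 0x30000000L
        if (EVP_default_properties_is_fips_enabled(NULL))
    #else
        if (FIPS_mode())
    #endif
            ereport(ERROR,
                    (errmsg("cipher \"%s\" is not allowed when OpenSSL is in FIPS mode",
                            cipher_name)));
    }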
{
"msg_contents": "On 16.02.24 14:30, Daniel Gustafsson wrote:\n>> On 16 Feb 2024, at 13:57, Peter Eisentraut <[email protected]> wrote:\n>>\n>> On 16.02.24 10:16, Daniel Gustafsson wrote:\n>>>> 2. The crypt() and gen_salt() methods built on top of them (modes of operation, kind of) are not FIPS-compliant.\n>>> I wonder if it's worth trying to make pgcrypto disallow non-FIPS compliant\n>>> ciphers when the compiled against OpenSSL is running with FIPS mode enabled, or\n>>> raise a WARNING when used? It seems rather unlikely that someone running\n>>> OpenSSL with FIPS=yes want to use our DES cipher without there being an error\n>>> or misconfiguration somewhere.\n>>\n>> I wonder on what level this kind of check would be done. For example, the password hashing done for SCRAM is not FIPS-compliant either, but surely we don't want to disallow that.\n> \n> Can you elaborate? When building with OpenSSL all SCRAM hashing will use the\n> OpenSSL implementation of pg_hmac and pg_cryptohash, so it would be subject to\n> OpenSSL FIPS configuration no?\n\nYes, but the overall methods of composing all this into secrets and \nprotocol messages etc. are not covered by FIPS.\n\n>> Maybe this should be done on the level of block ciphers. So if someone wanted to add a \"crypt-aes\" module, that would then continue to work.\n> \n> That's a fair point, we can check individual ciphers. I'll hack up a version\n> doing this.\n\nLike, if we did a \"crypt-aes\", would that be FIPS-compliant? I don't know.\n\n\n\n",
"msg_date": "Fri, 16 Feb 2024 15:49:01 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "> On 16 Feb 2024, at 15:49, Peter Eisentraut <[email protected]> wrote:\n\n> Like, if we did a \"crypt-aes\", would that be FIPS-compliant? I don't know.\n\nIf I remember my FIPS correct: Only if it used a FIPS certified implementation,\nlike the one in OpenSSL when the fips provider has been loaded. The cipher\nmust be allowed *and* the implementation must be certified.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 16 Feb 2024 16:09:59 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "Let me confirm the discussion in threads. I think there are two topics.\n1. prohibit the use of ciphers disallowed in FIPS mode at the level of block \ncipher (crypt-bf, etc...) in crypt() and gen_salt()\n2. adding new \"crypt-aes\" module.\n\nIf this is correct, I would like to make a patch for the first topic, as I think\nI can handle it. \nDaniel, please let me know if you have been making a patch based on the idea.\n\n\nAlso, I think the second one should be discussed in a separate thread, so could \nyou split it into a separate thread?\n\nThank you\n\n-----------------------------------------------\nFujitsu Limited\nShibagaki Koshi\[email protected]\n\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 09:56:27 +0000",
"msg_from": "\"Koshi Shibagaki (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "> On 20 Feb 2024, at 10:56, Koshi Shibagaki (Fujitsu) <[email protected]> wrote:\n\n> Let me confirm the discussion in threads. I think there are two topics.\n> 1. prohibit the use of ciphers disallowed in FIPS mode at the level of block \n> cipher (crypt-bf, etc...) in crypt() and gen_salt()\n\nThat level might be overkill given that any cipher not in the FIPS certfied\nmodule mustn't be used, but it's also not the wrong place to put it IMHO.\n\n> 2. adding new \"crypt-aes\" module.\n\nI think this was a hypothetical scenario and not a concrete proposal.\n\n> If this is correct, I would like to make a patch for the first topic, as I think\n> I can handle it. \n> Daniel, please let me know if you have been making a patch based on the idea.\n\nI haven't yet started on that so feel free to take a stab at it, I'd be happy\nto review it. Note that there are different API's for doing this in OpenSSL\n1.0.2 and OpenSSL 3.x, so a solution must take both into consideration.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 11:09:46 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "On 20.02.24 11:09, Daniel Gustafsson wrote:\n>> On 20 Feb 2024, at 10:56, Koshi Shibagaki (Fujitsu) <[email protected]> wrote:\n> \n>> Let me confirm the discussion in threads. I think there are two topics.\n>> 1. prohibit the use of ciphers disallowed in FIPS mode at the level of block\n>> cipher (crypt-bf, etc...) in crypt() and gen_salt()\n> \n> That level might be overkill given that any cipher not in the FIPS certfied\n> module mustn't be used, but it's also not the wrong place to put it IMHO.\n\nI think we are going about this the wrong way. It doesn't make sense to \nask OpenSSL what a piece of code that doesn't use OpenSSL should do. \n(And would that even give a sensible answer? Like, you can configure \nOpenSSL to load the fips module, but you can also load the legacy module \nalongside it(??).) And as you say, even if this code supported modern \nblock ciphers, it wouldn't be FIPS compliant.\n\nI think there are several less weird ways to address this:\n\n* Just document it.\n\n* Make a pgcrypto-level GUC setting.\n\n* Split out these functions into a separate extension.\n\n* Deprecate these functions.\n\nOr some combination of these.\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 12:18:57 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 4:49 PM Peter Eisentraut <[email protected]> wrote:\n> I think there are several less weird ways to address this:\n>\n> * Just document it.\n>\n> * Make a pgcrypto-level GUC setting.\n>\n> * Split out these functions into a separate extension.\n>\n> * Deprecate these functions.\n>\n> Or some combination of these.\n\nI don't think the first two of these proposals help anything. AIUI,\nFIPS mode is supposed to be a system wide toggle that affects\neverything on the machine. The third one might help if you can be\ncompliant by just choosing not to install that extension, and the\nfourth one solves the problem by sledgehammer.\n\nDoes Linux provide some way of asking whether \"fips=1\" was specified\nat kernel boot time?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 16:57:02 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "> On 20 Feb 2024, at 12:27, Robert Haas <[email protected]> wrote:\n> \n> On Tue, Feb 20, 2024 at 4:49 PM Peter Eisentraut <[email protected]> wrote:\n>> I think there are several less weird ways to address this:\n>> \n>> * Just document it.\n>> \n>> * Make a pgcrypto-level GUC setting.\n>> \n>> * Split out these functions into a separate extension.\n>> \n>> * Deprecate these functions.\n>> \n>> Or some combination of these.\n> \n> I don't think the first two of these proposals help anything. AIUI,\n> FIPS mode is supposed to be a system wide toggle that affects\n> everything on the machine. The third one might help if you can be\n> compliant by just choosing not to install that extension, and the\n> fourth one solves the problem by sledgehammer.\n\nA fifth option is to throw away our in-tree implementations and use the OpenSSL\nAPI's for everything, which is where this thread started. If the effort to\npayoff ratio is palatable to anyone then patches are for sure welcome.\n\n> Does Linux provide some way of asking whether \"fips=1\" was specified\n> at kernel boot time?\n\nThere is a crypto.fips_enabled sysctl but I have no idea how portable that is\nacross distributions etc.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 12:39:37 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "> On 20 Feb 2024, at 12:18, Peter Eisentraut <[email protected]> wrote:\n\n> I think we are going about this the wrong way. It doesn't make sense to ask OpenSSL what a piece of code that doesn't use OpenSSL should do.\n\nGiven that pgcrypto cannot be built without OpenSSL, and ideally we should be\nusing the OpenSSL implementations for everything, I don't think it's too far\nfetched.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 12:51:24 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 5:09 PM Daniel Gustafsson <[email protected]> wrote:\n> A fifth option is to throw away our in-tree implementations and use the OpenSSL\n> API's for everything, which is where this thread started. If the effort to\n> payoff ratio is palatable to anyone then patches are for sure welcome.\n\nThat generally seems fine, although I'm fuzzy on what our policy\nactually is. We have fallback implementations for some things and not\nothers, IIRC.\n\n> > Does Linux provide some way of asking whether \"fips=1\" was specified\n> > at kernel boot time?\n>\n> There is a crypto.fips_enabled sysctl but I have no idea how portable that is\n> across distributions etc.\n\nMy guess would be that it's pretty portable, but my guesses about\nLinux might not be very good. Still, if we wanted to go this route, it\nprobably wouldn't be too hard to figure out how portable this is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 17:54:49 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "On 20.02.24 12:27, Robert Haas wrote:\n> I don't think the first two of these proposals help anything. AIUI,\n> FIPS mode is supposed to be a system wide toggle that affects\n> everything on the machine. The third one might help if you can be\n> compliant by just choosing not to install that extension, and the\n> fourth one solves the problem by sledgehammer.\n> \n> Does Linux provide some way of asking whether \"fips=1\" was specified\n> at kernel boot time?\n\nWhat you are describing only happens on Red Hat systems, I think. They \nhave built additional integration around this, which is great. But \nthat's not something you can rely on being the case on all systems, not \neven all Linux systems.\n\n\n",
"msg_date": "Tue, 20 Feb 2024 13:34:04 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "> On 20 Feb 2024, at 13:24, Robert Haas <[email protected]> wrote:\n> \n> On Tue, Feb 20, 2024 at 5:09 PM Daniel Gustafsson <[email protected]> wrote:\n>> A fifth option is to throw away our in-tree implementations and use the OpenSSL\n>> API's for everything, which is where this thread started. If the effort to\n>> payoff ratio is palatable to anyone then patches are for sure welcome.\n> \n> That generally seems fine, although I'm fuzzy on what our policy\n> actually is. We have fallback implementations for some things and not\n> others, IIRC.\n\nI'm not sure there is a well-formed policy, but IIRC the idea with cryptohash\nwas to provide in-core functionality iff OpenSSL isn't used, and only use the\nOpenSSL implementations if it is. Since pgcrypto cannot be built without\nOpenSSL (since db7d1a7b0530e8cbd045744e1c75b0e63fb6916f) I don't think it's a\nproblem to continue the work from that commit and replace more with OpenSSL\nimplementations.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 13:35:02 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "On 20.02.24 12:39, Daniel Gustafsson wrote:\n> A fifth option is to throw away our in-tree implementations and use the OpenSSL\n> API's for everything, which is where this thread started. If the effort to\n> payoff ratio is palatable to anyone then patches are for sure welcome.\n\nThe problem is that, as I understand it, these crypt routines are not \ndesigned in a way that you can just plug in a crypto library underneath. \n Effectively, the definition of what, say, blowfish crypt does, is \nwhatever is in that source file, and transitively, whatever OpenBSD \ndoes. (Fun question: Does OpenBSD care about FIPS?) Of course, you \ncould reimplement the same algorithms independently, using OpenSSL or \nwhatever. But I don't think this will really improve the state of the \nworld in aggregate, because to a large degree we are relying on the \nupstream to keep these implementations maintained, and if we rewrite \nthem, we become the upstream.\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 13:40:27 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
},
{
"msg_contents": "> On 20 Feb 2024, at 13:40, Peter Eisentraut <[email protected]> wrote:\n> \n> On 20.02.24 12:39, Daniel Gustafsson wrote:\n>> A fifth option is to throw away our in-tree implementations and use the OpenSSL\n>> API's for everything, which is where this thread started. If the effort to\n>> payoff ratio is palatable to anyone then patches are for sure welcome.\n> \n> The problem is that, as I understand it, these crypt routines are not designed in a way that you can just plug in a crypto library underneath. Effectively, the definition of what, say, blowfish crypt does, is whatever is in that source file, and transitively, whatever OpenBSD does. \n\nI don't disagree, but if the OP is willing to take a stab at it then..\n\n> (Fun question: Does OpenBSD care about FIPS?)\n\nNo, LibreSSL ripped out FIPS support early on.\n\n> Of course, you could reimplement the same algorithms independently, using OpenSSL or whatever. But I don't think this will really improve the state of the world in aggregate, because to a large degree we are relying on the upstream to keep these implementations maintained, and if we rewrite them, we become the upstream.\n\nAs a sidenote, we are already trailing behind upstream on this, the patch in\n[0] sits on my TODO, but given the lack of complaints over the years it's not\nbeen bumped to the top.\n\n--\nDaniel Gustafsson\n\n[0] https://www.postgresql.org/message-id/flat/CAA-7PziyARoKi_9e2xdC75RJ068XPVk1CHDDdscu2BGrPuW9TQ%40mail.gmail.com#b20783dd6c72e95a8a0f6464d1228ed5\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 15:52:36 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace current implementations in crypt() and gen_salt() to\n OpenSSL"
}
] |
[
{
"msg_contents": "Hi,\n\nI remember Magnus making a comment many years ago to the effect that\nevery setting that is PGC_POSTMASTER is a bug, but some of those bugs\nare very difficult to fix. Perhaps the use of the word bug is\narguable, but I think the sentiment is apt, especially with regard to\nshared_buffers. Changing without a server restart would be really\nnice, but it's hard to figure out how to do it. I can think of a few\nbasic approaches, and I'd like to know (a) which ones people think are\ngood and which ones people think suck (maybe they all suck) and (b) if\nanybody's got any other ideas not mentioned here.\n\n1. Complicate the Buffer->pointer mapping. Right now, BufferGetBlock()\nis basically just BufferBlocks + (buffer - 1) * BLCKSZ, which means\nthat we're expecting to find all of the buffers in a single giant\narray. Years ago, somebody proposed changing the implementation to\nessentially WhereIsTheBuffer[buffer], which was heavily criticized on\nperformance grounds, because it requires an extra memory access. A\ngentler version of this might be something like\nWhereIsTheChunkOfBuffers[buffer/CHUNK_SIZE]+(buffer%CHUNK_SIZE)*BLCKSZ;\ni.e. instead of allowing every single buffer to be at some random\naddress, manage chunks of the buffer pool. This makes the lookup array\npotentially quite a lot smaller, which might mitigate performance\nconcerns. For example, if you had one chunk per GB of shared_buffers,\nyour mapping array would need only a handful of cache lines, or a few\nhandfuls on really big systems.\n\n(I am here ignoring the difficulties of how to orchestrate addition of\nor removal of chunks as a SMOP[1]. Feel free to criticize that\nhand-waving, but as of this writing, I feel like moderate\ndetermination would suffice.)\n\n2. Make a Buffer just a disguised pointer. Imagine something like\ntypedef struct { Page bp; } *buffer. WIth this approach,\nBufferGetBlock() becomes trivial. The tricky part with this approach\nis that you still need a cheap way of finding the buffer header. What\nI imagine might work here is to again have some kind of chunked\nrepresentation of shared_buffers, where each chunk contains a bunch of\nbuffer headers at, say, the beginning, followed by a bunch of buffers.\nTheoretically, if the chunks are sufficiently strong-aligned, you can\nfigure out what offset you're at within the chunk without any\nadditional information and the whole process of locating the buffer\nheader is just math, with no memory access. But in practice, getting\nthe chunks to be sufficiently strongly aligned sounds hard, and this\nalso makes a Buffer 64 bits rather than the current 32. A variant on\nthis concept might be to make the Buffer even wider and include two\npointers in it i.e. typedef struct { Page bp; BufferDesc *bd; }\nBuffer.\n\n3. Reserve lots of address space and then only use some of it. I hear\nrumors that some forks of PG have implemented something like this. The\nidea is that you convince the OS to give you a whole bunch of address\nspace, but you try to avoid having all of it be backed by physical\nmemory. If you later want to increase shared_buffers, you then get the\nOS to back more of it by physical memory, and if you later want to\ndecrease shared_buffers, you hopefully have some way of giving the OS\nthe memory back. As compared with the previous two approaches, this\nseems less likely to be noticeable to most PG code. 
Problems include\n(1) you have to somehow figure out how much address space to reserve,\nand that forms an upper bound on how big shared_buffers can grow at\nruntime and (2) you have to figure out ways to reserve address space\nand back more or less of it with physical memory that will work on all\nof the platforms that we currently support or might want to support in\nthe future.\n\n4. Give up on actually changing the size of shared_buffer per se, but\nstick some kind of resizable secondary cache in front of it. Data that\nis going to be manipulated gets brought into a (perhaps small?) \"real\"\nshared_buffers that behaves just like today, but you have some larger\ndata structure which is designed to be easier to resize and maybe\nsimpler in some other ways that sits between shared_buffers and the OS\ncache. This doesn't seem super-appealing because it requires a lot of\ndata copying, but maybe it's worth considering as a last resort.\n\nThoughts?\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n[1] https://en.wikipedia.org/wiki/Small_matter_of_programming\n\n\n",
"msg_date": "Fri, 16 Feb 2024 09:58:43 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "PGC_SIGHUP shared_buffers?"
},
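To make the lookup arithmetic in approach 1 concrete, a sketch of what a chunked BufferGetBlock() could look like; the array and constant names are invented for illustration, and the hard part (growing, shrinking and locking the chunk array) is deliberately not shown:

    /* Hypothetical chunked lookup; one chunk per GB of buffers, for example */
    #define BUFFERS_PER_CHUNK   ((1024 * 1024 * 1024) / BLCKSZ)

    static char **ChunkBase;        /* one base address per mapped chunk */

    static inline Block
    ChunkedBufferGetBlock(Buffer buffer)
    {
        int     chunk = (buffer - 1) / BUFFERS_PER_CHUNK;
        int     offset = (buffer - 1) % BUFFERS_PER_CHUNK;

        return (Block) (ChunkBase[chunk] + (Size) offset * BLCKSZ);
    }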
{
"msg_contents": "Hi,\n\nOn 2024-02-16 09:58:43 +0530, Robert Haas wrote:\n> I remember Magnus making a comment many years ago to the effect that\n> every setting that is PGC_POSTMASTER is a bug, but some of those bugs\n> are very difficult to fix. Perhaps the use of the word bug is\n> arguable, but I think the sentiment is apt, especially with regard to\n> shared_buffers. Changing without a server restart would be really\n> nice, but it's hard to figure out how to do it. I can think of a few\n> basic approaches, and I'd like to know (a) which ones people think are\n> good and which ones people think suck (maybe they all suck) and (b) if\n> anybody's got any other ideas not mentioned here.\n\nIMO the ability to *shrink* shared_buffers dynamically and cheaply is more\nimportant than growing it in a way, except that they are related of\ncourse. Idling hardware is expensive, thus overcommitting hardware is very\nattractive (I count \"serverless\" as part of that). To be able to overcommit\neffectively, unused long-lived memory has to be released. I.e. shared buffers\nneeds to be shrinkable.\n\n\n\nPerhaps worth noting that there are two things limiting the size of shared\nbuffers: 1) the available buffer space 2) the available buffer *mapping*\nspace. I think making the buffer mapping resizable is considerably harder than\nthe buffers themselves. Of course pre-reserving memory for a buffer mapping\nsuitable for a huge shared_buffers is more feasible than pre-allocating all\nthat memory for the buffers themselves. But it' still mean youd have a maximum\nset at server start.\n\n\n> 1. Complicate the Buffer->pointer mapping. Right now, BufferGetBlock()\n> is basically just BufferBlocks + (buffer - 1) * BLCKSZ, which means\n> that we're expecting to find all of the buffers in a single giant\n> array. Years ago, somebody proposed changing the implementation to\n> essentially WhereIsTheBuffer[buffer], which was heavily criticized on\n> performance grounds, because it requires an extra memory access. A\n> gentler version of this might be something like\n> WhereIsTheChunkOfBuffers[buffer/CHUNK_SIZE]+(buffer%CHUNK_SIZE)*BLCKSZ;\n> i.e. instead of allowing every single buffer to be at some random\n> address, manage chunks of the buffer pool. This makes the lookup array\n> potentially quite a lot smaller, which might mitigate performance\n> concerns. For example, if you had one chunk per GB of shared_buffers,\n> your mapping array would need only a handful of cache lines, or a few\n> handfuls on really big systems.\n\nSuch a scheme still leaves you with a dependend memory read for a quite\nfrequent operation. It could turn out to nto matter hugely if the mapping\narray is cache resident, but I don't know if we can realistically bank on\nthat.\n\nI'm also somewhat concerned about the coarse granularity being problematic. It\nseems like it'd lead to a desire to make the granule small, causing slowness.\n\n\nOne big advantage of a scheme like this is that it'd be a step towards a NUMA\naware buffer mapping and replacement. Practically everything beyond the size\nof a small consumer device these days has NUMA characteristics, even if not\n\"officially visible\". We could make clock sweeps (or a better victim buffer\nselection algorithm) happen within each \"chunk\", with some additional\ninfrastructure to choose which of the chunks to search a buffer in. Using a\nchunk on the current numa node, except when there is a lot of imbalance\nbetween buffer usage or replacement rate between chunks.\n\n\n\n> 2. 
Make a Buffer just a disguised pointer. Imagine something like\n> typedef struct { Page bp; } *buffer. WIth this approach,\n> BufferGetBlock() becomes trivial.\n\nYou also additionally need something that allows for efficient iteration over\nall shared buffers. Making buffer replacement and checkpointing more expensive\nisn't great.\n\n\n> 3. Reserve lots of address space and then only use some of it. I hear\n> rumors that some forks of PG have implemented something like this. The\n> idea is that you convince the OS to give you a whole bunch of address\n> space, but you try to avoid having all of it be backed by physical\n> memory. If you later want to increase shared_buffers, you then get the\n> OS to back more of it by physical memory, and if you later want to\n> decrease shared_buffers, you hopefully have some way of giving the OS\n> the memory back. As compared with the previous two approaches, this\n> seems less likely to be noticeable to most PG code.\n\nAnother advantage is that you can shrink shared buffers fairly granularly and\ncheaply with that approach, compared to having to move buffes entirely out of\na larger mapping to be able to unmap it.\n\n\n> Problems include (1) you have to somehow figure out how much address space\n> to reserve, and that forms an upper bound on how big shared_buffers can grow\n> at runtime and\n\nPresumably you'd normally not want to reserve more than the physical amount of\nmemory on the system. Sure, memory can be hot added, but IME that's quite\nrare.\n\n\n> (2) you have to figure out ways to reserve address space and\n> back more or less of it with physical memory that will work on all of the\n> platforms that we currently support or might want to support in the future.\n\nWe also could decide to only implement 2) on platforms with suitable APIs.\n\n\nA third issue is that it can confuse administrators inspecting the system with\nOS tools. \"Postgres uses many terabytes of memory on my system!\" due to VIRT\nbeing huge etc.\n\n\n> 4. Give up on actually changing the size of shared_buffer per se, but\n> stick some kind of resizable secondary cache in front of it. Data that\n> is going to be manipulated gets brought into a (perhaps small?) \"real\"\n> shared_buffers that behaves just like today, but you have some larger\n> data structure which is designed to be easier to resize and maybe\n> simpler in some other ways that sits between shared_buffers and the OS\n> cache. This doesn't seem super-appealing because it requires a lot of\n> data copying, but maybe it's worth considering as a last resort.\n\nYea, that seems quite unappealing. Needing buffer replacement to be able to\npin a buffer would be ... unattractive.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 16 Feb 2024 11:08:51 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "On 16/02/2024 06:28, Robert Haas wrote:\n> 3. Reserve lots of address space and then only use some of it. I hear\n> rumors that some forks of PG have implemented something like this. The\n> idea is that you convince the OS to give you a whole bunch of address\n> space, but you try to avoid having all of it be backed by physical\n> memory. If you later want to increase shared_buffers, you then get the\n> OS to back more of it by physical memory, and if you later want to\n> decrease shared_buffers, you hopefully have some way of giving the OS\n> the memory back. As compared with the previous two approaches, this\n> seems less likely to be noticeable to most PG code. Problems include\n> (1) you have to somehow figure out how much address space to reserve,\n> and that forms an upper bound on how big shared_buffers can grow at\n> runtime and (2) you have to figure out ways to reserve address space\n> and back more or less of it with physical memory that will work on all\n> of the platforms that we currently support or might want to support in\n> the future.\n\nA variant of this approach:\n\n5. Re-map the shared_buffers when needed.\n\nBetween transactions, a backend should not hold any buffer pins. When \nthere are no pins, you can munmap() the shared_buffers and mmap() it at \na different address.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 16 Feb 2024 22:24:21 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
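As a companion to approach 3, which this remapping variant builds on, a bare-bones POSIX sketch of reserving address space and backing part of it with shared memory follows; the names and sizes are invented, error handling is omitted, and whether these calls behave this way on every supported platform is exactly the open question in the thread:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *
    reserve_and_back_bufpool(size_t reserve_size, size_t active_size)
    {
        /* 1. Reserve a large, inaccessible address range once, at postmaster start. */
        void   *base = mmap(NULL, reserve_size, PROT_NONE,
                            MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

        /* 2. Create and size a POSIX shared memory object for the buffer pool. */
        int     fd = shm_open("/pg_bufpool_sketch", O_CREAT | O_RDWR, 0600);

        ftruncate(fd, (off_t) active_size);

        /* 3. Back the first active_size bytes of the reservation with it; growing
         *    later means ftruncate() plus another MAP_FIXED mmap() of the new tail,
         *    and shrinking means mapping the tail back to PROT_NONE. */
        mmap(base, active_size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, 0);

        return base;
    }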
{
"msg_contents": "On Fri, Feb 16, 2024 at 5:29 PM Robert Haas <[email protected]> wrote:\n> 3. Reserve lots of address space and then only use some of it. I hear\n> rumors that some forks of PG have implemented something like this. The\n> idea is that you convince the OS to give you a whole bunch of address\n> space, but you try to avoid having all of it be backed by physical\n> memory. If you later want to increase shared_buffers, you then get the\n> OS to back more of it by physical memory, and if you later want to\n> decrease shared_buffers, you hopefully have some way of giving the OS\n> the memory back. As compared with the previous two approaches, this\n> seems less likely to be noticeable to most PG code. Problems include\n> (1) you have to somehow figure out how much address space to reserve,\n> and that forms an upper bound on how big shared_buffers can grow at\n> runtime and (2) you have to figure out ways to reserve address space\n> and back more or less of it with physical memory that will work on all\n> of the platforms that we currently support or might want to support in\n> the future.\n\nFTR I'm aware of a working experimental prototype along these lines,\nthat will be presented in Vancouver:\n\nhttps://www.pgevents.ca/events/pgconfdev2024/sessions/session/31-enhancing-postgresql-plasticity-new-frontiers-in-memory-management/\n\n\n",
"msg_date": "Sat, 17 Feb 2024 09:37:46 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "On Fri, 16 Feb 2024 at 21:24, Heikki Linnakangas <[email protected]> wrote:\n>\n> On 16/02/2024 06:28, Robert Haas wrote:\n> > 3. Reserve lots of address space and then only use some of it. I hear\n> > rumors that some forks of PG have implemented something like this. The\n> > idea is that you convince the OS to give you a whole bunch of address\n> > space, but you try to avoid having all of it be backed by physical\n> > memory. If you later want to increase shared_buffers, you then get the\n> > OS to back more of it by physical memory, and if you later want to\n> > decrease shared_buffers, you hopefully have some way of giving the OS\n> > the memory back. As compared with the previous two approaches, this\n> > seems less likely to be noticeable to most PG code. Problems include\n> > (1) you have to somehow figure out how much address space to reserve,\n> > and that forms an upper bound on how big shared_buffers can grow at\n> > runtime and (2) you have to figure out ways to reserve address space\n> > and back more or less of it with physical memory that will work on all\n> > of the platforms that we currently support or might want to support in\n> > the future.\n>\n> A variant of this approach:\n>\n> 5. Re-map the shared_buffers when needed.\n>\n> Between transactions, a backend should not hold any buffer pins. When\n> there are no pins, you can munmap() the shared_buffers and mmap() it at\n> a different address.\n\nThis can quite realistically fail to find an unused memory region of\nsufficient size when the heap is sufficiently fragmented, e.g. through\nASLR, which would make it difficult to use this dynamic\nsingle-allocation shared_buffers in security-hardened environments.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Sat, 17 Feb 2024 23:40:51 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-17 23:40:51 +0100, Matthias van de Meent wrote:\n> > 5. Re-map the shared_buffers when needed.\n> >\n> > Between transactions, a backend should not hold any buffer pins. When\n> > there are no pins, you can munmap() the shared_buffers and mmap() it at\n> > a different address.\n\nI hadn't quite realized that we don't seem to rely on shared_buffers having a\nspecific address across processes. That does seem to make it a more viable to\nremap mappings in backends.\n\n\nHowever, I don't think this works with mmap(MAP_ANONYMOUS) - as long as we are\nusing the process model. To my knowledge there is no way to get the same\nmapping in multiple already existing processes. Even mmap()ing /dev/zero after\nsharing file descriptors across processes doesn't work, if I recall correctly.\n\nWe would have to use sysv/posix shared memory or such (or mmap() if files in\ntmpfs) for the shared buffers allocation.\n\n\n\n> This can quite realistically fail to find an unused memory region of\n> sufficient size when the heap is sufficiently fragmented, e.g. through\n> ASLR, which would make it difficult to use this dynamic\n> single-allocation shared_buffers in security-hardened environments.\n\nI haven't seen anywhere close to this bad fragmentation on 64bit machines so\nfar - have you?\n\nMost implementations of ASLR randomize mmap locations across multiple runs of\nthe same binary, not within the same binary. There are out-of-tree linux\npatches that make mmap() randomize every single allocation, but I am not sure\nthat we ought to care about such things.\n\nEven if we were to care, on 64bit platforms it doesn't seem likely that we'd\nrun out of space that quickly. AMD64 had 48bits of virtual address space from\nthe start, and on recent CPUs that has grown to 57bits [1], that's a lot of\nspace.\n\nAnd if you do run out of VM space, wouldn't that also affect lots of other\nthings, like mmap() for malloc?\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://en.wikipedia.org/wiki/Intel_5-level_paging\n\n\n",
"msg_date": "Sat, 17 Feb 2024 17:03:13 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "On Sat, Feb 17, 2024 at 12:38 AM Andres Freund <[email protected]> wrote:\n> IMO the ability to *shrink* shared_buffers dynamically and cheaply is more\n> important than growing it in a way, except that they are related of\n> course. Idling hardware is expensive, thus overcommitting hardware is very\n> attractive (I count \"serverless\" as part of that). To be able to overcommit\n> effectively, unused long-lived memory has to be released. I.e. shared buffers\n> needs to be shrinkable.\n\nI see your point, but people want to scale up, too. Of course, those\npeople will have to live with what we can practically implement.\n\n> Perhaps worth noting that there are two things limiting the size of shared\n> buffers: 1) the available buffer space 2) the available buffer *mapping*\n> space. I think making the buffer mapping resizable is considerably harder than\n> the buffers themselves. Of course pre-reserving memory for a buffer mapping\n> suitable for a huge shared_buffers is more feasible than pre-allocating all\n> that memory for the buffers themselves. But it' still mean youd have a maximum\n> set at server start.\n\nWe size the fsync queue based on shared_buffers too. That's a lot less\nimportant, though, and could be worked around in other ways.\n\n> Such a scheme still leaves you with a dependend memory read for a quite\n> frequent operation. It could turn out to nto matter hugely if the mapping\n> array is cache resident, but I don't know if we can realistically bank on\n> that.\n\nI don't know, either. I was hoping you did. :-)\n\nBut we can rig up a test pretty easily, I think. We can just create a\nfake mapping that gives the same answers as the current calculation\nand then beat on it. Of course, if testing shows no difference, there\nis the small problem of knowing whether the test scenario was right;\nand it's also possible that an initial impact could be mitigated by\nremoving some gratuitously repeated buffer # -> buffer address\nmappings. Still, I think it could provide us with a useful baseline.\nI'll throw something together when I have time, unless someone beats\nme to it.\n\n> I'm also somewhat concerned about the coarse granularity being problematic. It\n> seems like it'd lead to a desire to make the granule small, causing slowness.\n\nHow many people set shared_buffers to something that's not a whole\nnumber of GB these days? I mean I bet it happens, but in practice if\nyou rounded to the nearest GB, or even the nearest 2GB, I bet almost\nnobody would really care. I think it's fine to be opinionated here and\nhold the line at a relatively large granule, even though in theory\npeople could want something else.\n\nAlternatively, maybe there could be a provision for the last granule\nto be partial, and if you extend further, you throw away the partial\ngranule and replace it with a whole one. But I'm not even sure that's\nworth doing.\n\n> One big advantage of a scheme like this is that it'd be a step towards a NUMA\n> aware buffer mapping and replacement. Practically everything beyond the size\n> of a small consumer device these days has NUMA characteristics, even if not\n> \"officially visible\". We could make clock sweeps (or a better victim buffer\n> selection algorithm) happen within each \"chunk\", with some additional\n> infrastructure to choose which of the chunks to search a buffer in. 
Using a\n> chunk on the current numa node, except when there is a lot of imbalance\n> between buffer usage or replacement rate between chunks.\n\nI also wondered whether this might be a useful step toward allowing\ndifferent-sized buffers in the same buffer pool (ducks, runs away\nquickly). I don't have any particular use for that myself, but it's a\nthing some people probably want for some reason or other.\n\n> > 2. Make a Buffer just a disguised pointer. Imagine something like\n> > typedef struct { Page bp; } *buffer. WIth this approach,\n> > BufferGetBlock() becomes trivial.\n>\n> You also additionally need something that allows for efficient iteration over\n> all shared buffers. Making buffer replacement and checkpointing more expensive\n> isn't great.\n\nTrue, but I don't really see what the problem with this would be in\nthis approach.\n\n> > 3. Reserve lots of address space and then only use some of it. I hear\n> > rumors that some forks of PG have implemented something like this. The\n> > idea is that you convince the OS to give you a whole bunch of address\n> > space, but you try to avoid having all of it be backed by physical\n> > memory. If you later want to increase shared_buffers, you then get the\n> > OS to back more of it by physical memory, and if you later want to\n> > decrease shared_buffers, you hopefully have some way of giving the OS\n> > the memory back. As compared with the previous two approaches, this\n> > seems less likely to be noticeable to most PG code.\n>\n> Another advantage is that you can shrink shared buffers fairly granularly and\n> cheaply with that approach, compared to having to move buffes entirely out of\n> a larger mapping to be able to unmap it.\n\nDon't you have to still move buffers entirely out of the region you\nwant to unmap?\n\n> > Problems include (1) you have to somehow figure out how much address space\n> > to reserve, and that forms an upper bound on how big shared_buffers can grow\n> > at runtime and\n>\n> Presumably you'd normally not want to reserve more than the physical amount of\n> memory on the system. Sure, memory can be hot added, but IME that's quite\n> rare.\n\nI would think that might not be so rare in a virtualized environment,\nwhich would seem to be one of the most important use cases for this\nkind of thing.\n\nPlus, this would mean we'd need to auto-detect system RAM. I'd rather\nnot go there, and just fix the upper limit via a GUC.\n\n> > (2) you have to figure out ways to reserve address space and\n> > back more or less of it with physical memory that will work on all of the\n> > platforms that we currently support or might want to support in the future.\n>\n> We also could decide to only implement 2) on platforms with suitable APIs.\n\nYep, fair.\n\n> A third issue is that it can confuse administrators inspecting the system with\n> OS tools. \"Postgres uses many terabytes of memory on my system!\" due to VIRT\n> being huge etc.\n\nMmph. That's disagreeable but probably not a reason to entirely\nabandon any particular approach.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
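One possible shape for the microbenchmark sketched above, comparing the flat translation (one multiply off a single base pointer) with a hypothetical chunked mapping that adds a dependent load from a chunk table. Everything here (sizes, names, the PROT_NONE reservation used only to produce plausible addresses) is invented for the harness; it times the translation arithmetic alone, not cache behavior under a realistic working set.

```c
/*
 * Hypothetical harness: flat vs. chunked buffer-number -> address
 * translation.  Sizes and names are invented; nothing is dereferenced,
 * the PROT_NONE reservation only provides plausible addresses.
 */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#define BLCKSZ			8192
#define NBUFFERS		(1 << 20)	/* 8GB worth of buffers */
#define CHUNK_BUFFERS	(1 << 17)	/* 1GB granule */
#define NCHUNKS			(NBUFFERS / CHUNK_BUFFERS)

static char *flat_base;				/* stand-in for BufferBlocks */
static char *chunk_base[NCHUNKS];	/* stand-in for per-chunk mappings */

static inline char *
flat_block(uint32_t buf)
{
	return flat_base + (size_t) buf * BLCKSZ;
}

static inline char *
chunked_block(uint32_t buf)
{
	return chunk_base[buf / CHUNK_BUFFERS] +
		(size_t) (buf % CHUNK_BUFFERS) * BLCKSZ;
}

int
main(void)
{
	flat_base = mmap(NULL, (size_t) NBUFFERS * BLCKSZ, PROT_NONE,
					 MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE, -1, 0);
	if (flat_base == MAP_FAILED)
		return 1;
	for (int i = 0; i < NCHUNKS; i++)
		chunk_base[i] = flat_base + (size_t) i * CHUNK_BUFFERS * BLCKSZ;

	uintptr_t	sink = 0;
	clock_t		t0 = clock();

	for (uint32_t i = 0; i < 100 * 1000 * 1000; i++)
		sink += (uintptr_t) flat_block(i & (NBUFFERS - 1));

	clock_t		t1 = clock();

	for (uint32_t i = 0; i < 100 * 1000 * 1000; i++)
		sink += (uintptr_t) chunked_block(i & (NBUFFERS - 1));

	clock_t		t2 = clock();

	printf("flat:    %.2fs\nchunked: %.2fs\n(sink %lx)\n",
		   (double) (t1 - t0) / CLOCKS_PER_SEC,
		   (double) (t2 - t1) / CLOCKS_PER_SEC,
		   (unsigned long) sink);
	return 0;
}
```

Whether such a toy says anything about the real hot paths depends, as noted above, on whether chunk_base stays cache resident under an actual buffer-access pattern.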
"msg_date": "Sun, 18 Feb 2024 17:06:09 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "On Sat, Feb 17, 2024 at 1:54 AM Heikki Linnakangas <[email protected]> wrote:\n> A variant of this approach:\n>\n> 5. Re-map the shared_buffers when needed.\n>\n> Between transactions, a backend should not hold any buffer pins. When\n> there are no pins, you can munmap() the shared_buffers and mmap() it at\n> a different address.\n\nI really like this idea, but I think Andres has latched onto the key\nissue, which is that it supposes that the underlying shared memory\nobject upon which shared_buffers is based can be made bigger and\nsmaller, and that doesn't work for anonymous mappings AFAIK.\n\nMaybe that's not really a problem any more, though. If we don't depend\non the address of shared_buffers anywhere, we could move it into a\nDSM. Now that the stats collector uses DSM, it's surely already a\nrequirement that DSM works on every machine that runs PostgreSQL.\n\nWe'd still need to do something about the buffer mapping table,\nthough, and I bet dshash is not a reasonable answer on performance\ngrounds.\n\nAlso, it would be nice if the granularity of resizing could be\nsomething less than a whole transaction, because transactions can run\nfor a long time. We don't really need to wait for a transaction\nboundary, probably -- a time when we hold zero buffer pins will\nprobably happen a lot sooner, and at least some of those should be\nsafe points at which to remap.\n\nThen again, somebody can open a cursor, read from it until it holds a\npin, and then either idle the connection or make it do arbitrary\namounts of unrelated work, forcing the remapping to be postponed for\nan arbitrarily long time. But some version of this problem will exist\nin any approach to this problem, and long-running pins are a nuisance\nfor other reasons, too. We probably just have to accept this sort of\nissue as a limitation of our implementation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 18 Feb 2024 17:23:43 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "On 16/02/2024 10:37 pm, Thomas Munro wrote:\n> On Fri, Feb 16, 2024 at 5:29 PM Robert Haas<[email protected]> wrote:\n>> 3. Reserve lots of address space and then only use some of it. I hear\n>> rumors that some forks of PG have implemented something like this. The\n>> idea is that you convince the OS to give you a whole bunch of address\n>> space, but you try to avoid having all of it be backed by physical\n>> memory. If you later want to increase shared_buffers, you then get the\n>> OS to back more of it by physical memory, and if you later want to\n>> decrease shared_buffers, you hopefully have some way of giving the OS\n>> the memory back. As compared with the previous two approaches, this\n>> seems less likely to be noticeable to most PG code. Problems include\n>> (1) you have to somehow figure out how much address space to reserve,\n>> and that forms an upper bound on how big shared_buffers can grow at\n>> runtime and (2) you have to figure out ways to reserve address space\n>> and back more or less of it with physical memory that will work on all\n>> of the platforms that we currently support or might want to support in\n>> the future.\n> FTR I'm aware of a working experimental prototype along these lines,\n> that will be presented in Vancouver:\n>\n> https://www.pgevents.ca/events/pgconfdev2024/sessions/session/31-enhancing-postgresql-plasticity-new-frontiers-in-memory-management/\n\nIf you are interested - this is my attempt to implement resizable shared \nbuffers based on ballooning:\n\nhttps://github.com/knizhnik/postgres/pull/2\n\nUnused memory is returned to OS using `madvise` (so it is not so \nportable solution).\n\nUnfortunately there are really many data structure in Postgres which \nsize depends on number of buffers.\nIn my PR I am using `GetAvailableBuffers()` function instead of \n`NBuffers`. But it doesn't always help because many of this data \nstructures can not be reallocated.\n\nAnother important limitation of this approach are:\n\n1. It is necessary to specify maximal number of shared buffers2. Only \n`BufferBlocks` space is shrinked but not buffer descriptors and buffer \nhash. Estimated memory fooyprint for one page is 132 bytes. If we want \nto scale shared buffers from 100Mb to 100Gb, size of use memory will be \n1.6Gb. And it is quite large.\n3. Our CLOCK algorithm becomes very inefficient for large number of \nshared buffers.\n\nBelow are first results (pgbench database with scale 100, pgbench -c 32 \n-j 4 -T 100 -P1 -M prepared -S ) I get:\n\n| shared_buffers | available_buffers | TPS |\n| ------------------| ---------------------------- | ---- |\n| 128MB | -1 | 280k |\n| 1GB | -1 | 324k |\n| 2GB | -1 | 358k |\n| 32GB | -1 | 350k |\n| 2GB | 128Mb | 130k |\n| 2GB | 1Gb | 311k |\n| 32GB | 128Mb | 13k |\n| 32GB | 1Gb | 140k |\n| 32GB | 2Gb | 348k |\n\n`shared_buffers` specifies maximal shared buffers size and \n`avaiable_buffer` - current limit.\n\nSo when shared_buffers >> available_buffers and dataset doesn't fit in \nthem, we get awful degrade of performance (> 20 times).\nThanks to CLOCK algorithm.\nMy first thought is to replace clock with LRU based in double-linked \nlist. As far as there is no lockless double-list implementation,\nit need some global lock. This lock can become bottleneck. The standard \nsolution is partitioning: use N LRU lists instead of 1.\nJust as partitioned has table used by buffer manager to lockup buffers. 
\nActually we can use the same partitions locks to protect LRU list.\nBut it not clear what to do with ring buffers (strategies).So I decided \nnot to perform such revolution in bufmgr, but optimize clock to more \nefficiently split reserved buffers.\nJust add|skip_count|field to buffer descriptor. And it helps! Now the \nworst case shared_buffer/available_buffers = 32Gb/128Mb\nshows the same performance 280k as shared_buffers=128Mb without ballooning.\n\n\n\n\n\n\n\n\n\n\nOn 16/02/2024 10:37 pm, Thomas Munro\n wrote:\n\n\nOn Fri, Feb 16, 2024 at 5:29 PM Robert Haas <[email protected]> wrote:\n\n\n3. Reserve lots of address space and then only use some of it. I hear\nrumors that some forks of PG have implemented something like this. The\nidea is that you convince the OS to give you a whole bunch of address\nspace, but you try to avoid having all of it be backed by physical\nmemory. If you later want to increase shared_buffers, you then get the\nOS to back more of it by physical memory, and if you later want to\ndecrease shared_buffers, you hopefully have some way of giving the OS\nthe memory back. As compared with the previous two approaches, this\nseems less likely to be noticeable to most PG code. Problems include\n(1) you have to somehow figure out how much address space to reserve,\nand that forms an upper bound on how big shared_buffers can grow at\nruntime and (2) you have to figure out ways to reserve address space\nand back more or less of it with physical memory that will work on all\nof the platforms that we currently support or might want to support in\nthe future.\n\n\n\nFTR I'm aware of a working experimental prototype along these lines,\nthat will be presented in Vancouver:\n\nhttps://www.pgevents.ca/events/pgconfdev2024/sessions/session/31-enhancing-postgresql-plasticity-new-frontiers-in-memory-management/\n\n\nIf you are interested - this is my attempt to implement resizable\n shared buffers based on ballooning:\nhttps://github.com/knizhnik/postgres/pull/2\nUnused memory is returned to OS using `madvise` (so it is not so\n portable solution).\nUnfortunately there are really many data structure in Postgres\n which size depends on number of buffers.\n In my PR I am using `GetAvailableBuffers()`\n function instead of `NBuffers`. But it doesn't always help because\n many of this data structures can not be reallocated.\nAnother important limitation of this approach are:\n1. It is necessary to specify maximal number of shared buffers\n2. Only `BufferBlocks` space is shrinked but not buffer\n descriptors and buffer hash. Estimated memory fooyprint for one\n page is 132 bytes. If we want to scale shared buffers from 100Mb\n to 100Gb, size of use memory will be 1.6Gb. And it is quite large.\n 3. 
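For readers who want the mechanism without reading the PR: the core "give unused buffers back to the OS" step of such a ballooning scheme can look roughly like the sketch below. This is not the patch; the names are made up, the demo uses a private anonymous mapping (where MADV_DONTNEED really drops the physical pages), and real shared memory (SysV/shmem/tmpfs) may need a different advice flag such as MADV_REMOVE.

```c
/*
 * Sketch of the "return unused buffers to the OS" step.  Names are made up.
 * The demo uses a private anonymous mapping; a shared mapping may need
 * different advice to actually free the backing store.
 */
#include <stdio.h>
#include <sys/mman.h>

#define BLCKSZ 8192

static int
release_buffer_range(char *buffer_blocks, size_t first_buf, size_t nbufs)
{
	char	   *start = buffer_blocks + first_buf * BLCKSZ;
	size_t		len = nbufs * BLCKSZ;

	/*
	 * Physical memory is given back to the kernel; the virtual range stays
	 * mapped and is re-faulted (zero-filled) if those buffers are reused.
	 */
	if (madvise(start, len, MADV_DONTNEED) != 0)
	{
		perror("madvise");
		return -1;
	}
	return 0;
}

int
main(void)
{
	size_t		nbuffers = 16384;	/* 128MB of pretend shared buffers */
	char	   *blocks = mmap(NULL, nbuffers * BLCKSZ,
							  PROT_READ | PROT_WRITE,
							  MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	if (blocks == MAP_FAILED)
		return 1;

	/* Pretend the upper half is now above available_buffers. */
	return release_buffer_range(blocks, nbuffers / 2, nbuffers / 2) ? 1 : 0;
}
```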
"msg_date": "Sun, 18 Feb 2024 15:33:30 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "On Sun, 18 Feb 2024 at 02:03, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-02-17 23:40:51 +0100, Matthias van de Meent wrote:\n> > > 5. Re-map the shared_buffers when needed.\n> > >\n> > > Between transactions, a backend should not hold any buffer pins. When\n> > > there are no pins, you can munmap() the shared_buffers and mmap() it at\n> > > a different address.\n>\n> I hadn't quite realized that we don't seem to rely on shared_buffers having a\n> specific address across processes. That does seem to make it a more viable to\n> remap mappings in backends.\n>\n>\n> However, I don't think this works with mmap(MAP_ANONYMOUS) - as long as we are\n> using the process model. To my knowledge there is no way to get the same\n> mapping in multiple already existing processes. Even mmap()ing /dev/zero after\n> sharing file descriptors across processes doesn't work, if I recall correctly.\n>\n> We would have to use sysv/posix shared memory or such (or mmap() if files in\n> tmpfs) for the shared buffers allocation.\n>\n>\n>\n> > This can quite realistically fail to find an unused memory region of\n> > sufficient size when the heap is sufficiently fragmented, e.g. through\n> > ASLR, which would make it difficult to use this dynamic\n> > single-allocation shared_buffers in security-hardened environments.\n>\n> I haven't seen anywhere close to this bad fragmentation on 64bit machines so\n> far - have you?\n\nNo.\n\n> Most implementations of ASLR randomize mmap locations across multiple runs of\n> the same binary, not within the same binary. There are out-of-tree linux\n> patches that make mmap() randomize every single allocation, but I am not sure\n> that we ought to care about such things.\n\nAfter looking into ASLR a bit more, I realise I was under the mistaken\nimpression that ASLR would implicate randomized mmaps(), too.\nApparently, that's wrong; ASLR only does some randomization for the\ninitialization of the process memory layout, and not the process'\nallocations.\n\n> Even if we were to care, on 64bit platforms it doesn't seem likely that we'd\n> run out of space that quickly. AMD64 had 48bits of virtual address space from\n> the start, and on recent CPUs that has grown to 57bits [1], that's a lot of\n> space.\n\nYeah, that's a lot of space, but it seems to me it's also easily\nconsumed; one only needs to allocate one allocation in every 4GB of\naddress space to make allocations of 8GB impossible; a utilization of\n~1 byte/MiB. Applying this to 48 bits of virtual address space, a\nprocess only needs to use ~256MB of memory across the address space to\nblock out any 8GB allocations; for 57 bits that's still \"only\" 128GB.\nBut after looking at ASLR a bit more, it is unrealistic that a normal\nOS and process stack would get to allocating memory in such a pattern.\n\n> And if you do run out of VM space, wouldn't that also affect lots of other\n> things, like mmap() for malloc?\n\nYes. But I would usually expect that the main shared memory allocation\nwould be the single largest uninterrupted allocation, so I'd also\nexpect it to see more such issues than any current user of memory if\nwe were to start moving (reallocating) that allocation.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Sun, 18 Feb 2024 14:48:21 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-18 17:06:09 +0530, Robert Haas wrote:\n> On Sat, Feb 17, 2024 at 12:38 AM Andres Freund <[email protected]> wrote:\n> > IMO the ability to *shrink* shared_buffers dynamically and cheaply is more\n> > important than growing it in a way, except that they are related of\n> > course. Idling hardware is expensive, thus overcommitting hardware is very\n> > attractive (I count \"serverless\" as part of that). To be able to overcommit\n> > effectively, unused long-lived memory has to be released. I.e. shared buffers\n> > needs to be shrinkable.\n> \n> I see your point, but people want to scale up, too. Of course, those\n> people will have to live with what we can practically implement.\n\nSure, I didn't intend to say that scaling up isn't useful.\n\n\n> > Perhaps worth noting that there are two things limiting the size of shared\n> > buffers: 1) the available buffer space 2) the available buffer *mapping*\n> > space. I think making the buffer mapping resizable is considerably harder than\n> > the buffers themselves. Of course pre-reserving memory for a buffer mapping\n> > suitable for a huge shared_buffers is more feasible than pre-allocating all\n> > that memory for the buffers themselves. But it' still mean youd have a maximum\n> > set at server start.\n> \n> We size the fsync queue based on shared_buffers too. That's a lot less\n> important, though, and could be worked around in other ways.\n\nWe probably should address that independently of making shared_buffers\nPGC_SIGHUP. The queue gets absurdly large once s_b hits a few GB. It's not\nthat much memory compared to the buffer blocks themselves, but a sync queue of\nmany millions of entries just doesn't make sense. And a few hundred MB for\nthat isn't nothing either, even if it's just a fraction of the space for the\nbuffers. It makes checkpointer more susceptible to OOM as well, because\nAbsorbSyncRequests() allocates an array to copy all requests into local\nmemory.\n\n\n> > Such a scheme still leaves you with a dependend memory read for a quite\n> > frequent operation. It could turn out to nto matter hugely if the mapping\n> > array is cache resident, but I don't know if we can realistically bank on\n> > that.\n> \n> I don't know, either. I was hoping you did. :-)\n> \n> But we can rig up a test pretty easily, I think. We can just create a\n> fake mapping that gives the same answers as the current calculation\n> and then beat on it. Of course, if testing shows no difference, there\n> is the small problem of knowing whether the test scenario was right;\n> and it's also possible that an initial impact could be mitigated by\n> removing some gratuitously repeated buffer # -> buffer address\n> mappings. Still, I think it could provide us with a useful baseline.\n> I'll throw something together when I have time, unless someone beats\n> me to it.\n\nI think such a test would be useful, although I also don't know how confident\nwe would be if we saw positive results. Probably depends a bit on the\ngenerated code and how plausible it is to not see regressions.\n\n\n> > I'm also somewhat concerned about the coarse granularity being problematic. It\n> > seems like it'd lead to a desire to make the granule small, causing slowness.\n> \n> How many people set shared_buffers to something that's not a whole\n> number of GB these days?\n\nI'd say the vast majority of postgres instances in production run with less\nthan 1GB of s_b. 
Just because numbers wise the majority of instances are\nrunning on small VMs and/or many PG instances are running on one larger\nmachine. There are a lot of instances where the total available memory is\nless than 2GB.\n\n\n> I mean I bet it happens, but in practice if you rounded to the nearest GB,\n> or even the nearest 2GB, I bet almost nobody would really care. I think it's\n> fine to be opinionated here and hold the line at a relatively large granule,\n> even though in theory people could want something else.\n\nI don't believe that at all unfortunately.\n\n\n> > One big advantage of a scheme like this is that it'd be a step towards a NUMA\n> > aware buffer mapping and replacement. Practically everything beyond the size\n> > of a small consumer device these days has NUMA characteristics, even if not\n> > \"officially visible\". We could make clock sweeps (or a better victim buffer\n> > selection algorithm) happen within each \"chunk\", with some additional\n> > infrastructure to choose which of the chunks to search a buffer in. Using a\n> > chunk on the current numa node, except when there is a lot of imbalance\n> > between buffer usage or replacement rate between chunks.\n> \n> I also wondered whether this might be a useful step toward allowing\n> different-sized buffers in the same buffer pool (ducks, runs away\n> quickly). I don't have any particular use for that myself, but it's a\n> thing some people probably want for some reason or other.\n\nI still think that that's something that will just cause a significant cost in\ncomplexity, and secondarily also runtime overhead, at a comparatively marginal\ngain.\n\n\n> > > 2. Make a Buffer just a disguised pointer. Imagine something like\n> > > typedef struct { Page bp; } *buffer. WIth this approach,\n> > > BufferGetBlock() becomes trivial.\n> >\n> > You also additionally need something that allows for efficient iteration over\n> > all shared buffers. Making buffer replacement and checkpointing more expensive\n> > isn't great.\n> \n> True, but I don't really see what the problem with this would be in\n> this approach.\n\nIt's a bit hard to tell at this level of detail :). At the extreme end, if you\nend up with a large number of separate allocations for s_b, it surely would.\n\n\n> > > 3. Reserve lots of address space and then only use some of it. I hear\n> > > rumors that some forks of PG have implemented something like this. The\n> > > idea is that you convince the OS to give you a whole bunch of address\n> > > space, but you try to avoid having all of it be backed by physical\n> > > memory. If you later want to increase shared_buffers, you then get the\n> > > OS to back more of it by physical memory, and if you later want to\n> > > decrease shared_buffers, you hopefully have some way of giving the OS\n> > > the memory back. As compared with the previous two approaches, this\n> > > seems less likely to be noticeable to most PG code.\n> >\n> > Another advantage is that you can shrink shared buffers fairly granularly and\n> > cheaply with that approach, compared to having to move buffes entirely out of\n> > a larger mapping to be able to unmap it.\n> \n> Don't you have to still move buffers entirely out of the region you\n> want to unmap?\n\nSure. But you can unmap at the granularity of a hardware page (there is some\nfragmentation cost on the OS / hardware page table level\nthough). 
Theoretically you could unmap individual 8kB pages.\n\n\n> > > Problems include (1) you have to somehow figure out how much address space\n> > > to reserve, and that forms an upper bound on how big shared_buffers can grow\n> > > at runtime and\n> >\n> > Presumably you'd normally not want to reserve more than the physical amount of\n> > memory on the system. Sure, memory can be hot added, but IME that's quite\n> > rare.\n> \n> I would think that might not be so rare in a virtualized environment,\n> which would seem to be one of the most important use cases for this\n> kind of thing.\n\nI've not seen it in production in a long time - but that might be because I've\nbeen out of the consulting game for too long. To my knowledge none of the\ncommon cloud providers support it, which of course restricts where it could be\nused significantly. I have far more commonly seen use of \"balooning\" to\nremove unused/rarely used memory from running instances though.\n\n\n> Plus, this would mean we'd need to auto-detect system RAM. I'd rather\n> not go there, and just fix the upper limit via a GUC.\n\nI'd have assumed we'd want a GUC that auto-determines the amount of RAM if set\nto -1. I don't think it's that hard to detect the available memory.\n\n\n> > A third issue is that it can confuse administrators inspecting the system with\n> > OS tools. \"Postgres uses many terabytes of memory on my system!\" due to VIRT\n> > being huge etc.\n> \n> Mmph. That's disagreeable but probably not a reason to entirely\n> abandon any particular approach.\n\nAgreed.\n\nGreetings,\n\nAndres Freund\n\n\n",
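A Linux-specific toy illustrating the "reserve a large address range, back only part of it" approach and the page-granular shrink mentioned above: reserve with PROT_NONE and MAP_NORESERVE, commit a slice with mprotect(), and give memory back with madvise() while keeping the reservation. A real implementation would need a shared (not private) mapping, huge-page handling, and portability fallbacks; this is only a sketch of the system-call sequence, with invented sizes.

```c
/*
 * Linux-specific sketch of "reserve big, back a little".  A real
 * implementation would need a shared mapping and huge-page handling.
 */
#include <stdio.h>
#include <sys/mman.h>

#define GB	(1024UL * 1024 * 1024)

int
main(void)
{
	size_t		reserved = 64 * GB; /* hard upper bound for shared_buffers */
	size_t		committed = 8 * GB; /* what we want right now */
	char	   *base;

	/* Reservation only: PROT_NONE pages consume no physical memory. */
	base = mmap(NULL, reserved, PROT_NONE,
				MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE, -1, 0);
	if (base == MAP_FAILED)
		return 1;

	/* Grow: make the first 8GB usable; pages fault in on first touch. */
	if (mprotect(base, committed, PROT_READ | PROT_WRITE) != 0)
		return 1;

	/*
	 * Shrink: hand the second half of that back to the kernel, but keep the
	 * address range reserved so a later grow can reuse it.
	 */
	if (madvise(base + committed / 2, committed / 2, MADV_DONTNEED) != 0 ||
		mprotect(base + committed / 2, committed / 2, PROT_NONE) != 0)
		return 1;

	printf("reserved %zu GB at %p, currently backing %zu GB\n",
		   reserved / GB, (void *) base, committed / 2 / GB);
	return 0;
}
```

The huge VIRT figure administrators would see with this scheme is exactly the 64GB reservation in the sketch, even though only a fraction of it is ever backed by physical memory.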
"msg_date": "Sun, 18 Feb 2024 12:35:16 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 2:05 AM Andres Freund <[email protected]> wrote:\n> We probably should address that independently of making shared_buffers\n> PGC_SIGHUP. The queue gets absurdly large once s_b hits a few GB. It's not\n> that much memory compared to the buffer blocks themselves, but a sync queue of\n> many millions of entries just doesn't make sense. And a few hundred MB for\n> that isn't nothing either, even if it's just a fraction of the space for the\n> buffers. It makes checkpointer more susceptible to OOM as well, because\n> AbsorbSyncRequests() allocates an array to copy all requests into local\n> memory.\n\nSure, that could just be capped, if it makes sense. Although given the\nthrust of this discussion, it might be even better to couple it to\nsomething other than the size of shared_buffers.\n\n> I'd say the vast majority of postgres instances in production run with less\n> than 1GB of s_b. Just because numbers wise the majority of instances are\n> running on small VMs and/or many PG instances are running on one larger\n> machine. There are a lot of instances where the total available memory is\n> less than 2GB.\n\nWhoa. That is not my experience at all. If I've ever seen such a small\nsystem since working at EDB (since 2010!) it was just one where the\ninitdb-time default was never changed.\n\nI can't help wondering if we should have some kind of memory_model\nGUC, measured in T-shirt sizes or something. We've coupled a bunch of\nthings to shared_buffers mostly as a way of distinguishing small\nsystems from large ones. But if we want to make shared_buffers\ndynamically changeable and we don't want to make all that other stuff\ndynamically changeable, decoupling those calculations might be an\nimportant thing to do.\n\nOn a really small system, do we even need the ability to dynamically\nchange shared_buffers at all? If we do, then I suspect the granule\nneeds to be small. But does someone want to take a system with <1GB of\nshared_buffers and then scale it way, way up? I suppose it would be\nnice to have the option. But you might have to make some choices, like\npick either a 16MB granule or a 128MB granule or a 1GB granule at\nstartup time and then stick with it? I don't know, I'm just\nspitballing here, because I don't know what the real design is going\nto look like yet.\n\n> > Don't you have to still move buffers entirely out of the region you\n> > want to unmap?\n>\n> Sure. But you can unmap at the granularity of a hardware page (there is some\n> fragmentation cost on the OS / hardware page table level\n> though). Theoretically you could unmap individual 8kB pages.\n\nI thought there were problems, at least on some operating systems, if\nthe address space mappings became too fragmented. At least, I wouldn't\nexpect that you could use huge pages for shared_buffers and still\nunmap little tiny bits. How would that even work?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Feb 2024 11:28:38 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "On 2/18/24 15:35, Andres Freund wrote:\n> On 2024-02-18 17:06:09 +0530, Robert Haas wrote:\n>> How many people set shared_buffers to something that's not a whole\n>> number of GB these days?\n> \n> I'd say the vast majority of postgres instances in production run with less\n> than 1GB of s_b. Just because numbers wise the majority of instances are\n> running on small VMs and/or many PG instances are running on one larger\n> machine. There are a lot of instances where the total available memory is\n> less than 2GB.\n> \n>> I mean I bet it happens, but in practice if you rounded to the nearest GB,\n>> or even the nearest 2GB, I bet almost nobody would really care. I think it's\n>> fine to be opinionated here and hold the line at a relatively large granule,\n>> even though in theory people could want something else.\n> \n> I don't believe that at all unfortunately.\n\nCouldn't we scale the rounding, e.g. allow small allocations as we do \nnow, but above some number always round? E.g. maybe >= 2GB round to the \nnearest 256MB, >= 4GB round to the nearest 512MB, >= 8GB round to the \nnearest 1GB, etc?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 19 Feb 2024 09:19:16 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-19 09:19:16 -0500, Joe Conway wrote:\n> On 2/18/24 15:35, Andres Freund wrote:\n> > On 2024-02-18 17:06:09 +0530, Robert Haas wrote:\n> > > How many people set shared_buffers to something that's not a whole\n> > > number of GB these days?\n> > \n> > I'd say the vast majority of postgres instances in production run with less\n> > than 1GB of s_b. Just because numbers wise the majority of instances are\n> > running on small VMs and/or many PG instances are running on one larger\n> > machine. There are a lot of instances where the total available memory is\n> > less than 2GB.\n> > \n> > > I mean I bet it happens, but in practice if you rounded to the nearest GB,\n> > > or even the nearest 2GB, I bet almost nobody would really care. I think it's\n> > > fine to be opinionated here and hold the line at a relatively large granule,\n> > > even though in theory people could want something else.\n> > \n> > I don't believe that at all unfortunately.\n> \n> Couldn't we scale the rounding, e.g. allow small allocations as we do now,\n> but above some number always round? E.g. maybe >= 2GB round to the nearest\n> 256MB, >= 4GB round to the nearest 512MB, >= 8GB round to the nearest 1GB,\n> etc?\n\nThat'd make the translation considerably more expensive. Which is important,\ngiven how common an operation this is.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 19 Feb 2024 10:13:09 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "On 2/19/24 13:13, Andres Freund wrote:\n> On 2024-02-19 09:19:16 -0500, Joe Conway wrote:\n>> Couldn't we scale the rounding, e.g. allow small allocations as we do now,\n>> but above some number always round? E.g. maybe >= 2GB round to the nearest\n>> 256MB, >= 4GB round to the nearest 512MB, >= 8GB round to the nearest 1GB,\n>> etc?\n> \n> That'd make the translation considerably more expensive. Which is important,\n> given how common an operation this is.\n\n\nPerhaps it is not practical, doesn't help, or maybe I misunderstand, but \nmy intent was that the rounding be done/enforced when setting the GUC \nvalue which surely cannot be that often.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 19 Feb 2024 13:54:01 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-19 13:54:01 -0500, Joe Conway wrote:\n> On 2/19/24 13:13, Andres Freund wrote:\n> > On 2024-02-19 09:19:16 -0500, Joe Conway wrote:\n> > > Couldn't we scale the rounding, e.g. allow small allocations as we do now,\n> > > but above some number always round? E.g. maybe >= 2GB round to the nearest\n> > > 256MB, >= 4GB round to the nearest 512MB, >= 8GB round to the nearest 1GB,\n> > > etc?\n> > \n> > That'd make the translation considerably more expensive. Which is important,\n> > given how common an operation this is.\n> \n> \n> Perhaps it is not practical, doesn't help, or maybe I misunderstand, but my\n> intent was that the rounding be done/enforced when setting the GUC value\n> which surely cannot be that often.\n\nIt'd be used for something like\n\n WhereIsTheChunkOfBuffers[buffer/CHUNK_SIZE]+(buffer%CHUNK_SIZE)*BLCKSZ;\n\nIf CHUNK_SIZE isn't a compile time constant this gets a good bit more\nexpensive. A lot more, if implemented naively (i.e as actual modulo/division\noperations, instead of translating to shifts and masks).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 19 Feb 2024 11:46:06 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGC_SIGHUP shared_buffers?"
}
] |
[
{
"msg_contents": "Hello,\n\nCurrently, a role with the createrole attribute can create roles, set and change their password,\nbut can't see the password. Can't even see if the password is set or not.\nIn this case, you can mistakenly set the Valid until attribute to roles without a password.\nAnd there is no way to detect such a situation.\n\nIn the patch for changing the \\du command, I want to give the opportunity to show\nincorrect values of the Valid until attribute. [1]\n\nI suggest changing the pg_roles view to allow a role with the createrole attribute to see\ninformation about the password of the roles that this role manages\n(has membership with admin option).\n\nThere are several ways to implement it.\n\n1.\nChange the values of the rolpassword column. Now it always shows '********'.\nThe values should depend on the role executing the query.\nIf the query is executed by a superuser or a role with create role and admin membership,\nthen show '********' instead of password or NULL (no password).\nFor other roles, show '<insufficient privileges>'.\n\nThis is implemented in the attached patch.\n\n2.\nChange the values of the rolpassword column.\nIf the query is executed by a superuser or a role with create role and admin membership,\nthen show real password or NULL (no password).\nFor other roles, show '********'.\n\n3.\nLeave the rolpassword column as it is for backward compatibility, but add\na new logical rolhaspassword column.\nIf the query is executed by a superuser or a role with create role and admin membership,\nthen show true/false depending on the password existence.\nFor other roles, show NULL.\n\nAlthough it is possible that for security reasons such changes should not be made.\n\n1.https://www.postgresql.org/message-id/ef4d000f-6766-4ae1-9f69-0d0caa8130d6%40postgrespro.ru\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com",
"msg_date": "Fri, 16 Feb 2024 13:00:53 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Show password presence in pg_roles for authorized roles"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 18348\nLogged by: Michael Bondarenko\nEmail address: [email protected]\nPostgreSQL version: 14.10\nOperating system: macOS\nDescription: \n\nHello,\r\n\r\nI'm building a random semantically-correct SQL code generator for PostgreSQL\nand I stumbled upon an inconsistency:\r\n\r\ntpch=# select extract(year from interval '3 years');\r\n extract \r\n---------\r\n 3\r\n(1 row)\r\n\r\ntpch=# select extract(week from interval '3 weeks');\r\nERROR: interval units \"week\" not supported\r\n\r\nIn the documentation it's mentioned that 'week' is an ISO 8601 week, so it\nmakes sense why it's not applicable to INTERVAL, which is the same for\nisoyear. However, the field is named week and not isoweek, so I expect it to\nwork like the `select extract(year from interval '3 years');` does.\nMoreover, the documentation does not mention that the field cannot be\nextracted from INTERVAL, like it does for isoyear:\nhttps://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT\n.",
"msg_date": "Fri, 16 Feb 2024 12:06:55 +0000",
"msg_from": "PG Bug reporting form <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "Adding another inconsistency I found in the docs to this thread (\nhttps://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT\n):\n\nThe docs say: \"source must be a value expression of type *timestamp*, *time*,\nor *interval*. (Expressions of type *date* are *cast to timestamp* and can\ntherefore be used as well.)\"\n\nWhich implies that the following two results must be the same:\n\ntpch=# select extract(microseconds from date '1924.01.01');\nERROR: date units \"microseconds\" not supported\n\ntpch=# select extract(microseconds from (date '1924.01.01')::timestamp);\n extract\n---------\n 0\n(1 row)\n\nHowever, the behaviour is different, which suggests that the date is indeed\ntreated as its own type in EXTRACT, and not cast to timestamp.\n\nOn Fri, Feb 16, 2024 at 2:07 PM PG Bug reporting form <\[email protected]> wrote:\n\n> The following bug has been logged on the website:\n>\n> Bug reference: 18348\n> Logged by: Michael Bondarenko\n> Email address: [email protected]\n> PostgreSQL version: 14.10\n> Operating system: macOS\n> Description:\n>\n> Hello,\n>\n> I'm building a random semantically-correct SQL code generator for\n> PostgreSQL\n> and I stumbled upon an inconsistency:\n>\n> tpch=# select extract(year from interval '3 years');\n> extract\n> ---------\n> 3\n> (1 row)\n>\n> tpch=# select extract(week from interval '3 weeks');\n> ERROR: interval units \"week\" not supported\n>\n> In the documentation it's mentioned that 'week' is an ISO 8601 week, so it\n> makes sense why it's not applicable to INTERVAL, which is the same for\n> isoyear. However, the field is named week and not isoweek, so I expect it\n> to\n> work like the `select extract(year from interval '3 years');` does.\n> Moreover, the documentation does not mention that the field cannot be\n> extracted from INTERVAL, like it does for isoyear:\n>\n> https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT\n> .\n>\n>\n\nAdding another inconsistency I found in the docs to this thread (https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT):The docs say: \"source must be a value expression of type timestamp, time, or interval. (Expressions of type date are cast to timestamp and can therefore be used as well.)\"Which implies that the following two results must be the same:tpch=# select extract(microseconds from date '1924.01.01');ERROR: date units \"microseconds\" not supportedtpch=# select extract(microseconds from (date '1924.01.01')::timestamp); extract --------- 0(1 row)However, the behaviour is different, which suggests that the date is indeed treated as its own type in EXTRACT, and not cast to timestamp.On Fri, Feb 16, 2024 at 2:07 PM PG Bug reporting form <[email protected]> wrote:The following bug has been logged on the website:\n\nBug reference: 18348\nLogged by: Michael Bondarenko\nEmail address: [email protected]\nPostgreSQL version: 14.10\nOperating system: macOS\nDescription: \n\nHello,\n\nI'm building a random semantically-correct SQL code generator for PostgreSQL\nand I stumbled upon an inconsistency:\n\ntpch=# select extract(year from interval '3 years');\n extract \n---------\n 3\n(1 row)\n\ntpch=# select extract(week from interval '3 weeks');\nERROR: interval units \"week\" not supported\n\nIn the documentation it's mentioned that 'week' is an ISO 8601 week, so it\nmakes sense why it's not applicable to INTERVAL, which is the same for\nisoyear. 
"msg_date": "Fri, 16 Feb 2024 14:21:57 +0200",
"msg_from": "Michael Bondarenko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "On Sat, 17 Feb 2024 at 01:27, Michael Bondarenko\n<[email protected]> wrote:\n>\n> Adding another inconsistency I found in the docs to this thread (https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT):\n>\n> The docs say: \"source must be a value expression of type timestamp, time, or interval. (Expressions of type date are cast to timestamp and can therefore be used as well.)\"\n>\n> Which implies that the following two results must be the same:\n>\n> tpch=# select extract(microseconds from date '1924.01.01');\n> ERROR: date units \"microseconds\" not supported\n>\n> tpch=# select extract(microseconds from (date '1924.01.01')::timestamp);\n> extract\n> ---------\n> 0\n\nIt looks like a2da77cdb should have updated the documentation for this.\n\nDavid\n\n\n",
"msg_date": "Sat, 17 Feb 2024 01:44:01 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "On Sat, 17 Feb 2024 at 01:27, PG Bug reporting form\n<[email protected]> wrote:\n> tpch=# select extract(week from interval '3 weeks');\n> ERROR: interval units \"week\" not supported\n>\n> In the documentation it's mentioned that 'week' is an ISO 8601 week, so it\n> makes sense why it's not applicable to INTERVAL, which is the same for\n> isoyear. However, the field is named week and not isoweek, so I expect it to\n> work like the `select extract(year from interval '3 years');` does.\n> Moreover, the documentation does not mention that the field cannot be\n> extracted from INTERVAL, like it does for isoyear:\n> https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT\n\nMaybe that table should specify which type(s) each of the items listed\nis applicable to. Seems better than mentioning which types they're not\napplicable to.\n\nDavid\n\n\n",
"msg_date": "Sat, 17 Feb 2024 02:02:20 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "in `9.9.1. EXTRACT, date_part`\nEXTRACT(field FROM source)\n\nI saw more inconsistencies with the doc when `source` is an interval.\n\nthe `minute` field\nselect extract(minute from interval '2011 year 16 month 35 day 48 hour\n1005 min 71 sec 11 ms');\nselect extract(minute from interval '2011 year 16 month 35 day 48 hour\n1005 min 2 sec 11 ms');\nselect extract(minute from interval '2011 year 16 month 35 day 48 hour\n1005 min 2 sec 11 ms');\n\nthe `hour` field:\nselect extract(hour from interval '2011 year 16 month 35 day 48 hour\n1005 min 71 sec 11 ms');\nselect extract(hour from interval '2011 year 16 month 35 day 48 hour\n1005 min 2 sec 11 ms');\nselect extract(hour from interval '2011 year 16 month 35 day 48 hour\n1005 min 71 sec 11111111111 ms');\n\nthe `quarter` field:\nselect extract(quarter from interval '2011 year 12 month 48 hour 1005\nmin 2 sec 11 ms');\nSELECT EXTRACT(QUARTER FROM TIMESTAMP '2001-12-16 20:38:40');\n\n\n",
"msg_date": "Sat, 17 Feb 2024 09:47:58 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "When testing I stumbled upon that too, but I thought no calculation was\nhappening in the interval field. However, it's different with the days and\nmonths etc. It seems no calculation for day and month and more:\n\ntpch=# select extract(day from interval '86400000 seconds');\n extract\n---------\n 0\n(1 row)\n\ntpch=# select extract(month from interval '86400000 seconds');\n extract\n---------\n 0\n(1 row)\n\ntpch=# select extract(year from interval '86400000 seconds');\n extract\n---------\n 0\n(1 row)\n\nBut calculation is present for hour, and minutes and seconds (90061 sec is\n1 day 1 hour 1 minute 1 second):\n\ntpch=# select extract(minute from interval '90061 seconds');\n extract\n---------\n 1\n(1 row)\n\ntpch=# select extract(hour from interval '90061 seconds');\n extract\n---------\n 25\n(1 row)\n\ntpch=# select extract(second from interval '90061 seconds');\n extract\n----------\n 1.000000\n(1 row)\n\nThe docs mention *The hour field (0–23)* for the hours, which is not true\nbecause it's not the field at all, but the calculated amount, and the value\nis not 0-23.\n\nOn Sat, Feb 17, 2024 at 3:48 AM jian he <[email protected]> wrote:\n\n> in `9.9.1. EXTRACT, date_part`\n> EXTRACT(field FROM source)\n>\n> I saw more inconsistencies with the doc when `source` is an interval.\n>\n> the `minute` field\n> select extract(minute from interval '2011 year 16 month 35 day 48 hour\n> 1005 min 71 sec 11 ms');\n> select extract(minute from interval '2011 year 16 month 35 day 48 hour\n> 1005 min 2 sec 11 ms');\n> select extract(minute from interval '2011 year 16 month 35 day 48 hour\n> 1005 min 2 sec 11 ms');\n>\n> the `hour` field:\n> select extract(hour from interval '2011 year 16 month 35 day 48 hour\n> 1005 min 71 sec 11 ms');\n> select extract(hour from interval '2011 year 16 month 35 day 48 hour\n> 1005 min 2 sec 11 ms');\n> select extract(hour from interval '2011 year 16 month 35 day 48 hour\n> 1005 min 71 sec 11111111111 ms');\n>\n> the `quarter` field:\n> select extract(quarter from interval '2011 year 12 month 48 hour 1005\n> min 2 sec 11 ms');\n> SELECT EXTRACT(QUARTER FROM TIMESTAMP '2001-12-16 20:38:40');\n>\n\nWhen testing I stumbled upon that too, but I thought no calculation was happening in the interval field. However, it's different with the days and months etc. It seems no calculation for day and month and more:tpch=# select extract(day from interval '86400000 seconds'); extract --------- 0(1 row)tpch=# select extract(month from interval '86400000 seconds'); extract --------- 0(1 row)tpch=# select extract(year from interval '86400000 seconds'); extract --------- 0(1 row)But calculation is present for hour, and minutes and seconds (90061 sec is 1 day 1 hour 1 minute 1 second):tpch=# select extract(minute from interval '90061 seconds'); extract --------- 1(1 row)tpch=# select extract(hour from interval '90061 seconds'); extract --------- 25(1 row)tpch=# select extract(second from interval '90061 seconds'); extract ---------- 1.000000(1 row)The docs mention The hour field (0–23) for the hours, which is not true because it's not the field at all, but the calculated amount, and the value is not 0-23.On Sat, Feb 17, 2024 at 3:48 AM jian he <[email protected]> wrote:in `9.9.1. 
"msg_date": "Sat, 17 Feb 2024 10:00:39 +0200",
"msg_from": "Michael Bondarenko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "On Sat, 17 Feb 2024 at 09:01, Michael Bondarenko\n<[email protected]> wrote:\n> When testing I stumbled upon that too, but I thought no calculation was happening in the interval field. However, it's different with the days and months etc. It seems no calculation for day and month and more:\n...\n> But calculation is present for hour, and minutes and seconds (90061 sec is 1 day 1 hour 1 minute 1 second):\n\nNo, intervals have seconds, days and months. This is because not all\ndays have 24 hours, due to DST they can have 23 or 25, or even more\nextreme values if some country decides to change its time zone\ndefinition. And not all months have 30 days, so 90061 is 0 months, 0\ndays, 25 hours, 1 minute, 1 second ( IIRC leap second are not handled\n).\n\nIt is done that way so when you add one day across a dst jump you get\nthe same hour on the next day, and when you add one month you get the\nsame day in the next month independent of how many days the month has.\nThis is great for things like \"schedule a meeting one month and one\nweek from now\", but it bites you sometimes, like when you need a\nduration to bill for a long event like a phone call, where I always\nend up extracting epoch and substracting them.\n\nFrancisco Olarte.\n\n\n",
"msg_date": "Sat, 17 Feb 2024 16:12:15 +0100",
"msg_from": "Francisco Olarte <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> On Sat, 17 Feb 2024 at 01:27, PG Bug reporting form\n> <[email protected]> wrote:\n>> Moreover, the documentation does not mention that the field cannot be\n>> extracted from INTERVAL, like it does for isoyear:\n>> https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT\n\n> Maybe that table should specify which type(s) each of the items listed\n> is applicable to. Seems better than mentioning which types they're not\n> applicable to.\n\nThe thing's not laid out as a table though, and converting it seems\nlike more trouble than this is worth. The rejected cases hardly seem\nsurprising. I propose just mentioning that not all fields apply for\nall data types, as in 0001 attached.\n\n(Parenthetically, one case that perhaps is surprising is\n\tERROR: unit \"week\" not supported for type interval\nWhy not just return the day field divided by 7?)\n\nUnrelated but adjacent, the discussion of the century field seems\nmore than a bit flippant when I read it now. In other places we\nare typically content to use examples to make similar points.\nI propose doing so here too, as in 0002 attached.\n\nLastly, the entire page is quite schizophrenic about whether to leave\na blank line between adjacent examples. I could go either way on\nwhether to have that whitespace or not, but I do think it would be\nbetter to make it uniform. Any votes on what to do there?\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 17 Feb 2024 13:14:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "Francisco Olarte <[email protected]> writes:\n> On Sat, 17 Feb 2024 at 09:01, Michael Bondarenko\n> <[email protected]> wrote:\n>> When testing I stumbled upon that too, but I thought no calculation was happening in the interval field. However, it's different with the days and months etc. It seems no calculation for day and month and more:\n>> ...\n>> But calculation is present for hour, and minutes and seconds (90061 sec is 1 day 1 hour 1 minute 1 second):\n\n> No, intervals have seconds, days and months.\n\nYeah. I think much of the confusion here comes from starting with\nnon-normalized interval input. Sure you can write \"2011 year 12 month\n48 hour 1005 min 2 sec 11 ms\", but that's not how it's stored:\n\nregression=# select interval '2011 year 12 month 48 hour 1005 min 2 sec 11 ms';\n interval \n-------------------------\n 2012 years 64:45:02.011\n(1 row)\n\n(Actually, what's stored is 2012*12 months, 0 days, and some number\nof microseconds that I don't feel like working out. Conversion of\nthe microseconds to HH:MM:SS.SSS happens on output.)\n\nOnce you look at the normalized value, the results of extract()\nare far less surprising.\n\nProbably the right place to enlarge on this point is not in the\nextract() section at all, but in 8.5.4. Interval Input. That does\nmention the months/days/microseconds representation, but it doesn't\nfollow through by illustrating how other input is combined. Perhaps\nwe'd want to adopt something like the attached (this is separate from\nthe other patches I posted in the thread).\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 17 Feb 2024 15:30:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
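To make the normalization point concrete, here is a small illustration; the commented results are what the months/days/microseconds representation described above should produce (a sketch, not output captured from a real session):

    WITH v(i) AS (
      SELECT interval '2011 year 12 month 48 hour 1005 min 2 sec 11 ms'
    )
    SELECT EXTRACT(YEAR   FROM i) AS year,    -- 2012
           EXTRACT(MONTH  FROM i) AS month,   -- 0
           EXTRACT(DAY    FROM i) AS day,     -- 0
           EXTRACT(HOUR   FROM i) AS hour,    -- 64
           EXTRACT(MINUTE FROM i) AS minute,  -- 45
           EXTRACT(SECOND FROM i) AS second   -- 2.011
    FROM v;

Once the value is seen as 2012 years plus 64:45:02.011, none of these results look surprising.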
{
"msg_contents": "On Sun, Feb 18, 2024 at 4:30 AM Tom Lane <[email protected]> wrote:\n>\n>\n>\n> Once you look at the normalized value, the results of extract()\n> are far less surprising.\n>\n> Probably the right place to enlarge on this point is not in the\n> extract() section at all, but in 8.5.4. Interval Input. That does\n> mention the months/days/microseconds representation, but it doesn't\n> follow through by illustrating how other input is combined. Perhaps\n> we'd want to adopt something like the attached (this is separate from\n> the other patches I posted in the thread).\n>\n\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -10040,13 +10040,19 @@ EXTRACT(<replaceable>field</replaceable>\nFROM <replaceable>source</replaceable>)\n The <function>extract</function> function retrieves subfields\n such as year or hour from date/time values.\n <replaceable>source</replaceable> must be a value expression of\n- type <type>timestamp</type>, <type>time</type>, or <type>interval</type>.\n- (Expressions of type <type>date</type> are\n- cast to <type>timestamp</type> and can therefore be used as\n- well.) <replaceable>field</replaceable> is an identifier or\n+ type <type>timestamp</type>, <type>date</type>, <type>time</type>,\n+ or <type>interval</type>. (Timestamps and times can be with or\n+ without time zone.)\n+ <replaceable>field</replaceable> is an identifier or\n string that selects what field to extract from the source value.\n+ Not all fields are valid for every input data type; for example, fields\n+ smaller than a day cannot be extracted from a <type>date</type>, while\n+ fields of a day or more cannot be extracted from a <type>time</type>.\n The <function>extract</function> function returns values of type\n <type>numeric</type>.\n+ </para>\n\nyou already mentioned \"Not all fields are valid for every input data type\".\ninterval data type don't even have a unit \"quarter\",\nso the following should generate an error?\nselect extract(quarter from interval '2011 year 12 month 48 hour\n1005min 2 sec 11 ms');\n\n9.9.1. EXTRACT, date_part\nhour field description as\n`\nThe hour field (0–23)\n`\nDo we need to update for the EXTRACT(INTERVAL) case?\n\n\n",
"msg_date": "Sun, 18 Feb 2024 09:48:27 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "jian he <[email protected]> writes:\n> you already mentioned \"Not all fields are valid for every input data type\".\n> interval data type don't even have a unit \"quarter\",\n> so the following should generate an error?\n> select extract(quarter from interval '2011 year 12 month 48 hour\n> 1005min 2 sec 11 ms');\n\nI'm not especially persuaded by that reasoning. Intervals don't have\ncentury or millisecond fields either, but we allow extracting those.\n\nIf your argument is that we shouldn't allow it because we don't take\nthe input INTERVAL '1 quarter', I'd be much more inclined to add that\nas valid input than to take away existing extract functionality.\nBut I'm dubious about the proposition that extract's list of valid\nfields should exactly match the set of allowed input units. The\nsemantics aren't really the same (as per the '80 minutes' example)\nso such a restriction doesn't seem to have much basis in reality.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 17 Feb 2024 21:19:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "jian he <[email protected]> writes:\n> 9.9.1. EXTRACT, date_part\n> hour field description as\n> `\n> The hour field (0–23)\n> `\n> Do we need to update for the EXTRACT(INTERVAL) case?\n\nYeah, probably. I did a bit more wordsmithing too.\nHere's a rolled-up patch.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 19 Feb 2024 12:22:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "On Sun, Feb 18, 2024 at 2:14 AM Tom Lane <[email protected]> wrote:\n>\n>\n> (Parenthetically, one case that perhaps is surprising is\n> ERROR: unit \"week\" not supported for type interval\n> Why not just return the day field divided by 7?)\n>\nseems pretty simple?\ndiff --git a/src/backend/utils/adt/timestamp.c\nb/src/backend/utils/adt/timestamp.c\nindex ed03c50a..5e69e258 100644\n--- a/src/backend/utils/adt/timestamp.c\n+++ b/src/backend/utils/adt/timestamp.c\n@@ -5992,6 +5992,10 @@ interval_part_common(PG_FUNCTION_ARGS, bool retnumeric)\n intresult = tm->tm_mday;\n break;\n\n+ case DTK_WEEK:\n+ intresult = (tm->tm_mday - 1) / 7 + 1;\n+ break;\nbut I am not sure not sure how to write the doc.\n\nOn Sun, Feb 18, 2024 at 10:19 AM Tom Lane <[email protected]> wrote:\n>\n> jian he <[email protected]> writes:\n> > you already mentioned \"Not all fields are valid for every input data type\".\n> > interval data type don't even have a unit \"quarter\",\n> > so the following should generate an error?\n> > select extract(quarter from interval '2011 year 12 month 48 hour\n> > 1005min 2 sec 11 ms');\n>\n> I'm not especially persuaded by that reasoning. Intervals don't have\n> century or millisecond fields either, but we allow extracting those.\n>\n> If your argument is that we shouldn't allow it because we don't take\n> the input INTERVAL '1 quarter', I'd be much more inclined to add that\n> as valid input than to take away existing extract functionality.\n> But I'm dubious about the proposition that extract's list of valid\n> fields should exactly match the set of allowed input units. The\n> semantics aren't really the same (as per the '80 minutes' example)\n> so such a restriction doesn't seem to have much basis in reality.\n>\n\nin interval_part_common:\ncase DTK_QUARTER:\nintresult = (tm->tm_mon / 3) + 1;\nbreak;\n\nin timestamp_part_common:\ncase DTK_QUARTER:\nintresult = (tm->tm_mon - 1) / 3 + 1;\nbreak;\n\nSo in section 9.9.1. EXTRACT, date_part\nwe may need to document extract(quarter from interval) case.\nintervals can be negative, which will make the issue more complicated.\nexcept the \"quarter\" field , EXTRACT other fields from intervals, the\noutput seems sane.\n\nfor example:\ndrop table s;\ncreate table s(a interval);\ninsert into s select ( g * 1000 || 'year ' || g || 'month ' || g || '\nday ' || g || 'hour ' || g || 'min ' || g || 'sec' )::interval\nfrom generate_series(-20, 20) g;\n\nselect\n extract(century from a) as century,\n extract(millennium from a) as millennium,\n extract(decade from a) as decade,\n extract(year from a) as year,\n extract(quarter from a) as quarter,\n extract(month from a) as mon,\n extract(day from a) as day,\n extract(hour from a) as hour,\n extract(min from a) as min,\n extract(second from a) as sec,\n extract(microseconds from a) as microseconds\n -- a\nfrom s order by 2 asc;\n\n\n",
"msg_date": "Tue, 20 Feb 2024 11:56:29 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "jian he <[email protected]> writes:\n> On Sun, Feb 18, 2024 at 2:14 AM Tom Lane <[email protected]> wrote:\n>> (Parenthetically, one case that perhaps is surprising is\n>> ERROR: unit \"week\" not supported for type interval\n>> Why not just return the day field divided by 7?)\n\n> seems pretty simple?\n\nHm, maybe, but does this behave desirably for zero or negative days?\n\n> So in section 9.9.1. EXTRACT, date_part\n> we may need to document extract(quarter from interval) case.\n> intervals can be negative, which will make the issue more complicated.\n> except the \"quarter\" field , EXTRACT other fields from intervals, the\n> output seems sane.\n\nYeah, I see what you mean: the output for negative month counts is\nvery bizarre, whereas other fields seem to all produce the negative\nof what they'd produce for the absolute value of the interval.\nWe could either try to fix that or decide that rejecting \"quarter\"\nfor intervals is the saner answer.\n\nI went ahead and pushed the docs changes after adding more explicit\ndescriptions of interval's behavior for the field types where it\nseemed important. If we make any changes to the behavior for\nweek or quarter fields, ISTM that should be a HEAD-only change.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Feb 2024 14:42:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "I wrote:\n> jian he <[email protected]> writes:\n>> On Sun, Feb 18, 2024 at 2:14 AM Tom Lane <[email protected]> wrote:\n>>> (Parenthetically, one case that perhaps is surprising is\n>>> ERROR: unit \"week\" not supported for type interval\n>>> Why not just return the day field divided by 7?)\n\n>> seems pretty simple?\n\n> Hm, maybe, but does this behave desirably for zero or negative days?\n\n>> So in section 9.9.1. EXTRACT, date_part\n>> we may need to document extract(quarter from interval) case.\n>> intervals can be negative, which will make the issue more complicated.\n>> except the \"quarter\" field , EXTRACT other fields from intervals, the\n>> output seems sane.\n\n> Yeah, I see what you mean: the output for negative month counts is\n> very bizarre, whereas other fields seem to all produce the negative\n> of what they'd produce for the absolute value of the interval.\n> We could either try to fix that or decide that rejecting \"quarter\"\n> for intervals is the saner answer.\n\nAfter fooling with these cases for a little I'm inclined to think\nwe should do it as attached (no test or docs changes yet).\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 20 Feb 2024 15:56:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 4:56 AM Tom Lane <[email protected]> wrote:\n>\n> I wrote:\n> > jian he <[email protected]> writes:\n> >> On Sun, Feb 18, 2024 at 2:14 AM Tom Lane <[email protected]> wrote:\n> >>> (Parenthetically, one case that perhaps is surprising is\n> >>> ERROR: unit \"week\" not supported for type interval\n> >>> Why not just return the day field divided by 7?)\n>\n> >> seems pretty simple?\n>\n> > Hm, maybe, but does this behave desirably for zero or negative days?\n>\n> >> So in section 9.9.1. EXTRACT, date_part\n> >> we may need to document extract(quarter from interval) case.\n> >> intervals can be negative, which will make the issue more complicated.\n> >> except the \"quarter\" field , EXTRACT other fields from intervals, the\n> >> output seems sane.\n>\n> > Yeah, I see what you mean: the output for negative month counts is\n> > very bizarre, whereas other fields seem to all produce the negative\n> > of what they'd produce for the absolute value of the interval.\n> > We could either try to fix that or decide that rejecting \"quarter\"\n> > for intervals is the saner answer.\n>\n> After fooling with these cases for a little I'm inclined to think\n> we should do it as attached (no test or docs changes yet).\n>\n> regards, tom lane\n>\n\nfor `week`, we can do following for the doc:\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex e5fa82c1..a21eb9f8 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -10422,7 +10422,7 @@ SELECT EXTRACT(SECOND FROM TIME '17:12:28.5');\n The number of the <acronym>ISO</acronym> 8601 week-numbering week of\n the year. By definition, ISO weeks start on Mondays and the first\n week of a year contains January 4 of that year. In other words, the\n- first Thursday of a year is in week 1 of that year.\n+ first Thursday of a year is in week 1 of that year. For\n<type>interval</type> values, divide the number of days by 7.\n\nActually, it's not totally correct, since \"the number of days is a\nnumeric value. need to cast \"the number of days\" to int.\n\nfor positive interval value, we can\n+ For positive <type>interval</type> values, divide the number of days\nby 3 then plus 1.\nI don't know how to write the documentation for the `quarter` when\nit's negative.\n\n\n",
"msg_date": "Thu, 29 Feb 2024 18:30:17 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "jian he <[email protected]> writes:\n> On Wed, Feb 21, 2024 at 4:56 AM Tom Lane <[email protected]> wrote:\n>>> Yeah, I see what you mean: the output for negative month counts is\n>>> very bizarre, whereas other fields seem to all produce the negative\n>>> of what they'd produce for the absolute value of the interval.\n>>> We could either try to fix that or decide that rejecting \"quarter\"\n>>> for intervals is the saner answer.\n\n>> After fooling with these cases for a little I'm inclined to think\n>> we should do it as attached (no test or docs changes yet).\n\n> ... I don't know how to write the documentation for the `quarter` when\n> it's negative.\n\nAfter poking at it some more, I realized that my draft patch was still\nwrong about that. We really have to look at interval->month if we\nwant to behave plausibly for negative months.\n\nHere's a more fleshed-out patch. I don't think we really need to\ndocument the behavior for negative intervals; at least, we haven't\ndone that so far for any other fields. I did add testing of such\ncases though.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 07 May 2024 17:27:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "Devils advocating here, feel free to ignore.\r\n\r\nIs there a real need for a negative month? Sounds like high level this could be disastrous if I screw up the syntax. (Ah, memories of DD)\r\n\r\nI have done this in data warehousing with dimensions tables.\r\n\r\nJust process on the INT and translate into the name.\r\n\r\nI was thinking on how a negative month could impact this side (data warehousing) side of querying. \r\n\r\nI could be chicken little on this, but wanted it in the conversation.\r\n\r\nworkaround for negative months:\r\n\r\nCREATE TABLE dim_biz_hours( year INT(4)\r\n, doy INT(3)\r\n, dow INT(7)\r\n, month INT(2)\r\n, day INT(2)\r\n, hour INT(2)\r\n, minute INT(2)\r\n, second INT(2)\r\n, utc_offset INT(2)\r\n, utc_offset_dst INT(2)\r\n);\r\n\r\nINSERT INTO biz_hours (year)\r\nSELECT * FROM generate_series(2000, 2099);\r\n\r\nINSERT INTO biz_hours (doy)\r\nSELECT * FROM generate_series(1, 366);\r\n\r\nINSERT INTO biz_hours (dow)\r\nSELECT * FROM generate_series(1, 7);\r\n\r\nINSERT INTO biz_hours (month)\r\nSELECT * FROM generate_series(1, 12);\r\n\r\nINSERT INTO biz_hours (day)\r\nSELECT * FROM generate_series(1, 31) ;\r\n\r\nINSERT INTO biz_hours (hour)\r\nSELECT * FROM generate_series(1, 24);\r\n\r\nINSERT INTO biz_hours (minute)\r\nSELECT * FROM generate_series(1, 60);\r\n\r\nINSERT INTO biz_hours (second\r\nSELECT * FROM generate_series(1, 60);\r\n\r\nINSERT INTO biz_hours (utc_offset)\r\nSELECT * FROM generate_series(1, 24);\r\n\r\nINSERT INTO biz_hours (utc_offset_dst)\r\nSELECT * FROM generate_series(1, 24);\r\n\r\n\r\n-----Original Message-----\r\nFrom: Tom Lane <[email protected]> \r\nSent: Tuesday, May 7, 2024 2:27 PM\r\nTo: jian he <[email protected]>\r\nCc: Francisco Olarte <[email protected]>; Michael Bondarenko <[email protected]>; [email protected]; [email protected]; Peter Eisentraut <[email protected]>\r\nSubject: [EXTERNAL] Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);\r\n\r\njian he <[email protected]> writes:\r\n> On Wed, Feb 21, 2024 at 4:56 AM Tom Lane <[email protected]> wrote:\r\n>>> Yeah, I see what you mean: the output for negative month counts is \r\n>>> very bizarre, whereas other fields seem to all produce the negative \r\n>>> of what they'd produce for the absolute value of the interval.\r\n>>> We could either try to fix that or decide that rejecting \"quarter\"\r\n>>> for intervals is the saner answer.\r\n\r\n>> After fooling with these cases for a little I'm inclined to think we \r\n>> should do it as attached (no test or docs changes yet).\r\n\r\n> ... I don't know how to write the documentation for the `quarter` when \r\n> it's negative.\r\n\r\nAfter poking at it some more, I realized that my draft patch was still wrong about that. We really have to look at interval->month if we want to behave plausibly for negative months.\r\n\r\nHere's a more fleshed-out patch. I don't think we really need to document the behavior for negative intervals; at least, we haven't done that so far for any other fields. I did add testing of such cases though.\r\n\r\n\t\t\tregards, tom lane\r\n\r\n",
"msg_date": "Tue, 7 May 2024 22:09:34 +0000",
"msg_from": "\"Wetmore, Matthew (CTR)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
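The workaround above appears to be written in a MySQL-style dialect: INT(n) precision arguments are not valid PostgreSQL, the CREATE TABLE name (dim_biz_hours) differs from the INSERT target (biz_hours), and one INSERT is missing its closing parenthesis. A PostgreSQL-flavored sketch of the same dimension-table idea (illustrative only) could look like:

    CREATE TABLE dim_biz_hours (
        year           int,
        doy            int,
        dow            int,
        month          int,
        day            int,
        hour           int,
        minute         int,
        second         int,
        utc_offset     int,
        utc_offset_dst int
    );

    INSERT INTO dim_biz_hours (year)  SELECT * FROM generate_series(2000, 2099);
    INSERT INTO dim_biz_hours (month) SELECT * FROM generate_series(1, 12);
    INSERT INTO dim_biz_hours (day)   SELECT * FROM generate_series(1, 31);
    -- ... and similarly for the remaining columns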
{
"msg_contents": "On Wed, May 8, 2024 at 5:27 AM Tom Lane <[email protected]> wrote:\n>\n> Here's a more fleshed-out patch. I don't think we really need to\n> document the behavior for negative intervals; at least, we haven't\n> done that so far for any other fields. I did add testing of such\n> cases though.\n>\n\nthe doc looks good to me.\nextract quarter from the interval makes sense to me.\n\nbut in real life, for week, we generally begin with 1?\nlike \"the first week\", \"second week\"\n\nso should\nselect extract(week from interval '1 day');\nreturn 1\n?\n\n\n",
"msg_date": "Wed, 8 May 2024 09:03:56 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "\"Wetmore, Matthew (CTR)\" <[email protected]> writes:\n> Devils advocating here, feel free to ignore.\n> Is there a real need for a negative month? Sounds like high level this could be disastrous if I screw up the syntax. (Ah, memories of DD)\n\nWhat are you objecting to the \"need for\"? That intervals can store\nnegative months at all? I think that ship sailed a couple decades\nago. It's hard to use interval as the output of, say,\ntimestamp minus timestamp if it refuses to allow negative values.\n\nThe next fallback position perhaps could be that extract(quarter ...)\ncould throw error for negative input, but that seems like mostly a\nfoot-gun. We've striven elsewhere to not have it throw error, even\nif there's not any very sane choice to make. For instance, these\nare pre-existing behaviors:\n\nregression=# select extract(quarter from interval 'infinity');\n extract \n---------\n \n(1 row)\n\nregression=# select extract(quarter from interval '-infinity');\n extract \n---------\n \n(1 row)\n\nMaybe there's a case for returning null for \"quarter\" for any negative\nmonths value, but that seems inconsistent with other behaviors of\nextract(). The pattern I see for finite values is that negating\nthe input interval negates each output of extract().\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 May 2024 09:55:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "jian he <[email protected]> writes:\n> but in real life, for week, we generally begin with 1?\n> like \"the first week\", \"second week\"\n> so should\n> select extract(week from interval '1 day');\n> return 1\n> ?\n\nHmm, I read it as being \"the number of (whole) weeks in the\ninterval\". Starting with week 1 is what happens in the timestamp\ncase, true, but I don't find that appropriate for interval.\nBy analogy,\n\nregression=# select extract(day from interval '23 hours');\n extract \n---------\n 0\n(1 row)\n\nThere's no such thing as \"day 0\" in the timestamp case,\nbut that doesn't make this wrong.\n\nIn any case, I'm starting to wonder why this issue is on the v17\nopen items list. These are hardly new bugs in 17. If there's\nstill differences of opinion about what the definition should be,\nI think cramming in a change post-feature-freeze is not appropriate.\nLet's just queue the issue for the next commitfest (already done\nat [1]) and take it off the open items list.\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/48/4979/\n\n\n",
"msg_date": "Wed, 08 May 2024 10:10:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: tested, failed\nDocumentation: tested, failed\n\nHi, works out well everything. This is my first review, so if I should add more content here let me know.\r\nCheers, Martijn.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Sat, 18 May 2024 14:02:17 +0000",
"msg_from": "Martijn Wallet <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "For some reason the review indicated \"failed\".\r\nIt should of course read:\r\nmake installcheck-world: tested, passed\r\nImplements feature: tested, passed\r\nSpec compliant: tested, passed\r\nDocumentation: tested, passed",
"msg_date": "Sat, 18 May 2024 14:47:29 +0000",
"msg_from": "Martijn Wallet <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "I took another look at this issue and got annoyed by the fact that the\nproposed coding for \"quarter\" still doesn't satisfy the rule that\nthe output for a negative interval should be the negative of the\noutput for the sign-reversed interval. Specifically, if the month\nfield is zero, the v2 patch always emits 1:\n\nregression=# select extract(quarter from interval '1 day');\n extract \n---------\n 1\n(1 row)\n\nregression=# select extract(quarter from interval '-1 day');\n extract \n---------\n 1\n(1 row)\n\nWe could fix that by examining the sign of the lower-order fields\nwhen month is zero, as in the v3 patch attached. However, I'm not\nat all sure this is really better than v2. Notably, it makes the\ndocumentation's statement that the result is \"the month field\ndivided by 3 plus 1\" even more incomplete. I still don't really\nwant to go into details about the behavior for negative intervals.\nOTOH if we did do that, I'd rather write a blanket statement\nabout the result being the negative of the result for a positive\ninterval.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 08 Jul 2024 13:03:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "On Tue, Jul 9, 2024 at 1:03 AM Tom Lane <[email protected]> wrote:\n>\n> I took another look at this issue and got annoyed by the fact that the\n> proposed coding for \"quarter\" still doesn't satisfy the rule that\n> the output for a negative interval should be the negative of the\n> output for the sign-reversed interval. Specifically, if the month\n> field is zero, the v2 patch always emits 1:\n>\n> regression=# select extract(quarter from interval '1 day');\n> extract\n> ---------\n> 1\n> (1 row)\n>\n> regression=# select extract(quarter from interval '-1 day');\n> extract\n> ---------\n> 1\n> (1 row)\n>\n> We could fix that by examining the sign of the lower-order fields\n> when month is zero, as in the v3 patch attached. However, I'm not\n> at all sure this is really better than v2. Notably, it makes the\n> documentation's statement that the result is \"the month field\n> divided by 3 plus 1\" even more incomplete. I still don't really\n> want to go into details about the behavior for negative intervals.\n> OTOH if we did do that, I'd rather write a blanket statement\n> about the result being the negative of the result for a positive\n> interval.\n>\n> Thoughts?\n>\n> regards, tom lane\n>\n\n\n+ <para>\n+ For <type>interval</type> values, the week field is simply the number\n+ of integral days divided by 7.\n+ </para>\n\n\n+SELECT EXTRACT(WEEK FROM INTERVAL '13 days 24 hours');\n+<lineannotation>Result: </lineannotation><computeroutput>1</computeroutput>\n\nnot sure the doc example will vividly demonstrate the explanation (\"integral\")\nor confuse people, given that\nSELECT EXTRACT(WEEK FROM INTERVAL '14 days');\nreturns 2.\n\nand\nSELECT INTERVAL '14 days' = INTERVAL '13 days 24 hours';\nis true.\n\n\n",
"msg_date": "Sat, 13 Jul 2024 00:35:19 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "On Mon, Jul 8, 2024 at 01:03:28PM -0400, Tom Lane wrote:\n> I took another look at this issue and got annoyed by the fact that the\n> proposed coding for \"quarter\" still doesn't satisfy the rule that\n> the output for a negative interval should be the negative of the\n> output for the sign-reversed interval. Specifically, if the month\n> field is zero, the v2 patch always emits 1:\n> \n> regression=# select extract(quarter from interval '1 day');\n> extract \n> ---------\n> 1\n> (1 row)\n> \n> regression=# select extract(quarter from interval '-1 day');\n> extract \n> ---------\n> 1\n> (1 row)\n> \n> We could fix that by examining the sign of the lower-order fields\n> when month is zero, as in the v3 patch attached. However, I'm not\n> at all sure this is really better than v2. Notably, it makes the\n> documentation's statement that the result is \"the month field\n> divided by 3 plus 1\" even more incomplete. I still don't really\n> want to go into details about the behavior for negative intervals.\n> OTOH if we did do that, I'd rather write a blanket statement\n> about the result being the negative of the result for a positive\n> interval.\n> \n> Thoughts?\n\nI tested master, patch version 2 and patch version 3 with some sample\nextract() queires, attached. I like patch version 2. Patch version 3\nbothers me because \"-600 days\" is ignored if months is non-zero, and\nused for its sign for zero month values, which seems odd to me; better\nto ignore it.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Thu, 15 Aug 2024 22:45:58 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "On Thu, Aug 15, 2024 at 10:45:58PM -0400, Bruce Momjian wrote:\n> > We could fix that by examining the sign of the lower-order fields\n> > when month is zero, as in the v3 patch attached. However, I'm not\n> > at all sure this is really better than v2. Notably, it makes the\n> > documentation's statement that the result is \"the month field\n> > divided by 3 plus 1\" even more incomplete. I still don't really\n> > want to go into details about the behavior for negative intervals.\n> > OTOH if we did do that, I'd rather write a blanket statement\n> > about the result being the negative of the result for a positive\n> > interval.\n> > \n> > Thoughts?\n> \n> I tested master, patch version 2 and patch version 3 with some sample\n> extract() queires, attached. I like patch version 2. Patch version 3\n> bothers me because \"-600 days\" is ignored if months is non-zero, and\n> used for its sign for zero month values, which seems odd to me; better\n> to ignore it.\n\nI think there are two more issues. In patch version 3, when months is\nzero and you check days, you should also check seconds if days is zero.\n\nI think the other issue is that zero months is a valid Q1 value, since\nmonths 0-2 are Q1; from master:\n\n\tSELECT extract(quarter FROM interval '0 months');\n\t extract\n\t---------\n\t 1\n\t\n\tSELECT extract(quarter FROM interval '2 months');\n\t extract\n\t---------\n\t 1\n\t\n\tSELECT extract(quarter FROM interval '3 months');\n\t extract\n\t---------\n\t 2\n\nso the idea that we should adjust the sign for zero months quarter\nextract doesn't seem logical to me.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 16 Aug 2024 11:26:58 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Thu, Aug 15, 2024 at 10:45:58PM -0400, Bruce Momjian wrote:\n>> I tested master, patch version 2 and patch version 3 with some sample\n>> extract() queires, attached. I like patch version 2.\n\nI'm still pretty dissatisfied with both versions :-(\n\n> I think there are two more issues. In patch version 3, when months is\n> zero and you check days, you should also check seconds if days is zero.\n\nEh? v3 does that:\n\n+ else if (interval->day > 0 ||\n+ (interval->day == 0 && interval->time >= 0))\n\nBut I'm starting to despair of reaching a solution that's actually\nself-consistent. Maybe we should leave the DTK_QUARTER behavior\nalone, and content ourselves with adding DTK_WEEK.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Aug 2024 11:37:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "On Fri, Aug 16, 2024 at 11:37:55AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Thu, Aug 15, 2024 at 10:45:58PM -0400, Bruce Momjian wrote:\n> >> I tested master, patch version 2 and patch version 3 with some sample\n> >> extract() queries, attached. I like patch version 2.\n> \n> I'm still pretty dissatisfied with both versions :-(\n> \n> > I think there are two more issues. In patch version 3, when months is\n> > zero and you check days, you should also check seconds if days is zero.\n> \n> Eh? v3 does that:\n> \n> + else if (interval->day > 0 ||\n> + (interval->day == 0 && interval->time >= 0))\n\nOh, sorry, I missed that detail.\n\n> But I'm starting to despair of reaching a solution that's actually\n> self-consistent. Maybe we should leave the DTK_QUARTER behavior\n> alone, and content ourselves with adding DTK_WEEK.\n\nWell, I liked that -4 months actually was in -2 quarter. I see your\npoint that if 0-2 is Q1, why is only -1 to -2 in minus Q1, but I think I\ncan live with that on the assumption that negative months can be handled\ndifferently.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 16 Aug 2024 11:52:38 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Fri, Aug 16, 2024 at 11:37:55AM -0400, Tom Lane wrote:\n>> But I'm starting to despair of reaching a solution that's actually\n>> self-consistent. Maybe we should leave the DTK_QUARTER behavior\n>> alone, and content ourselves with adding DTK_WEEK.\n\n> Well, I liked that -4 months actually was in -2 quarter.\n\nYeah. On further reflection, I agree it's a bad idea for the\nDTK_QUARTER computation to depend on anything but the months field.\nSo that lets out v3. However, what we have historically is\n\nregression=# select n, extract(quarter from interval '1 mon' * n) from generate_series(-12,12) n;\n n | extract \n-----+---------\n -12 | 1\n -11 | -2\n -10 | -2\n -9 | -2\n -8 | -1\n -7 | -1\n -6 | -1\n -5 | 0\n -4 | 0\n -3 | 0\n -2 | 1\n -1 | 1\n 0 | 1\n 1 | 1\n 2 | 1\n 3 | 2\n 4 | 2\n 5 | 2\n 6 | 3\n 7 | 3\n 8 | 3\n 9 | 4\n 10 | 4\n 11 | 4\n 12 | 1\n(25 rows)\n\nwhich is fine on the positive side but it's hard to describe the\nresults for negative months as anything but wacko. The v2 patch\ngives\n\nregression=# select n, extract(quarter from interval '1 mon' * n) from generate_series(-12,12) n;\n n | extract \n-----+---------\n -12 | -1\n -11 | -4\n -10 | -4\n -9 | -4\n -8 | -3\n -7 | -3\n -6 | -3\n -5 | -2\n -4 | -2\n -3 | -2\n -2 | -1\n -1 | -1\n 0 | 1\n 1 | 1\n 2 | 1\n 3 | 2\n 4 | 2\n 5 | 2\n 6 | 3\n 7 | 3\n 8 | 3\n 9 | 4\n 10 | 4\n 11 | 4\n 12 | 1\n(25 rows)\n\nwhich is a whole lot saner. So let's run with v2.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Aug 2024 12:06:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
},
{
"msg_contents": "On Fri, Aug 16, 2024 at 12:06:35PM -0400, Tom Lane wrote:\n> regression=# select n, extract(quarter from interval '1 mon' * n) from generate_series(-12,12) n;\n> n | extract \n> -----+---------\n> -12 | 1\n> -11 | -2\n> -10 | -2\n> -9 | -2\n\nWow, that \"1\" is weird to see.\n\n> which is fine on the positive side but it's hard to describe the\n> results for negative months as anything but wacko. The v2 patch\n> gives\n> \n> regression=# select n, extract(quarter from interval '1 mon' * n) from generate_series(-12,12) n;\n> n | extract \n> -----+---------\n> -12 | -1\n> -11 | -4\n> -10 | -4\n> -9 | -4\n> -8 | -3\n> -7 | -3\n> -6 | -3\n> -5 | -2\n> -4 | -2\n> -3 | -2\n> -2 | -1\n> -1 | -1\n> 0 | 1\n> 1 | 1\n> 2 | 1\n> 3 | 2\n> 4 | 2\n> 5 | 2\n> 6 | 3\n> 7 | 3\n> 8 | 3\n> 9 | 4\n> 10 | 4\n> 11 | 4\n> 12 | 1\n> (25 rows)\n> \n> which is a whole lot saner. So let's run with v2.\n\nYes, that v2 output looks very clean. I had to really dig my head into\nthis so I am not surprised it was confusing to find the right solution.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 16 Aug 2024 12:15:58 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18348: Inconsistency with EXTRACT([field] from INTERVAL);"
}
] |
[
{
"msg_contents": "Hi,\n\nThe following assertion failure was seen while testing one scenario\nfor other patch:\nTRAP: failed Assert(\"s->data.confirmed_flush >=\ns->last_saved_confirmed_flush\"), File: \"slot.c\", Line: 1760, PID:\n545314\npostgres: checkpointer performing shutdown\ncheckpoint(ExceptionalCondition+0xbb)[0x564ee6870c58]\npostgres: checkpointer performing shutdown\ncheckpoint(CheckPointReplicationSlots+0x18e)[0x564ee65e9c71]\npostgres: checkpointer performing shutdown checkpoint(+0x1e1403)[0x564ee61be403]\npostgres: checkpointer performing shutdown\ncheckpoint(CreateCheckPoint+0x78a)[0x564ee61bdace]\npostgres: checkpointer performing shutdown\ncheckpoint(ShutdownXLOG+0x150)[0x564ee61bc735]\npostgres: checkpointer performing shutdown checkpoint(+0x5ae28c)[0x564ee658b28c]\npostgres: checkpointer performing shutdown\ncheckpoint(CheckpointerMain+0x31e)[0x564ee658ad55]\npostgres: checkpointer performing shutdown\ncheckpoint(AuxiliaryProcessMain+0x1d1)[0x564ee65888d9]\npostgres: checkpointer performing shutdown checkpoint(+0x5b7200)[0x564ee6594200]\npostgres: checkpointer performing shutdown\ncheckpoint(PostmasterMain+0x14da)[0x564ee658f12f]\npostgres: checkpointer performing shutdown checkpoint(+0x464fc6)[0x564ee6441fc6]\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7ff6afa29d90]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7ff6afa29e40]\npostgres: checkpointer performing shutdown\ncheckpoint(_start+0x25)[0x564ee60b8e05]\n\nI was able to reproduce this issue with the following steps:\n-- Setup\n-- Publisher node:\ncreate table t1(c1 int);\ncreate table t2(c1 int);\ncreate publication pub1 for table t1;\ncreate publication pub2 for table t2;\n\n-- Subscriber node:\ncreate table t1(c1 int);\ncreate table t2(c1 int);\ncreate subscription test1 connection 'dbname=postgres host=localhost\nport=5432' publication pub1, pub2;\nselect * from pg_subscription;\n\n-- Actual test\ninsert into t1 values(10);\ninsert into t2 values(20);\nselect pg_sleep(10);\ndrop publication pub2;\ninsert into t1 values(10);\ninsert into t2 values(20);\n\nStop the publisher to see the assertion.\n\nFor me the issue reproduces about twice in five times using the\nassert_failure.sh script attached.\n\nAfter the insert operation is replicated to the subscriber, the\nsubscriber will set the lsn value sent by the publisher in the\nreplication origin (in my case it was 0/1510978). publisher will then\nsend keepalive messages with the current WAL position in the publisher\n(in my case it was 0/15109B0), but subscriber will simply send this\nposition as the flush_lsn to the publisher as there are no ongoing\ntransactions. Then since the publisher is started, it will identify\nthat publication does not exist and stop the walsender/apply worker\nprocess. When the apply worker is restarted, we will get the\nremote_lsn(in my case it was 0/1510978) of the origin and set it to\norigin_startpos. We will start the apply worker with this\norigin_startpos (origin's remote_lsn). This position will be sent as\nfeedback to the walsender process from the below stack:\nrun_apply_worker->start_apply->LogicalRepApplyLoop->send_feedback.\nIt will use the following send_feedback function call of\nLogicalRepApplyLoop function as in below code here as nothing is\nreceived from walsender:\nLogicalRepApplyLoop function\n.......\nlen = walrcv_receive(LogRepWorkerWalRcvConn, &buf, &fd);\nif (len != 0)\n{\n /* Loop to process all available data (without blocking). 
*/\n for (;;)\n {\n CHECK_FOR_INTERRUPTS();\n ...\n }\n}\n\n/* confirm all writes so far */\nsend_feedback(last_received, false, false);\n.......\n\nIn send_feedback, we will set flushpos to replication origin's\nremote_lsn and send it to the walsender process. Walsender process\nwill receive this information and set confirmed_flush in:\nProcessStandbyReplyMessage->LogicalConfirmReceivedLocation\n\nThen immediately we are trying to stop the publisher instance,\nshutdown checkpoint process will be triggered. In this case:\nconfirmed_flush = 0/1510978 will be lesser than\nlast_saved_confirmed_flush = 0/15109B0 which will result in Assertion\nfailure.\n\nThis issue is happening because we allow setting the confirmed_flush\nto a backward position.\nThere are a couple of ways to fix this:\na) One way it not to update the confirm_flush if the lsn sent is an\nolder value like in Confirm_flush_dont_allow_backward.patch\nb) Another way is to remove the assertion in\nCheckPointReplicationSlots and marking the slot as dirty only if\nconfirmed_flush is greater than last_saved_confirmed_flush like in\nAssert_confirmed_flush_will_always_not_be_less_than_last_saved_confirmed_flush.patch\n\nI preferred the first approach.\n\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Fri, 16 Feb 2024 17:39:47 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "confirmed flush lsn seems to be move backward in certain error cases"
},
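While reproducing this, the two positions involved can be watched directly from the standard catalog views (nothing below is specific to the proposed patches):

    -- on the publisher: the position the slot believes the subscriber has flushed
    SELECT slot_name, confirmed_flush_lsn, restart_lsn
    FROM pg_replication_slots
    WHERE slot_type = 'logical';

    -- on the subscriber: the position recorded in the replication origin
    SELECT external_id, remote_lsn, local_lsn
    FROM pg_replication_origin_status;

Comparing the publisher's confirmed_flush_lsn before and after the apply worker restart shows the backward movement described above.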
{
"msg_contents": "On Fri, Feb 16, 2024 at 5:53 PM vignesh C <[email protected]> wrote:\n>\n>\n> After the insert operation is replicated to the subscriber, the\n> subscriber will set the lsn value sent by the publisher in the\n> replication origin (in my case it was 0/1510978). publisher will then\n> send keepalive messages with the current WAL position in the publisher\n> (in my case it was 0/15109B0), but subscriber will simply send this\n> position as the flush_lsn to the publisher as there are no ongoing\n> transactions. Then since the publisher is started, it will identify\n> that publication does not exist and stop the walsender/apply worker\n> process. When the apply worker is restarted, we will get the\n> remote_lsn(in my case it was 0/1510978) of the origin and set it to\n> origin_startpos. We will start the apply worker with this\n> origin_startpos (origin's remote_lsn). This position will be sent as\n> feedback to the walsender process from the below stack:\n> run_apply_worker->start_apply->LogicalRepApplyLoop->send_feedback.\n> It will use the following send_feedback function call of\n> LogicalRepApplyLoop function as in below code here as nothing is\n> received from walsender:\n> LogicalRepApplyLoop function\n> .......\n> len = walrcv_receive(LogRepWorkerWalRcvConn, &buf, &fd);\n> if (len != 0)\n> {\n> /* Loop to process all available data (without blocking). */\n> for (;;)\n> {\n> CHECK_FOR_INTERRUPTS();\n> ...\n> }\n> }\n>\n> /* confirm all writes so far */\n> send_feedback(last_received, false, false);\n> .......\n>\n> In send_feedback, we will set flushpos to replication origin's\n> remote_lsn and send it to the walsender process. Walsender process\n> will receive this information and set confirmed_flush in:\n> ProcessStandbyReplyMessage->LogicalConfirmReceivedLocation\n>\n> Then immediately we are trying to stop the publisher instance,\n> shutdown checkpoint process will be triggered. In this case:\n> confirmed_flush = 0/1510978 will be lesser than\n> last_saved_confirmed_flush = 0/15109B0 which will result in Assertion\n> failure.\n>\n> This issue is happening because we allow setting the confirmed_flush\n> to a backward position.\n>\n\nI see your point.\n\n> There are a couple of ways to fix this:\n> a) One way it not to update the confirm_flush if the lsn sent is an\n> older value like in Confirm_flush_dont_allow_backward.patch\n>\n\n@@ -1839,7 +1839,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)\n\n SpinLockAcquire(&MyReplicationSlot->mutex);\n\n- MyReplicationSlot->data.confirmed_flush = lsn;\n+ if (lsn > MyReplicationSlot->data.confirmed_flush)\n+ MyReplicationSlot->data.confirmed_flush = lsn;\n\n /* if we're past the location required for bumping xmin, do so */\n if (MyReplicationSlot->candidate_xmin_lsn != InvalidXLogRecPtr &&\n@@ -1904,7 +1905,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)\n else\n {\n SpinLockAcquire(&MyReplicationSlot->mutex);\n- MyReplicationSlot->data.confirmed_flush = lsn;\n+ if (lsn > MyReplicationSlot->data.confirmed_flush)\n+ MyReplicationSlot->data.confirmed_flush = lsn;\n\nBTW, from which code path does it update the prior value of\nconfirmed_flush? If it is through the else check, then can we see if\nit may change the confirm_flush to the prior position via the first\ncode path? I am asking because in the first code path, we can even\nflush the re-treated value of confirm_flush LSN.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 17 Feb 2024 12:02:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confirmed flush lsn seems to be move backward in certain error\n cases"
},
{
"msg_contents": "On Sat, 17 Feb 2024 at 12:03, Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Feb 16, 2024 at 5:53 PM vignesh C <[email protected]> wrote:\n> >\n> >\n> > After the insert operation is replicated to the subscriber, the\n> > subscriber will set the lsn value sent by the publisher in the\n> > replication origin (in my case it was 0/1510978). publisher will then\n> > send keepalive messages with the current WAL position in the publisher\n> > (in my case it was 0/15109B0), but subscriber will simply send this\n> > position as the flush_lsn to the publisher as there are no ongoing\n> > transactions. Then since the publisher is started, it will identify\n> > that publication does not exist and stop the walsender/apply worker\n> > process. When the apply worker is restarted, we will get the\n> > remote_lsn(in my case it was 0/1510978) of the origin and set it to\n> > origin_startpos. We will start the apply worker with this\n> > origin_startpos (origin's remote_lsn). This position will be sent as\n> > feedback to the walsender process from the below stack:\n> > run_apply_worker->start_apply->LogicalRepApplyLoop->send_feedback.\n> > It will use the following send_feedback function call of\n> > LogicalRepApplyLoop function as in below code here as nothing is\n> > received from walsender:\n> > LogicalRepApplyLoop function\n> > .......\n> > len = walrcv_receive(LogRepWorkerWalRcvConn, &buf, &fd);\n> > if (len != 0)\n> > {\n> > /* Loop to process all available data (without blocking). */\n> > for (;;)\n> > {\n> > CHECK_FOR_INTERRUPTS();\n> > ...\n> > }\n> > }\n> >\n> > /* confirm all writes so far */\n> > send_feedback(last_received, false, false);\n> > .......\n> >\n> > In send_feedback, we will set flushpos to replication origin's\n> > remote_lsn and send it to the walsender process. Walsender process\n> > will receive this information and set confirmed_flush in:\n> > ProcessStandbyReplyMessage->LogicalConfirmReceivedLocation\n> >\n> > Then immediately we are trying to stop the publisher instance,\n> > shutdown checkpoint process will be triggered. 
In this case:\n> > confirmed_flush = 0/1510978 will be lesser than\n> > last_saved_confirmed_flush = 0/15109B0 which will result in Assertion\n> > failure.\n> >\n> > This issue is happening because we allow setting the confirmed_flush\n> > to a backward position.\n> >\n>\n> I see your point.\n>\n> > There are a couple of ways to fix this:\n> > a) One way it not to update the confirm_flush if the lsn sent is an\n> > older value like in Confirm_flush_dont_allow_backward.patch\n> >\n>\n> @@ -1839,7 +1839,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)\n>\n> SpinLockAcquire(&MyReplicationSlot->mutex);\n>\n> - MyReplicationSlot->data.confirmed_flush = lsn;\n> + if (lsn > MyReplicationSlot->data.confirmed_flush)\n> + MyReplicationSlot->data.confirmed_flush = lsn;\n>\n> /* if we're past the location required for bumping xmin, do so */\n> if (MyReplicationSlot->candidate_xmin_lsn != InvalidXLogRecPtr &&\n> @@ -1904,7 +1905,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)\n> else\n> {\n> SpinLockAcquire(&MyReplicationSlot->mutex);\n> - MyReplicationSlot->data.confirmed_flush = lsn;\n> + if (lsn > MyReplicationSlot->data.confirmed_flush)\n> + MyReplicationSlot->data.confirmed_flush = lsn;\n>\n> BTW, from which code path does it update the prior value of\n> confirmed_flush?\n\nThe confirmed_flush is getting set in the else condition for this scenario.\n\nIf it is through the else check, then can we see if\n> it may change the confirm_flush to the prior position via the first\n> code path? I am asking because in the first code path, we can even\n> flush the re-treated value of confirm_flush LSN.\n\nI was not able to find any scenario to set a prior position with the\nfirst code path. I tried various scenarios like adding delay in\nwalsender, add delay in apply worker, restart the instances and with\nvarious DML operations. It was always setting it to either to the same\nvalue as previous or greater value.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 20 Feb 2024 12:35:06 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: confirmed flush lsn seems to be move backward in certain error\n cases"
},
{
"msg_contents": "On Fri, 16 Feb 2024 at 17:39, vignesh C <[email protected]> wrote:\n>\n> Hi,\n>\n> The following assertion failure was seen while testing one scenario\n> for other patch:\n> TRAP: failed Assert(\"s->data.confirmed_flush >=\n> s->last_saved_confirmed_flush\"), File: \"slot.c\", Line: 1760, PID:\n> 545314\n> postgres: checkpointer performing shutdown\n> checkpoint(ExceptionalCondition+0xbb)[0x564ee6870c58]\n> postgres: checkpointer performing shutdown\n> checkpoint(CheckPointReplicationSlots+0x18e)[0x564ee65e9c71]\n> postgres: checkpointer performing shutdown checkpoint(+0x1e1403)[0x564ee61be403]\n> postgres: checkpointer performing shutdown\n> checkpoint(CreateCheckPoint+0x78a)[0x564ee61bdace]\n> postgres: checkpointer performing shutdown\n> checkpoint(ShutdownXLOG+0x150)[0x564ee61bc735]\n> postgres: checkpointer performing shutdown checkpoint(+0x5ae28c)[0x564ee658b28c]\n> postgres: checkpointer performing shutdown\n> checkpoint(CheckpointerMain+0x31e)[0x564ee658ad55]\n> postgres: checkpointer performing shutdown\n> checkpoint(AuxiliaryProcessMain+0x1d1)[0x564ee65888d9]\n> postgres: checkpointer performing shutdown checkpoint(+0x5b7200)[0x564ee6594200]\n> postgres: checkpointer performing shutdown\n> checkpoint(PostmasterMain+0x14da)[0x564ee658f12f]\n> postgres: checkpointer performing shutdown checkpoint(+0x464fc6)[0x564ee6441fc6]\n> /lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7ff6afa29d90]\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7ff6afa29e40]\n> postgres: checkpointer performing shutdown\n> checkpoint(_start+0x25)[0x564ee60b8e05]\n>\n> I was able to reproduce this issue with the following steps:\n> -- Setup\n> -- Publisher node:\n> create table t1(c1 int);\n> create table t2(c1 int);\n> create publication pub1 for table t1;\n> create publication pub2 for table t2;\n>\n> -- Subscriber node:\n> create table t1(c1 int);\n> create table t2(c1 int);\n> create subscription test1 connection 'dbname=postgres host=localhost\n> port=5432' publication pub1, pub2;\n> select * from pg_subscription;\n>\n> -- Actual test\n> insert into t1 values(10);\n> insert into t2 values(20);\n> select pg_sleep(10);\n> drop publication pub2;\n> insert into t1 values(10);\n> insert into t2 values(20);\n>\n> Stop the publisher to see the assertion.\n>\n> For me the issue reproduces about twice in five times using the\n> assert_failure.sh script attached.\n>\n> After the insert operation is replicated to the subscriber, the\n> subscriber will set the lsn value sent by the publisher in the\n> replication origin (in my case it was 0/1510978). publisher will then\n> send keepalive messages with the current WAL position in the publisher\n> (in my case it was 0/15109B0), but subscriber will simply send this\n> position as the flush_lsn to the publisher as there are no ongoing\n> transactions. Then since the publisher is started, it will identify\n> that publication does not exist and stop the walsender/apply worker\n> process. When the apply worker is restarted, we will get the\n> remote_lsn(in my case it was 0/1510978) of the origin and set it to\n> origin_startpos. We will start the apply worker with this\n> origin_startpos (origin's remote_lsn). 
This position will be sent as\n> feedback to the walsender process from the below stack:\n> run_apply_worker->start_apply->LogicalRepApplyLoop->send_feedback.\n> It will use the following send_feedback function call of\n> LogicalRepApplyLoop function as in below code here as nothing is\n> received from walsender:\n> LogicalRepApplyLoop function\n> .......\n> len = walrcv_receive(LogRepWorkerWalRcvConn, &buf, &fd);\n> if (len != 0)\n> {\n> /* Loop to process all available data (without blocking). */\n> for (;;)\n> {\n> CHECK_FOR_INTERRUPTS();\n> ...\n> }\n> }\n>\n> /* confirm all writes so far */\n> send_feedback(last_received, false, false);\n> .......\n>\n> In send_feedback, we will set flushpos to replication origin's\n> remote_lsn and send it to the walsender process. Walsender process\n> will receive this information and set confirmed_flush in:\n> ProcessStandbyReplyMessage->LogicalConfirmReceivedLocation\n>\n> Then immediately we are trying to stop the publisher instance,\n> shutdown checkpoint process will be triggered. In this case:\n> confirmed_flush = 0/1510978 will be lesser than\n> last_saved_confirmed_flush = 0/15109B0 which will result in Assertion\n> failure.\n>\n> This issue is happening because we allow setting the confirmed_flush\n> to a backward position.\n> There are a couple of ways to fix this:\n> a) One way it not to update the confirm_flush if the lsn sent is an\n> older value like in Confirm_flush_dont_allow_backward.patch\n> b) Another way is to remove the assertion in\n> CheckPointReplicationSlots and marking the slot as dirty only if\n> confirmed_flush is greater than last_saved_confirmed_flush like in\n> Assert_confirmed_flush_will_always_not_be_less_than_last_saved_confirmed_flush.patch\n>\n> I preferred the first approach.\n\nI have created the following commitfest entry for this:\nhttps://commitfest.postgresql.org/47/4845/\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 20 Feb 2024 18:56:42 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: confirmed flush lsn seems to be move backward in certain error\n cases"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 12:35 PM vignesh C <[email protected]> wrote:\n>\n> On Sat, 17 Feb 2024 at 12:03, Amit Kapila <[email protected]> wrote:\n> >\n> >\n> > @@ -1839,7 +1839,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)\n> >\n> > SpinLockAcquire(&MyReplicationSlot->mutex);\n> >\n> > - MyReplicationSlot->data.confirmed_flush = lsn;\n> > + if (lsn > MyReplicationSlot->data.confirmed_flush)\n> > + MyReplicationSlot->data.confirmed_flush = lsn;\n> >\n> > /* if we're past the location required for bumping xmin, do so */\n> > if (MyReplicationSlot->candidate_xmin_lsn != InvalidXLogRecPtr &&\n> > @@ -1904,7 +1905,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)\n> > else\n> > {\n> > SpinLockAcquire(&MyReplicationSlot->mutex);\n> > - MyReplicationSlot->data.confirmed_flush = lsn;\n> > + if (lsn > MyReplicationSlot->data.confirmed_flush)\n> > + MyReplicationSlot->data.confirmed_flush = lsn;\n> >\n> > BTW, from which code path does it update the prior value of\n> > confirmed_flush?\n>\n> The confirmed_flush is getting set in the else condition for this scenario.\n>\n> If it is through the else check, then can we see if\n> > it may change the confirm_flush to the prior position via the first\n> > code path? I am asking because in the first code path, we can even\n> > flush the re-treated value of confirm_flush LSN.\n>\n> I was not able to find any scenario to set a prior position with the\n> first code path. I tried various scenarios like adding delay in\n> walsender, add delay in apply worker, restart the instances and with\n> various DML operations. It was always setting it to either to the same\n> value as previous or greater value.\n>\n\nFair enough. This means that in the prior versions, it was never\npossible to move confirmed_flush LSN in the slot to a backward\nposition on the disk. So, moving it backward temporarily (in the\nmemory) shouldn't create any problem. I would prefer your\nAssert_confirmed_flush_will_always_not_be_less_than_last_saved_confirmed_flush.patch\nto fix this issue.\n\nThoughts?\n\n--\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 10 Jun 2024 16:38:42 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confirmed flush lsn seems to be move backward in certain error\n cases"
},
{
"msg_contents": "On Mon, 10 Jun 2024 at 16:39, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Feb 20, 2024 at 12:35 PM vignesh C <[email protected]> wrote:\n> >\n> > On Sat, 17 Feb 2024 at 12:03, Amit Kapila <[email protected]> wrote:\n> > >\n> > >\n> > > @@ -1839,7 +1839,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)\n> > >\n> > > SpinLockAcquire(&MyReplicationSlot->mutex);\n> > >\n> > > - MyReplicationSlot->data.confirmed_flush = lsn;\n> > > + if (lsn > MyReplicationSlot->data.confirmed_flush)\n> > > + MyReplicationSlot->data.confirmed_flush = lsn;\n> > >\n> > > /* if we're past the location required for bumping xmin, do so */\n> > > if (MyReplicationSlot->candidate_xmin_lsn != InvalidXLogRecPtr &&\n> > > @@ -1904,7 +1905,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)\n> > > else\n> > > {\n> > > SpinLockAcquire(&MyReplicationSlot->mutex);\n> > > - MyReplicationSlot->data.confirmed_flush = lsn;\n> > > + if (lsn > MyReplicationSlot->data.confirmed_flush)\n> > > + MyReplicationSlot->data.confirmed_flush = lsn;\n> > >\n> > > BTW, from which code path does it update the prior value of\n> > > confirmed_flush?\n> >\n> > The confirmed_flush is getting set in the else condition for this scenario.\n> >\n> > If it is through the else check, then can we see if\n> > > it may change the confirm_flush to the prior position via the first\n> > > code path? I am asking because in the first code path, we can even\n> > > flush the re-treated value of confirm_flush LSN.\n> >\n> > I was not able to find any scenario to set a prior position with the\n> > first code path. I tried various scenarios like adding delay in\n> > walsender, add delay in apply worker, restart the instances and with\n> > various DML operations. It was always setting it to either to the same\n> > value as previous or greater value.\n> >\n>\n> Fair enough. This means that in the prior versions, it was never\n> possible to move confirmed_flush LSN in the slot to a backward\n> position on the disk. So, moving it backward temporarily (in the\n> memory) shouldn't create any problem. I would prefer your\n> Assert_confirmed_flush_will_always_not_be_less_than_last_saved_confirmed_flush.patch\n> to fix this issue.\n>\n> Thoughts?\n\nI was able to reproduce the issue with the test script provided in\n[1]. I ran the script 10 times and I was able to reproduce the issue\n4 times. I also tested the patch\nAssert_confirmed_flush_will_always_not_be_less_than_last_saved_confirmed_flush.patch.\nand it resolves the issue. I ran the test script 20 times and I was\nnot able to reproduce the issue.\n\n[1]: https://www.postgresql.org/message-id/CALDaNm3hgow2%2BoEov5jBk4iYP5eQrUCF1yZtW7%2BdV3J__p4KLQ%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n",
"msg_date": "Mon, 10 Jun 2024 17:59:12 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confirmed flush lsn seems to be move backward in certain error\n cases"
},
{
"msg_contents": "On Mon, 10 Jun 2024 at 16:38, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Feb 20, 2024 at 12:35 PM vignesh C <[email protected]> wrote:\n> >\n> > On Sat, 17 Feb 2024 at 12:03, Amit Kapila <[email protected]> wrote:\n> > >\n> > >\n> > > @@ -1839,7 +1839,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)\n> > >\n> > > SpinLockAcquire(&MyReplicationSlot->mutex);\n> > >\n> > > - MyReplicationSlot->data.confirmed_flush = lsn;\n> > > + if (lsn > MyReplicationSlot->data.confirmed_flush)\n> > > + MyReplicationSlot->data.confirmed_flush = lsn;\n> > >\n> > > /* if we're past the location required for bumping xmin, do so */\n> > > if (MyReplicationSlot->candidate_xmin_lsn != InvalidXLogRecPtr &&\n> > > @@ -1904,7 +1905,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)\n> > > else\n> > > {\n> > > SpinLockAcquire(&MyReplicationSlot->mutex);\n> > > - MyReplicationSlot->data.confirmed_flush = lsn;\n> > > + if (lsn > MyReplicationSlot->data.confirmed_flush)\n> > > + MyReplicationSlot->data.confirmed_flush = lsn;\n> > >\n> > > BTW, from which code path does it update the prior value of\n> > > confirmed_flush?\n> >\n> > The confirmed_flush is getting set in the else condition for this scenario.\n> >\n> > If it is through the else check, then can we see if\n> > > it may change the confirm_flush to the prior position via the first\n> > > code path? I am asking because in the first code path, we can even\n> > > flush the re-treated value of confirm_flush LSN.\n> >\n> > I was not able to find any scenario to set a prior position with the\n> > first code path. I tried various scenarios like adding delay in\n> > walsender, add delay in apply worker, restart the instances and with\n> > various DML operations. It was always setting it to either to the same\n> > value as previous or greater value.\n> >\n>\n> Fair enough. This means that in the prior versions, it was never\n> possible to move confirmed_flush LSN in the slot to a backward\n> position on the disk. So, moving it backward temporarily (in the\n> memory) shouldn't create any problem. I would prefer your\n> Assert_confirmed_flush_will_always_not_be_less_than_last_saved_confirmed_flush.patch\n> to fix this issue.\n\nI have re-verified the issue by running the tests in a loop of 150\ntimes and found it to be working fine. Also patch applies neatly,\nthere was no pgindent issue and all the regression/tap tests run were\nsuccessful.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 10 Jun 2024 19:23:50 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: confirmed flush lsn seems to be move backward in certain error\n cases"
},
{
"msg_contents": "On Mon, Jun 10, 2024 at 7:24 PM vignesh C <[email protected]> wrote:\n>\n> I have re-verified the issue by running the tests in a loop of 150\n> times and found it to be working fine. Also patch applies neatly,\n> there was no pgindent issue and all the regression/tap tests run were\n> successful.\n>\n\nThanks, I have pushed the fix.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jun 2024 14:09:15 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confirmed flush lsn seems to be move backward in certain error\n cases"
},
{
"msg_contents": "Hi,\n\nOn 6/11/24 10:39, Amit Kapila wrote:\n> On Mon, Jun 10, 2024 at 7:24 PM vignesh C <[email protected]> wrote:\n>>\n>> I have re-verified the issue by running the tests in a loop of 150\n>> times and found it to be working fine. Also patch applies neatly,\n>> there was no pgindent issue and all the regression/tap tests run were\n>> successful.\n>>\n> \n> Thanks, I have pushed the fix.\n> \n\nSorry for not responding to this thread earlier (two conferences in two\nweeks), but isn't the pushed fix addressing a symptom instead of the\nactual root cause?\n\nWhy should it be OK for the subscriber to confirm a flush LSN and then\nlater take that back and report a lower LSN? Seems somewhat against my\nunderstanding of what \"flush LSN\" means.\n\nThe commit message explains this happens when the subscriber does not\nneed to do anything for - but then why shouldn't it just report the\nprior LSN, in such cases?\n\nI haven't looked into the details, but my concern is this removes an\nuseful assert, protecting us against certain type of bugs. And now we'll\njust happily ignore them. Is that a good idea?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 Jun 2024 15:42:44 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confirmed flush lsn seems to be move backward in certain error\n cases"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> Why should it be OK for the subscriber to confirm a flush LSN and then\n> later take that back and report a lower LSN? Seems somewhat against my\n> understanding of what \"flush LSN\" means.\n> The commit message explains this happens when the subscriber does not\n> need to do anything for - but then why shouldn't it just report the\n> prior LSN, in such cases?\n\nYeah, I was wondering about that too when I saw the commit go by.\n\n> I haven't looked into the details, but my concern is this removes an\n> useful assert, protecting us against certain type of bugs. And now we'll\n> just happily ignore them. Is that a good idea?\n\nIf we think this is a real protection, then it shouldn't be an Assert\nanyway, because it will not protect production systems that way.\nIt needs to be regular test-and-elog. Or maybe test-and-ignore-the-\nbogus-value? If you want to take this seriously then you need to\ndefine a recovery procedure after the problem is detected.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Jun 2024 15:14:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confirmed flush lsn seems to be move backward in certain error\n cases"
},
{
"msg_contents": "On Tue, Jun 11, 2024 at 7:12 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> Sorry for not responding to this thread earlier (two conferences in two\n> weeks), but isn't the pushed fix addressing a symptom instead of the\n> actual root cause?\n>\n> Why should it be OK for the subscriber to confirm a flush LSN and then\n> later take that back and report a lower LSN? Seems somewhat against my\n> understanding of what \"flush LSN\" means.\n>\n\nThe reason is that the subscriber doesn't persistently store/advance\nthe LSN for which it doesn't have to do anything like DDLs. But still,\nthe subscriber has to acknowledge such LSNs for synchronous\nreplication. We have comments/code at various places to deal with this\n[1][2]. Now, after the restart, the subscriber won't know of such LSNs\nso it will use its origin LSN which is the LSN of the last applied\ntransaction (and it can be before the LSN that last time the\nsubscriber had acknowledged). I had once thought to persist such LSNs\non subscriber by advancing the origin but that could be overhead in\ncertain workloads where logical decoding doesn't yield anything\nmeaningful for subscribers. So it needs more thought.\n\n> The commit message explains this happens when the subscriber does not\n> need to do anything for - but then why shouldn't it just report the\n> prior LSN, in such cases?\n>\n\nIt is required for synchronous replication.\n\n> I haven't looked into the details, but my concern is this removes an\n> useful assert, protecting us against certain type of bugs. And now we'll\n> just happily ignore them. Is that a good idea?\n>\n\nThe assert was added in this release. I was also having the same\nunderstanding as yours which is why I added it. However, the case\npresented by Vignesh has revealed that I was wrong.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 12 Jun 2024 09:24:24 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confirmed flush lsn seems to be move backward in certain error\n cases"
}
] |
[
{
"msg_contents": "Here is a prototype implementation of SQL property graph queries\n(SQL/PGQ), following SQL:2023. This was talked about briefly at the\nFOSDEM developer meeting, and a few people were interested, so I\nwrapped up what I had in progress into a presentable form.\n\nThere is some documentation to get started in doc/src/sgml/ddl.sgml\nand doc/src/sgml/queries.sgml.\n\nTo learn more about this facility, here are some external resources:\n\n* An article about a competing product:\n https://oracle-base.com/articles/23c/sql-property-graphs-and-sql-pgq-23c\n (All the queries in the article work, except the ones using\n vertex_id() and edge_id(), which are non-standard, and the JSON\n examples at the end, which require some of the in-progress JSON\n functionality for PostgreSQL.)\n\n* An academic paper related to another competing product:\n https://www.cidrdb.org/cidr2023/papers/p66-wolde.pdf (The main part\n of this paper discusses advanced functionality that my patch doesn't\n have.)\n\n* A 2019 presentation about graph databases:\n https://www.pgcon.org/2019/schedule/events/1300.en.html (There is\n also a video.)\n\n* (Vik has a recent presentation \"Property Graphs: When the Relational\n Model Is Not Enough\", but I haven't found the content posted\n online.)\n\nThe patch is quite fragile, and treading outside the tested paths will\nlikely lead to grave misbehavior. Use with caution. But I feel that\nthe general structure is ok, and we just need to fill in the\nproverbial few thousand lines of code in the designated areas.",
"msg_date": "Fri, 16 Feb 2024 15:53:11 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL Property Graph Queries (SQL/PGQ)"
},
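For readers who want a concrete picture of what the feature looks like, here is a minimal sketch of a property graph definition and a GRAPH_TABLE query against it. The table and column names (person, friends, id, name, person1_id, person2_id) are illustrative assumptions rather than the schema used in the patch's regression tests; the query shape follows the examples that appear later in this thread, and the exact DDL spelling is documented in the patch's ddl.sgml.

    CREATE TABLE person (id int PRIMARY KEY, name text);
    CREATE TABLE friends (id int PRIMARY KEY,
                          person1_id int REFERENCES person (id),
                          person2_id int REFERENCES person (id));

    -- define a property graph over the existing tables
    CREATE PROPERTY GRAPH students_graph
      VERTEX TABLES (person KEY (id) LABEL person PROPERTIES (name))
      EDGE TABLES (friends KEY (id)
                   SOURCE KEY (person1_id) REFERENCES person (id)
                   DESTINATION KEY (person2_id) REFERENCES person (id)
                   LABEL friends);

    -- query the graph: who is friends with whom
    SELECT * FROM GRAPH_TABLE (students_graph
      MATCH (a IS person)-[e IS friends]->(b IS person)
      COLUMNS (a.name AS person_a, b.name AS person_b));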
{
"msg_contents": "Hi,\n\nOn 2024-02-16 15:53:11 +0100, Peter Eisentraut wrote:\n> The patch is quite fragile, and treading outside the tested paths will\n> likely lead to grave misbehavior. Use with caution. But I feel that\n> the general structure is ok, and we just need to fill in the\n> proverbial few thousand lines of code in the designated areas.\n\nOne aspect that I m concerned with structurally is that the transformation,\nfrom property graph queries to something postgres understands, is done via the\nrewrite system. I doubt that that is a good idea. For one it bars the planner\nfrom making plans that benefit from the graph query formulation. But more\nimportantly, we IMO should reduce usage of the rewrite system, not increase\nit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 16 Feb 2024 11:23:01 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "On 16.02.24 20:23, Andres Freund wrote:\n> One aspect that I m concerned with structurally is that the transformation,\n> from property graph queries to something postgres understands, is done via the\n> rewrite system. I doubt that that is a good idea. For one it bars the planner\n> from making plans that benefit from the graph query formulation. But more\n> importantly, we IMO should reduce usage of the rewrite system, not increase\n> it.\n\nPGQ is meant to be implemented like that, like views expanding to joins \nand unions. This is what I have gathered during the specification \nprocess, and from other implementations, and from academics. There are \ncertainly other ways to combine relational and graph database stuff, \nlike with native graph storage and specialized execution support, but \nthis is not that, and to some extent PGQ was created to supplant those \nother approaches.\n\nMany people will agree that the rewriter is sort of weird and archaic at \nthis point. But I'm not aware of any plans or proposals to do anything \nabout it. As long as the view expansion takes place there, it makes \nsense to align with that. For example, all the view security stuff \n(privileges, security barriers, etc.) will eventually need to be \nconsidered, and it would make sense to do that in a consistent way. So \nfor now, I'm working with what we have, but let's see where it goes.\n\n(Note to self: Check that graph inside view inside graph inside view ... \nworks.)\n\n\n\n",
"msg_date": "Fri, 23 Feb 2024 17:15:41 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
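To make the "views expanding to joins and unions" point a bit more tangible: conceptually, a GRAPH_TABLE clause over the sketch above could be expanded into an ordinary join between the element tables, with a UNION over the candidate tables wherever a label expression matches more than one of them. This is only the general shape, not the literal output of the rewriter, and it reuses the illustrative person/friends schema assumed earlier.

    -- rough relational equivalent of
    -- MATCH (a IS person)-[e IS friends]->(b IS person)
    SELECT a.name AS person_a, b.name AS person_b
    FROM person a
         JOIN friends e ON e.person1_id = a.id
         JOIN person b ON b.id = e.person2_id;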
{
"msg_contents": "On 2/23/24 17:15, Peter Eisentraut wrote:\n> On 16.02.24 20:23, Andres Freund wrote:\n>> One aspect that I m concerned with structurally is that the\n>> transformation,\n>> from property graph queries to something postgres understands, is done\n>> via the\n>> rewrite system. I doubt that that is a good idea. For one it bars the\n>> planner\n>> from making plans that benefit from the graph query formulation. But more\n>> importantly, we IMO should reduce usage of the rewrite system, not\n>> increase\n>> it.\n> \n> PGQ is meant to be implemented like that, like views expanding to joins\n> and unions. This is what I have gathered during the specification\n> process, and from other implementations, and from academics. There are\n> certainly other ways to combine relational and graph database stuff,\n> like with native graph storage and specialized execution support, but\n> this is not that, and to some extent PGQ was created to supplant those\n> other approaches.\n> \n\nI understand PGQ was meant to be implemented as a bit of a \"syntactic\nsugar\" on top of relations, instead of inventing some completely new\nways to store/query graph data.\n\nBut does that really mean it needs to be translated to relations this\nearly / in rewriter? I haven't thought about it very deeply, but won't\nthat discard useful information about semantics of the query, which\nmight be useful when planning/executing the query?\n\nI've somehow imagined we'd be able to invent some new index types, or\nutilize some other type of auxiliary structure, maybe some special\nexecutor node, but it seems harder without this extra info ...\n\n> Many people will agree that the rewriter is sort of weird and archaic at\n> this point. But I'm not aware of any plans or proposals to do anything\n> about it. As long as the view expansion takes place there, it makes\n> sense to align with that. For example, all the view security stuff\n> (privileges, security barriers, etc.) will eventually need to be\n> considered, and it would make sense to do that in a consistent way. So\n> for now, I'm working with what we have, but let's see where it goes.\n> \n> (Note to self: Check that graph inside view inside graph inside view ...\n> works.)\n> \n\nAFAIK the \"policy\" regarding rewriter was that we don't want to use it\nfor user stuff (e.g. people using it for partitioning), but I'm not sure\nabout internal stuff.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 23 Feb 2024 18:37:59 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 11:08 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 2/23/24 17:15, Peter Eisentraut wrote:\n> > On 16.02.24 20:23, Andres Freund wrote:\n> >> One aspect that I m concerned with structurally is that the\n> >> transformation,\n> >> from property graph queries to something postgres understands, is done\n> >> via the\n> >> rewrite system. I doubt that that is a good idea. For one it bars the\n> >> planner\n> >> from making plans that benefit from the graph query formulation. But more\n> >> importantly, we IMO should reduce usage of the rewrite system, not\n> >> increase\n> >> it.\n> >\n> > PGQ is meant to be implemented like that, like views expanding to joins\n> > and unions. This is what I have gathered during the specification\n> > process, and from other implementations, and from academics. There are\n> > certainly other ways to combine relational and graph database stuff,\n> > like with native graph storage and specialized execution support, but\n> > this is not that, and to some extent PGQ was created to supplant those\n> > other approaches.\n> >\n>\n> I understand PGQ was meant to be implemented as a bit of a \"syntactic\n> sugar\" on top of relations, instead of inventing some completely new\n> ways to store/query graph data.\n>\n> But does that really mean it needs to be translated to relations this\n> early / in rewriter? I haven't thought about it very deeply, but won't\n> that discard useful information about semantics of the query, which\n> might be useful when planning/executing the query?\n>\n> I've somehow imagined we'd be able to invent some new index types, or\n> utilize some other type of auxiliary structure, maybe some special\n> executor node, but it seems harder without this extra info ...\n\nI am yet to look at the implementation but ...\n1. If there are optimizations that improve performance of some path\npatterns, they are likely to improve the performance of joins used to\nimplement those. In such cases, loosing some information might be ok.\n2. Explicit graph annotatiion might help to automate some things like\ncreating indexes automatically on columns that appear in specific\npatterns OR create extended statistics automatically on the columns\nparticipating in specific patterns. OR interpreting statistics/costing\nin differently than normal query execution. Those kind of things will\nrequire retaining annotations in views, planner/execution trees etc.\n3. There are some things like aggregates/operations on paths which\nmight require stuff like new execution nodes. But I am not sure we\nhave reached that stage yet.\n\nThere might be things we may not see right now in the standard e.g.\nindexes on graph properties. For those mapping the graph objects unto\ndatabase objects might prove useful. That goes back to Peter's comment\n--- quote\nAs long as the view expansion takes place there, it makes\nsense to align with that. For example, all the view security stuff\n(privileges, security barriers, etc.) will eventually need to be\nconsidered, and it would make sense to do that in a consistent way.\n--- unquote\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 26 Feb 2024 11:46:11 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "Patch conflicted with changes in ef5e2e90859a39efdd3a78e528c544b585295a78.\nAttached patch with the conflict resolved.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Tue, 5 Mar 2024 17:38:30 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "Here is a new version of this patch. I have been working together with \nAshutosh on this. While the version 0 was more of a fragile demo, this \nversion 1 has a fairly complete minimal feature set and should be useful \nfor playing around with. We do have a long list of various internal \nbits that still need to be fixed or revised or looked at again, so there \nis by no means a claim that everything is completed.\n\nDocumentation to get started is included (ddl.sgml and queries.sgml). \n(Of course, feedback on the getting-started documentation would be most \nwelcome.)",
"msg_date": "Thu, 27 Jun 2024 14:31:00 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "In the ddl.sgml, I’d swap the first two paragraphs.\nI find the first one a bit confusing as-is. As far as I can tell, it’s an implementation detail.\nThe first paragraph should answer, “I have some data modeled as a graph G=(V, E). Can Postgres help me?”.\n\nThen, introducing property graphs makes more sense. \n\nI'd also use the examples and fake data in `graph_table.sql` in ddl/queries.sgml).\nI was bummed that that copy-pasting didn't work as is.\nI’d keep explaining how a graph query translates to a relational one later in the page.\n\nAs for the implementation, I can’t have an opinion yet,\nbut for those not familiar, Apache Age uses a slightly different approach\nthat mimics jsonpath (parses a sublanguage expression into an internal execution engine etc.).\nHowever, the standard requires mapping this to the relational model, which makes sense for core Postgres.\n\n\n> On 27 Jun 2024, at 3:31 PM, Peter Eisentraut <[email protected]> wrote:\n> \n> Here is a new version of this patch. I have been working together with Ashutosh on this. While the version 0 was more of a fragile demo, this version 1 has a fairly complete minimal feature set and should be useful for playing around with. We do have a long list of various internal bits that still need to be fixed or revised or looked at again, so there is by no means a claim that everything is completed.\n> \n> Documentation to get started is included (ddl.sgml and queries.sgml). (Of course, feedback on the getting-started documentation would be most welcome.)\n> <v1-0001-WIP-SQL-Property-Graph-Queries-SQL-PGQ.patch>\n\n\n\n",
"msg_date": "Thu, 4 Jul 2024 11:19:46 +0300",
"msg_from": "Florents Tselai <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "On Thu, Jun 27, 2024 at 6:01 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> Here is a new version of this patch. I have been working together with\n> Ashutosh on this. While the version 0 was more of a fragile demo, this\n> version 1 has a fairly complete minimal feature set and should be useful\n> for playing around with. We do have a long list of various internal\n> bits that still need to be fixed or revised or looked at again, so there\n> is by no means a claim that everything is completed.\n>\n\nPFA the patchset fixing compilation error reported by CI bot.\n0001 - same as previous one\n0002 - fixes compilation error\n0003 - adds support for WHERE clause in graph pattern missing in the first\npatch.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 8 Jul 2024 19:07:48 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "On Mon, Jul 8, 2024 at 7:07 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n>\n>\n> On Thu, Jun 27, 2024 at 6:01 PM Peter Eisentraut <[email protected]> wrote:\n>>\n>> Here is a new version of this patch. I have been working together with\n>> Ashutosh on this. While the version 0 was more of a fragile demo, this\n>> version 1 has a fairly complete minimal feature set and should be useful\n>> for playing around with. We do have a long list of various internal\n>> bits that still need to be fixed or revised or looked at again, so there\n>> is by no means a claim that everything is completed.\n>\n>\n> PFA the patchset fixing compilation error reported by CI bot.\n> 0001 - same as previous one\n> 0002 - fixes compilation error\n> 0003 - adds support for WHERE clause in graph pattern missing in the first patch.\n>\n\nThere's a test failure reported by CI. Property graph related tests\nare failing when regression is run from perl tests. The failure is\nreported only on Free BSD. I have added one patch in the series which\nwill help narrow the failure. The patch changes the code to report the\nlocation of an error reported when handling implicit properties or\nlabels.\n0001 - same as previous one\n0002 - fixes pgperltidy complaints\n0003 - fixes compilation failure\n0004 - same as 0003 in previous set\n0005 - patch to report parse location of error\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Wed, 17 Jul 2024 11:04:48 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "On Wed, Jul 17, 2024 at 11:04 AM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Mon, Jul 8, 2024 at 7:07 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> >\n> >\n> > On Thu, Jun 27, 2024 at 6:01 PM Peter Eisentraut <[email protected]> wrote:\n> >>\n> >> Here is a new version of this patch. I have been working together with\n> >> Ashutosh on this. While the version 0 was more of a fragile demo, this\n> >> version 1 has a fairly complete minimal feature set and should be useful\n> >> for playing around with. We do have a long list of various internal\n> >> bits that still need to be fixed or revised or looked at again, so there\n> >> is by no means a claim that everything is completed.\n> >\n> >\n> > PFA the patchset fixing compilation error reported by CI bot.\n> > 0001 - same as previous one\n> > 0002 - fixes compilation error\n> > 0003 - adds support for WHERE clause in graph pattern missing in the first patch.\n> >\n>\n> There's a test failure reported by CI. Property graph related tests\n> are failing when regression is run from perl tests. The failure is\n> reported only on Free BSD.\n\nI thought it's related to FreeBSD but the bug could be observed\nanywhere with -DRELCACHE_FORCE_RELEASE. It's also reported indirectly\nby valgrind.\n\nWhen infering properties of an element from the underlying table's\nattributes, the attribute name pointed to the memory in the heap tuple\nof pg_attribute row. Thus when the tuple was released, it pointed to a\ngarbage instead of actual column name resulting in column not found\nerror.\n\nAttached set of patches with an additional patch to fix the bug.\n\n0001 - same as previous one\n0002 - fixes pgperltidy complaints\n0003 - fixes compilation failure\n0004 - fixes issue seen on CI\n0005 - adds support for WHERE clause in graph pattern missing in the\nfirst patch.\n\nOnce reviewed, patches 0002 to 0005 should be merged into 0001.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 22 Jul 2024 17:31:42 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "Hi\nI am attaching a new patch for a minor feature addition.\n\n- Adding support for 'Labels and properties: EXCEPT list'\n\nPlease let me know if something is missing.\n\nThanks and Regards\nImran Zaheer\n\nOn Mon, Jul 22, 2024 at 9:02 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Wed, Jul 17, 2024 at 11:04 AM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Mon, Jul 8, 2024 at 7:07 PM Ashutosh Bapat\n> > <[email protected]> wrote:\n> > >\n> > >\n> > >\n> > > On Thu, Jun 27, 2024 at 6:01 PM Peter Eisentraut <[email protected]> wrote:\n> > >>\n> > >> Here is a new version of this patch. I have been working together with\n> > >> Ashutosh on this. While the version 0 was more of a fragile demo, this\n> > >> version 1 has a fairly complete minimal feature set and should be useful\n> > >> for playing around with. We do have a long list of various internal\n> > >> bits that still need to be fixed or revised or looked at again, so there\n> > >> is by no means a claim that everything is completed.\n> > >\n> > >\n> > > PFA the patchset fixing compilation error reported by CI bot.\n> > > 0001 - same as previous one\n> > > 0002 - fixes compilation error\n> > > 0003 - adds support for WHERE clause in graph pattern missing in the first patch.\n> > >\n> >\n> > There's a test failure reported by CI. Property graph related tests\n> > are failing when regression is run from perl tests. The failure is\n> > reported only on Free BSD.\n>\n> I thought it's related to FreeBSD but the bug could be observed\n> anywhere with -DRELCACHE_FORCE_RELEASE. It's also reported indirectly\n> by valgrind.\n>\n> When infering properties of an element from the underlying table's\n> attributes, the attribute name pointed to the memory in the heap tuple\n> of pg_attribute row. Thus when the tuple was released, it pointed to a\n> garbage instead of actual column name resulting in column not found\n> error.\n>\n> Attached set of patches with an additional patch to fix the bug.\n>\n> 0001 - same as previous one\n> 0002 - fixes pgperltidy complaints\n> 0003 - fixes compilation failure\n> 0004 - fixes issue seen on CI\n> 0005 - adds support for WHERE clause in graph pattern missing in the\n> first patch.\n>\n> Once reviewed, patches 0002 to 0005 should be merged into 0001.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat",
"msg_date": "Sun, 4 Aug 2024 16:02:07 +0900",
"msg_from": "Imran Zaheer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
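As an illustration of the feature being proposed here, the EXCEPT list lets a label expose all columns of the underlying table as properties except the listed ones. The sketch below reuses the assumed person/friends schema from earlier and adds a hypothetical address column; the exact keyword spelling (PROPERTIES ARE ALL COLUMNS EXCEPT vs. PROPERTIES ALL COLUMNS EXCEPT) should be checked against the patch and the standard.

    -- expose every column of person as a property, except address
    CREATE PROPERTY GRAPH students_graph_x
      VERTEX TABLES (person KEY (id)
                     LABEL person PROPERTIES ALL COLUMNS EXCEPT (address))
      EDGE TABLES (friends KEY (id)
                   SOURCE KEY (person1_id) REFERENCES person (id)
                   DESTINATION KEY (person2_id) REFERENCES person (id)
                   LABEL friends);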
{
"msg_contents": "On Mon, Jul 22, 2024 at 5:31 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n\nI found that the patches do not support cyclic paths correctly. A\ncyclic path pattern is a path patterns where an element pattern\nvariable repeats e.g. (a)->(b)->(a). In such a path pattern the\nelement patterns with the same variable indicate the same element in\nthe path. In the given example (a) specifies that the path should\nstart and end with the same vertex. Patch 0006 supports cyclic path\npatterns.\n\nElements which share the variable name should have the same element\ntype. The element patterns sharing the same variable name should have\nsame label expression. They may be constrained by different conditions\nwhich are finally ANDed since they all represent the same element. The\npatch creates a separate abstraction \"path_factor\" which combines all\nthe GraphElementPatterns into one element pattern. SQL/PGQ standard\nuses path_factor for such an entity, so I chose that as the structure\nname. But suggestions are welcome.\n\nA path_factor is further expanded into a list of path_element objects\neach representing a vertex or edge table that satisfies the label\nexpression in GraphElementPattern. In the previous patch set, the\nconsecutive elements were considered to be connected to each other.\nCyclic paths change that. For example, in path pattern (a)->(b)->(a),\n(b) is connected to the first element on both sides (forming a cycle)\ninstead of first and third element. Patch 0006 has code changes to\nappropriately link the elements. As a side effect, I have eliminated\nthe confusion between variables with name gep and gpe.\n\nWhile it's easy to imagine a repeated vertex pattern, a repeated edge\npattern is slightly complex. An edge connects only two vertices, and\nthus a repeated edge pattern constrains the adjacent vertex patterns\neven if they have different variable names. Such patterns are not\nsupported. E.g. (a)-[b]->(c)-[b]->(d) would mean that (d) and (a)\nrepresent the same vertex even if the variable names are different.\nSuch patterns are not supported for now. But (a)-[b]->(a)-[b]->(a) OR\n(a)-[b]->(c)<-[b]-(a) are supported since the vertices adjacent to\nrepeated edges are constrained by the variable name anyway.\n\nThe patch also changes many foreach() to use foreach_* macros as appropriate.\n\n> 0001 - same as previous one\n> 0002 - fixes pgperltidy complaints\n> 0003 - fixes compilation failure\n> 0004 - fixes issue seen on CI\n> 0005 - adds support for WHERE clause in graph pattern missing in the\n> first patch.\n0006 - adds full support for cyclic path patterns\n\nOnce reviewed, patches 0002 to 0006 should be merged into 0001.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 5 Aug 2024 18:11:20 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
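To illustrate the cyclic patterns described above with the thread's running example: repeating a vertex variable forces the path to revisit the same vertex, so a query like the following would return only pairs of people whose friendship goes in both directions. This is a hedged sketch against the assumed students_graph definition sketched earlier in this thread, not a test taken from the patch.

    -- (a) repeats, so the path must start and end at the same person
    SELECT * FROM GRAPH_TABLE (students_graph
      MATCH (a IS person)-[IS friends]->(b IS person)-[IS friends]->(a)
      COLUMNS (a.name AS person_a, b.name AS person_b));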
{
"msg_contents": "Hi Imran,\n\nOn Sun, Aug 4, 2024 at 12:32 PM Imran Zaheer <[email protected]> wrote:\n>\n> Hi\n> I am attaching a new patch for a minor feature addition.\n>\n> - Adding support for 'Labels and properties: EXCEPT list'\n\nDo you intend to support EXCEPT in the label expression as well or\njust properties?\n\n>\n> Please let me know if something is missing.\n\nI think the code changes are in the right place. I didn't review the\npatch thoroughly. But here are some comments and some advice.\n\nPlease do not top-post on hackers.\n\nAlways sent the whole patchset. Otherwise, CI bot gets confused. It\ndoesn't pick up patchset from the previous emails.\n\nAbout the functionality: It's not clear to me whether an EXCEPT should\nbe applicable only at the time of property graph creation or it should\nbe applicable always. I.e. when a property graph is dumped, should it\nhave EXCEPT in it or have a list of columns surviving except list?\nWhat if a column in except list is dropped after creating a property\ngraph?\n\nSome comments on the code\n1. You could use list_member() in insert_property_records() to check\nwhether a given column is in the list of exceptions after you have\nenveloped in String node.\n2. The SELECT with GRAPH_TABLE queries are tested in graph_table.sql.\nWe don't include those in create_property_graph.sql\n3. Instead of creating a new property graph in the test, you may\nmodify one of the existing property graphs to have a label with except\nlist and then query it.\n\nWe are aiming a minimal set of features in the first version. I will\nlet Peter E. decide whether to consider this as minimal set feature or\nnot. The feature looks useful to me.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 5 Aug 2024 18:42:46 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "Hi Ashutosh,\n\nThanks for the feedback.\n\n> Do you intend to support EXCEPT in the label expression as well or\n> just properties?\n>\n\nI only implemented it for the properties because I couldn't find any\nexample for Label expression using EXCEPT clause. So I thought it was\nonly meant to be for the properties.\nBut if you can confirm that we do use EXCEPT clauses with label\nexpressions as well then I can try supporting that too.\n\n>\n> Please do not top-post on hackers.\n>\n> Always sent the whole patchset. Otherwise, CI bot gets confused. It\n> doesn't pick up patchset from the previous emails.\n>\nOkay, I will take care of that.\n\n> About the functionality: It's not clear to me whether an EXCEPT should\n> be applicable only at the time of property graph creation or it should\n> be applicable always. I.e. when a property graph is dumped, should it\n> have EXCEPT in it or have a list of columns surviving except list?\n> What if a column in except list is dropped after creating a property\n> graph?\n>\n\nI did some testing on that, for now we are just dumping the columns\nsurviving the except list.\nIf an exceptional table column is deleted afterwards it doesn't show\nany effect on the graph. I also tested this scenario with duckdb pgq\nextension [1], deleting the col doesn't affect the graph.\n\n> Some comments on the code\n\nI am attaching a new patch after trying to fix according to you comments\n\n> 1. You could use list_member() in insert_property_records() to check\n> whether a given column is in the list of exceptions after you have\n> enveloped in String node.\n\n* I have changed to code to use list_member(), but I have to make\nResTarget->name from `pstrdup(NameStr(att->attname));` to `NULL`\nWe are using `xml_attribute_list` for our columns list and while\nmaking this list in gram.y we are assigning `rt->name` as NULL [2],\nthis causes list_member() func to fail while comparing except_list\nnodes. That's why I am changing rt->name from string value to NULL in\npropgraphcmds.c in this patch.\n\n* Also, in order to use list_member() func I have to add a separate\nfor loop to iterate through the exceptional columns to generate the\nerror message if col is not valid. My question is, is it ok to use two\nseparate for loops (one to check except cols validity &\nother(list_memeber) to check existence of scanned col in except list).\nIn the previous patch I was using single for loop to validate both\nthings.\n\n> 2. The SELECT with GRAPH_TABLE queries are tested in graph_table.sql.\n> We don't include those in create_property_graph.sql\n\n* I have moved the graph_table queries from create_property_graph.sql\nto graph_table.sql.\n* But in graph_table.sql I didn't use the existing graphs because\nthose graphs and tables look like there for some specific test\nscenario, so I created my separate graph and table for my test\nscenario. I didn't drop the graph and the table as we will be dropping\nthe schema at the end but Peter E has this comment \"-- leave for\npg_upgrade/pg_dump tests\".\n\n> 3. Instead of creating a new property graph in the test, you may\n> modify one of the existing property graphs to have a label with except\n> list and then query it.\n>\n\n* I have modified the graphs in create_property_graph.sql in order to\ntest except list cols in the alter command and create graph command.\n\n> We are aiming a minimal set of features in the first version. I will\n> let Peter E. decide whether to consider this as minimal set feature or\n> not. 
The feature looks useful to me.\n\nThanks if you find this patch useful. I am attaching the modified patch.\n\n> 0001 - same as previous one\n> 0002 - fixes pgperltidy complaints\n> 0003 - fixes compilation failure\n> 0004 - fixes issue seen on CI\n> 0005 - adds support for WHERE clause in graph pattern missing in the\n> first patch.\n> 0006 - adds full support for cyclic path patterns\n\n0007 - adds support for except cols list in graph properties\n\nThanks\nImran Zaheer\n\n[1]: https://github.com/cwida/duckpgq-extension\n[2]: https://github.com/postgres/postgres/blob/f5a1311fccd2ed24a9fb42aa47a17d1df7126039/src/backend/parser/gram.y#L16166",
"msg_date": "Sat, 10 Aug 2024 18:21:43 +0900",
"msg_from": "Imran Zaheer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "Hello,\n\nWith the attached patch found below error when try to use \"Any\ndirected edge\" syntax.\n\npostgres=# SELECT * FROM GRAPH_TABLE (students_graph\npostgres(# MATCH\npostgres(# (a IS person ) - [] - (b IS person)\npostgres(# COLUMNS (a.name AS person_a, b.name AS person_b)\npostgres(# );\nERROR: unsupported element pattern kind: undirected edge\n\nIf this syntax is supported then should behave as below,\n\nPERSON_A PERSON_B\n---------- ----------\nBob John\nJohn Mary\nAlice Mary\nMary Bob\nMary John\nBob Mary\nJohn Bob\nMary Alice\n\n8 rows selected.\n\nAttaching the sql file for reference.\n\nThanks\nAjay\n\nOn Sat, Aug 10, 2024 at 2:52 PM Imran Zaheer <[email protected]> wrote:\n>\n> Hi Ashutosh,\n>\n> Thanks for the feedback.\n>\n> > Do you intend to support EXCEPT in the label expression as well or\n> > just properties?\n> >\n>\n> I only implemented it for the properties because I couldn't find any\n> example for Label expression using EXCEPT clause. So I thought it was\n> only meant to be for the properties.\n> But if you can confirm that we do use EXCEPT clauses with label\n> expressions as well then I can try supporting that too.\n>\n> >\n> > Please do not top-post on hackers.\n> >\n> > Always sent the whole patchset. Otherwise, CI bot gets confused. It\n> > doesn't pick up patchset from the previous emails.\n> >\n> Okay, I will take care of that.\n>\n> > About the functionality: It's not clear to me whether an EXCEPT should\n> > be applicable only at the time of property graph creation or it should\n> > be applicable always. I.e. when a property graph is dumped, should it\n> > have EXCEPT in it or have a list of columns surviving except list?\n> > What if a column in except list is dropped after creating a property\n> > graph?\n> >\n>\n> I did some testing on that, for now we are just dumping the columns\n> surviving the except list.\n> If an exceptional table column is deleted afterwards it doesn't show\n> any effect on the graph. I also tested this scenario with duckdb pgq\n> extension [1], deleting the col doesn't affect the graph.\n>\n> > Some comments on the code\n>\n> I am attaching a new patch after trying to fix according to you comments\n>\n> > 1. You could use list_member() in insert_property_records() to check\n> > whether a given column is in the list of exceptions after you have\n> > enveloped in String node.\n>\n> * I have changed to code to use list_member(), but I have to make\n> ResTarget->name from `pstrdup(NameStr(att->attname));` to `NULL`\n> We are using `xml_attribute_list` for our columns list and while\n> making this list in gram.y we are assigning `rt->name` as NULL [2],\n> this causes list_member() func to fail while comparing except_list\n> nodes. That's why I am changing rt->name from string value to NULL in\n> propgraphcmds.c in this patch.\n>\n> * Also, in order to use list_member() func I have to add a separate\n> for loop to iterate through the exceptional columns to generate the\n> error message if col is not valid. My question is, is it ok to use two\n> separate for loops (one to check except cols validity &\n> other(list_memeber) to check existence of scanned col in except list).\n> In the previous patch I was using single for loop to validate both\n> things.\n>\n> > 2. 
The SELECT with GRAPH_TABLE queries are tested in graph_table.sql.\n> > We don't include those in create_property_graph.sql\n>\n> * I have moved the graph_table queries from create_property_graph.sql\n> to graph_table.sql.\n> * But in graph_table.sql I didn't use the existing graphs because\n> those graphs and tables look like there for some specific test\n> scenario, so I created my separate graph and table for my test\n> scenario. I didn't drop the graph and the table as we will be dropping\n> the schema at the end but Peter E has this comment \"-- leave for\n> pg_upgrade/pg_dump tests\".\n>\n> > 3. Instead of creating a new property graph in the test, you may\n> > modify one of the existing property graphs to have a label with except\n> > list and then query it.\n> >\n>\n> * I have modified the graphs in create_property_graph.sql in order to\n> test except list cols in the alter command and create graph command.\n>\n> > We are aiming a minimal set of features in the first version. I will\n> > let Peter E. decide whether to consider this as minimal set feature or\n> > not. The feature looks useful to me.\n>\n> Thanks if you find this patch useful. I am attaching the modified patch.\n>\n> > 0001 - same as previous one\n> > 0002 - fixes pgperltidy complaints\n> > 0003 - fixes compilation failure\n> > 0004 - fixes issue seen on CI\n> > 0005 - adds support for WHERE clause in graph pattern missing in the\n> > first patch.\n> > 0006 - adds full support for cyclic path patterns\n>\n> 0007 - adds support for except cols list in graph properties\n>\n> Thanks\n> Imran Zaheer\n>\n> [1]: https://github.com/cwida/duckpgq-extension\n> [2]: https://github.com/postgres/postgres/blob/f5a1311fccd2ed24a9fb42aa47a17d1df7126039/src/backend/parser/gram.y#L16166",
"msg_date": "Tue, 13 Aug 2024 15:22:43 +0530",
"msg_from": "Ajay Pal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "Hello,\n\nFurther testing found that using a property graph with the plpgsql\nfunction crashed the server. Please take a look at the attached SQL\nfile for reference tables.\n\npostgres=# create or replace function func() returns int as\npostgres-# $$\npostgres$# declare person_av varchar;\npostgres$# begin\npostgres$#\npostgres$# SELECT person_a into person_av FROM GRAPH_TABLE\n(students_graph\npostgres$# MATCH\npostgres$# (a IS person) -[e IS friends]-> (b IS person\nWHERE b.name = 'Bob')\npostgres$# WHERE a.name='John'\npostgres$# COLUMNS (a.name AS person_a, b.name AS person_b)\npostgres$# );\npostgres$#\npostgres$# return person_av;\npostgres$# end\npostgres$# $$ language plpgsql;\nCREATE FUNCTION\npostgres=# select func();\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. Attempting reset: Failed.\n!?>\n\nPlease let me know if you need more details.\n\nThanks\nAjay\n\nOn Tue, Aug 13, 2024 at 3:22 PM Ajay Pal <[email protected]> wrote:\n>\n> Hello,\n>\n> With the attached patch found below error when try to use \"Any\n> directed edge\" syntax.\n>\n> postgres=# SELECT * FROM GRAPH_TABLE (students_graph\n> postgres(# MATCH\n> postgres(# (a IS person ) - [] - (b IS person)\n> postgres(# COLUMNS (a.name AS person_a, b.name AS person_b)\n> postgres(# );\n> ERROR: unsupported element pattern kind: undirected edge\n>\n> If this syntax is supported then should behave as below,\n>\n> PERSON_A PERSON_B\n> ---------- ----------\n> Bob John\n> John Mary\n> Alice Mary\n> Mary Bob\n> Mary John\n> Bob Mary\n> John Bob\n> Mary Alice\n>\n> 8 rows selected.\n>\n> Attaching the sql file for reference.\n>\n> Thanks\n> Ajay\n>\n> On Sat, Aug 10, 2024 at 2:52 PM Imran Zaheer <[email protected]> wrote:\n> >\n> > Hi Ashutosh,\n> >\n> > Thanks for the feedback.\n> >\n> > > Do you intend to support EXCEPT in the label expression as well or\n> > > just properties?\n> > >\n> >\n> > I only implemented it for the properties because I couldn't find any\n> > example for Label expression using EXCEPT clause. So I thought it was\n> > only meant to be for the properties.\n> > But if you can confirm that we do use EXCEPT clauses with label\n> > expressions as well then I can try supporting that too.\n> >\n> > >\n> > > Please do not top-post on hackers.\n> > >\n> > > Always sent the whole patchset. Otherwise, CI bot gets confused. It\n> > > doesn't pick up patchset from the previous emails.\n> > >\n> > Okay, I will take care of that.\n> >\n> > > About the functionality: It's not clear to me whether an EXCEPT should\n> > > be applicable only at the time of property graph creation or it should\n> > > be applicable always. I.e. when a property graph is dumped, should it\n> > > have EXCEPT in it or have a list of columns surviving except list?\n> > > What if a column in except list is dropped after creating a property\n> > > graph?\n> > >\n> >\n> > I did some testing on that, for now we are just dumping the columns\n> > surviving the except list.\n> > If an exceptional table column is deleted afterwards it doesn't show\n> > any effect on the graph. I also tested this scenario with duckdb pgq\n> > extension [1], deleting the col doesn't affect the graph.\n> >\n> > > Some comments on the code\n> >\n> > I am attaching a new patch after trying to fix according to you comments\n> >\n> > > 1. 
You could use list_member() in insert_property_records() to check\n> > > whether a given column is in the list of exceptions after you have\n> > > enveloped in String node.\n> >\n> > * I have changed to code to use list_member(), but I have to make\n> > ResTarget->name from `pstrdup(NameStr(att->attname));` to `NULL`\n> > We are using `xml_attribute_list` for our columns list and while\n> > making this list in gram.y we are assigning `rt->name` as NULL [2],\n> > this causes list_member() func to fail while comparing except_list\n> > nodes. That's why I am changing rt->name from string value to NULL in\n> > propgraphcmds.c in this patch.\n> >\n> > * Also, in order to use list_member() func I have to add a separate\n> > for loop to iterate through the exceptional columns to generate the\n> > error message if col is not valid. My question is, is it ok to use two\n> > separate for loops (one to check except cols validity &\n> > other(list_memeber) to check existence of scanned col in except list).\n> > In the previous patch I was using single for loop to validate both\n> > things.\n> >\n> > > 2. The SELECT with GRAPH_TABLE queries are tested in graph_table.sql.\n> > > We don't include those in create_property_graph.sql\n> >\n> > * I have moved the graph_table queries from create_property_graph.sql\n> > to graph_table.sql.\n> > * But in graph_table.sql I didn't use the existing graphs because\n> > those graphs and tables look like there for some specific test\n> > scenario, so I created my separate graph and table for my test\n> > scenario. I didn't drop the graph and the table as we will be dropping\n> > the schema at the end but Peter E has this comment \"-- leave for\n> > pg_upgrade/pg_dump tests\".\n> >\n> > > 3. Instead of creating a new property graph in the test, you may\n> > > modify one of the existing property graphs to have a label with except\n> > > list and then query it.\n> > >\n> >\n> > * I have modified the graphs in create_property_graph.sql in order to\n> > test except list cols in the alter command and create graph command.\n> >\n> > > We are aiming a minimal set of features in the first version. I will\n> > > let Peter E. decide whether to consider this as minimal set feature or\n> > > not. The feature looks useful to me.\n> >\n> > Thanks if you find this patch useful. I am attaching the modified patch.\n> >\n> > > 0001 - same as previous one\n> > > 0002 - fixes pgperltidy complaints\n> > > 0003 - fixes compilation failure\n> > > 0004 - fixes issue seen on CI\n> > > 0005 - adds support for WHERE clause in graph pattern missing in the\n> > > first patch.\n> > > 0006 - adds full support for cyclic path patterns\n> >\n> > 0007 - adds support for except cols list in graph properties\n> >\n> > Thanks\n> > Imran Zaheer\n> >\n> > [1]: https://github.com/cwida/duckpgq-extension\n> > [2]: https://github.com/postgres/postgres/blob/f5a1311fccd2ed24a9fb42aa47a17d1df7126039/src/backend/parser/gram.y#L16166",
"msg_date": "Tue, 13 Aug 2024 16:08:07 +0530",
"msg_from": "Ajay Pal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "Hi All,\n\nWhen we use a graph table and any local table, the server crashes.\nPlease note, It is happening when using the where clause for the local\ntable only.\n\npostgres=# SELECT * FROM customers a, GRAPH_TABLE (myshop2 MATCH (c IS\ncustomers WHERE c.address = 'US')-[IS customer_orders]->(o IS orders)\nCOLUMNS (c.name_redacted AS customer_name_redacted));\n customer_id | name | address | customer_name_redacted\n-------------+-----------+---------+------------------------\n 1 | customer1 | US | redacted1\n 2 | customer2 | CA | redacted1\n 3 | customer3 | GL | redacted1\n(3 rows)\n\npostgres=# SELECT * FROM customers a, GRAPH_TABLE (myshop2 MATCH (c IS\ncustomers WHERE c.address = 'US')-[IS customer_orders]->(o IS orders)\nCOLUMNS (c.name_redacted AS customer_name_redacted)) where\na.customer_id=1;\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. Attempting reset: Failed.\n!?> \\q\n\nNote:- I have referred to graph_table.sql to get the table structure\nused in the above query.\n\nThanks\nAjay\n\n\nOn Tue, Aug 13, 2024 at 4:08 PM Ajay Pal <[email protected]> wrote:\n>\n> Hello,\n>\n> Further testing found that using a property graph with the plpgsql\n> function crashed the server. Please take a look at the attached SQL\n> file for reference tables.\n>\n> postgres=# create or replace function func() returns int as\n> postgres-# $$\n> postgres$# declare person_av varchar;\n> postgres$# begin\n> postgres$#\n> postgres$# SELECT person_a into person_av FROM GRAPH_TABLE\n> (students_graph\n> postgres$# MATCH\n> postgres$# (a IS person) -[e IS friends]-> (b IS person\n> WHERE b.name = 'Bob')\n> postgres$# WHERE a.name='John'\n> postgres$# COLUMNS (a.name AS person_a, b.name AS person_b)\n> postgres$# );\n> postgres$#\n> postgres$# return person_av;\n> postgres$# end\n> postgres$# $$ language plpgsql;\n> CREATE FUNCTION\n> postgres=# select func();\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> The connection to the server was lost. 
Attempting reset: Failed.\n> !?>\n>\n> Please let me know if you need more details.\n>\n> Thanks\n> Ajay\n>\n> On Tue, Aug 13, 2024 at 3:22 PM Ajay Pal <[email protected]> wrote:\n> >\n> > Hello,\n> >\n> > With the attached patch found below error when try to use \"Any\n> > directed edge\" syntax.\n> >\n> > postgres=# SELECT * FROM GRAPH_TABLE (students_graph\n> > postgres(# MATCH\n> > postgres(# (a IS person ) - [] - (b IS person)\n> > postgres(# COLUMNS (a.name AS person_a, b.name AS person_b)\n> > postgres(# );\n> > ERROR: unsupported element pattern kind: undirected edge\n> >\n> > If this syntax is supported then should behave as below,\n> >\n> > PERSON_A PERSON_B\n> > ---------- ----------\n> > Bob John\n> > John Mary\n> > Alice Mary\n> > Mary Bob\n> > Mary John\n> > Bob Mary\n> > John Bob\n> > Mary Alice\n> >\n> > 8 rows selected.\n> >\n> > Attaching the sql file for reference.\n> >\n> > Thanks\n> > Ajay\n> >\n> > On Sat, Aug 10, 2024 at 2:52 PM Imran Zaheer <[email protected]> wrote:\n> > >\n> > > Hi Ashutosh,\n> > >\n> > > Thanks for the feedback.\n> > >\n> > > > Do you intend to support EXCEPT in the label expression as well or\n> > > > just properties?\n> > > >\n> > >\n> > > I only implemented it for the properties because I couldn't find any\n> > > example for Label expression using EXCEPT clause. So I thought it was\n> > > only meant to be for the properties.\n> > > But if you can confirm that we do use EXCEPT clauses with label\n> > > expressions as well then I can try supporting that too.\n> > >\n> > > >\n> > > > Please do not top-post on hackers.\n> > > >\n> > > > Always sent the whole patchset. Otherwise, CI bot gets confused. It\n> > > > doesn't pick up patchset from the previous emails.\n> > > >\n> > > Okay, I will take care of that.\n> > >\n> > > > About the functionality: It's not clear to me whether an EXCEPT should\n> > > > be applicable only at the time of property graph creation or it should\n> > > > be applicable always. I.e. when a property graph is dumped, should it\n> > > > have EXCEPT in it or have a list of columns surviving except list?\n> > > > What if a column in except list is dropped after creating a property\n> > > > graph?\n> > > >\n> > >\n> > > I did some testing on that, for now we are just dumping the columns\n> > > surviving the except list.\n> > > If an exceptional table column is deleted afterwards it doesn't show\n> > > any effect on the graph. I also tested this scenario with duckdb pgq\n> > > extension [1], deleting the col doesn't affect the graph.\n> > >\n> > > > Some comments on the code\n> > >\n> > > I am attaching a new patch after trying to fix according to you comments\n> > >\n> > > > 1. You could use list_member() in insert_property_records() to check\n> > > > whether a given column is in the list of exceptions after you have\n> > > > enveloped in String node.\n> > >\n> > > * I have changed to code to use list_member(), but I have to make\n> > > ResTarget->name from `pstrdup(NameStr(att->attname));` to `NULL`\n> > > We are using `xml_attribute_list` for our columns list and while\n> > > making this list in gram.y we are assigning `rt->name` as NULL [2],\n> > > this causes list_member() func to fail while comparing except_list\n> > > nodes. That's why I am changing rt->name from string value to NULL in\n> > > propgraphcmds.c in this patch.\n> > >\n> > > * Also, in order to use list_member() func I have to add a separate\n> > > for loop to iterate through the exceptional columns to generate the\n> > > error message if col is not valid. 
My question is, is it ok to use two\n> > > separate for loops (one to check except cols validity &\n> > > other(list_memeber) to check existence of scanned col in except list).\n> > > In the previous patch I was using single for loop to validate both\n> > > things.\n> > >\n> > > > 2. The SELECT with GRAPH_TABLE queries are tested in graph_table.sql.\n> > > > We don't include those in create_property_graph.sql\n> > >\n> > > * I have moved the graph_table queries from create_property_graph.sql\n> > > to graph_table.sql.\n> > > * But in graph_table.sql I didn't use the existing graphs because\n> > > those graphs and tables look like there for some specific test\n> > > scenario, so I created my separate graph and table for my test\n> > > scenario. I didn't drop the graph and the table as we will be dropping\n> > > the schema at the end but Peter E has this comment \"-- leave for\n> > > pg_upgrade/pg_dump tests\".\n> > >\n> > > > 3. Instead of creating a new property graph in the test, you may\n> > > > modify one of the existing property graphs to have a label with except\n> > > > list and then query it.\n> > > >\n> > >\n> > > * I have modified the graphs in create_property_graph.sql in order to\n> > > test except list cols in the alter command and create graph command.\n> > >\n> > > > We are aiming a minimal set of features in the first version. I will\n> > > > let Peter E. decide whether to consider this as minimal set feature or\n> > > > not. The feature looks useful to me.\n> > >\n> > > Thanks if you find this patch useful. I am attaching the modified patch.\n> > >\n> > > > 0001 - same as previous one\n> > > > 0002 - fixes pgperltidy complaints\n> > > > 0003 - fixes compilation failure\n> > > > 0004 - fixes issue seen on CI\n> > > > 0005 - adds support for WHERE clause in graph pattern missing in the\n> > > > first patch.\n> > > > 0006 - adds full support for cyclic path patterns\n> > >\n> > > 0007 - adds support for except cols list in graph properties\n> > >\n> > > Thanks\n> > > Imran Zaheer\n> > >\n> > > [1]: https://github.com/cwida/duckpgq-extension\n> > > [2]: https://github.com/postgres/postgres/blob/f5a1311fccd2ed24a9fb42aa47a17d1df7126039/src/backend/parser/gram.y#L16166\n\n\n",
"msg_date": "Tue, 20 Aug 2024 17:20:41 +0530",
"msg_from": "Ajay Pal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
},
{
"msg_contents": "On Tue, Aug 13, 2024 at 3:22 PM Ajay Pal <[email protected]> wrote:\n\n>\n> With the attached patch found below error when try to use \"Any\n> directed edge\" syntax.\n>\n> postgres=# SELECT * FROM GRAPH_TABLE (students_graph\n> postgres(# MATCH\n> postgres(# (a IS person ) - [] - (b IS person)\n> postgres(# COLUMNS (a.name AS person_a, b.name AS person_b)\n> postgres(# );\n> ERROR: unsupported element pattern kind: undirected edge\n>\n\nEarlier patches treated syntax \"-[]- \" as undirected edge and didn't\nsupport it. Per standard it is specifies an edge in either direction\nwhich is equivalent of -[]-> OR <-[]-. Implemented in the attached\npatches. Also added a test case in graph_table.sql.\n\nOn Tue, Aug 13, 2024 at 4:08 PM Ajay Pal <[email protected]> wrote:\n\n> postgres=# create or replace function func() returns int as\n> postgres-# $$\n> postgres$# declare person_av varchar;\n> postgres$# begin\n> postgres$#\n> postgres$# SELECT person_a into person_av FROM GRAPH_TABLE\n> (students_graph\n> postgres$# MATCH\n> postgres$# (a IS person) -[e IS friends]-> (b IS person\n> WHERE b.name = 'Bob')\n> postgres$# WHERE a.name='John'\n> postgres$# COLUMNS (a.name AS person_a, b.name AS person_b)\n> postgres$# );\n> postgres$#\n> postgres$# return person_av;\n> postgres$# end\n> postgres$# $$ language plpgsql;\n> CREATE FUNCTION\n> postgres=# select func();\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> The connection to the server was lost. Attempting reset: Failed.\n> !?>\n>\n\nNice catch. The crash happens because earlier patches implemented\nparser hooks to resolve graph property references. Those\nimplementations conflicted with the same hooks implemented in plpgsql\ncode. The attached patches fix this by adding a member to ParseState\ninstead of using hooks. Once this was fixed, there was another\nproblem. Property graph referenced in GRAPH_TABLE was not being\nlocked. That problem is fixed in the attached patches as well.\n\nOn Tue, Aug 20, 2024 at 5:20 PM Ajay Pal <[email protected]> wrote:\n>\n> Hi All,\n>\n> When we use a graph table and any local table, the server crashes.\n> Please note, It is happening when using the where clause for the local\n> table only.\n>\n> postgres=# SELECT * FROM customers a, GRAPH_TABLE (myshop2 MATCH (c IS\n> customers WHERE c.address = 'US')-[IS customer_orders]->(o IS orders)\n> COLUMNS (c.name_redacted AS customer_name_redacted));\n> customer_id | name | address | customer_name_redacted\n> -------------+-----------+---------+------------------------\n> 1 | customer1 | US | redacted1\n> 2 | customer2 | CA | redacted1\n> 3 | customer3 | GL | redacted1\n> (3 rows)\n>\n> postgres=# SELECT * FROM customers a, GRAPH_TABLE (myshop2 MATCH (c IS\n> customers WHERE c.address = 'US')-[IS customer_orders]->(o IS orders)\n> COLUMNS (c.name_redacted AS customer_name_redacted)) where\n> a.customer_id=1;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> The connection to the server was lost. Attempting reset: Failed.\n> !?> \\q\n>\n\nThis problem is not reproducible after fixing other problem. Please\nlet me know if it's reproduces for you. 
If it reproduces please\nprovide a patch adding the reproduction to graph_table.sql.\n\nAlong with this I have rebased the patches on the latest HEAD, fixed\nsome comments, code styles etc.\n\nPatches 0001 - 0006 are same as the previous set.\n0007 - fixes all the problems you reported till now and also the one I\nfound. The commit message describes the fixes in detail.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Wed, 28 Aug 2024 15:48:46 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL Property Graph Queries (SQL/PGQ)"
}
] |
[
{
"msg_contents": "While analyzing a customer's performance problem, I noticed that\nthe performance of pg_dump for large arrays is terrible.\n\nAs a test case, I created a table with 10000 rows, each of which\nhad an array of 10000 uuids. The table resided in shared buffers.\n\nThe following took 24.5 seconds:\n\n COPY mytab TO '/dev/null';\n\nMost of the time was spent in array_out and uuid_out.\n\nI tried binary copy, which took 4.4 seconds:\n\n COPY mytab TO '/dev/null' (FORMAT 'binary');\n\nHere, a lot of time was spent in pq_begintypsend.\n\n\nSo I looked for low-hanging fruit, and the result is the attached\npatch series.\n\n- Patch 0001 speeds up pq_begintypsend with a custom macro.\n That brought the binary copy down to 3.7 seconds, which is a\n speed gain of 15%.\n\n- Patch 0001 speeds up uuid_out by avoiding the overhead of\n a Stringinfo. This brings text mode COPY to 19.4 seconds,\n which is speed gain of 21%.\n\n- Patch 0003 speeds up array_out a bit by avoiding some zero\n byte writes. The measured speed gain is under 2%.\n\nYours,\nLaurenz Albe",
"msg_date": "Sat, 17 Feb 2024 17:48:23 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speeding up COPY TO for uuids and arrays"
},
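
To make the uuid_out idea above concrete, here is a minimal standalone C sketch of formatting a 16-byte UUID into the canonical 8-4-4-4-12 text form with a nibble lookup table and a fixed-size output buffer instead of a StringInfo. It only illustrates the technique the patch description refers to and is not the patch itself: the function and variable names are invented for the example, and the real uuid_out additionally has to palloc its result and follow the Datum calling convention.

#include <stdio.h>
#include <stdint.h>

/*
 * Format a 16-byte UUID as the canonical 8-4-4-4-12 hex string.
 * "out" must have room for 37 bytes (36 characters plus the NUL).
 */
static void
uuid_to_text(const uint8_t uuid[16], char *out)
{
    static const char hex[] = "0123456789abcdef";
    char *p = out;

    for (int i = 0; i < 16; i++)
    {
        *p++ = hex[uuid[i] >> 4];
        *p++ = hex[uuid[i] & 0xF];
        /* dashes go after bytes 4, 6, 8 and 10 */
        if (i == 3 || i == 5 || i == 7 || i == 9)
            *p++ = '-';
    }
    *p = '\0';
}

int
main(void)
{
    const uint8_t uuid[16] = {
        0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0,
        0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef
    };
    char buf[37];

    uuid_to_text(uuid, buf);
    printf("%s\n", buf);   /* prints 12345678-9abc-def0-0123-456789abcdef */
    return 0;
}

The inner loop is the part that replaces the per-character StringInfo appends that show up in the profiles quoted later in this thread.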
{
"msg_contents": "Hi,\n\nOn 2024-02-17 17:48:23 +0100, Laurenz Albe wrote:\n> - Patch 0001 speeds up pq_begintypsend with a custom macro.\n> That brought the binary copy down to 3.7 seconds, which is a\n> speed gain of 15%.\n\nNice win, but I think we can do a bit better than this. Certainly it shouldn't\nbe a macro, given the multiple evaluation risks. I don't think we actually\nneed to zero those bytes, in fact that seems more likely to hide bugs than\nprevent them.\n\nI have an old patch series improving performance in this area. A big win is to\nsimply not allocate as much memory for a new stringinfo, when we already know\nthe upper bound, as we do in many of the send functions.\n\n\n> - Patch 0001 speeds up uuid_out by avoiding the overhead of\n> a Stringinfo. This brings text mode COPY to 19.4 seconds,\n> which is speed gain of 21%.\n\nNice!\n\nI wonder if we should move the core part for converting to hex to numutils.c,\nwe already have code the for the inverse. There does seem to be further\noptimization potential in the conversion, and that seems better done somewhere\ncentral rather than one type's output function. OTOH, it might not be worth\nit, given the need to add the dashes.\n\n\n> - Patch 0003 speeds up array_out a bit by avoiding some zero\n> byte writes. The measured speed gain is under 2%.\n\nMakes sense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 17 Feb 2024 12:24:33 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up COPY TO for uuids and arrays"
},
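
As background for the pq_begintypsend point above: the pattern being optimized is "append a four-byte placeholder for the length word now, fill it in once the payload length is known", which is why the initial zeroing of those bytes is pure overhead. The following is a deliberately simplified, standalone model of that pattern; the toy Buf type and the begin_send/append_bytes/end_send names are made up for the example and are not PostgreSQL's StringInfo API.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* A toy fixed-size buffer, standing in for StringInfo in this sketch. */
typedef struct
{
    char data[256];
    int  len;
} Buf;

/* Reserve room for a 4-byte length word without initializing it. */
static void
begin_send(Buf *buf)
{
    buf->len = 4;               /* data[0..3] are don't-care for now */
}

static void
append_bytes(Buf *buf, const void *p, int n)
{
    memcpy(buf->data + buf->len, p, n);
    buf->len += n;
}

/* Back-fill the length word once the total size is known. */
static void
end_send(Buf *buf)
{
    uint32_t len = (uint32_t) buf->len;

    memcpy(buf->data, &len, sizeof(len));
}

int
main(void)
{
    Buf buf;
    const char payload[] = "payload";
    uint32_t stored;

    begin_send(&buf);
    append_bytes(&buf, payload, (int) sizeof(payload) - 1);
    end_send(&buf);

    memcpy(&stored, buf.data, sizeof(stored));
    printf("length word: %u (4-byte header + %zu payload bytes)\n",
           stored, sizeof(payload) - 1);
    return 0;
}

In PostgreSQL itself the reserved bytes become the bytea length word that pq_endtypsend later fills in, so their initial contents are always overwritten before anything reaches the client, which is the point Andres makes above.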
{
"msg_contents": "On Sat, Feb 17, 2024 at 12:24:33PM -0800, Andres Freund wrote:\n> I wonder if we should move the core part for converting to hex to numutils.c,\n> we already have code the for the inverse. There does seem to be further\n> optimization potential in the conversion, and that seems better done somewhere\n> central rather than one type's output function. OTOH, it might not be worth\n> it, given the need to add the dashes.\n\nI'd tend to live with the current location of the code, but I'm OK if\npeople feel differently on this one, so I'm OK with what Laurenz is\nproposing. \n\n>> - Patch 0003 speeds up array_out a bit by avoiding some zero\n>> byte writes. The measured speed gain is under 2%.\n> \n> Makes sense.\n\n+1.\n--\nMichael",
"msg_date": "Mon, 19 Feb 2024 12:36:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up COPY TO for uuids and arrays"
},
{
"msg_contents": "On Sat, 2024-02-17 at 12:24 -0800, Andres Freund wrote:\n> On 2024-02-17 17:48:23 +0100, Laurenz Albe wrote:\n> > - Patch 0001 speeds up pq_begintypsend with a custom macro.\n> > That brought the binary copy down to 3.7 seconds, which is a\n> > speed gain of 15%.\n> \n> Nice win, but I think we can do a bit better than this. Certainly it shouldn't\n> be a macro, given the multiple evaluation risks. I don't think we actually\n> need to zero those bytes, in fact that seems more likely to hide bugs than\n> prevent them.\n>\n> I have an old patch series improving performance in this area.\n\nThe multiple evaluation danger could be worked around with an automatic\nvariable, or the macro could just carry a warning (like appendStringInfoCharMacro).\n\nBut considering the thread in [1], I guess I'll retract that patch; I'm\nsure the more invasive improvement you have in mind will do better.\n\n> > - Patch 0001 speeds up uuid_out by avoiding the overhead of\n> > a Stringinfo. This brings text mode COPY to 19.4 seconds,\n> > which is speed gain of 21%.\n> \n> Nice!\n> \n> I wonder if we should move the core part for converting to hex to numutils.c,\n> we already have code the for the inverse. There does seem to be further\n> optimization potential in the conversion, and that seems better done somewhere\n> central rather than one type's output function. OTOH, it might not be worth\n> it, given the need to add the dashes.\n\nI thought about it, but then decided not to do that.\nCalling a function that converts the bytes to hex and then adding the\nhyphens will slow down processing, and I think the code savings would be\nminimal at best.\n\n> > - Patch 0003 speeds up array_out a bit by avoiding some zero\n> > byte writes. The measured speed gain is under 2%.\n> \n> Makes sense.\n\nThanks for your interest and approval. The attached patches are the\nrenamed, but unmodified patches 2 and 3 from before.\n\nYours,\nLaurenz Albe\n\n\n [1]: https://postgr.es/m/CAMkU%3D1whbRDUwa4eayD9%2B59K-coxO9senDkPRbTn3cg0pUz4AQ%40mail.gmail.com",
"msg_date": "Mon, 19 Feb 2024 15:08:45 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up COPY TO for uuids and arrays"
},
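
For anyone following along who has not run into the multiple-evaluation problem mentioned above, here is a small standalone illustration of the hazard and of the automatic-variable workaround; the Buffer type and the RESET_* macro names are invented for the example and have nothing to do with the actual patch.

#include <stdio.h>

typedef struct
{
    char data[16];
    int  len;
} Buffer;

static Buffer bufs[2];
static int    calls;

/* An argument expression with a side effect: each call is counted. */
static Buffer *
next_buf(void)
{
    return &bufs[calls++ % 2];
}

/* Statement-like macro that evaluates its argument twice. */
#define RESET_BAD(buf) \
    do { (buf)->len = 0; (buf)->data[0] = '\0'; } while (0)

/* Workaround: evaluate the argument once into an automatic variable. */
#define RESET_OK(buf) \
    do { \
        Buffer *_b = (buf); \
        _b->len = 0; \
        _b->data[0] = '\0'; \
    } while (0)

int
main(void)
{
    calls = 0;
    RESET_BAD(next_buf());
    printf("RESET_BAD evaluated its argument %d times\n", calls);

    calls = 0;
    RESET_OK(next_buf());
    printf("RESET_OK evaluated its argument %d time\n", calls);
    return 0;
}

With RESET_BAD the two statements end up operating on two different buffers, which is exactly the kind of subtle breakage that makes either a warning comment (as on appendStringInfoCharMacro) or the automatic-variable trick necessary.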
{
"msg_contents": "On Mon, Feb 19, 2024 at 03:08:45PM +0100, Laurenz Albe wrote:\n> On Sat, 2024-02-17 at 12:24 -0800, Andres Freund wrote:\n>> I wonder if we should move the core part for converting to hex to numutils.c,\n>> we already have code the for the inverse. There does seem to be further\n>> optimization potential in the conversion, and that seems better done somewhere\n>> central rather than one type's output function. OTOH, it might not be worth\n>> it, given the need to add the dashes.\n>\n> I thought about it, but then decided not to do that.\n> Calling a function that converts the bytes to hex and then adding the\n> hyphens will slow down processing, and I think the code savings would be\n> minimal at best.\n\nYeah, I'm not sure either if that's worth doing, the current\nconversion code is simple enough. I'd be curious to hear about ideas\nto optimize that more locally, though.\n\nI was curious about the UUID one, and COPYing out 10M rows with a\nsingle UUID attribute brings down the runtime of a COPY from 3.8s to\n2.3s here on a simple benchmark, with uuid_out showing up at the top\nof profiles easily on HEAD. Some references for HEAD:\n31.63% 5300 postgres postgres [.] uuid_out\n19.79% 3316 postgres postgres [.] appendStringInfoChar\n11.27% 1887 postgres postgres [.] CopyAttributeOutText\n 6.36% 1065 postgres postgres [.] pgstat_progress_update_param\n\nAnd with the patch for uuid_out:\n22.66% 2147 postgres postgres [.] CopyAttributeOutText\n12.99% 1231 postgres postgres [.] uuid_out\n11.41% 1081 postgres postgres [.] pgstat_progress_update_param\n 4.79% 454 postgres postgres [.] CopyOneRowTo\n\nThat's very good, so I'd like to apply this part. Let me know if\nthere are any objections.\n--\nMichael",
"msg_date": "Wed, 21 Feb 2024 14:29:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up COPY TO for uuids and arrays"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 02:29:05PM +0900, Michael Paquier wrote:\n> That's very good, so I'd like to apply this part. Let me know if\n> there are any objections.\n\nThis part is done as of 011d60c4352c. I did not evaluate the rest\nyet.\n--\nMichael",
"msg_date": "Thu, 22 Feb 2024 10:34:57 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up COPY TO for uuids and arrays"
},
{
"msg_contents": "On Thu, 2024-02-22 at 10:34 +0900, Michael Paquier wrote:\n> This part is done as of 011d60c4352c. I did not evaluate the rest\n> yet.\n\nThanks!\n\nI'm attaching the remaining patch for the Juli commitfest, if you\ndon't get inspired before that.\n\nYours,\nLaurenz Albe",
"msg_date": "Thu, 22 Feb 2024 08:16:09 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up COPY TO for uuids and arrays"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 08:16:09AM +0100, Laurenz Albe wrote:\n> I'm attaching the remaining patch for the Juli commitfest, if you\n> don't get inspired before that.\n\nThere are good chances that I get inspired some time next week. We'll\nsee ;)\n--\nMichael",
"msg_date": "Thu, 22 Feb 2024 16:27:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up COPY TO for uuids and arrays"
}
] |
[
{
"msg_contents": "Hello,\r\n\r\n I found an issue while using the latest version of PG15 (8fa4a1ac61189efffb8b851ee77e1bc87360c445).\r\n\r\n This question is about 'merge into'.\r\n\r\n\r\n When two merge into statements are executed concurrently, I obtain the following process and results. \r\n Firstly, the execution results of each Oracle are different, and secondly, I tried to understand its execution process and found that it was not very clear.\r\n\r\n\r\n <merge into & merge into>\r\n\r\n\r\nThe first merge statement clearly updates the year field for ID 2 and 3, and then inserts the row for ID 4.\r\nBut when the second merge statement is executed, the year field of id 2 is actually updated based on the execution of the first merge statement, and then insert rows of id 3 and id 4.\r\nI don't understand, I think if it is updated, it should be that both ID 2 and 3 have been updated.\r\nI am currently unable to determine whether ID 4 should be updated or insert.\r\n\r\n\r\n\r\nAccording to the results from Oracle, the second merge statement should have updated id 2 3 4.\r\n\r\n\r\n\r\n\r\n\r\n\r\n<update & merge into> \r\nI think the problem with the above scenario is due to the concurrent scenarios of update and merge, the behavior of PG and Oracle is consistent. The following figure:\r\n\r\n\r\n(The results of Oracle and PG are consistent)\r\n \r\n\r\n In my opinion, in the concurrent scenarios of mergeand merge, the behavior of pg seems inconsistent. Can you help me analyze and take a look, and help us use SQL with clearer semantics?\r\n Looking forward to your reply.\r\n\r\n\r\nThanks\r\nwenjiang_zhang",
"msg_date": "Sun, 18 Feb 2024 09:37:39 +0800",
"msg_from": "\"=?gb18030?B?endq?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
},
{
"msg_contents": "On Mon, 19 Feb 2024 at 17:48, zwj <[email protected]> wrote:\n>\n> Hello,\n>\n> I found an issue while using the latest version of PG15 (8fa4a1ac61189efffb8b851ee77e1bc87360c445).\n> This question is about 'merge into'.\n>\n> When two merge into statements are executed concurrently, I obtain the following process and results.\n> Firstly, the execution results of each Oracle are different, and secondly, I tried to understand its execution process and found that it was not very clear.\n>\n\nHmm, looking at this I think there is a problem with how UNION ALL\nsubqueries are pulled up, and I don't think it's necessarily limited\nto MERGE.\n\nLooking at the plan for this MERGE operation:\n\nexplain (verbose, costs off)\nmerge into mergeinto_0023_tb01 a using (select aid,name,year from\nmergeinto_0023_tb02 union all select aid,name,year from\nmergeinto_0023_tb03) c on (a.id=c.aid) when matched then update set\nyear=c.year when not matched then insert(id,name,year)\nvalues(c.aid,c.name,c.year);\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge on public.mergeinto_0023_tb01 a\n -> Merge Right Join\n Output: a.ctid, mergeinto_0023_tb02.year,\nmergeinto_0023_tb02.aid, mergeinto_0023_tb02.name,\n(ROW(mergeinto_0023_tb02.aid, mergeinto_0023_tb02.name,\nmergeinto_0023_tb02.year))\n Merge Cond: (a.id = mergeinto_0023_tb02.aid)\n -> Sort\n Output: a.ctid, a.id\n Sort Key: a.id\n -> Seq Scan on public.mergeinto_0023_tb01 a\n Output: a.ctid, a.id\n -> Sort\n Output: mergeinto_0023_tb02.year,\nmergeinto_0023_tb02.aid, mergeinto_0023_tb02.name,\n(ROW(mergeinto_0023_tb02.aid, mergeinto_0023_tb02.name,\nmergeinto_0023_tb02.year))\n Sort Key: mergeinto_0023_tb02.aid\n -> Append\n -> Seq Scan on public.mergeinto_0023_tb02\n Output: mergeinto_0023_tb02.year,\nmergeinto_0023_tb02.aid, mergeinto_0023_tb02.name,\nROW(mergeinto_0023_tb02.aid, mergeinto_0023_tb02.name,\nmergeinto_0023_tb02.year)\n -> Seq Scan on public.mergeinto_0023_tb03\n Output: mergeinto_0023_tb03.year,\nmergeinto_0023_tb03.aid, mergeinto_0023_tb03.name,\nROW(mergeinto_0023_tb03.aid, mergeinto_0023_tb03.name,\nmergeinto_0023_tb03.year)\n\nThe \"ROW(...)\" targetlist entries are added because\npreprocess_rowmarks() adds a rowmark to the UNION ALL subquery, which\nat that point is the only non-target relation in the jointree. It does\nthis intending that the same values be returned during EPQ rechecking.\nHowever, pull_up_subqueries() causes the UNION all subquery and its\nleaf subqueries to be pulled up into the main query as appendrel\nentries. So when it comes to EPQ rechecking, the rowmark does\nabsolutely nothing, and EvalPlanQual() does a full re-scan of\nmergeinto_0023_tb02 and mergeinto_0023_tb03 and a re-sort for each\nconcurrently modified row.\n\nA similar thing happens for UPDATE and DELETE, if they're joined to a\nUNION ALL subquery. However, AFAICS that doesn't cause a problem\n(other than being pretty inefficient) since, for UPDATE and DELETE,\nthe join to the UNION ALL subquery will always be an inner join, I\nthink, and so the join output will always be correct.\n\nHowever, for MERGE, the join may be an outer join, so during an EPQ\nrecheck, we're joining the target relation (fixed to return just the\nupdated row) to the full UNION ALL subquery. 
So if it's an outer join,\nthe join output will return all-but-one of the subquery rows as not\nmatched rows in addition to the one matched row that we want, whereas\nthe EPQ mechanism is expecting the plan to return just one row.\n\nOn the face of it, the simplest fix is to tweak is_simple_union_all()\nto prevent UNION ALL subquery pullup for MERGE, forcing a\nsubquery-scan plan. A quick test shows that that fixes the reported\nissue.\n\nis_simple_union_all() already has a test for rowmarks, and a comment\nsaying that locking isn't supported, but since it is called before\npreprocess_rowmarks(), it doesn't know that the subquery is about to\nbe marked.\n\nHowever, that leaves the question of whether we should do the same for\nUPDATE and DELETE. There doesn't appear to be a live bug there, so\nmaybe they're best left alone. Also, back-patching a change like that\nmight make existing queries less efficient. But I feel like I might be\noverlooking something here, and this doesn't seem to be how EPQ\nrechecks are meant to work (doing a full re-scan of non-target\nrelations). Also, if the concurrent update were an update of a key\ncolumn that was included in the join condition, the re-scan would\nfollow the update to a new matching source row, which is inconsistent\nwith what would happen if it were a join to a regular relation.\n\nThoughts?\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 20 Feb 2024 14:49:48 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
},
{
"msg_contents": "On Tue, 20 Feb 2024 at 14:49, Dean Rasheed <[email protected]> wrote:\n>\n> Also, if the concurrent update were an update of a key\n> column that was included in the join condition, the re-scan would\n> follow the update to a new matching source row, which is inconsistent\n> with what would happen if it were a join to a regular relation.\n>\n\nIn case it wasn't clear what I was talking about there, here's a simple example:\n\n-- Setup\nDROP TABLE IF EXISTS src1, src2, tgt;\nCREATE TABLE src1 (a int, b text);\nCREATE TABLE src2 (a int, b text);\nCREATE TABLE tgt (a int, b text);\n\nINSERT INTO src1 SELECT x, 'Src1 '||x FROM generate_series(1, 3) g(x);\nINSERT INTO src2 SELECT x, 'Src2 '||x FROM generate_series(4, 6) g(x);\nINSERT INTO tgt SELECT x, 'Tgt '||x FROM generate_series(1, 6, 2) g(x);\n\n-- Session 1\nBEGIN;\nUPDATE tgt SET a = 2 WHERE a = 1;\n\n-- Session 2\nUPDATE tgt t SET b = s.b\n FROM (SELECT * FROM src1 UNION ALL SELECT * FROM src2) s\n WHERE s.a = t.a;\nSELECT * FROM tgt;\n\n-- Session 1\nCOMMIT;\n\nand the result in tgt is:\n\n a | b\n---+--------\n 2 | Src1 2\n 3 | Src1 3\n 5 | Src2 5\n(3 rows)\n\nwhereas if that UNION ALL subquery had been a regular table with the\nsame contents, the result would have been:\n\n a | b\n---+--------\n 2 | Tgt 1\n 3 | Src1 3\n 5 | Src2 5\n\ni.e., the concurrently modified row would not have been updated.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 20 Feb 2024 15:10:11 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
},
{
"msg_contents": "On Tue, 20 Feb 2024 at 14:49, Dean Rasheed <[email protected]> wrote:\n>\n> On the face of it, the simplest fix is to tweak is_simple_union_all()\n> to prevent UNION ALL subquery pullup for MERGE, forcing a\n> subquery-scan plan. A quick test shows that that fixes the reported\n> issue.\n>\n> However, that leaves the question of whether we should do the same for\n> UPDATE and DELETE.\n>\n\nAttached is a patch that prevents UNION ALL subquery pullup in MERGE only.\n\nI've re-used and extended the isolation test cases added by\n1d5caec221, since it's clear that replacing the plain source relation\nin those tests with a UNION ALL subquery that returns the same results\nshould produce the same end result. (Without this patch, the UNION ALL\nsubquery is pulled up, EPQ rechecking fails to re-find the match, and\na WHEN NOT MATCHED THEN INSERT action is executed instead, resulting\nin a primary key violation.)\n\nIt's still not quite clear whether preventing UNION ALL subquery\npullup should also apply to UPDATE and DELETE, but I wasn't able to\nfind any live bug there, so I think they're best left alone.\n\nThis fixes the reported issue, though it's worth noting that\nconcurrent WHEN NOT MATCHED THEN INSERT actions will still lead to\nduplicate rows being inserted, which is a limitation that is already\ndocumented [1].\n\n[1] https://www.postgresql.org/docs/current/transaction-iso.html\n\nRegards,\nDean",
"msg_date": "Wed, 21 Feb 2024 17:00:45 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
},
{
"msg_contents": "Dean Rasheed <[email protected]> writes:\n> Attached is a patch that prevents UNION ALL subquery pullup in MERGE only.\n\nI think that this is a band-aid that's just masking an error in the\nrowmarking logic: it's not doing the right thing for appendrels\nmade from UNION ALL subqueries. I've not wrapped my head around\nexactly where it's going off the rails, but I feel like this ought\nto get fixed somewhere else. Please hold off pushing your patch\nfor now.\n\n(The test case looks valuable though, thanks for working on that.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Feb 2024 11:20:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
},
{
"msg_contents": "I wrote:\n> I think that this is a band-aid that's just masking an error in the\n> rowmarking logic: it's not doing the right thing for appendrels\n> made from UNION ALL subqueries. I've not wrapped my head around\n> exactly where it's going off the rails, but I feel like this ought\n> to get fixed somewhere else. Please hold off pushing your patch\n> for now.\n\nSo after studying this for awhile, I see that the planner is emitting\na PlanRowMark that presumes that the UNION ALL subquery will be\nscanned as though it's a base relation; but since we've converted it\nto an appendrel, the executor just ignores that rowmark, and the wrong\nthings happen. I think the really right fix would be to teach the\nexecutor to honor such PlanRowMarks, by getting nodeAppend.c and\nnodeMergeAppend.c to perform EPQ row substitution. I wrote a draft\npatch for that, attached, and it almost works but not quite. The\ntrouble is that we're jamming the contents of the row identity Var\ncreated for the rowmark into the output of the Append or MergeAppend,\nand it doesn't necessarily match exactly. In the test case you\ncreated, the planner produces targetlists like\n\n Output: src_1.val, src_1.id, ROW(src_1.id, src_1.val)\n\nand as you can see the order of the columns doesn't match.\nI can see three ways we might attack that:\n\n1. Persuade the planner to build output tlists that always match\nthe row identity Var. This seems undesirable, because the planner\nmight have intentionally elided columns that won't be read by the\nupper parts of the plan.\n\n2. Change generation of the ROW() expression so that it lists only\nthe values we're going to output, in the order we're going to\noutput them. This would amount to saying that for UNION cases\nthe \"identity\" of a row need only consider columns used by the\nplan, which feels a little odd but I can't think of a reason why\nit wouldn't work. I'm not sure how messy this'd be to implement\nthough, as the set of columns to be emitted isn't fully determined\nuntil much later than where we currently expand the row identity\nVars into RowExprs.\n\n3. Fix the executor to remap what it gets out of the ROW() into the\norder of the subquery tlists. This is probably do-able but I'm\nnot certain; it may be that the executor hasn't enough info.\nWe might need to teach the planner to produce a mapping projection\nand attach it to the Append node, which carries some ABI risk (but\nin the past we've gotten away with adding new fields to the ends\nof plan nodes in the back branches). Another objection is that\nadding cycles to execution rather than planning might be a poor\ntradeoff --- although if we only do the work when EPQ is invoked,\nmaybe it'd be the best way.\n\nIt might be that any of these things is too messy to be considered\nfor back-patching, and we ought to apply what you did in the\nback branches. I'd like a better fix in HEAD though.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 22 Feb 2024 19:12:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
},
{
"msg_contents": "On Fri, 23 Feb 2024 at 00:12, Tom Lane <[email protected]> wrote:\n>\n> So after studying this for awhile, I see that the planner is emitting\n> a PlanRowMark that presumes that the UNION ALL subquery will be\n> scanned as though it's a base relation; but since we've converted it\n> to an appendrel, the executor just ignores that rowmark, and the wrong\n> things happen. I think the really right fix would be to teach the\n> executor to honor such PlanRowMarks, by getting nodeAppend.c and\n> nodeMergeAppend.c to perform EPQ row substitution.\n\nYes, I agree that's a much better solution, if it can be made to work,\nthough I have been really struggling to see how.\n\n\n> the planner produces targetlists like\n>\n> Output: src_1.val, src_1.id, ROW(src_1.id, src_1.val)\n>\n> and as you can see the order of the columns doesn't match.\n> I can see three ways we might attack that:\n>\n> 1. Persuade the planner to build output tlists that always match\n> the row identity Var.\n>\n> 2. Change generation of the ROW() expression so that it lists only\n> the values we're going to output, in the order we're going to\n> output them.\n>\n> 3. Fix the executor to remap what it gets out of the ROW() into the\n> order of the subquery tlists. This is probably do-able but I'm\n> not certain; it may be that the executor hasn't enough info.\n> We might need to teach the planner to produce a mapping projection\n> and attach it to the Append node, which carries some ABI risk (but\n> in the past we've gotten away with adding new fields to the ends\n> of plan nodes in the back branches). Another objection is that\n> adding cycles to execution rather than planning might be a poor\n> tradeoff --- although if we only do the work when EPQ is invoked,\n> maybe it'd be the best way.\n>\n\nOf those, option 3 feels like the best one, though I'm really not\nsure. I played around with it and convinced myself that the executor\ndoesn't have the information it needs to make it work, but I think all\nit needs is the Append node's original targetlist, as it is just\nbefore it's rewritten by set_dummy_tlist_references(), which rewrites\nthe attribute numbers sequentially. In the original targetlist, all\nthe Vars have the right attribute numbers, so it can be used to build\nthe required projection (I think).\n\nAttached is a very rough patch. It seemed better to build the\nprojection in the executor rather than the planner, since then the\nextra work can be avoided, if EPQ is not invoked.\n\nIt seems to work (it passes the isolation tests, and I couldn't break\nit in ad hoc testing), but it definitely needs tidying up, and it's\nhard to be sure that it's not overlooking something.\n\nRegards,\nDean",
"msg_date": "Tue, 27 Feb 2024 12:53:10 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 8:53 PM Dean Rasheed <[email protected]> wrote:\n>\n> Attached is a very rough patch. It seemed better to build the\n> projection in the executor rather than the planner, since then the\n> extra work can be avoided, if EPQ is not invoked.\n>\n> It seems to work (it passes the isolation tests, and I couldn't break\n> it in ad hoc testing), but it definitely needs tidying up, and it's\n> hard to be sure that it's not overlooking something.\n>\n\nHi. minor issues.\nIn nodeAppend.c, nodeMergeAppend.c\n\n+ if (estate->es_epq_active != NULL)\n+ {\n+ /*\n+ * We are inside an EvalPlanQual recheck. If there is a relevant\n+ * rowmark for the append relation, return the test tuple if one is\n+ * available.\n+ */\n\n+ oldcontext = MemoryContextSwitchTo(estate->es_query_cxt);\n+\n+ node->as_epq_tupdesc = lookup_rowtype_tupdesc_copy(tupType, tupTypmod);\n+\n+ ExecAssignExprContext(estate, &node->ps);\n+\n+ node->ps.ps_ProjInfo =\n+ ExecBuildProjectionInfo(castNode(Append, node->ps.plan)->epq_targetlist,\n+ node->ps.ps_ExprContext,\n+ node->ps.ps_ResultTupleSlot,\n+ &node->ps,\n+ NULL);\n+\n+ MemoryContextSwitchTo(oldcontext);\nEvalPlanQualStart, EvalPlanQualNext will switch the memory context to\nes_query_cxt.\nso the memory context switch here is not necessary?\n\ndo you think it's sane to change\n`ExecAssignExprContext(estate, &node->ps);`\nto\n`\n/* need an expression context to do the projection */\nif (node->ps.ps_ExprContext == NULL)\nExecAssignExprContext(estate, &node->ps);\n`\n?\n\n\n",
"msg_date": "Wed, 28 Feb 2024 17:16:45 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
},
{
"msg_contents": "On Wed, 28 Feb 2024 at 09:16, jian he <[email protected]> wrote:\n>\n> + oldcontext = MemoryContextSwitchTo(estate->es_query_cxt);\n> +\n> + node->as_epq_tupdesc = lookup_rowtype_tupdesc_copy(tupType, tupTypmod);\n> +\n> + ExecAssignExprContext(estate, &node->ps);\n> +\n> + node->ps.ps_ProjInfo =\n> + ExecBuildProjectionInfo(castNode(Append, node->ps.plan)->epq_targetlist,\n> +\n> EvalPlanQualStart, EvalPlanQualNext will switch the memory context to\n> es_query_cxt.\n> so the memory context switch here is not necessary?\n>\n\nYes it is necessary. The EvalPlanQual mechanism switches to the\nepqstate->recheckestate->es_query_cxt memory context, which is not the\nsame as the main query's estate->es_query_cxt (they're different\nexecutor states). Most stuff allocated under EvalPlanQual() is\nintended to be short-lived (just for the duration of that specific EPQ\ncheck), whereas this stuff (the TupleDesc and Projection) is intended\nto last for the duration of the main query, so that it can be reused\nin later EPQ checks.\n\n> do you think it's sane to change\n> `ExecAssignExprContext(estate, &node->ps);`\n> to\n> `\n> /* need an expression context to do the projection */\n> if (node->ps.ps_ExprContext == NULL)\n> ExecAssignExprContext(estate, &node->ps);\n> `\n> ?\n\nPossibly. More importantly, it should be doing a ResetExprContext() to\nfree any previous stuff before projecting the new row.\n\nAt this stage, this is just a rough draft patch. There are many things\nlike that and code comments that will need to be improved before it is\ncommittable, but for now I'm more concerned with whether it actually\nworks, and if the approach it's taking is sane.\n\nI've tried various things and I haven't been able to break it, but I'm\nstill nervous that I might be overlooking something, particularly in\nrelation to what might get added to the targetlist at various stages\nduring planning for different types of query.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 28 Feb 2024 12:11:02 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
},
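
To tie together the excerpts above: the shape of the draft patch's EPQ path in nodeAppend.c, as quoted by jian he and explained by Dean, is roughly the following. This is only a sketch reassembled from those excerpts for readability, not committed code; the as_epq_tupdesc and epq_targetlist fields, the extra ps_ProjInfo guard, and the source of tupType/tupTypmod (the row-identity Var's composite type) belong to the draft and may well change.

/*
 * Sketch of the EPQ branch in ExecAppend(), following the draft patch.
 * Reached only when estate->es_epq_active != NULL.
 */
if (node->ps.ps_ProjInfo == NULL)
{
    MemoryContext oldcontext;

    /*
     * Build the TupleDesc and projection in the main query's
     * es_query_cxt, not in the short-lived EPQ executor state, so that
     * they can be reused by later EPQ checks in the same query.
     */
    oldcontext = MemoryContextSwitchTo(estate->es_query_cxt);

    node->as_epq_tupdesc = lookup_rowtype_tupdesc_copy(tupType, tupTypmod);

    if (node->ps.ps_ExprContext == NULL)
        ExecAssignExprContext(estate, &node->ps);

    /*
     * epq_targetlist is the Append's original targetlist, captured
     * before set_dummy_tlist_references() renumbers it, so its Vars
     * still carry the attribute numbers needed to remap the ROW()
     * row-identity value into the order the rest of the plan expects.
     */
    node->ps.ps_ProjInfo =
        ExecBuildProjectionInfo(castNode(Append, node->ps.plan)->epq_targetlist,
                                node->ps.ps_ExprContext,
                                node->ps.ps_ResultTupleSlot,
                                &node->ps,
                                NULL);

    MemoryContextSwitchTo(oldcontext);
}

/* Each recheck should then ResetExprContext() before projecting the row. */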
{
"msg_contents": "On Wed, Feb 28, 2024 at 8:11 PM Dean Rasheed <[email protected]> wrote:\n>\n> On Wed, 28 Feb 2024 at 09:16, jian he <[email protected]> wrote:\n> >\n> > + oldcontext = MemoryContextSwitchTo(estate->es_query_cxt);\n> > +\n> > + node->as_epq_tupdesc = lookup_rowtype_tupdesc_copy(tupType, tupTypmod);\n> > +\n> > + ExecAssignExprContext(estate, &node->ps);\n> > +\n> > + node->ps.ps_ProjInfo =\n> > + ExecBuildProjectionInfo(castNode(Append, node->ps.plan)->epq_targetlist,\n> > +\n> > EvalPlanQualStart, EvalPlanQualNext will switch the memory context to\n> > es_query_cxt.\n> > so the memory context switch here is not necessary?\n> >\n>\n> Yes it is necessary. The EvalPlanQual mechanism switches to the\n> epqstate->recheckestate->es_query_cxt memory context, which is not the\n> same as the main query's estate->es_query_cxt (they're different\n> executor states). Most stuff allocated under EvalPlanQual() is\n> intended to be short-lived (just for the duration of that specific EPQ\n> check), whereas this stuff (the TupleDesc and Projection) is intended\n> to last for the duration of the main query, so that it can be reused\n> in later EPQ checks.\n>\nsorry for the noise. I understand it now.\n\nAnother small question:\nfor the Append case, we can set/initialize it at create_append_plan,\nall other elements are initialized there,\nwhy we set it at set_append_references.\njust wondering.\n\n\n",
"msg_date": "Thu, 29 Feb 2024 11:04:35 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
},
{
"msg_contents": "Hi,hackers\r\n \r\n I may have discovered another issue in the concurrency scenario of merge, and I am currently not sure if this new issue is related to the previous one. \r\n It seems that it may also be an issue with the EPQ mechanism in the merge scenario? \r\n I will provide this test case, hoping it will be helpful for you to fix related issues in the future.\r\n\r\n\r\n\r\n DROP TABLE IF EXISTS src1, tgt;\r\n CREATE TABLE src1 (a int, b text);\r\n CREATE TABLE tgt (a int, b text);\r\n INSERT INTO src1 SELECT x, 'Src1 '||x FROM generate_series(1, 3) g(x);\r\n INSERT INTO tgt SELECT x, 'Tgt '||x FROM generate_series(1, 6, 2) g(x);\r\n insert into src1 values(3,'src1 33');\r\n\r\n\r\n\r\n If I only execute merge , I will get the following error:\r\n merge into tgt a using src1 c on a.a = c.a when matched then update set b = c.b when not matched then insert (a,b) values(c.a,c.b); -- excute fail\r\n ERROR: MERGE command cannot affect row a second time\r\n HIINT: Ensure that not more than one source row matches any one target row.\r\n\r\n\r\n\r\n But when I execute the update and merge concurrently, I will get the following result set.\r\n\r\n --session1\r\n begin;\r\n\r\n update tgt set b = 'tgt333' where a =3;\r\n\r\n --session2\r\n merge into tgt a using src1 c on a.a = c.a when matched then update set b = c.b when not matched then insert (a,b) values(c.a,c.b); -- excute success\r\n --session1\r\n commit;\r\n select * from tgt;\r\n a | b \r\n ---+---------\r\n 5 | Tgt 5\r\n 1 | Src1 1\r\n 2 | Src1 2\r\n 3 | Src1 3\r\n 3 | src1 33\r\n\r\n I think even if the tuple with id:3 is udpated, merge should still be able to retrieve new tuples with id:3, and report the same error as above?\r\n\r\n\r\nRegards,\r\nwenjiang zhang\r\n\r\n\r\n------------------ 原始邮件 ------------------\r\n发件人: \"jian he\" <[email protected]>;\r\n发送时间: 2024年2月29日(星期四) 中午11:04\r\n收件人: \"Dean Rasheed\"<[email protected]>;\r\n抄送: \"Tom Lane\"<[email protected]>;\"zwj\"<[email protected]>;\"pgsql-hackers\"<[email protected]>;\r\n主题: Re: bug report: some issues about pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)\r\n\r\n\r\n\r\nOn Wed, Feb 28, 2024 at 8:11 PM Dean Rasheed <[email protected]> wrote:\r\n>\r\n> On Wed, 28 Feb 2024 at 09:16, jian he <[email protected]> wrote:\r\n> >\r\n> > + oldcontext = MemoryContextSwitchTo(estate->es_query_cxt);\r\n> > +\r\n> > + node->as_epq_tupdesc = lookup_rowtype_tupdesc_copy(tupType, tupTypmod);\r\n> > +\r\n> > + ExecAssignExprContext(estate, &node->ps);\r\n> > +\r\n> > + node->ps.ps_ProjInfo =\r\n> > + ExecBuildProjectionInfo(castNode(Append, node->ps.plan)->epq_targetlist,\r\n> > +\r\n> > EvalPlanQualStart, EvalPlanQualNext will switch the memory context to\r\n> > es_query_cxt.\r\n> > so the memory context switch here is not necessary?\r\n> >\r\n>\r\n> Yes it is necessary. The EvalPlanQual mechanism switches to the\r\n> epqstate->recheckestate->es_query_cxt memory context, which is not the\r\n> same as the main query's estate->es_query_cxt (they're different\r\n> executor states). Most stuff allocated under EvalPlanQual() is\r\n> intended to be short-lived (just for the duration of that specific EPQ\r\n> check), whereas this stuff (the TupleDesc and Projection) is intended\r\n> to last for the duration of the main query, so that it can be reused\r\n> in later EPQ checks.\r\n>\r\nsorry for the noise. 
I understand it now.\r\n\r\nAnother small question:\r\nfor the Append case, we can set/initialize it at create_append_plan,\r\nall other elements are initialized there,\r\nwhy we set it at set_append_references.\r\njust wondering.",
"msg_date": "Tue, 5 Mar 2024 18:04:33 +0800",
"msg_from": "\"=?gb18030?B?endq?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?gb18030?B?u9i4tKO6IGJ1ZyByZXBvcnQ6IHNvbWUgaXNzdWVz?=\n =?gb18030?B?IGFib3V0IHBnXzE1X3N0YWJsZSg4ZmE0YTFhYzYx?=\n =?gb18030?B?MTg5ZWZmZmI4Yjg1MWVlNzdlMWJjODczNjBjNDQ1?=\n =?gb18030?B?KQ==?="
},
{
"msg_contents": "[cc'ing Alvaro]\n\nOn Tue, 5 Mar 2024 at 10:04, zwj <[email protected]> wrote:\n>\n> If I only execute merge , I will get the following error:\n> merge into tgt a using src1 c on a.a = c.a when matched then update set b = c.b when not matched then insert (a,b) values(c.a,c.b); -- excute fail\n> ERROR: MERGE command cannot affect row a second time\n> HIINT: Ensure that not more than one source row matches any one target row.\n>\n> But when I execute the update and merge concurrently, I will get the following result set.\n>\n\nYes, this should still produce that error, even after a concurrent update.\n\nIn the first example without a concurrent update, when we reach the\ntgt.a = 3 row the second time, ExecUpdateAct() returns TM_SelfModified\nand we do this:\n\n case TM_SelfModified:\n\n /*\n * The SQL standard disallows this for MERGE.\n */\n if (TransactionIdIsCurrentTransactionId(context->tmfd.xmax))\n ereport(ERROR,\n (errcode(ERRCODE_CARDINALITY_VIOLATION),\n /* translator: %s is a SQL command name */\n errmsg(\"%s command cannot affect row a second time\",\n \"MERGE\"),\n errhint(\"Ensure that not more than one source row\nmatches any one target row.\")));\n /* This shouldn't happen */\n elog(ERROR, \"attempted to update or delete invisible tuple\");\n break;\n\nwhereas in the second case, after a concurrent update, ExecUpdateAct()\nreturns TM_Updated, we attempt to lock the tuple prior to running EPQ,\nand table_tuple_lock() returns TM_SelfModified, which does this:\n\n case TM_SelfModified:\n\n /*\n * This can be reached when following an update\n * chain from a tuple updated by another session,\n * reaching a tuple that was already updated in\n * this transaction. If previously modified by\n * this command, ignore the redundant update,\n * otherwise error out.\n *\n * See also response to TM_SelfModified in\n * ExecUpdate().\n */\n if (context->tmfd.cmax != estate->es_output_cid)\n ereport(ERROR,\n (errcode(ERRCODE_TRIGGERED_DATA_CHANGE_VIOLATION),\n errmsg(\"tuple to be updated or deleted\nwas already modified by an operation triggered by the current\ncommand\"),\n errhint(\"Consider using an AFTER trigger\ninstead of a BEFORE trigger to propagate changes to other rows.\")));\n return false;\n\nMy initial reaction is that neither of those blocks of code is\nentirely correct, and that they should both be doing both of those\nchecks. I.e., something like the attached (which probably needs some\nadditional test cases).\n\nRegards,\nDean",
"msg_date": "Tue, 5 Mar 2024 13:55:07 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
},
{
"msg_contents": "On Tue, 5 Mar 2024 at 13:55, Dean Rasheed <[email protected]> wrote:\n>\n> > If I only execute merge , I will get the following error:\n> > merge into tgt a using src1 c on a.a = c.a when matched then update set b = c.b when not matched then insert (a,b) values(c.a,c.b); -- excute fail\n> > ERROR: MERGE command cannot affect row a second time\n> > HIINT: Ensure that not more than one source row matches any one target row.\n> >\n> > But when I execute the update and merge concurrently, I will get the following result set.\n>\n> Yes, this should still produce that error, even after a concurrent update.\n>\n> My initial reaction is that neither of those blocks of code is\n> entirely correct, and that they should both be doing both of those\n> checks. I.e., something like the attached (which probably needs some\n> additional test cases).\n>\n\nOK, I've pushed and back-patched that fix for this issue, after adding\nsome tests (nice catch, by the way!).\n\nThat wasn't related to the original issue though, so the problem with\nUNION ALL still remains to be fixed. The patch from [1] looks\npromising (for HEAD at least), but it really needs more pairs of eyes\non it (bearing in mind that it's just a rough proof-of-concept patch\nat this stage).\n\n[1] https://www.postgresql.org/message-id/CAEZATCVa-mgPuOdgd8%2BYVgOJ4wgJuhT2mJznbj_tmsGAP8hHJw%40mail.gmail.com\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 7 Mar 2024 10:20:41 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nAfter starting the server (initdb + pg_ctl start) I ran a regress test \ncreate_misc.sql ('\\i src/test/regress/sql/create_misc.sql') and, after \nthat,\nI ran the fdw test ('\\i contrib/postgres_fdw/sql/postgres_fdw.sql') in \nthe psql, and it failed in the core-dump due to the worked assert.\n\nTo be honest, such a failure occurred only if the fdw extension was not \ninstalled earlier.\n\n\nscript to reproduce the error:\n\n./configure CFLAGS='-g -ggdb -O0' --enable-debug --enable-cassert \n--prefix=`pwd`/tmp_install && make -j8 -s install\n\nexport CDIR=$(pwd)\nexport PGDATA=$CDIR/postgres_data\nrm -r $PGDATA\nmkdir $PGDATA\n${CDIR}/tmp_install/bin/initdb -D $PGDATA >> log.txt\n${CDIR}/tmp_install/bin/pg_ctl -D $PGDATA -l logfile start\n\n${CDIR}/tmp_install/bin/psql -d postgres -f \nsrc/test/regress/sql/create_misc.sql &&\n${CDIR}/tmp_install/bin/psql -d postgres -f \ncontrib/postgres_fdw/sql/postgres_fdw.sql\n\n\nThe query, where the problem is:\n\n-- MERGE ought to fail cleanly\nmerge into itrtest using (select 1, 'foo') as source on (true)\n when matched then do nothing;\n\nCoredump:\n\n#5 0x0000555d1451f483 in ExceptionalCondition \n(conditionName=0x555d146bba13 \"requiredPerms != 0\", \nfileName=0x555d146bb7b0 \"execMain.c\",\n lineNumber=654) at assert.c:66\n#6 0x0000555d14064ebe in ExecCheckOneRelPerms (perminfo=0x555d1565ef90) \nat execMain.c:654\n#7 0x0000555d14064d94 in ExecCheckPermissions \n(rangeTable=0x555d1565eef0, rteperminfos=0x555d1565efe0, \nereport_on_violation=true) at execMain.c:623\n#8 0x0000555d140652ca in InitPlan (queryDesc=0x555d156cde50, eflags=0) \nat execMain.c:850\n#9 0x0000555d140644a8 in standard_ExecutorStart \n(queryDesc=0x555d156cde50, eflags=0) at execMain.c:266\n#10 0x0000555d140641ec in ExecutorStart (queryDesc=0x555d156cde50, \neflags=0) at execMain.c:145\n#11 0x0000555d1432c025 in ProcessQuery (plan=0x555d1565f3e0,\n sourceText=0x555d1551b020 \"merge into itrtest using (select 1, \n'foo') as source on (true)\\n when matched then do nothing;\", params=0x0,\n queryEnv=0x0, dest=0x555d1565f540, qc=0x7fffc9454080) at pquery.c:155\n#12 0x0000555d1432dbd8 in PortalRunMulti (portal=0x555d15597ef0, \nisTopLevel=true, setHoldSnapshot=false, dest=0x555d1565f540, \naltdest=0x555d1565f540,\n qc=0x7fffc9454080) at pquery.c:1277\n#13 0x0000555d1432d0cf in PortalRun (portal=0x555d15597ef0, \ncount=9223372036854775807, isTopLevel=true, run_once=true, \ndest=0x555d1565f540,\n altdest=0x555d1565f540, qc=0x7fffc9454080) at pquery.c:791\n#14 0x0000555d14325ec8 in exec_simple_query (\n--Type <RET> for more, q to quit, c to continue without paging--\n query_string=0x555d1551b020 \"merge into itrtest using (select 1, \n'foo') as source on (true)\\n when matched then do nothing;\") at \npostgres.c:1273\n#15 0x0000555d1432ae4c in PostgresMain (dbname=0x555d15555ab0 \"aaa\", \nusername=0x555d15555a98 \"alena\") at postgres.c:4645\n#16 0x0000555d14244a5d in BackendRun (port=0x555d1554b3b0) at \npostmaster.c:4440\n#17 0x0000555d14244072 in BackendStartup (port=0x555d1554b3b0) at \npostmaster.c:4116\n#18 0x0000555d142405c5 in ServerLoop () at postmaster.c:1768\n#19 0x0000555d1423fec2 in PostmasterMain (argc=3, argv=0x555d1547fcf0) \nat postmaster.c:1467\n#20 0x0000555d140f3122 in main (argc=3, argv=0x555d1547fcf0) at main.c:198\n\nThis error is consistently reproduced.\n\nTo be honest, I wasn't able to fully figure out the reason for this, but \nit seems that this operation on partitions should not be available at all?\n\nalena@postgres=# 
SELECT relname, relkind FROM pg_class where \nrelname='itrtest';\n relname | relkind\n---------+---------\n itrtest | p\n(1 row)\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Sun, 18 Feb 2024 21:33:07 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": true,
"msg_subject": "Running the fdw test from the terminal crashes into the core-dump"
},
{
"msg_contents": "Alena Rybakina <[email protected]> writes:\n> After starting the server (initdb + pg_ctl start) I ran a regress test \n> create_misc.sql ('\\i src/test/regress/sql/create_misc.sql') and, after \n> that,\n> I ran the fdw test ('\\i contrib/postgres_fdw/sql/postgres_fdw.sql') in \n> the psql, and it failed in the core-dump due to the worked assert.\n> To be honest, such a failure occurred only if the fdw extension was not \n> installed earlier.\n\nThanks for the report! This can be reproduced more simply with\n\nz=# create table test (a int, b text) partition by list(a);\nCREATE TABLE\nz=# merge into test using (select 1, 'foo') as source on (true) when matched then do nothing;\nserver closed the connection unexpectedly\n\nThe MERGE produces a query tree with\n\n :rtable (\n {RANGETBLENTRY \n :alias <> \n :eref \n {ALIAS \n :aliasname test \n :colnames (\"a\" \"b\")\n }\n :rtekind 0 \n :relid 49152 \n :relkind p \n :rellockmode 3 \n :tablesample <> \n :perminfoindex 1 \n :lateral false \n :inh true \n :inFromCl false \n :securityQuals <>\n }\n ...\n )\n :rteperminfos (\n {RTEPERMISSIONINFO \n :relid 49152 \n :inh true \n :requiredPerms 0 \n :checkAsUser 0 \n :selectedCols (b)\n :insertedCols (b)\n :updatedCols (b)\n }\n )\n\nand that zero for requiredPerms is what leads to the assertion\nfailure later. So I'd blame this on faulty handling of the\nzero-partitions case in the RTEPermissionInfo refactoring.\n(I didn't bisect to prove that a61b1f748 is exactly where it\nbroke, but I do see that the query successfully does nothing\nin v15.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Feb 2024 14:13:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running the fdw test from the terminal crashes into the core-dump"
},
{
"msg_contents": "On 2024-Feb-18, Tom Lane wrote:\n\n> So I'd blame this on faulty handling of the zero-partitions case in\n> the RTEPermissionInfo refactoring. (I didn't bisect to prove that\n> a61b1f748 is exactly where it broke, but I do see that the query\n> successfully does nothing in v15.)\n\nYou're right, this is the commit that broke it. It's unclear to me if\nAmit is available to look at it, so I'll give this a look tomorrow if he\nisn't.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Porque francamente, si para saber manejarse a uno mismo hubiera que\nrendir examen... ¿Quién es el machito que tendría carnet?\" (Mafalda)\n\n\n",
"msg_date": "Sun, 18 Feb 2024 20:44:02 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running the fdw test from the terminal crashes into the core-dump"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 4:44 AM Alvaro Herrera <[email protected]> wrote:\n> On 2024-Feb-18, Tom Lane wrote:\n>\n> > So I'd blame this on faulty handling of the zero-partitions case in\n> > the RTEPermissionInfo refactoring. (I didn't bisect to prove that\n> > a61b1f748 is exactly where it broke, but I do see that the query\n> > successfully does nothing in v15.)\n>\n> You're right, this is the commit that broke it. It's unclear to me if\n> Amit is available to look at it, so I'll give this a look tomorrow if he\n> isn't.\n\nI'll look at this today.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Mon, 19 Feb 2024 09:03:58 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running the fdw test from the terminal crashes into the core-dump"
},
{
"msg_contents": "On 2024-Feb-18, Alvaro Herrera wrote:\n\n> On 2024-Feb-18, Tom Lane wrote:\n> \n> > So I'd blame this on faulty handling of the zero-partitions case in\n> > the RTEPermissionInfo refactoring. (I didn't bisect to prove that\n> > a61b1f748 is exactly where it broke, but I do see that the query\n> > successfully does nothing in v15.)\n> \n> You're right, this is the commit that broke it. It's unclear to me if\n> Amit is available to look at it, so I'll give this a look tomorrow if he\n> isn't.\n\nOK, so it turns out that the bug is older than that -- it was actually\nintroduced with MERGE itself (7103ebb7aae8) and has nothing to do with\npartitioning or RTEPermissionInfo; commit a61b1f748 is only related\nbecause it added the Assert() that barfs when there are no privileges to\ncheck.\n\nThe real problem is that a MERGE ... DO NOTHING action reports that no\npermissions need to be checked on the target relation, which is not a\nproblem when there are other actions in the MERGE command since they add\nprivs to check. When DO NOTHING is the only action, the added assert in\na61b1f748 is triggered.\n\nSo, this means we can fix this by simply requiring ACL_SELECT privileges\non a DO NOTHING action. We don't need to request specific privileges on\nany particular column (perminfo->selectedCols continues to be the empty\nset) -- which means that any role that has privileges on *any* column\nwould get a pass. If you're doing MERGE with any other action besides\nDO NOTHING, you already have privileges on at least some column, so\nadding ACL_SELECT breaks nothing.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Pido que me den el Nobel por razones humanitarias\" (Nicanor Parra)",
"msg_date": "Tue, 20 Feb 2024 20:42:47 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running the fdw test from the terminal crashes into the core-dump"
},
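A minimal C sketch of the fix described above, for readers who want to see its shape: it assumes a transformMergeStmt()-style pass that accumulates the AclMode to be checked on the MERGE target relation, and only the CMD_NOTHING branch reflects the proposed change. The function name and its surroundings are illustrative, not the actual parser code.

/*
 * Illustrative only: map MERGE WHEN actions to the privileges that must
 * be checked on the target relation.  Assumes postgres.h,
 * nodes/parsenodes.h and utils/acl.h.
 */
static AclMode
merge_target_acl_mode(List *mergeWhenClauses)
{
	AclMode		targetPerms = ACL_NO_RIGHTS;
	ListCell   *lc;

	foreach(lc, mergeWhenClauses)
	{
		MergeWhenClause *when = (MergeWhenClause *) lfirst(lc);

		switch (when->commandType)
		{
			case CMD_INSERT:
				targetPerms |= ACL_INSERT;
				break;
			case CMD_UPDATE:
				targetPerms |= ACL_UPDATE;
				break;
			case CMD_DELETE:
				targetPerms |= ACL_DELETE;
				break;
			case CMD_NOTHING:
				/* The proposed fix: DO NOTHING still requires SELECT. */
				targetPerms |= ACL_SELECT;
				break;
			default:
				elog(ERROR, "unknown action in MERGE WHEN clause");
		}
	}

	/* The caller would OR this into the target's RTEPermissionInfo. */
	return targetPerms;
}

With that, a MERGE whose only action is DO NOTHING still registers ACL_SELECT, so the assertion added in a61b1f748 has something to check while perminfo->selectedCols stays empty, as described above.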
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> The real problem is that a MERGE ... DO NOTHING action reports that no\n> permissions need to be checked on the target relation, which is not a\n> problem when there are other actions in the MERGE command since they add\n> privs to check. When DO NOTHING is the only action, the added assert in\n> a61b1f748 is triggered.\n\n> So, this means we can fix this by simply requiring ACL_SELECT privileges\n> on a DO NOTHING action. We don't need to request specific privileges on\n> any particular column (perminfo->selectedCols continues to be the empty\n> set) -- which means that any role that has privileges on *any* column\n> would get a pass. If you're doing MERGE with any other action besides\n> DO NOTHING, you already have privileges on at least some column, so\n> adding ACL_SELECT breaks nothing.\n\nLGTM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Feb 2024 14:48:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running the fdw test from the terminal crashes into the core-dump"
},
{
"msg_contents": "On 2024-Feb-20, Tom Lane wrote:\n\n> > So, this means we can fix this by simply requiring ACL_SELECT privileges\n> > on a DO NOTHING action. We don't need to request specific privileges on\n> > any particular column (perminfo->selectedCols continues to be the empty\n> > set) -- which means that any role that has privileges on *any* column\n> > would get a pass.\n> \n> LGTM.\n\nThanks for looking!\n\nAfter having pushed that, I wonder if we should document this. It seems\nquite the minor thing, but I'm sure somebody will complain if we don't.\nI propose the attached. (Extra context so that the full paragraph can\nbe read from the comfort of your email program.)\n\n(While at it, I found the placement of the previous-to-last sentence in\nthat paragraph rather strange, so I moved it to the end.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Sallah, I said NO camels! That's FIVE camels; can't you count?\"\n(Indiana Jones)",
"msg_date": "Wed, 21 Feb 2024 17:53:07 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running the fdw test from the terminal crashes into the core-dump"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> After having pushed that, I wonder if we should document this. It seems\n> quite the minor thing, but I'm sure somebody will complain if we don't.\n\nYup, no doubt.\n\n> I propose the attached. (Extra context so that the full paragraph can\n> be read from the comfort of your email program.)\n\nThis reads awkwardly to me. I think it'd be better to construct it\nso that DO NOTHING's requirement is stated exactly parallel to the other\nthree clause types, more or less as attached.\n\n> (While at it, I found the placement of the previous-to-last sentence in\n> that paragraph rather strange, so I moved it to the end.)\n\nAgreed, and done in my version too.\n\nBTW, if you read it without paying attention to markup, you'll notice\nthat we are saying things like\n\n If you specify an insert action, you must have the INSERT\n privilege on the target_table_name.\n\nwhich is fairly nonsensical: we don't define privileges on the name\nof something. While I've not done anything about that here,\nI wonder if we shouldn't just write \"privilege on the target table\"\nwithout any special markup.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 22 Feb 2024 18:48:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running the fdw test from the terminal crashes into the core-dump"
},
{
"msg_contents": "On 2024-Feb-22, Tom Lane wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n\n> > I propose the attached. (Extra context so that the full paragraph can\n> > be read from the comfort of your email program.)\n> \n> This reads awkwardly to me. I think it'd be better to construct it\n> so that DO NOTHING's requirement is stated exactly parallel to the other\n> three clause types, more or less as attached.\n\nSure, that works.\n\n> BTW, if you read it without paying attention to markup, you'll notice\n> that we are saying things like\n> \n> If you specify an insert action, you must have the INSERT\n> privilege on the target_table_name.\n> \n> which is fairly nonsensical: we don't define privileges on the name\n> of something.\n\nHmm, you're right, this is not strictly correct.\n\n> While I've not done anything about that here, I wonder if we shouldn't\n> just write \"privilege on the target table\" without any special markup.\n\nThat would work for me.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 23 Feb 2024 23:03:01 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running the fdw test from the terminal crashes into the core-dump"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2024-Feb-22, Tom Lane wrote:\n>> While I've not done anything about that here, I wonder if we shouldn't\n>> just write \"privilege on the target table\" without any special markup.\n\n> That would work for me.\n\nOK. Will you do the honors, or shall I?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Feb 2024 11:16:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running the fdw test from the terminal crashes into the core-dump"
},
{
"msg_contents": "On 2024-Feb-25, Tom Lane wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n> > On 2024-Feb-22, Tom Lane wrote:\n> >> While I've not done anything about that here, I wonder if we shouldn't\n> >> just write \"privilege on the target table\" without any special markup.\n> \n> > That would work for me.\n> \n> OK. Will you do the honors, or shall I?\n\nI can push the whole later today.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Linux transformó mi computadora, de una `máquina para hacer cosas',\nen un aparato realmente entretenido, sobre el cual cada día aprendo\nalgo nuevo\" (Jaime Salinas)\n\n\n",
"msg_date": "Mon, 26 Feb 2024 16:13:11 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running the fdw test from the terminal crashes into the core-dump"
}
] |
[
{
"msg_contents": "A recent commit (7a424ece48) added the following message:\n\n> could not sync slot information as remote slot precedes local slot:\n> remote slot \"%s\": LSN (%X/%X), catalog xmin (%u) local slot: LSN\n> (%X/%X), catalog xmin (%u)\n\nSince it is a bit overloaded but doesn't have a separator between\n\"catalog xmin (%u)\" and \"local slot:\", it is somewhat cluttered. Don't\nwe need something, for example a semicolon there to improve\nreadability and reduce clutter?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 19 Feb 2024 13:40:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "A new message seems missing a punctuation"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 10:10 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n> A recent commit (7a424ece48) added the following message:\n>\n> > could not sync slot information as remote slot precedes local slot:\n> > remote slot \"%s\": LSN (%X/%X), catalog xmin (%u) local slot: LSN\n> > (%X/%X), catalog xmin (%u)\n>\n> Since it is a bit overloaded but doesn't have a separator between\n> \"catalog xmin (%u)\" and \"local slot:\", it is somewhat cluttered. Don't\n> we need something, for example a semicolon there to improve\n> readability and reduce clutter?\n\nI think maybe we should even revise the message even more. In most\nplaces we do not just print out a whole bunch of values, but use a\nsentence to connect them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Feb 2024 10:31:33 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A new message seems missing a punctuation"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 10:31 AM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Feb 19, 2024 at 10:10 AM Kyotaro Horiguchi\n> <[email protected]> wrote:\n> > A recent commit (7a424ece48) added the following message:\n> >\n> > > could not sync slot information as remote slot precedes local slot:\n> > > remote slot \"%s\": LSN (%X/%X), catalog xmin (%u) local slot: LSN\n> > > (%X/%X), catalog xmin (%u)\n> >\n> > Since it is a bit overloaded but doesn't have a separator between\n> > \"catalog xmin (%u)\" and \"local slot:\", it is somewhat cluttered. Don't\n> > we need something, for example a semicolon there to improve\n> > readability and reduce clutter?\n>\n> I think maybe we should even revise the message even more. In most\n> places we do not just print out a whole bunch of values, but use a\n> sentence to connect them.\n\nI have tried to reword the msg, please have a look.\n\nthanks\nShveta",
"msg_date": "Mon, 19 Feb 2024 10:56:33 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A new message seems missing a punctuation"
},
{
"msg_contents": "At Mon, 19 Feb 2024 10:31:33 +0530, Robert Haas <[email protected]> wrote in \r\n> On Mon, Feb 19, 2024 at 10:10 AM Kyotaro Horiguchi\r\n> <[email protected]> wrote:\r\n> > A recent commit (7a424ece48) added the following message:\r\n> >\r\n> > > could not sync slot information as remote slot precedes local slot:\r\n> > > remote slot \"%s\": LSN (%X/%X), catalog xmin (%u) local slot: LSN\r\n> > > (%X/%X), catalog xmin (%u)\r\n> >\r\n> > Since it is a bit overloaded but doesn't have a separator between\r\n> > \"catalog xmin (%u)\" and \"local slot:\", it is somewhat cluttered. Don't\r\n> > we need something, for example a semicolon there to improve\r\n> > readability and reduce clutter?\r\n> \r\n> I think maybe we should even revise the message even more. In most\r\n> places we do not just print out a whole bunch of values, but use a\r\n> sentence to connect them.\r\n\r\nMmm. Something like this?:\r\n\r\n\"could not sync slot information: local slot LSN (%X/%X) or xmin(%u)\r\n behind remote (%X/%X, %u)\"\r\n\r\nOr I thought the values could be moved to DETAILS: line.\r\n\r\nBy the way, the code around the message is as follows.\r\n\r\n> /*\r\n> * The remote slot didn't catch up to locally reserved position.\r\n> *\r\n> * We do not drop the slot because the restart_lsn can be ahead of the\r\n> * current location when recreating the slot in the next cycle. It may\r\n> * take more time to create such a slot. Therefore, we keep this slot\r\n> * and attempt the synchronization in the next cycle.\r\n> *\r\n> * XXX should this be changed to elog(DEBUG1) perhaps?\r\n> */\r\n> ereport(LOG,\r\n> \t\terrmsg(\"could not sync slot information as remote slot precedes local slot:\"\r\n>\t\t\t\t\t \" remote slot \\\"%s\\\": LSN (%X/%X), catalog xmin (%u) local slot: LSN (%X/%X), catalog xmin (%u)\",\r\n\r\nIf we change this message to DEBUG1, a timeout feature to show a\r\nWARNING message might be needed for DBAs to notice that something bad\r\nis ongoing. However, it doesn't seem appropriate as a LOG message to\r\nme. Perhaps, a LOG message should be more like:\r\n\r\n> \"LOG: waiting for local slot to catch up to remote\"\r\n> \"DETAIL: remote slot LSN (%X/%X)....; \"\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Mon, 19 Feb 2024 14:34:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A new message seems missing a punctuation"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 11:04 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n> Or I thought the values could be moved to DETAILS: line.\n\nYeah, I think that's likely to be the right approach here. The details\naren't too clear to me.\n\nDoes the primary error message really need to say \"could not sync\nslot\"? If it will be obvious from context that we were trying to sync\na slot, then it would be fine to just say \"ERROR: remote slot precedes\nlocal slot\".\n\nBut I also don't quite understand what problem this is trying to\nreport. Is this slot-syncing code running on the primary or the\nstandby? If it's running on the primary, then surely it's expected\nthat the remote slot will precede the local one. And if it's running\non the standby, then the comments in\nupdate_and_persist_local_synced_slot about waiting for the remote side\nto catch up seem quite confusing, because surely we're chasing the\nprimary and not the other way around?\n\nBut if we ignore all of that, then we could just do this:\n\nERROR: could not sync slot information as remote slot precedes local slot\nDETAIL: Remote slot has LSN %X/%X and catalog xmin %u, but remote slot\nhas LSN %X/%X and catalog xmin %u.\n\nwhich would fix the original complaint, and my point about using\nEnglish rather than just printing a bunch of values.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Feb 2024 11:42:27 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A new message seems missing a punctuation"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 11:42 AM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Feb 19, 2024 at 11:04 AM Kyotaro Horiguchi\n> <[email protected]> wrote:\n> > Or I thought the values could be moved to DETAILS: line.\n>\n> Yeah, I think that's likely to be the right approach here. The details\n> aren't too clear to me.\n>\n> Does the primary error message really need to say \"could not sync\n> slot\"? If it will be obvious from context that we were trying to sync\n> a slot, then it would be fine to just say \"ERROR: remote slot precedes\n> local slot\".\n>\n\nAs this is a LOG message, I feel one may need some more information on\nthe context but it is not mandatory.\n\n> But I also don't quite understand what problem this is trying to\n> report. Is this slot-syncing code running on the primary or the\n> standby? If it's running on the primary, then surely it's expected\n> that the remote slot will precede the local one. And if it's running\n> on the standby, then the comments in\n> update_and_persist_local_synced_slot about waiting for the remote side\n> to catch up seem quite confusing, because surely we're chasing the\n> primary and not the other way around?\n>\n\nThe local's restart_lsn could be ahead of than primary's for the very\nfirst sync when the WAL corresponding to the remote's restart_lsn is\nnot available on standby (say due to a different wal related settings\nthe required WAL has been removed when we first time tried to sync the\nslot). For more details, you can refer to comments atop slotsync.c\nstarting from \"If the WAL corresponding to the remote's restart_lsn\n...\"\n\n> But if we ignore all of that, then we could just do this:\n>\n> ERROR: could not sync slot information as remote slot precedes local slot\n> DETAIL: Remote slot has LSN %X/%X and catalog xmin %u, but remote slot\n> has LSN %X/%X and catalog xmin %u.\n>\n\nThis looks good to me but instead of ERROR here we want to use LOG.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Feb 2024 12:14:19 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A new message seems missing a punctuation"
},
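A hedged sketch of what the rewording discussed above could look like in ereport() terms, combining the errmsg()/errdetail() split with the point that this should stay at LOG level. The slot variable names are assumptions, and the second set of values is taken to describe the local slot (the quoted DETAIL text presumably meant "local slot" the second time):

ereport(LOG,
		errmsg("could not sync slot information as remote slot precedes local slot"),
		errdetail("Remote slot has LSN %X/%X and catalog xmin %u, but local slot has LSN %X/%X and catalog xmin %u.",
				  LSN_FORMAT_ARGS(remote_slot->restart_lsn),
				  remote_slot->catalog_xmin,
				  LSN_FORMAT_ARGS(slot->data.restart_lsn),
				  slot->data.catalog_xmin));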
{
"msg_contents": "On Mon, Feb 19, 2024 at 12:14 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Feb 19, 2024 at 11:42 AM Robert Haas <[email protected]> wrote:\n> >\n> > On Mon, Feb 19, 2024 at 11:04 AM Kyotaro Horiguchi\n> > <[email protected]> wrote:\n> > > Or I thought the values could be moved to DETAILS: line.\n> >\n> > Yeah, I think that's likely to be the right approach here. The details\n> > aren't too clear to me.\n> >\n> > Does the primary error message really need to say \"could not sync\n> > slot\"? If it will be obvious from context that we were trying to sync\n> > a slot, then it would be fine to just say \"ERROR: remote slot precedes\n> > local slot\".\n> >\n>\n> As this is a LOG message, I feel one may need some more information on\n> the context but it is not mandatory.\n>\n> > But I also don't quite understand what problem this is trying to\n> > report. Is this slot-syncing code running on the primary or the\n> > standby? If it's running on the primary, then surely it's expected\n> > that the remote slot will precede the local one. And if it's running\n> > on the standby, then the comments in\n> > update_and_persist_local_synced_slot about waiting for the remote side\n> > to catch up seem quite confusing, because surely we're chasing the\n> > primary and not the other way around?\n> >\n>\n> The local's restart_lsn could be ahead of than primary's for the very\n> first sync when the WAL corresponding to the remote's restart_lsn is\n> not available on standby (say due to a different wal related settings\n> the required WAL has been removed when we first time tried to sync the\n> slot). For more details, you can refer to comments atop slotsync.c\n> starting from \"If the WAL corresponding to the remote's restart_lsn\n> ...\"\n>\n\nSorry, I gave the wrong reference, the comments I was referring to\nstart with: \"If on physical standby, the WAL corresponding ...\".\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Feb 2024 12:43:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A new message seems missing a punctuation"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 12:14 PM Amit Kapila <[email protected]> wrote:\n> > Does the primary error message really need to say \"could not sync\n> > slot\"? If it will be obvious from context that we were trying to sync\n> > a slot, then it would be fine to just say \"ERROR: remote slot precedes\n> > local slot\".\n>\n> As this is a LOG message, I feel one may need some more information on\n> the context but it is not mandatory.\n\nI'll defer to you.\n\n> > But I also don't quite understand what problem this is trying to\n> > report. Is this slot-syncing code running on the primary or the\n> > standby? If it's running on the primary, then surely it's expected\n> > that the remote slot will precede the local one. And if it's running\n> > on the standby, then the comments in\n> > update_and_persist_local_synced_slot about waiting for the remote side\n> > to catch up seem quite confusing, because surely we're chasing the\n> > primary and not the other way around?\n>\n> The local's restart_lsn could be ahead of than primary's for the very\n> first sync when the WAL corresponding to the remote's restart_lsn is\n> not available on standby (say due to a different wal related settings\n> the required WAL has been removed when we first time tried to sync the\n> slot). For more details, you can refer to comments atop slotsync.c\n> starting from \"If the WAL corresponding to the remote's restart_lsn\n> ...\"\n\nSo why do we log a message about this?\n\n> > But if we ignore all of that, then we could just do this:\n> >\n> > ERROR: could not sync slot information as remote slot precedes local slot\n> > DETAIL: Remote slot has LSN %X/%X and catalog xmin %u, but remote slot\n> > has LSN %X/%X and catalog xmin %u.\n> >\n>\n> This looks good to me but instead of ERROR here we want to use LOG.\n\nFair enough!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 16:21:19 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A new message seems missing a punctuation"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 4:21 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Feb 19, 2024 at 12:14 PM Amit Kapila <[email protected]> wrote:\n>\n> > > But I also don't quite understand what problem this is trying to\n> > > report. Is this slot-syncing code running on the primary or the\n> > > standby? If it's running on the primary, then surely it's expected\n> > > that the remote slot will precede the local one. And if it's running\n> > > on the standby, then the comments in\n> > > update_and_persist_local_synced_slot about waiting for the remote side\n> > > to catch up seem quite confusing, because surely we're chasing the\n> > > primary and not the other way around?\n> >\n> > The local's restart_lsn could be ahead of than primary's for the very\n> > first sync when the WAL corresponding to the remote's restart_lsn is\n> > not available on standby (say due to a different wal related settings\n> > the required WAL has been removed when we first time tried to sync the\n> > slot). For more details, you can refer to comments atop slotsync.c\n> > starting from \"If the WAL corresponding to the remote's restart_lsn\n> > ...\"\n>\n> So why do we log a message about this?\n>\n\nThis was added after the main commit of this functionality to find\nsome BF failures (where we were expecting the slot to sync but due to\none of these conditions not being met the slot was not synced) and we\ncan probably change it to DEBUG1 as well. I think we would need this\ninformation w.r.t this functionality to gather more information in\ncase expected slots are not being synced and it may be helpful for\nusers to also know why the slots are not synced, if that happens.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Feb 2024 16:42:04 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A new message seems missing a punctuation"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 4:42 PM Amit Kapila <[email protected]> wrote:\n> > So why do we log a message about this?\n>\n> This was added after the main commit of this functionality to find\n> some BF failures (where we were expecting the slot to sync but due to\n> one of these conditions not being met the slot was not synced) and we\n> can probably change it to DEBUG1 as well. I think we would need this\n> information w.r.t this functionality to gather more information in\n> case expected slots are not being synced and it may be helpful for\n> users to also know why the slots are not synced, if that happens.\n\nAh, OK. Do you think we need any kind of system view to provide more\ninsight here or is a log message sufficient?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 16:50:27 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A new message seems missing a punctuation"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 4:50 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Feb 20, 2024 at 4:42 PM Amit Kapila <[email protected]> wrote:\n> > > So why do we log a message about this?\n> >\n> > This was added after the main commit of this functionality to find\n> > some BF failures (where we were expecting the slot to sync but due to\n> > one of these conditions not being met the slot was not synced) and we\n> > can probably change it to DEBUG1 as well. I think we would need this\n> > information w.r.t this functionality to gather more information in\n> > case expected slots are not being synced and it may be helpful for\n> > users to also know why the slots are not synced, if that happens.\n>\n> Ah, OK. Do you think we need any kind of system view to provide more\n> insight here or is a log message sufficient?\n>\n\nWe do expose the required information (restart_lsn, catalog_xmin,\nsynced, temporary, etc) via pg_replication_slots. So, I feel the LOG\nmessage here is sufficient to DEBUG (or know the details) when the\nslot sync doesn't succeed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Feb 2024 17:01:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A new message seems missing a punctuation"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 5:01 PM Amit Kapila <[email protected]> wrote:\n>\n>\n> We do expose the required information (restart_lsn, catalog_xmin,\n> synced, temporary, etc) via pg_replication_slots. So, I feel the LOG\n> message here is sufficient to DEBUG (or know the details) when the\n> slot sync doesn't succeed.\n>\n\nPlease find the patch having the suggested changes.\n\nthanks\nShveta",
"msg_date": "Tue, 20 Feb 2024 19:25:21 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A new message seems missing a punctuation"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 7:25 PM shveta malik <[email protected]> wrote:\n>\n> On Tue, Feb 20, 2024 at 5:01 PM Amit Kapila <[email protected]> wrote:\n> >\n> >\n> > We do expose the required information (restart_lsn, catalog_xmin,\n> > synced, temporary, etc) via pg_replication_slots. So, I feel the LOG\n> > message here is sufficient to DEBUG (or know the details) when the\n> > slot sync doesn't succeed.\n> >\n>\n> Please find the patch having the suggested changes.\n>\n\nLGTM. I'll push this tomorrow unless someone has further comments/suggestions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 21 Feb 2024 12:14:15 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A new message seems missing a punctuation"
}
] |
[
{
"msg_contents": "- Motivation\n\nA regular B-tree index provides efficient mapping of key values to tuples\nwithin a table. However, if you have two tables connected in some way, a\nregular B-tree index may not be efficient enough. In this case, you would\nneed to create an index for each table. The purpose will become clearer if\nwe consider a simple example which is the main use-case as I see it.\n\n- Example\n\nWe need to store a graph. So we create a table for nodes\n\nCREATE TABLE Nodes (\n id SERIAL PRIMARY KEY,\n label VARCHAR(255)\n);\n\nand a table for edges\n\nCREATE TABLE Edges (\n label VARCHAR(255),\n source INTEGER REFERENCES Nodes(id),\n target INTEGER REFERENCES Nodes(id)\n);\n\nIn order to efficiently traverse a graph we would have an index for\nNodes.id which created automatically in this case and an index for\nEdges.source.\n\n- Tweaked B-Tree\n\nWe could optimize cases like the former by modifying PostgreSQL btree index\nto allow it to index 2 tables simultaneously.\n\nSemantically it would be UNIQUE index for attribute x of table A and an\nindex for attribute y in table B. In the non-deduplicated case index tuple\npointing to a tuple in A should be marked by a flag. In the deduplicated\ncase first TID in an array can be interpreted as a link to A.\nDuring the vacuum of A an index tuple pointing to a dead tuple in A should\nbe cleaned as well as all index tuples for the same key. We can reach this\nby clearing all index tuples after the dead one until we come to index\ntuple marked by a flag with different key. Or we can enforce deduplication\nin such index.\n\nIn the example with a graph such index would provide PRIMARY KEY for Nodes\nand the index for Edges.Source. The query\n\nSELECT * FROM Nodes WHERE id = X;\n\nwill use this index and take into account only a TID in Nodes (so this\nwould be marked index tuple or first TID in a posting list). The query\n\nSELECT * FROM Edges WHERE source = X;\n\nconversely will ignore links to Nodes.\n\n-- Syntax\n\nI believe that\nCREATE TABLE Nodes (\n id SERIAL PRIMARY KEY ADJACENT,\n label VARCHAR(255)\n);\nCREATE TABLE Edges (\n label VARCHAR(255),\n source INTEGER REFERENCES ADJACENT Nodes(id),\n target INTEGER REFERENCES Nodes(id)\n);\n\nwould suffice for this new semantics.\n--\nDilshod Urazov\n\n- MotivationA regular B-tree index provides efficient mapping of key values to tuples within a table. However, if you have two tables connected in some way, a regular B-tree index may not be efficient enough. In this case, you would need to create an index for each table. The purpose will become clearer if we consider a simple example which is the main use-case as I see it.- ExampleWe need to store a graph. So we create a table for nodesCREATE TABLE Nodes ( id SERIAL PRIMARY KEY, label VARCHAR(255));and a table for edgesCREATE TABLE Edges ( label VARCHAR(255), source INTEGER REFERENCES Nodes(id), target INTEGER REFERENCES Nodes(id));In order to efficiently traverse a graph we would have an index for Nodes.id which created automatically in this case and an index for Edges.source.- Tweaked B-TreeWe could optimize cases like the former by modifying PostgreSQL btree index to allow it to index 2 tables simultaneously.Semantically it would be UNIQUE index for attribute x of table A and an index for attribute y in table B. In the non-deduplicated case index tuple pointing to a tuple in A should be marked by a flag. 
In the deduplicated case first TID in an array can be interpreted as a link to A.During the vacuum of A an index tuple pointing to a dead tuple in A should be cleaned as well as all index tuples for the same key. We can reach this by clearing all index tuples after the dead one until we come to index tuple marked by a flag with different key. Or we can enforce deduplication in such index.In the example with a graph such index would provide PRIMARY KEY for Nodes and the index for Edges.Source. The querySELECT * FROM Nodes WHERE id = X;will use this index and take into account only a TID in Nodes (so this would be marked index tuple or first TID in a posting list). The querySELECT * FROM Edges WHERE source = X;conversely will ignore links to Nodes.-- SyntaxI believe thatCREATE TABLE Nodes ( id SERIAL PRIMARY KEY ADJACENT, label VARCHAR(255));CREATE TABLE Edges ( label VARCHAR(255), source INTEGER REFERENCES ADJACENT Nodes(id), target INTEGER REFERENCES Nodes(id));would suffice for this new semantics.--Dilshod Urazov",
"msg_date": "Mon, 19 Feb 2024 08:50:18 +0300",
"msg_from": "Dilshod Urazov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposal: Adjacent B-Tree index"
},
{
"msg_contents": "On Mon, 19 Feb 2024 at 18:48, Dilshod Urazov <[email protected]> wrote:\n>\n> - Motivation\n>\n> A regular B-tree index provides efficient mapping of key values to tuples within a table. However, if you have two tables connected in some way, a regular B-tree index may not be efficient enough. In this case, you would need to create an index for each table. The purpose will become clearer if we consider a simple example which is the main use-case as I see it.\n\nI'm not sure why are two indexes not sufficient here? PostgreSQL can\nalready do merge joins, which would have the same effect of hitting\nthe same location in the index at the same time between all tables,\nwithout the additional overhead of having to scan two table's worth of\nindexes in VACUUM.\n\n> During the vacuum of A an index tuple pointing to a dead tuple in A should be cleaned as well as all index tuples for the same key.\n\nThis is definitely not correct. If I have this schema\n\ntable test (id int primary key, b text unique)\ntable test_ref(test_id int references test(id))\n\nand if an index would contain entries for both test and test_ref, it\ncan't just remove all test_ref entries because an index entry with the\nsame id was removed: The entry could've been removed because (e.g.)\ntest's b column was updated thus inserting a new index entry for the\nnew HOT-chain's TID.\n\n> would suffice for this new semantics.\n\nWith the provided explanation I don't think this is a great idea.\n\nKind regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Mon, 19 Feb 2024 20:32:19 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Adjacent B-Tree index"
},
{
"msg_contents": "> I'm not sure why are two indexes not sufficient here?\n\nDid I write that they are not sufficient? The whole point is that in\nrelational DBMSs which are widely used\nto store graphs we can optimize storage in such cases. Also we can optimize\ntraversals e.g. if we want to\nget all nodes that are adjacent to a given node with id = X in an oriented\ngraph\n\nSELECT id, label\nFROM Nodes\nJOIN Edges ON Nodes.id = Edges.target\nWHERE Edges.source = X;\n\nonly 1 index lookup is needed.\n\n> The entry could've been removed because (e.g.)\n> test's b column was updated thus inserting a new index entry for the\n> new HOT-chain's TID.\n\nIf test'b column was updated and HOT optimization took place no new index\nentry is created. Index tuple\npointing to old heap tuple is valid since now it is pointing to HOT-chain.\n\n--\nDilshod Urazov\n\nпн, 19 февр. 2024 г. в 22:32, Matthias van de Meent <\[email protected]>:\n\n> On Mon, 19 Feb 2024 at 18:48, Dilshod Urazov <[email protected]>\n> wrote:\n> >\n> > - Motivation\n> >\n> > A regular B-tree index provides efficient mapping of key values to\n> tuples within a table. However, if you have two tables connected in some\n> way, a regular B-tree index may not be efficient enough. In this case, you\n> would need to create an index for each table. The purpose will become\n> clearer if we consider a simple example which is the main use-case as I see\n> it.\n>\n> I'm not sure why are two indexes not sufficient here? PostgreSQL can\n> already do merge joins, which would have the same effect of hitting\n> the same location in the index at the same time between all tables,\n> without the additional overhead of having to scan two table's worth of\n> indexes in VACUUM.\n>\n> > During the vacuum of A an index tuple pointing to a dead tuple in A\n> should be cleaned as well as all index tuples for the same key.\n>\n> This is definitely not correct. If I have this schema\n>\n> table test (id int primary key, b text unique)\n> table test_ref(test_id int references test(id))\n>\n> and if an index would contain entries for both test and test_ref, it\n> can't just remove all test_ref entries because an index entry with the\n> same id was removed: The entry could've been removed because (e.g.)\n> test's b column was updated thus inserting a new index entry for the\n> new HOT-chain's TID.\n>\n> > would suffice for this new semantics.\n>\n> With the provided explanation I don't think this is a great idea.\n>\n> Kind regards,\n>\n> Matthias van de Meent.\n>\n\n> I'm not sure why are two indexes not sufficient here?Did I write that they are not sufficient? The whole point is that in relational DBMSs which are widely usedto store graphs we can optimize storage in such cases. Also we can optimize traversals e.g. if we want toget all nodes that are adjacent to a given node with id = X in an oriented graphSELECT id, labelFROM NodesJOIN Edges ON Nodes.id = Edges.targetWHERE Edges.source = X;only 1 index lookup is needed.> The entry could've been removed because (e.g.)> test's b column was updated thus inserting a new index entry for the> new HOT-chain's TID.If test'b column was updated and HOT optimization took place no new index entry is created. Index tuplepointing to old heap tuple is valid since now it is pointing to HOT-chain.--Dilshod Urazovпн, 19 февр. 2024 г. 
в 22:32, Matthias van de Meent <[email protected]>:On Mon, 19 Feb 2024 at 18:48, Dilshod Urazov <[email protected]> wrote:\n>\n> - Motivation\n>\n> A regular B-tree index provides efficient mapping of key values to tuples within a table. However, if you have two tables connected in some way, a regular B-tree index may not be efficient enough. In this case, you would need to create an index for each table. The purpose will become clearer if we consider a simple example which is the main use-case as I see it.\n\nI'm not sure why are two indexes not sufficient here? PostgreSQL can\nalready do merge joins, which would have the same effect of hitting\nthe same location in the index at the same time between all tables,\nwithout the additional overhead of having to scan two table's worth of\nindexes in VACUUM.\n\n> During the vacuum of A an index tuple pointing to a dead tuple in A should be cleaned as well as all index tuples for the same key.\n\nThis is definitely not correct. If I have this schema\n\ntable test (id int primary key, b text unique)\ntable test_ref(test_id int references test(id))\n\nand if an index would contain entries for both test and test_ref, it\ncan't just remove all test_ref entries because an index entry with the\nsame id was removed: The entry could've been removed because (e.g.)\ntest's b column was updated thus inserting a new index entry for the\nnew HOT-chain's TID.\n\n> would suffice for this new semantics.\n\nWith the provided explanation I don't think this is a great idea.\n\nKind regards,\n\nMatthias van de Meent.",
"msg_date": "Tue, 20 Feb 2024 21:09:39 +0300",
"msg_from": "Dilshod Urazov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Adjacent B-Tree index"
},
{
"msg_contents": "> only 1 index lookup is needed.\nSorry, must be \"only lookups of 1 index are needed\".\n\n--\nDilshod Urazov\n\nвт, 20 февр. 2024 г. в 21:09, Dilshod Urazov <[email protected]>:\n\n> > I'm not sure why are two indexes not sufficient here?\n>\n> Did I write that they are not sufficient? The whole point is that in\n> relational DBMSs which are widely used\n> to store graphs we can optimize storage in such cases. Also we can\n> optimize traversals e.g. if we want to\n> get all nodes that are adjacent to a given node with id = X in an oriented\n> graph\n>\n> SELECT id, label\n> FROM Nodes\n> JOIN Edges ON Nodes.id = Edges.target\n> WHERE Edges.source = X;\n>\n> only 1 index lookup is needed.\n>\n> > The entry could've been removed because (e.g.)\n> > test's b column was updated thus inserting a new index entry for the\n> > new HOT-chain's TID.\n>\n> If test'b column was updated and HOT optimization took place no new index\n> entry is created. Index tuple\n> pointing to old heap tuple is valid since now it is pointing to HOT-chain.\n>\n> --\n> Dilshod Urazov\n>\n> пн, 19 февр. 2024 г. в 22:32, Matthias van de Meent <\n> [email protected]>:\n>\n>> On Mon, 19 Feb 2024 at 18:48, Dilshod Urazov <[email protected]>\n>> wrote:\n>> >\n>> > - Motivation\n>> >\n>> > A regular B-tree index provides efficient mapping of key values to\n>> tuples within a table. However, if you have two tables connected in some\n>> way, a regular B-tree index may not be efficient enough. In this case, you\n>> would need to create an index for each table. The purpose will become\n>> clearer if we consider a simple example which is the main use-case as I see\n>> it.\n>>\n>> I'm not sure why are two indexes not sufficient here? PostgreSQL can\n>> already do merge joins, which would have the same effect of hitting\n>> the same location in the index at the same time between all tables,\n>> without the additional overhead of having to scan two table's worth of\n>> indexes in VACUUM.\n>>\n>> > During the vacuum of A an index tuple pointing to a dead tuple in A\n>> should be cleaned as well as all index tuples for the same key.\n>>\n>> This is definitely not correct. If I have this schema\n>>\n>> table test (id int primary key, b text unique)\n>> table test_ref(test_id int references test(id))\n>>\n>> and if an index would contain entries for both test and test_ref, it\n>> can't just remove all test_ref entries because an index entry with the\n>> same id was removed: The entry could've been removed because (e.g.)\n>> test's b column was updated thus inserting a new index entry for the\n>> new HOT-chain's TID.\n>>\n>> > would suffice for this new semantics.\n>>\n>> With the provided explanation I don't think this is a great idea.\n>>\n>> Kind regards,\n>>\n>> Matthias van de Meent.\n>>\n>\n\n> only 1 index lookup is needed.Sorry, must be \"only lookups of 1 index are needed\".--Dilshod Urazovвт, 20 февр. 2024 г. в 21:09, Dilshod Urazov <[email protected]>:> I'm not sure why are two indexes not sufficient here?Did I write that they are not sufficient? The whole point is that in relational DBMSs which are widely usedto store graphs we can optimize storage in such cases. Also we can optimize traversals e.g. 
if we want toget all nodes that are adjacent to a given node with id = X in an oriented graphSELECT id, labelFROM NodesJOIN Edges ON Nodes.id = Edges.targetWHERE Edges.source = X;only 1 index lookup is needed.> The entry could've been removed because (e.g.)> test's b column was updated thus inserting a new index entry for the> new HOT-chain's TID.If test'b column was updated and HOT optimization took place no new index entry is created. Index tuplepointing to old heap tuple is valid since now it is pointing to HOT-chain.--Dilshod Urazovпн, 19 февр. 2024 г. в 22:32, Matthias van de Meent <[email protected]>:On Mon, 19 Feb 2024 at 18:48, Dilshod Urazov <[email protected]> wrote:\n>\n> - Motivation\n>\n> A regular B-tree index provides efficient mapping of key values to tuples within a table. However, if you have two tables connected in some way, a regular B-tree index may not be efficient enough. In this case, you would need to create an index for each table. The purpose will become clearer if we consider a simple example which is the main use-case as I see it.\n\nI'm not sure why are two indexes not sufficient here? PostgreSQL can\nalready do merge joins, which would have the same effect of hitting\nthe same location in the index at the same time between all tables,\nwithout the additional overhead of having to scan two table's worth of\nindexes in VACUUM.\n\n> During the vacuum of A an index tuple pointing to a dead tuple in A should be cleaned as well as all index tuples for the same key.\n\nThis is definitely not correct. If I have this schema\n\ntable test (id int primary key, b text unique)\ntable test_ref(test_id int references test(id))\n\nand if an index would contain entries for both test and test_ref, it\ncan't just remove all test_ref entries because an index entry with the\nsame id was removed: The entry could've been removed because (e.g.)\ntest's b column was updated thus inserting a new index entry for the\nnew HOT-chain's TID.\n\n> would suffice for this new semantics.\n\nWith the provided explanation I don't think this is a great idea.\n\nKind regards,\n\nMatthias van de Meent.",
"msg_date": "Tue, 20 Feb 2024 22:19:21 +0300",
"msg_from": "Dilshod Urazov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: Adjacent B-Tree index"
}
] |
[
{
"msg_contents": "Hi all,\n(Ashutosh in CC as he was involved in the discussion last time.)\n\nI have proposed on the original thread related to injection points to\nhave more stuff to be able to wait at an arbtrary point and wake at\nwill the process waiting so as it is possible to control the order of\nactions taken in a test:\nhttps://www.postgresql.org/message-id/ZTiV8tn_MIb_H2rE%40paquier.xyz\n\nI didn't do that in the other thread out of time, but here is a patch\nset to complete what I wanted, using a condition variable to wait and\nwake processes:\n- State is in shared memory, using a DSM tracked by the registry and\nan integer counter.\n- Callback to wait on a condition variable.\n- SQL function to update the shared state and broadcast the update to\nthe condition variable.\n- Use a custom wait event to track the wait in pg_stat_activity.\n\n0001 requires no backend changes, only more stuff into the test module\ninjection_points so that could be backpatched assuming that the\nbackend is able to support injection points. This could be expanded\ninto using more variables and/or states, but I don't really see a\npoint in introducing more without a reason to do so, and I have no\nneed for more at the moment.\n\n0002 is a polished version of the TAP test that makes use of this\nfacility, providing coverage for the bug fixed by 7863ee4def65\n(reverting this commit causes the test to fail), where a restart point \nruns across a promotion request. The trick is to stop the\ncheckpointer in the middle of a restart point and issue a promotion\nin-between.\n\nThoughts and comments are welcome.\n--\nMichael",
"msg_date": "Mon, 19 Feb 2024 15:01:40 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 03:01:40PM +0900, Michael Paquier wrote:\n> 0002 is a polished version of the TAP test that makes use of this\n> facility, providing coverage for the bug fixed by 7863ee4def65\n> (reverting this commit causes the test to fail), where a restart point \n> runs across a promotion request. The trick is to stop the\n> checkpointer in the middle of a restart point and issue a promotion\n> in-between.\n\nThe CF bot has been screaming at this one on Windows because the\nprocess started with IPC::Run::start was not properly finished, so\nattached is an updated version to bring that back to green.\n--\nMichael",
"msg_date": "Mon, 19 Feb 2024 16:51:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "\n\n> On 19 Feb 2024, at 09:01, Michael Paquier <[email protected]> wrote:\n> \n> Thoughts and comments are welcome.\n\nHi Michael,\n\nthanks for your work on injection points! I want to test a bunch of stuff using this facility.\n\nI have a wishlist of functionality that I'd like to see in injection points. I hope you will find some of these ideas useful to improve the feature.\n1. injection_points_wake() will wake all of waiters. But it's not suitable for complex tests. I think there must be a way to wake only specific waiter by injection point name.\n2. Alexander Korotkov's stopevents could be used in isolation tests. This kind of tests is perfect for describing complex race conditions. (as a side note, I'd be happy if we could have primary\\standby in isolation tests too)\n3. Can we have some Perl function for this?\n+# Wait until the checkpointer is in the middle of the restart point\n+# processing, relying on the custom wait event generated in the\n+# wait callback used in the injection point previously attached.\n+ok( $node_standby->poll_query_until(\n+\t\t'postgres',\n+\t\tqq[SELECT count(*) FROM pg_stat_activity\n+ WHERE backend_type = 'checkpointer' AND wait_event = 'injection_wait' ;],\n+\t\t'1'),\n+\t'checkpointer is waiting in restart point'\n+) or die \"Timed out while waiting for checkpointer to run restart point\";\n\nPerhaps something like\n$node->do_a_query_and_wait_for_injection_point_observed(sql,injection_point_name);\n4. Maybe I missed it, but I'd like to see a guideline on how to name injection points.\n5. In many cases we need to have injection point under critical section. I propose to have a \"prepared injection point\". See [0] for example in v2-0003-Test-multixact-CV-sleep.patch\n+ INJECTION_POINT_PREPARE(\"GetNewMultiXactId-done\");\n+\n START_CRIT_SECTION();\n \n+ INJECTION_POINT_RUN_PREPARED();\n6. Currently our codebase have files injection_point.c and injection_points.c. It's very difficult to remember which is where...\n7. This is extremely distant, but some DBMSs allow to enable injection points by placing files on the filesystem. That would allow to test something during recovery when no SQL interface is present.\n\nLet's test all the neat stuff! Thank you!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/[email protected]\n\n",
"msg_date": "Mon, 19 Feb 2024 11:54:20 +0300",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Mon, Feb 19, 2024 at 04:51:45PM +0900, Michael Paquier wrote:\n> On Mon, Feb 19, 2024 at 03:01:40PM +0900, Michael Paquier wrote:\n> > 0002 is a polished version of the TAP test that makes use of this\n> > facility, providing coverage for the bug fixed by 7863ee4def65\n> > (reverting this commit causes the test to fail), where a restart point \n> > runs across a promotion request. The trick is to stop the\n> > checkpointer in the middle of a restart point and issue a promotion\n> > in-between.\n> \n> The CF bot has been screaming at this one on Windows because the\n> process started with IPC::Run::start was not properly finished, so\n> attached is an updated version to bring that back to green.\n\nThanks for the patch, that's a very cool feature!\n\nRandom comments:\n\n1 ===\n\n+CREATE FUNCTION injection_points_wake()\n\nwhat about injection_points_wakeup() instead?\n\n2 ===\n+/* Shared state information for injection points. */\n+typedef struct InjectionPointSharedState\n+{\n+ /* protects accesses to wait_counts */\n+ slock_t lock;\n+\n+ /* Counter advancing when injection_points_wake() is called */\n+ int wait_counts;\n\nIf slock_t protects \"only\" one counter, then what about using pg_atomic_uint64\nor pg_atomic_uint32 instead? And btw do we need wait_counts at all? (see comment \nnumber 4)\n\n3 ===\n\n+ * SQL function for waking a condition variable\n\nwaking up instead?\n\n4 ===\n\n+ for (;;)\n+ {\n+ int new_wait_counts;\n+\n+ SpinLockAcquire(&inj_state->lock);\n+ new_wait_counts = inj_state->wait_counts;\n+ SpinLockRelease(&inj_state->lock);\n+\n+ if (old_wait_counts != new_wait_counts)\n+ break;\n+ ConditionVariableSleep(&inj_state->wait_point, injection_wait_event);\n+ }\n\nI'm wondering if this loop and wait_counts are needed, why not doing something\nlike (and completely get rid of wait_counts)?\n\n\"\n ConditionVariablePrepareToSleep(&inj_state->wait_point);\n ConditionVariableSleep(&inj_state->wait_point, injection_wait_event);\n ConditionVariableCancelSleep();\n\"\n\nIt's true that the comment above ConditionVariableSleep() mentions that:\n\n\"\n * This should be called in a predicate loop that tests for a specific exit\n * condition and otherwise sleeps\n\"\n\nbut is it needed in our particular case here?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 19 Feb 2024 14:28:04 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 11:54:20AM +0300, Andrey M. Borodin wrote:\n> 1. injection_points_wake() will wake all of waiters. But it's not\n> suitable for complex tests. I think there must be a way to wake only\n> specific waiter by injection point name.\n\nI don't disagree with that, but I don't have a strong argument for\nimplementing that until there is an explicit need for it in core. It\nis also possible to plug in your own module, outside of core, if you\nare looking for something more specific. The backend APIs allow that.\n\n> 2. Alexander Korotkov's stopevents could be used in isolation\n> tests. This kind of tests is perfect for describing complex race\n> conditions. (as a side note, I'd be happy if we could have\n> primary\\standby in isolation tests too)\n\nThis requires plugging is more into src/test/isolation/, with multiple\nconnection strings. This has been suggested in the past.\n\n> 3. Can we have some Perl function for this?\n> +# Wait until the checkpointer is in the middle of the restart point\n> +# processing, relying on the custom wait event generated in the\n> +# wait callback used in the injection point previously attached.\n> +ok( $node_standby->poll_query_until(\n> +\t\t'postgres',\n> +\t\tqq[SELECT count(*) FROM pg_stat_activity\n> + WHERE backend_type = 'checkpointer' AND wait_event = 'injection_wait' ;],\n> +\t\t'1'),\n> +\t'checkpointer is waiting in restart point'\n> +) or die \"Timed out while waiting for checkpointer to run restart point\";\n> \n> Perhaps something like\n> $node->do_a_query_and_wait_for_injection_point_observed(sql,injection_point_name);\n\nI don't see why not. But I'm not sure how much I need to plug in into\nthe main modules yet.\n\n> 4. Maybe I missed it, but I'd like to see a guideline on how to name\n> injection points.\n\nI don't think we have any of that, or even that we need one. In\nshort, I'm OK to be more consistent with the choice of ginbtree.c and\ngive priority to it and make it more official in the docs.\n\n> 5. In many cases we need to have injection point under critical\n> section. I propose to have a \"prepared injection point\". See [0] for\n> example in v2-0003-Test-multixact-CV-sleep.patch\n> + INJECTION_POINT_PREPARE(\"GetNewMultiXactId-done\");\n> +\n> START_CRIT_SECTION();\n> \n> + INJECTION_POINT_RUN_PREPARED();\n\nI don't see how that's different from a wait/wake logic? The only\nthing you've changed is to stop a wait when a point is detached and\nyou want to make the stop conditional. Plugging in a condition\nvariable is more flexible than a hardcoded sleep in terms of wait,\nwhile being more responsive.\n\n> 7. This is extremely distant, but some DBMSs allow to enable\n> injection points by placing files on the filesystem. That would\n> allow to test something during recovery when no SQL interface is\n> present.\n\nYeah, I could see why you'd want to do something like that. If a use\ncase pops up, that can surely be discussed.\n--\nMichael",
"msg_date": "Tue, 20 Feb 2024 08:21:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 02:28:04PM +0000, Bertrand Drouvot wrote:\n> +CREATE FUNCTION injection_points_wake()\n> \n> what about injection_points_wakeup() instead?\n\nSure.\n\n> +/* Shared state information for injection points. */\n> +typedef struct InjectionPointSharedState\n> +{\n> + /* protects accesses to wait_counts */\n> + slock_t lock;\n> +\n> + /* Counter advancing when injection_points_wake() is called */\n> + int wait_counts;\n> \n> If slock_t protects \"only\" one counter, then what about using pg_atomic_uint64\n> or pg_atomic_uint32 instead? And btw do we need wait_counts at all? (see comment \n> number 4)\n\nWe could, indeed, even if we use more than one counter. Still, I\nwould be tempted to keep it in case more data is added to this\nstructure as that would make for less stuff to do while being normally\nnon-critical. This sentence may sound pedantic, though..\n\n> + * SQL function for waking a condition variable\n> \n> waking up instead?\n\nOkay.\n\n> I'm wondering if this loop and wait_counts are needed, why not doing something\n> like (and completely get rid of wait_counts)?\n> \n> \"\n> ConditionVariablePrepareToSleep(&inj_state->wait_point);\n> ConditionVariableSleep(&inj_state->wait_point, injection_wait_event);\n> ConditionVariableCancelSleep();\n> \"\n> \n> It's true that the comment above ConditionVariableSleep() mentions that:\n\nPerhaps not, but it encourages good practices around the use of\ncondition variables, and we need to track all that in shared memory\nanyway. Ashutosh has argued in favor of the approach taken by the\npatch in the original thread when I've sent a version doing exactly\nwhat you are saying now to not track a state in shmem.\n--\nMichael",
"msg_date": "Tue, 20 Feb 2024 08:28:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
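For context, a hedged sketch of the wake-up side that pairs with the wait loop quoted earlier in the thread: the SQL-callable function bumps the shared counter under the spinlock and then broadcasts on the condition variable, which is why the waiting side re-checks the counter in a predicate loop rather than sleeping unconditionally. Names follow the snippets in this thread (with the suggested injection_points_wakeup() spelling) and may not match the committed code.

PG_FUNCTION_INFO_V1(injection_points_wakeup);
Datum
injection_points_wakeup(PG_FUNCTION_ARGS)
{
	/* inj_state is assumed to point at InjectionPointSharedState in a DSM */
	SpinLockAcquire(&inj_state->lock);
	inj_state->wait_counts++;
	SpinLockRelease(&inj_state->lock);

	/* Wake up anything sleeping on the shared condition variable. */
	ConditionVariableBroadcast(&inj_state->wait_point);
	PG_RETURN_VOID();
}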
{
"msg_contents": "\n\n> On 20 Feb 2024, at 02:21, Michael Paquier <[email protected]> wrote:\n> \n> On Mon, Feb 19, 2024 at 11:54:20AM +0300, Andrey M. Borodin wrote:\n>> 1. injection_points_wake() will wake all of waiters. But it's not\n>> suitable for complex tests. I think there must be a way to wake only\n>> specific waiter by injection point name.\n> \n> I don't disagree with that, but I don't have a strong argument for\n> implementing that until there is an explicit need for it in core. It\n> is also possible to plug in your own module, outside of core, if you\n> are looking for something more specific. The backend APIs allow that.\n\nIn [0] I want to create a test for edge case of reading recent multixact. External module does not allow to have a test within repository.\nIn that test I need to sync backends in 3 steps (backend 1 starts to wait in injection point, backend 2 starts to sleep in other point, backend 1 is released and observe 3rd injection point). \"wake them all\" implementation allows only 2-step synchronization.\nI will try to simplify test to 2-step, but it would be much easier to implement if injection points could be awaken independently.\n\n>> 2. Alexander Korotkov's stopevents could be used in isolation\n>> tests. This kind of tests is perfect for describing complex race\n>> conditions. (as a side note, I'd be happy if we could have\n>> primary\\standby in isolation tests too)\n> \n> This requires plugging is more into src/test/isolation/, with multiple\n> connection strings. This has been suggested in the past.\n\nI think standby isolation tests are just a sugar-on-top feature here.\nWrt injection points, I'd like to see a function to wait until some injection point is observed.\nWith this function at hand developer can implement race condition tests as an isolation test.\n\n>> 5. In many cases we need to have injection point under critical\n>> section. I propose to have a \"prepared injection point\". See [0] for\n>> example in v2-0003-Test-multixact-CV-sleep.patch\n>> + INJECTION_POINT_PREPARE(\"GetNewMultiXactId-done\");\n>> +\n>> START_CRIT_SECTION();\n>> \n>> + INJECTION_POINT_RUN_PREPARED();\n> \n> I don't see how that's different from a wait/wake logic? The only\n> thing you've changed is to stop a wait when a point is detached and\n> you want to make the stop conditional. Plugging in a condition\n> variable is more flexible than a hardcoded sleep in terms of wait,\n> while being more responsive.\n\nNo, \"prepared injection point\" is not about wait\\wake logic. It's about having injection point in critical section.\nNormal injection point will pstrdup(name) and fail. In [0] I need a test that waits after multixact generation before WAL-logging it. It's only possible in a critical section.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/[email protected]\n\n",
"msg_date": "Tue, 20 Feb 2024 17:32:38 +0300",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Tue, Feb 20, 2024 at 08:28:28AM +0900, Michael Paquier wrote:\n> On Mon, Feb 19, 2024 at 02:28:04PM +0000, Bertrand Drouvot wrote:\n> > If slock_t protects \"only\" one counter, then what about using pg_atomic_uint64\n> > or pg_atomic_uint32 instead? And btw do we need wait_counts at all? (see comment \n> > number 4)\n> \n> We could, indeed, even if we use more than one counter. Still, I\n> would be tempted to keep it in case more data is added to this\n> structure as that would make for less stuff to do while being normally\n> non-critical. This sentence may sound pedantic, though..\n\nOkay, makes sense to keep this as it is as a \"template\" in case more stuff is\nadded. \n\n+ /* Counter advancing when injection_points_wake() is called */\n+ int wait_counts;\n\nIn that case what about using an unsigned instead? (Nit)\n\n> > I'm wondering if this loop and wait_counts are needed, why not doing something\n> > like (and completely get rid of wait_counts)?\n> > \n> > \"\n> > ConditionVariablePrepareToSleep(&inj_state->wait_point);\n> > ConditionVariableSleep(&inj_state->wait_point, injection_wait_event);\n> > ConditionVariableCancelSleep();\n> > \"\n> > \n> > It's true that the comment above ConditionVariableSleep() mentions that:\n> \n> Perhaps not, but it encourages good practices around the use of\n> condition variables, and we need to track all that in shared memory\n> anyway. Ashutosh has argued in favor of the approach taken by the\n> patch in the original thread when I've sent a version doing exactly\n> what you are saying now to not track a state in shmem.\n\nOh okay I missed this previous discussion, let's keep it as it is then.\n\nNew comments:\n\n1 ===\n\n+void\n+injection_wait(const char *name)\n\nLooks like name is not used in the function. I guess the reason it is a parameter\nis because that's the way the callback function is being called in\nInjectionPointRun()?\n\n2 ===\n\n+PG_FUNCTION_INFO_V1(injection_points_wake);\n+Datum\n+injection_points_wake(PG_FUNCTION_ARGS)\n+{\n\nI think that This function will wake up all the \"wait\" injection points.\nWould that make sense to implement some filtering based on the name? That could\nbe useful for tests that would need multiple wait injection points and that want\nto wake them up \"sequentially\".\n\nWe could think about it if there is such a need in the future though.\n\n3 ===\n\n+# Register a injection point on the standby so as the follow-up\n\ntypo: \"an injection\"?\n\n4 ===\n\n+for (my $i = 0; $i < 3000; $i++)\n+{\n\nis 3000 due to?:\n\n+checkpoint_timeout = 30s\n\nIf so, would that make sense to reduce the value for both?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 15:55:08 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 03:55:08PM +0000, Bertrand Drouvot wrote:\n> Okay, makes sense to keep this as it is as a \"template\" in case more stuff is\n> added. \n> \n> + /* Counter advancing when injection_points_wake() is called */\n> + int wait_counts;\n> \n> In that case what about using an unsigned instead? (Nit)\n\nuint32. Sure.\n\n> 1 ===\n> \n> +void\n> +injection_wait(const char *name)\n> \n> Looks like name is not used in the function. I guess the reason it is a parameter\n> is because that's the way the callback function is being called in\n> InjectionPointRun()?\n\nRight. The callback has to define this argument.\n\n> 2 ===\n> \n> +PG_FUNCTION_INFO_V1(injection_points_wake);\n> +Datum\n> +injection_points_wake(PG_FUNCTION_ARGS)\n> +{\n> \n> I think that This function will wake up all the \"wait\" injection points.\n> Would that make sense to implement some filtering based on the name? That could\n> be useful for tests that would need multiple wait injection points and that want\n> to wake them up \"sequentially\".\n> \n> We could think about it if there is such a need in the future though.\n\nWell, both you and Andrey are asking for it now, so let's do it. The\nimplementation is simple:\n- Store in InjectionPointSharedState an array of wait_counts and an\narray of names. There is only one condition variable.\n- When a point wants to wait, it takes the spinlock and looks within\nthe array of names until it finds a free slot, adds its name into the\narray to reserve a wait counter at the same position, releases the\nspinlock. Then it loops on the condition variable for an update of\nthe counter it has reserved. It is possible to make something more\nefficient, but at a small size it would not really matter.\n- The wakeup takes a point name in argument, acquires the spinlock,\nand checks if it can find the point into the array, pinpoints the\nlocation of the counter to update and updates it. Then it broadcasts\nthe change.\n- The wait loop checks its counter, leaves its loop, cancels the\nsleep, takes the spinlock to unregister from the array, and leaves.\n\nI would just hardcode the number of points that can wait, say 5 of\nthem tracked in shmem? Does that look like what you are looking at?\n\n> +# Register a injection point on the standby so as the follow-up\n> \n> typo: \"an injection\"?\n\nOops. Fixed locally.\n\n> +for (my $i = 0; $i < 3000; $i++)\n> +{\n> \n> is 3000 due to?:\n> \n> +checkpoint_timeout = 30s\n> \n> If so, would that make sense to reduce the value for both?\n\nThat had better be based on PostgreSQL::Test::Utils::timeout_default,\nactually, as in something like:\nforeach my $i (0 .. 10 * $PostgreSQL::Test::Utils::timeout_default)\n--\nMichael",
"msg_date": "Wed, 21 Feb 2024 07:08:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 05:32:38PM +0300, Andrey M. Borodin wrote:\n> I will try to simplify test to 2-step, but it would be much easier\n> to implement if injection points could be awaken independently. \n\nI don't mind implementing something that wakes only a given point\nname, that's just more state data to track in shmem for the module.\n\n> No, \"prepared injection point\" is not about wait\\wake logic. It's\n> about having injection point in critical section. \n> Normal injection point will pstrdup(name) and fail. In [0] I need a\n> test that waits after multixact generation before WAL-logging\n> it. It's only possible in a critical section. \n\nIt took me some time to understand what you mean here. You are\nreferring to the allocations done in load_external_function() ->\nexpand_dynamic_library_name() -> substitute_libpath_macro(), which\nis something that has to happen the first time a callback is loaded\ninto the local cache of a process. So what you want to achieve is to\npreload the callback in its cache in a first step without running it,\nthen be able to run it, so your Prepare[d] layer is just a way to\nsplit InjectionPointRun() into two. Fancy use case. You could\ndisable the critical section around the INJECTION_POINT() as one\nsolution, though having something to pre-warm the local cache for a\npoint name and avoid the allocations done in the external load would\nbe a second way to achieve that.\n\n\"Prepare\" is not the best term I would have used, perhaps just have\none INJECTION_POINT_PRELOAD() macro to warm up the cache outside the\ncritical section with a wrapper routine? You could then leave\nInjectionPointRun() as it is now. Having a check on CritSectionCount\nin the injection point internals to disable temporarily a critical\nsection would not feel right as this is used as a safety check\nanywhere else.\n--\nMichael",
"msg_date": "Wed, 21 Feb 2024 08:22:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
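To make the preload idea above concrete, here is a minimal usage sketch. The INJECTION_POINT_PRELOAD() macro is only the name floated in the message above and is assumed here, as is the wrapper function; the point name "GetNewMultiXactId-done" comes from the patch quoted earlier in the thread, and the pattern only matters when the server is built with injection points enabled.

#include "postgres.h"
#include "miscadmin.h"				/* START_CRIT_SECTION() */
#include "utils/injection_point.h"	/* INJECTION_POINT() */

/* Hypothetical caller, for illustration only. */
static void
generate_new_multixact_with_injection(void)
{
	/*
	 * Outside the critical section: warming the backend-local callback
	 * cache may allocate memory (load_external_function() and friends),
	 * so it has to happen here.  Assumed macro name, per the suggestion
	 * above.
	 */
	INJECTION_POINT_PRELOAD("GetNewMultiXactId-done");

	START_CRIT_SECTION();

	/*
	 * Inside the critical section: the callback is expected to be cached
	 * already, so running the point should not need to allocate.
	 */
	INJECTION_POINT("GetNewMultiXactId-done");

	END_CRIT_SECTION();
}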
{
"msg_contents": "On Wed, Feb 21, 2024 at 07:08:03AM +0900, Michael Paquier wrote:\n> Well, both you and Andrey are asking for it now, so let's do it. The\n> implementation is simple:\n> - Store in InjectionPointSharedState an array of wait_counts and an\n> array of names. There is only one condition variable.\n> - When a point wants to wait, it takes the spinlock and looks within\n> the array of names until it finds a free slot, adds its name into the\n> array to reserve a wait counter at the same position, releases the\n> spinlock. Then it loops on the condition variable for an update of\n> the counter it has reserved. It is possible to make something more\n> efficient, but at a small size it would not really matter.\n> - The wakeup takes a point name in argument, acquires the spinlock,\n> and checks if it can find the point into the array, pinpoints the\n> location of the counter to update and updates it. Then it broadcasts\n> the change.\n> - The wait loop checks its counter, leaves its loop, cancels the\n> sleep, takes the spinlock to unregister from the array, and leaves.\n> \n> I would just hardcode the number of points that can wait, say 5 of\n> them tracked in shmem? Does that look like what you are looking at?\n\nI was looking at that, and it proves to work OK, so you can do stuff\nlike waits and wakeups for multiple processes in a controlled manner.\nThe attached patch authorizes up to 32 waiters. I have switched\nthings so as the information reported in pg_stat_activity is the name\nof the injection point itself.\n\nComments and ideas are welcome.\n--\nMichael",
"msg_date": "Wed, 21 Feb 2024 16:46:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
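For reference, the wait side described in the last two messages boils down to something like the sketch below. The struct layout, the slot-bookkeeping helpers, and the wait-event variable are assumptions reconstructed from the prose and the patch fragments quoted in this thread (only INJ_MAX_WAIT, injection_wait(), and the single condition variable protected by a spinlock are named there), so treat it as an illustration rather than the attached patch.

#include "postgres.h"
#include "storage/condition_variable.h"
#include "storage/spin.h"

#define INJ_MAX_WAIT 32		/* up to 32 waiters, as mentioned above */

/* Assumed layout of the shared state described above. */
typedef struct InjectionPointSharedState
{
	slock_t		lock;								/* protects the arrays below */
	uint32		wait_counts[INJ_MAX_WAIT];			/* advanced on wakeup */
	char		name[INJ_MAX_WAIT][NAMEDATALEN];	/* reserved point names */
	ConditionVariable wait_point;					/* single CV for all waiters */
} InjectionPointSharedState;

static InjectionPointSharedState *inj_state = NULL;	/* set up at shmem init */
static uint32 injection_wait_event = 0;	/* custom wait event, assumed registered */

/* Assumed slot bookkeeping helpers, not shown here. */
static int	injection_point_reserve_slot(const char *name);
static void injection_point_release_slot(int slot);

/* Callback attached to an injection point: block until woken up by name. */
void
injection_wait(const char *name)
{
	uint32		old_count;
	int			slot;

	/* Reserve a free slot for this point name under the spinlock. */
	SpinLockAcquire(&inj_state->lock);
	slot = injection_point_reserve_slot(name);
	old_count = inj_state->wait_counts[slot];
	SpinLockRelease(&inj_state->lock);

	/* Sleep until injection_points_wakeup(name) advances our counter. */
	ConditionVariablePrepareToSleep(&inj_state->wait_point);
	for (;;)
	{
		uint32		new_count;

		SpinLockAcquire(&inj_state->lock);
		new_count = inj_state->wait_counts[slot];
		SpinLockRelease(&inj_state->lock);

		if (new_count > old_count)
			break;
		ConditionVariableSleep(&inj_state->wait_point, injection_wait_event);
	}
	ConditionVariableCancelSleep();

	/* Remove this injection wait name from the waiting list. */
	SpinLockAcquire(&inj_state->lock);
	injection_point_release_slot(slot);
	SpinLockRelease(&inj_state->lock);
}

The wakeup side would then take the spinlock, find the slot matching the given point name, bump its counter, and broadcast on wait_point, as described in the messages above.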
{
"msg_contents": "Hi,\n\nOn Wed, Feb 21, 2024 at 07:08:03AM +0900, Michael Paquier wrote:\n> On Tue, Feb 20, 2024 at 03:55:08PM +0000, Bertrand Drouvot wrote:\n> > +PG_FUNCTION_INFO_V1(injection_points_wake);\n> > +Datum\n> > +injection_points_wake(PG_FUNCTION_ARGS)\n> > +{\n> > \n> > I think that This function will wake up all the \"wait\" injection points.\n> > Would that make sense to implement some filtering based on the name? That could\n> > be useful for tests that would need multiple wait injection points and that want\n> > to wake them up \"sequentially\".\n> > \n> > We could think about it if there is such a need in the future though.\n> \n> Well, both you and Andrey are asking for it now, so let's do it.\n\nThanks!\n\n> The implementation is simple:\n> - Store in InjectionPointSharedState an array of wait_counts and an\n> array of names. There is only one condition variable.\n> - When a point wants to wait, it takes the spinlock and looks within\n> the array of names until it finds a free slot, adds its name into the\n> array to reserve a wait counter at the same position, releases the\n> spinlock. Then it loops on the condition variable for an update of\n> the counter it has reserved. It is possible to make something more\n> efficient, but at a small size it would not really matter.\n> - The wakeup takes a point name in argument, acquires the spinlock,\n> and checks if it can find the point into the array, pinpoints the\n> location of the counter to update and updates it. Then it broadcasts\n> the change.\n> - The wait loop checks its counter, leaves its loop, cancels the\n> sleep, takes the spinlock to unregister from the array, and leaves.\n>\n\nI think that makes sense and now the \"counter\" makes more sense to me (thanks to\nit we don't need multiple CV).\n \n> I would just hardcode the number of points that can wait, say 5 of\n> them tracked in shmem? Does that look like what you are looking at?\n\nI think so yes and more than 5 points would look like a complicated test IMHO.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Feb 2024 08:07:47 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Wed, Feb 21, 2024 at 04:46:00PM +0900, Michael Paquier wrote:\n> On Wed, Feb 21, 2024 at 07:08:03AM +0900, Michael Paquier wrote:\n> > Well, both you and Andrey are asking for it now, so let's do it. The\n> > implementation is simple:\n> > - Store in InjectionPointSharedState an array of wait_counts and an\n> > array of names. There is only one condition variable.\n> > - When a point wants to wait, it takes the spinlock and looks within\n> > the array of names until it finds a free slot, adds its name into the\n> > array to reserve a wait counter at the same position, releases the\n> > spinlock. Then it loops on the condition variable for an update of\n> > the counter it has reserved. It is possible to make something more\n> > efficient, but at a small size it would not really matter.\n> > - The wakeup takes a point name in argument, acquires the spinlock,\n> > and checks if it can find the point into the array, pinpoints the\n> > location of the counter to update and updates it. Then it broadcasts\n> > the change.\n> > - The wait loop checks its counter, leaves its loop, cancels the\n> > sleep, takes the spinlock to unregister from the array, and leaves.\n> > \n> > I would just hardcode the number of points that can wait, say 5 of\n> > them tracked in shmem? Does that look like what you are looking at?\n> \n> I was looking at that, and it proves to work OK, so you can do stuff\n> like waits and wakeups for multiple processes in a controlled manner.\n> The attached patch authorizes up to 32 waiters. I have switched\n> things so as the information reported in pg_stat_activity is the name\n> of the injection point itself.\n\nThanks!\n\nI think the approach is fine and the hardcoded value is \"large\" enough (it would\nbe surprising, at least to me, to write a test that would need more than 32\nwaiters).\n\nA few comments:\n\n1 ===\n\n+-- Wakes a condition variable\n\nI think \"up\" is missing at several places in the patch where \"wake\" is used.\nI could be wrong as non native english speaker though.\n\n2 ===\n\n+ /* Counters advancing when injection_points_wakeup() is called */\n+ int wait_counts[INJ_MAX_WAIT];\n\nuint32? (here and other places where counter is manipulated).\n\n3 ===\n\n+ /* Remove us from the waiting list */\n\n\"Remove this injection wait name from the waiting list\" instead?\n\n4 ===\n\n+ * SQL function for waking a condition variable.\n\ns/a condition variable/an injection wait point/ ?\n\n5 ===\n\n+PG_FUNCTION_INFO_V1(injection_points_wakeup);\n+Datum\n+injection_points_wakeup(PG_FUNCTION_ARGS)\n\nEmpty line missing before \"Datum\"?\n\n6 ===\n\nAlso maybe some comments are missing above injection_point_init_state(), \ninjection_init_shmem() but it's more a Nit.\n\n7 ===\n\nWhile at it, should we add a second injection wait point in\nt/041_invalid_checkpoint_after_promote.pl to check that they are wake up\nindividually?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Feb 2024 11:50:21 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 11:50:21AM +0000, Bertrand Drouvot wrote:\n> I think the approach is fine and the hardcoded value is \"large\" enough (it would\n> be surprising, at least to me, to write a test that would need more than 32\n> waiters).\n\nThis could use less. I've never used more than 3 of them in a single\ntest, and that was already very complex to follow.\n\n\n> A few comments:\n> \n> 1 ===\n> I think \"up\" is missing at several places in the patch where \"wake\" is used.\n> I could be wrong as non native english speaker though.\n\nPatched up three places to be more consisten.\n\n> 2 ===\n> \n> + /* Counters advancing when injection_points_wakeup() is called */\n> + int wait_counts[INJ_MAX_WAIT];\n> \n> uint32? (here and other places where counter is manipulated).\n\nOkay, why not.\n\n> \"Remove this injection wait name from the waiting list\" instead?\n> s/a condition variable/an injection wait point/ ?\n\nOkay.\n\n> 5 ===\n> \n> +PG_FUNCTION_INFO_V1(injection_points_wakeup);\n> +Datum\n> +injection_points_wakeup(PG_FUNCTION_ARGS)\n> \n> Empty line missing before \"Datum\"?\n\nI've used the same style as anywhere else.\n\n> Also maybe some comments are missing above injection_point_init_state(), \n> injection_init_shmem() but it's more a Nit.\n\nSure.\n\n> While at it, should we add a second injection wait point in\n> t/041_invalid_checkpoint_after_promote.pl to check that they are wake up\n> individually?\n\nI'm not sure that's a good idea for the sake of this test, TBH, as\nthat would provide coverage outside the original scope of the\nrestartpoint/promote check.\n\nI have also looked at if it would be possible to get an isolation test\nout of that, but the arbitrary wait does not make that possible with\nthe existing isolation APIs, see GetSafeSnapshotBlockingPids() used in\npg_isolation_test_session_is_blocked(). isolation/README seems to be\na bit off here, actually, mentioning pg_locks.. We could use some\ntricks with transactions or locks internally, but I'm not really\ntempted to make the wait callback more complicated for the sake of\nmore coverage as I'd rather keep it generic for anybody who wants to\ncontrol the order of events across processes.\n\nAttaching a v3 for reference with everything that has been mentioned\nup to now.\n--\nMichael",
"msg_date": "Thu, 22 Feb 2024 12:02:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Thu, Feb 22, 2024 at 12:02:01PM +0900, Michael Paquier wrote:\n> On Wed, Feb 21, 2024 at 11:50:21AM +0000, Bertrand Drouvot wrote:\n> > A few comments:\n> > \n> > 1 ===\n> > I think \"up\" is missing at several places in the patch where \"wake\" is used.\n> > I could be wrong as non native english speaker though.\n> \n> Patched up three places to be more consisten.\n\nThanks!\n\n> > 5 ===\n> > \n> > +PG_FUNCTION_INFO_V1(injection_points_wakeup);\n> > +Datum\n> > +injection_points_wakeup(PG_FUNCTION_ARGS)\n> > \n> > Empty line missing before \"Datum\"?\n> \n> I've used the same style as anywhere else.\n\nhumm, looking at src/test/regress/regress.c for example, I can see an empty\nline between PG_FUNCTION_INFO_V1 and Datum for all the \"PG_FUNCTION_INFO_V1\"\nones.\n\n> > While at it, should we add a second injection wait point in\n> > t/041_invalid_checkpoint_after_promote.pl to check that they are wake up\n> > individually?\n> \n> I'm not sure that's a good idea for the sake of this test, TBH, as\n> that would provide coverage outside the original scope of the\n> restartpoint/promote check.\n\nYeah, that was my doubt too.\n\n> I have also looked at if it would be possible to get an isolation test\n> out of that, but the arbitrary wait does not make that possible with\n> the existing isolation APIs, see GetSafeSnapshotBlockingPids() used in\n> pg_isolation_test_session_is_blocked(). isolation/README seems to be\n> a bit off here, actually, mentioning pg_locks.. We could use some\n> tricks with transactions or locks internally, but I'm not really\n> tempted to make the wait callback more complicated for the sake of\n> more coverage as I'd rather keep it generic for anybody who wants to\n> control the order of events across processes.\n\nMakes sense to me, let's keep it as it is.\n> \n> Attaching a v3 for reference with everything that has been mentioned\n> up to now.\n\nThanks!\n\nSorry, I missed those ones previously:\n\n1 ===\n\n+/* Maximum number of wait usable in injection points at once */\n\ns/Maximum number of wait/Maximum number of waits/ ?\n\n2 ===\n\n+# Check the logs that the restart point has started on standby. This is\n+# optional, but let's be sure.\n+my $log = slurp_file($node_standby->logfile, $logstart);\n+my $checkpoint_start = 0;\n+if ($log =~ m/restartpoint starting: immediate wait/)\n+{\n+ $checkpoint_start = 1;\n+}\n+is($checkpoint_start, 1, 'restartpoint has started');\n\nwhat about?\n\nok( $node_standby->log_contains( \"restartpoint starting: immediate wait\", $logstart),\n \"restartpoint has started\");\n\nExcept for the above, v3 looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 22 Feb 2024 08:00:24 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 08:00:24AM +0000, Bertrand Drouvot wrote:\n> +/* Maximum number of wait usable in injection points at once */\n> \n> s/Maximum number of wait/Maximum number of waits/ ?\n\nThanks. I've edited a few more places while scanning the whole.\n\n> \n> 2 ===\n> \n> +# Check the logs that the restart point has started on standby. This is\n> +# optional, but let's be sure.\n> +my $log = slurp_file($node_standby->logfile, $logstart);\n> +my $checkpoint_start = 0;\n> +if ($log =~ m/restartpoint starting: immediate wait/)\n> +{\n> + $checkpoint_start = 1;\n> +}\n> +is($checkpoint_start, 1, 'restartpoint has started');\n> \n> what about?\n> \n> ok( $node_standby->log_contains( \"restartpoint starting: immediate wait\", $logstart),\n> \"restartpoint has started\");\n\nAnd I'm behind the commit that introduced it (392ea0c78fdb). It is\npossible to remove the dependency to slurp_file() entirely by\nswitching the second location checking the logs for the checkpoint\ncompletion.\n\n> Except for the above, v3 looks good to me.\n\nThanks. I'm looking at applying that at the beginning of next week if\nthere are no more comments, to get something by the feature freeze.\nWe could be more flexible for all that as that's related to testing,\nbut let's be in the clear.\n\nI've also renamed the test to 041_checkpoint_at_promote.pl, as now\nthat the original is fixed, the checkpoint is not invalid. That's\ncleaner this way.\n--\nMichael",
"msg_date": "Mon, 26 Feb 2024 12:57:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Mon, Feb 26, 2024 at 12:57:09PM +0900, Michael Paquier wrote:\n> On Thu, Feb 22, 2024 at 08:00:24AM +0000, Bertrand Drouvot wrote:\n> > +/* Maximum number of wait usable in injection points at once */\n> > \n> > s/Maximum number of wait/Maximum number of waits/ ?\n> \n> Thanks. I've edited a few more places while scanning the whole.\n\nThanks!\n\n> > \n> > 2 ===\n> > \n> > +# Check the logs that the restart point has started on standby. This is\n> > +# optional, but let's be sure.\n> > +my $log = slurp_file($node_standby->logfile, $logstart);\n> > +my $checkpoint_start = 0;\n> > +if ($log =~ m/restartpoint starting: immediate wait/)\n> > +{\n> > + $checkpoint_start = 1;\n> > +}\n> > +is($checkpoint_start, 1, 'restartpoint has started');\n> > \n> > what about?\n> > \n> > ok( $node_standby->log_contains( \"restartpoint starting: immediate wait\", $logstart),\n> > \"restartpoint has started\");\n> \n> And I'm behind the commit that introduced it (392ea0c78fdb).\n\n;-)\n\n> It is\n> possible to remove the dependency to slurp_file() entirely by\n> switching the second location checking the logs for the checkpoint\n> completion.\n\nYeah right.\n\n> > Except for the above, v3 looks good to me.\n> \n> Thanks. I'm looking at applying that at the beginning of next week if\n> there are no more comments, to get something by the feature freeze.\n> We could be more flexible for all that as that's related to testing,\n> but let's be in the clear.\n\nSounds reasonable to me.\n\n> I've also renamed the test to 041_checkpoint_at_promote.pl, as now\n> that the original is fixed, the checkpoint is not invalid. That's\n> cleaner this way.\n\nAgree.\n\nI'll try to submit the POC patch in [1] before beginning of next week now that\nwe're \"just waiting\" if there is more comments on this current thread.\n\n\n[1]: https://www.postgresql.org/message-id/ZdTNafYSxwnKNIhj%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Feb 2024 08:24:04 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "\n\n> On 26 Feb 2024, at 08:57, Michael Paquier <[email protected]> wrote:\n> \n> <v4-0001-injection_points-Add-routines-to-wait-and-wake-pr.patch>\n\nWould it be possible to have a helper function to check this:\n\n+ok( $node_standby->poll_query_until(\n+\t\t'postgres',\n+\t\tqq[SELECT count(*) FROM pg_stat_activity\n+ WHERE backend_type = 'checkpointer' AND wait_event = 'CreateRestartPoint' ;],\n+\t\t'1'),\n+\t'checkpointer is waiting in restart point'\n+) or die \"Timed out while waiting for checkpointer to run restart point”;\n\nSo that we could do something like\n\nok(node_standby->await_injection_point(“CreateRestartPoint”,”checkpointer\"));\n\nIMO, this could make many tests cleaner.\nOr, perhaps, it’s a functionality for a future development?\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 26 Feb 2024 14:10:49 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 02:10:49PM +0500, Andrey M. Borodin wrote:\n> So that we could do something like\n> \n> ok(node_standby->await_injection_point(“CreateRestartPoint”,”checkpointer\"));\n\nIt would be more flexible with a full string to describe the test\nrather than a process name in the second argument.\n\n> IMO, this could make many tests cleaner.\n> Or, perhaps, it’s a functionality for a future development?\n\nThis could just happen as separate refactoring, I guess. But I'd wait\nto see if more tests requiring scans to pg_stat_activity pop up. For\nexample, the test just posted here does not rely on that:\nhttps://www.postgresql.org/message-id/[email protected]\n--\nMichael",
"msg_date": "Tue, 27 Feb 2024 08:29:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "\n\n> On 27 Feb 2024, at 04:29, Michael Paquier <[email protected]> wrote:\n> \n> For\n> example, the test just posted here does not rely on that:\n> https://www.postgresql.org/message-id/[email protected]\n\nInstead, that test is scanning logs\n\n+ # Note: $node_primary->wait_for_replay_catchup($node_standby) would be\n+ # hanging here due to the injection point, so check the log instead.+\n+ my $terminated = 0;\n+ for (my $i = 0; $i < 10 * $PostgreSQL::Test::Utils::timeout_default; $i++)\n+ {\n+ if ($node_standby->log_contains(\n+ 'terminating process .* to release replication slot \\\"injection_activeslot\\\"', $logstart))\n+ {\n+ $terminated = 1;\n+ last;\n+ }\n+ usleep(100_000);\n+ }\n\nBut, AFAICS, the purpose is the same: wait until event happened.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 27 Feb 2024 11:00:10 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Tue, Feb 27, 2024 at 11:00:10AM +0500, Andrey M. Borodin wrote:\n> \n> \n> > On 27 Feb 2024, at 04:29, Michael Paquier <[email protected]> wrote:\n> > \n> > For\n> > example, the test just posted here does not rely on that:\n> > https://www.postgresql.org/message-id/[email protected]\n> \n> Instead, that test is scanning logs\n> \n> + # Note: $node_primary->wait_for_replay_catchup($node_standby) would be\n> + # hanging here due to the injection point, so check the log instead.+\n> + my $terminated = 0;\n> + for (my $i = 0; $i < 10 * $PostgreSQL::Test::Utils::timeout_default; $i++)\n> + {\n> + if ($node_standby->log_contains(\n> + 'terminating process .* to release replication slot \\\"injection_activeslot\\\"', $logstart))\n> + {\n> + $terminated = 1;\n> + last;\n> + }\n> + usleep(100_000);\n> + }\n> \n> But, AFAICS, the purpose is the same: wait until event happened.\n\nI think it's easier to understand the tests (I mean what the purpose of the\ninjection points are) if we don't use an helper function. While the helper \nfunction would make the test easier to read / cleaner, I think it may make them\nmore difficult to understand as 'await_injection_point' would probably be too\ngeneric.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Feb 2024 11:08:15 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\n> On 27 Feb 2024, at 16:08, Bertrand Drouvot <[email protected]> wrote:\n> \n> On Tue, Feb 27, 2024 at 11:00:10AM +0500, Andrey M. Borodin wrote:\n>> \n>> But, AFAICS, the purpose is the same: wait until event happened.\n> \n> I think it's easier to understand the tests (I mean what the purpose of the\n> injection points are) if we don't use an helper function. While the helper \n> function would make the test easier to read / cleaner, I think it may make them\n> more difficult to understand as 'await_injection_point' would probably be too\n> generic.\n\nFor the record: I’m fine if there is no such function.\nThere will be at least one implementation of this function in every single test with waiting injection points.\nThat’s the case where we might want something generic.\nWhat the specific there might be? The test can check that \n - conditions are logged\n - injection point reached in specific backend (e.g. checkpointer)\netc\n\nI doubt that this specific checks worth cleanness of the test. And sacrificing test readability in favour of teaching reader what injection points are sounds strange.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 27 Feb 2024 16:49:03 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Tue, Feb 27, 2024 at 04:49:03PM +0500, Andrey M. Borodin wrote:\n> Hi,\n> \n> > On 27 Feb 2024, at 16:08, Bertrand Drouvot <[email protected]> wrote:\n> > \n> > On Tue, Feb 27, 2024 at 11:00:10AM +0500, Andrey M. Borodin wrote:\n> >> \n> >> But, AFAICS, the purpose is the same: wait until event happened.\n> > \n> > I think it's easier to understand the tests (I mean what the purpose of the\n> > injection points are) if we don't use an helper function. While the helper \n> > function would make the test easier to read / cleaner, I think it may make them\n> > more difficult to understand as 'await_injection_point' would probably be too\n> > generic.\n> \n> For the record: I’m fine if there is no such function.\n> There will be at least one implementation of this function in every single test with waiting injection points.\n> That’s the case where we might want something generic.\n> What the specific there might be? The test can check that \n> - conditions are logged\n> - injection point reached in specific backend (e.g. checkpointer)\n> etc\n> \n> I doubt that this specific checks worth cleanness of the test. And sacrificing test readability in favour of teaching reader what injection points are sounds strange.\n\nI'm giving a second thought and it does not have to be exclusive, for example in\nthis specific test we could:\n\n1) check that the injection point is reached with the helper (querying pg_stat_activity\nlooks good to me) \nAnd\n2) check in the log \n\nSo even if two checks might sound \"too much\" they both make sense to give 1) better\nreadability and 2) better understanding of the test.\n\nSo, I'm ok with the new helper too.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Feb 2024 13:39:59 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 01:39:59PM +0000, Bertrand Drouvot wrote:\n> So, I'm ok with the new helper too.\n\nIf both of you feel strongly about that, I'm OK with introducing\nsomething like that. Now, a routine should be only about waiting on\npg_stat_activity to report something, as for the logs we already have\nlog_contains(). And a test may want one check, but unlikely both.\nEven if both are wanted, it's just a matter of using log_contains()\nand the new routine that does pg_stat_activity lookups.\n--\nMichael",
"msg_date": "Wed, 28 Feb 2024 13:26:46 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "\n\n> On 28 Feb 2024, at 09:26, Michael Paquier <[email protected]> wrote:\n> \n> Now, a routine should be only about waiting on\n> pg_stat_activity to report something\n\nBTW, if we had an SQL function such as injection_point_await(name), we could use it in our isolation tests as well as our TAP tests :)\nHowever, this might well be a future improvement, if we had a generic “await\" Perl function - we wouldn’t need to re-use new SQL code in so many places.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 28 Feb 2024 10:27:52 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Wed, Feb 28, 2024 at 01:26:46PM +0900, Michael Paquier wrote:\n> On Tue, Feb 27, 2024 at 01:39:59PM +0000, Bertrand Drouvot wrote:\n> > So, I'm ok with the new helper too.\n> \n> If both of you feel strongly about that, I'm OK with introducing\n> something like that.\n\nThanks!\n\n> Now, a routine should be only about waiting on\n> pg_stat_activity to report something, as for the logs we already have\n> log_contains().\n\nAgree.\n\n> And a test may want one check, but unlikely both.\n> Even if both are wanted, it's just a matter of using log_contains()\n> and the new routine that does pg_stat_activity lookups.\n\nYeah, that's also my point of view.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Feb 2024 06:20:41 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 10:27:52AM +0500, Andrey M. Borodin wrote:\n> BTW, if we had an SQL function such as injection_point_await(name),\n> we could use it in our isolation tests as well as our TAP tests :)\n\nI am not quite sure to follow here. The isolation test facility now\nrelies on the in-core function pg_isolation_test_session_is_blocked()\nto check the state of backends getting blocked, and that's outside of\nthe scope of the module injection_points.\n--\nMichael",
"msg_date": "Thu, 29 Feb 2024 14:54:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 06:20:41AM +0000, Bertrand Drouvot wrote:\n> On Wed, Feb 28, 2024 at 01:26:46PM +0900, Michael Paquier wrote:\n>> On Tue, Feb 27, 2024 at 01:39:59PM +0000, Bertrand Drouvot wrote:\n>> > So, I'm ok with the new helper too.\n>> \n>> If both of you feel strongly about that, I'm OK with introducing\n>> something like that.\n> \n> Thanks!\n\n(Cough. As in, \"feel free to send a patch\" on top of what's already\nproposed if any of you feel that's better at the end.)\n\n;)\n--\nMichael",
"msg_date": "Thu, 29 Feb 2024 14:56:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Thu, Feb 29, 2024 at 02:56:23PM +0900, Michael Paquier wrote:\n> On Wed, Feb 28, 2024 at 06:20:41AM +0000, Bertrand Drouvot wrote:\n> > On Wed, Feb 28, 2024 at 01:26:46PM +0900, Michael Paquier wrote:\n> >> On Tue, Feb 27, 2024 at 01:39:59PM +0000, Bertrand Drouvot wrote:\n> >> > So, I'm ok with the new helper too.\n> >> \n> >> If both of you feel strongly about that, I'm OK with introducing\n> >> something like that.\n> > \n> > Thanks!\n> \n> (Cough. As in, \"feel free to send a patch\" on top of what's already\n> proposed if any of you feel that's better at the end.)\n> \n> ;)\n\nokay ;-)\n\nSomething like the attached?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 29 Feb 2024 08:35:45 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "\n\n> On 29 Feb 2024, at 13:35, Bertrand Drouvot <[email protected]> wrote:\n> \n> Something like the attached?\n\nThere's extraneous print \"done\\n\";\nAlso can we have optional backend type :)\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 29 Feb 2024 14:34:35 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Thu, Feb 29, 2024 at 02:34:35PM +0500, Andrey M. Borodin wrote:\n> \n> \n> > On 29 Feb 2024, at 13:35, Bertrand Drouvot <[email protected]> wrote:\n> > \n> > Something like the attached?\n> \n> There's extraneous print \"done\\n\";\n\ndoh! bad copy/paste ;-) \n\n> Also can we have optional backend type :)\n\ndone in v2 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 29 Feb 2024 10:20:13 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "\n\n> On 29 Feb 2024, at 15:20, Bertrand Drouvot <[email protected]> wrote:\n> \n> done in v2 attached.\n\nWorks fine for me. Thanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 29 Feb 2024 18:19:58 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 06:19:58PM +0500, Andrey M. Borodin wrote:\n> Works fine for me. Thanks!\n\n+ \"timed out waiting for the backend type to wait for the injection point name\";\n\nThe log should provide some context about the caller failing, meaning\nthat the backend type and the injection point name should be mentioned\nin these logs to help in debugging issues.\n--\nMichael",
"msg_date": "Fri, 1 Mar 2024 11:02:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Fri, Mar 01, 2024 at 11:02:01AM +0900, Michael Paquier wrote:\n> On Thu, Feb 29, 2024 at 06:19:58PM +0500, Andrey M. Borodin wrote:\n> > Works fine for me. Thanks!\n> \n> + \"timed out waiting for the backend type to wait for the injection point name\";\n> \n> The log should provide some context about the caller failing, meaning\n> that the backend type and the injection point name should be mentioned\n> in these logs to help in debugging issues.\n\nYeah, done in v3 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 1 Mar 2024 06:52:45 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 08:24:04AM +0000, Bertrand Drouvot wrote:\n> I'll try to submit the POC patch in [1] before beginning of next week now that\n> we're \"just waiting\" if there is more comments on this current thread.\n\nI'll look at what you have there in more details.\n\n> [1]: https://www.postgresql.org/message-id/ZdTNafYSxwnKNIhj%40ip-10-97-1-34.eu-west-3.compute.internal\n\nThe main routines have been now applied as 37b369dc67bc, with the test\nin 6782709df81f. I have used the same naming policy as 6a1ea02c491d\nfor consistency, naming the injection point create-restart-point.\n--\nMichael",
"msg_date": "Mon, 4 Mar 2024 09:55:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Fri, Mar 01, 2024 at 06:52:45AM +0000, Bertrand Drouvot wrote:\n> +\tif (defined($backend_type))\n> +\t{\n> +\t\t$backend_type = qq('$backend_type');\n> +\t\t$die_message = \"the backend type $backend_type\";\n> +\t}\n> +\telse\n> +\t{\n> +\t\t$backend_type = 'backend_type';\n> +\t\t$die_message = 'one backend';\n> +\n> +\t}\n> +\n> +\t$self->poll_query_until(\n> +\t\t'postgres', qq[\n> +\t\tSELECT count(*) > 0 FROM pg_stat_activity\n> +\t\tWHERE backend_type = $backend_type AND wait_event = '$injection_name'\n> +\t])\n> +\t or die\n> +\t qq(timed out waiting for $die_message to wait for the injection point '$injection_name');\n\nI was looking at that, and found v3 to be an overkill. First, I think\nthat we should encourage callers to pass down a backend_type. Perhaps\nI am wrong to assume so, but that's based on my catalog of tests\nwaiting in my stack.\n\nA second thing is that this is entirely unrelated to injection points,\nbecause a test may want to wait for a given wait_event on a\nbackend_type without using the module injection_points. At the end, I \nhave renamed the routine to wait_for_event(), tweaked a bit its\ninternals, and the result looked fine so I have applied it while\nupdating 041_checkpoint_at_promote.pl to use it. Its internals could\nalways be expanded more depending on the demand for it.\n--\nMichael",
"msg_date": "Mon, 4 Mar 2024 10:44:34 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "I noticed this was committed, and took a quick look. It sounds very\nuseful for testing Citus to be able to use injection points too, but I\nnoticed this code is included in src/test/modules. It sounds like that\nlocation will make it somewhat hard to install. If that's indeed the\ncase, would it be possible to move it to contrib instead?\n\n\n",
"msg_date": "Mon, 4 Mar 2024 05:17:52 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "\n\n> On 4 Mar 2024, at 06:44, Michael Paquier <[email protected]> wrote:\n> so I have applied it \n\nGreat! Thank you! A really useful stuff for an asynchronous testing!\n\n\n> On 4 Mar 2024, at 09:17, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> this code is included in src/test/modules. It sounds like that\n> location will make it somewhat hard to install.\n\n+1. I joined the thread too late to ask why it’s not in core. But, it seems to me that separating logic even further is not necessary, given build option —with-injection-points off by default.\n\n> If that's indeed the\n> case, would it be possible to move it to contrib instead?\n\nThere’s no point in shipping this to users, especially with the build without injection points compiled.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 4 Mar 2024 10:16:15 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Mon, Mar 04, 2024 at 05:17:52AM +0100, Jelte Fennema-Nio wrote:\n> I noticed this was committed, and took a quick look. It sounds very\n> useful for testing Citus to be able to use injection points too, but I\n> noticed this code is included in src/test/modules. It sounds like that\n> location will make it somewhat hard to install. If that's indeed the\n> case, would it be possible to move it to contrib instead?\n\nOne problem with installing that in contrib/ is that it would require\nmore maintenance as a stable and \"released\" module. The aim of this\nmodule is to be more flexible than that, so as it is possible to\nextend it at will even in the back branches to be able to implement\nfeatures that could help with tests that we'd want to implement in\nstable branches. I have mentioned that on a separate thread, but\nadding more extension maintenance burden while implementing complex\ntests does not sound like a good idea for me in the long-run.\n\nPerhaps we could consider that as an exception in \"contrib\", or have a\nseparate path for test modules we're OK to install (the calls had\nbetter be superuser-only if we do that). Another thing with the\nbackend support of injection points is that you could implement your\nown extension within citus, able to do what you mimic this in-core\nmodule, and get inspiration from it. Duplication is never cool, I\nagree, though.\n--\nMichael",
"msg_date": "Mon, 4 Mar 2024 14:27:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "Hi,\n\nOn Mon, Mar 04, 2024 at 10:44:34AM +0900, Michael Paquier wrote:\n> On Fri, Mar 01, 2024 at 06:52:45AM +0000, Bertrand Drouvot wrote:\n> > +\tif (defined($backend_type))\n> > +\t{\n> > +\t\t$backend_type = qq('$backend_type');\n> > +\t\t$die_message = \"the backend type $backend_type\";\n> > +\t}\n> > +\telse\n> > +\t{\n> > +\t\t$backend_type = 'backend_type';\n> > +\t\t$die_message = 'one backend';\n> > +\n> > +\t}\n> > +\n> > +\t$self->poll_query_until(\n> > +\t\t'postgres', qq[\n> > +\t\tSELECT count(*) > 0 FROM pg_stat_activity\n> > +\t\tWHERE backend_type = $backend_type AND wait_event = '$injection_name'\n> > +\t])\n> > +\t or die\n> > +\t qq(timed out waiting for $die_message to wait for the injection point '$injection_name');\n> \n> I was looking at that, and found v3 to be an overkill. First, I think\n> that we should encourage callers to pass down a backend_type. Perhaps\n> I am wrong to assume so, but that's based on my catalog of tests\n> waiting in my stack.\n\nWorks for me.\n\n> A second thing is that this is entirely unrelated to injection points,\n> because a test may want to wait for a given wait_event on a\n> backend_type without using the module injection_points. At the end, I \n> have renamed the routine to wait_for_event(),\n\nGood idea, fully makes sense.\n\n> tweaked a bit its\n> internals, and the result looked fine so I have applied it\n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 07:22:40 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Mon, 4 Mar 2024 at 06:27, Michael Paquier <[email protected]> wrote:\n> I have mentioned that on a separate thread,\n\nYeah, I didn't read all emails related to this feature\n\n> Perhaps we could consider that as an exception in \"contrib\", or have a\n> separate path for test modules we're OK to install (the calls had\n> better be superuser-only if we do that).\n\nYeah, it makes sense that you'd want to backport fixes/changes to\nthis. As long as you put a disclaimer in the docs that you can do that\nfor this module, I think it would be fine. Our tests fairly regularly\nbreak anyway when changing minor versions of postgres in our CI, e.g.\ndue to improvements in the output of isolationtester. So if changes to\nthis module require some changes that's fine by me. Seems much nicer\nthan having to copy-paste the code.\n\n\n",
"msg_date": "Mon, 4 Mar 2024 10:51:41 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Mon, Mar 04, 2024 at 10:51:41AM +0100, Jelte Fennema-Nio wrote:\n> Yeah, it makes sense that you'd want to backport fixes/changes to\n> this. As long as you put a disclaimer in the docs that you can do that\n> for this module, I think it would be fine. Our tests fairly regularly\n> break anyway when changing minor versions of postgres in our CI, e.g.\n> due to improvements in the output of isolationtester. So if changes to\n> this module require some changes that's fine by me. Seems much nicer\n> than having to copy-paste the code.\n\nIn my experience, anybody who does serious testing with their product\nintegrated with Postgres have two or three types of builds with their\nown scripts: one with assertions, -DG and other developer-oriented\noptions enabled, and one for production deployments with more\noptimized options like -O2. Once there are custom scripts to build\nand package Postgres, do we really need to move that to contrib/ at\nall? make install would work for a test module as long as the command\nis run locally in its directory.\n--\nMichael",
"msg_date": "Tue, 5 Mar 2024 07:23:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Mon, 4 Mar 2024 at 23:23, Michael Paquier <[email protected]> wrote:\n> In my experience, anybody who does serious testing with their product\n> integrated with Postgres have two or three types of builds with their\n> own scripts: one with assertions, -DG and other developer-oriented\n> options enabled, and one for production deployments with more\n> optimized options like -O2. Once there are custom scripts to build\n> and package Postgres, do we really need to move that to contrib/ at\n> all?\n\nI do think there is quite a bit of a difference from a user\nperspective to providing a few custom configure flags and having to go\nto a separate directory and run \"make install\" there too. Simply\nbecause it's yet another step. For dev environments most/all of my\nteam uses pgenv: https://github.com/theory/pgenv There I agree we\ncould add functionality to it to also install test modules when given\na certain flag/environment variable, but it would be nice if that\nwasn't needed.\n\nOne big downside to not having it in contrib is that we also run tests\nof Citus against official PGDG postgres packages and those would\nlikely not include these test modules, so we wouldn't be able to run\nall our tests then.\n\nAlso I think the injection points extension is quite different from\nthe other modules in src/modules/test. All of these other modules are\nbasically test binaries that need to be separate modules, but they\ndon't provide any useful functionality when installing them. The\ninjection_poinst one actually provides useful functionality itself,\nthat can be used by other people testing things against postgres.\n\n> make install would work for a test module as long as the command\n> is run locally in its directory.\n\nWhat would I need to do for meson builds? I tried the following\ncommand but it doesn't seem to actually install the injection points\ncommand:\nninja -C build install-test-files\n\n\n",
"msg_date": "Tue, 5 Mar 2024 09:43:03 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Tue, Mar 05, 2024 at 09:43:03AM +0100, Jelte Fennema-Nio wrote:\n> I do think there is quite a bit of a difference from a user\n> perspective to providing a few custom configure flags and having to go\n> to a separate directory and run \"make install\" there too. Simply\n> because it's yet another step. For dev environments most/all of my\n> team uses pgenv: https://github.com/theory/pgenv There I agree we\n> could add functionality to it to also install test modules when given\n> a certain flag/environment variable, but it would be nice if that\n> wasn't needed.\n\nDidn't know this one, TBH. That's interesting. My own scripts\nemulate the same things with versioning, patching, etc.\n\n> One big downside to not having it in contrib is that we also run tests\n> of Citus against official PGDG postgres packages and those would\n> likely not include these test modules, so we wouldn't be able to run\n> all our tests then.\n\nNo idea about this part, unfortunately. One thing that I'd do is\nfirst check if the in-core module is enough to satisfy the\nrequirements Citus would want. I got pinged about Timescale and\ngreenplum recently, and they can reuse the backends APIs, but they've\nalso wanted a few more callbacks than the in-core module so they will\nneed to have their own code for tests with a custom callback library.\nPerhaps we could move that to contrib/ and document that this is a\nmodule for testing, that can be updated without notice even in minor\nupgrades and that there are no version upgrades done, or something\nlike that. I'm open to that if there's enough demand for it, but I\ndon't know how much we should accomodate with the existing\nrequirements of contrib/ for something that's only developer-oriented.\n\n> Also I think the injection points extension is quite different from\n> the other modules in src/modules/test. All of these other modules are\n> basically test binaries that need to be separate modules, but they\n> don't provide any useful functionality when installing them. The\n> injection_poinst one actually provides useful functionality itself,\n> that can be used by other people testing things against postgres.\n\nI'm not sure about that yet, TBH. Anything that gets added to this\nmodule should be used in some way by the in-core tests, or just not be\nthere. That's a line I don't want to cross, which is why it's a test \nmodule. FWIW, it would be really annoying to have documentation\nrequirements, actually, because that increases maintenance and I'm not\nsure it's a good idea to add a module maintenance on top of what could \nrequire more facility in the module to implement a test for a bug fix.\n\n> What would I need to do for meson builds? I tried the following\n> command but it doesn't seem to actually install the injection points\n> command:\n> ninja -C build install-test-files\n\nWeird, that works here. I was digging into the meson tree and noticed\nthat this was the only run_target() related to the test module install\ndata, and injection_points gets installed as well as all the other\nmodules. This should just need -Dinjection_points=true.\n--\nMichael",
"msg_date": "Wed, 6 Mar 2024 15:17:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Wed, 6 Mar 2024 at 07:17, Michael Paquier <[email protected]> wrote:\n> I'm open to that if there's enough demand for it, but I\n> don't know how much we should accomodate with the existing\n> requirements of contrib/ for something that's only developer-oriented.\n\nThere's quite a few developer-oriented GUCs as well, even requiring\ntheir own DEVELOPER_OPTIONS section. So I don't think it's too weird\nto have some developer-oriented extensions too.\n\n> I'm not sure about that yet, TBH. Anything that gets added to this\n> module should be used in some way by the in-core tests, or just not be\n> there. That's a line I don't want to cross, which is why it's a test\n> module.\n\nThat seems fine, if people need more functionality they can indeed\ncreate their own test helpers. It mainly seems annoying for people to\nhave to copy-paste the ones from core to their own extension.\n\nWhat I mainly meant is that anything in src/test/modules is not even\nclose to somewhat useful for other people to use. They are really just\nvery specific tests that need to be written in C. Afaict all those\nmodules are not even used by tests outside of their own module. But\nthese functions are helper functions, to be used by other tests. And\nlimiting the users of those helper functions to just be in-core\nPostgres code seems a bit annoying. I feel like these functions are\nmore akin to the pgregress/isolationtester binaries in their usage,\nthan akin to other modules in src/test/modules.\n\n> FWIW, it would be really annoying to have documentation\n> requirements, actually, because that increases maintenance and I'm not\n> sure it's a good idea to add a module maintenance on top of what could\n> require more facility in the module to implement a test for a bug fix.\n\nQuite a few contrib modules have very limited documentation. I think\nit would be fine for this as well.\n\n> This should just need -Dinjection_points=true.\n\nUgh... Sorry... I didn't realize that it needed a dedicated configure\nflag. When providing that flag it indeed installs the expected files.\nI guess that rules out testing against PGDG packages, because those\npackages almost certainly wouldn't specify this flag.\n\n\n",
"msg_date": "Wed, 6 Mar 2024 10:19:41 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Injection points: some tools to wait and wake"
},
{
"msg_contents": "On Wed, Mar 06, 2024 at 10:19:41AM +0100, Jelte Fennema-Nio wrote:\n> What I mainly meant is that anything in src/test/modules is not even\n> close to somewhat useful for other people to use. They are really just\n> very specific tests that need to be written in C. Afaict all those\n> modules are not even used by tests outside of their own module. But\n> these functions are helper functions, to be used by other tests. And\n> limiting the users of those helper functions to just be in-core\n> Postgres code seems a bit annoying. I feel like these functions are\n> more akin to the pgregress/isolationtester binaries in their usage,\n> than akin to other modules in src/test/modules.\n\nPerhaps. I think that we're still in the discovery phase for this\nstuff, and more people should get used to it first (this will take\nsome time and everybody is busy with their own stuff for the last\ncommit fest). At least it does not seem good to rush any decisions at\nthis stage.\n\n>> FWIW, it would be really annoying to have documentation\n>> requirements, actually, because that increases maintenance and I'm not\n>> sure it's a good idea to add a module maintenance on top of what could\n>> require more facility in the module to implement a test for a bug fix.\n> \n> Quite a few contrib modules have very limited documentation. I think\n> it would be fine for this as well.\n\nI'd argue that we should try to improve the existing documentation\nrather that use that as an argument to add more modules with limited\ndocumentation ;)\n\n> Ugh... Sorry... I didn't realize that it needed a dedicated configure\n> flag. When providing that flag it indeed installs the expected files.\n> I guess that rules out testing against PGDG packages, because those\n> packages almost certainly wouldn't specify this flag.\n\nThe CI enables the switch, and I've updated all my buildfarm members\nto use it. In terms of coverage, that's already quite good.\n--\nMichael",
"msg_date": "Thu, 7 Mar 2024 07:54:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Injection points: some tools to wait and wake"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently ALTER SUBSCRIPTION ... SET PUBLICATION will break the\nlogical replication in certain cases. This can happen as the apply\nworker will get restarted after SET PUBLICATION, the apply worker will\nuse the existing slot and replication origin corresponding to the\nsubscription. Now, it is possible that before restart the origin has\nnot been updated and the WAL start location points to a location prior\nto where PUBLICATION pub exists which can lead to such an error. Once\nthis error occurs, apply worker will never be able to proceed and will\nalways return the same error.\n\nThere was discussion on this and Amit had posted a patch to handle\nthis at [2]. Amit's patch does continue using a historic snapshot but\nignores publications that are not found for the purpose of computing\nRelSyncEntry attributes. We won't mark such an entry as valid till all\nthe publications are loaded without anything missing. This means we\nwon't publish operations on tables corresponding to that publication\ntill we found such a publication and that seems okay.\nI have added an option skip_not_exist_publication to enable this\noperation only when skip_not_exist_publication is specified as true.\nThere is no change in default behavior when skip_not_exist_publication\nis specified as false.\n\nBut one thing to note with the patch (with skip_not_exist_publication\noption) is that replication of few WAL entries will be skipped till\nthe publication is loaded like in the below example:\n-- Create table in publisher and subscriber\ncreate table t1(c1 int);\ncreate table t2(c1 int);\n\n-- Create publications\ncreate publication pub1 for table t1;\ncreate publication pub2 for table t2;\n\n-- Create subscription\ncreate subscription test1 connection 'dbname=postgres host=localhost\nport=5432' publication pub1, pub2;\n\n-- Drop one publication\ndrop publication pub1;\n\n-- Insert in the publisher\ninsert into t1 values(11);\ninsert into t2 values(21);\n\n-- Select in subscriber\npostgres=# select * from t1;\n c1\n----\n(0 rows)\n\npostgres=# select * from t2;\n c1\n----\n 21\n(1 row)\n\n-- Create the dropped publication in publisher\ncreate publication pub1 for table t1;\n\n-- Insert in the publisher\ninsert into t1 values(12);\npostgres=# select * from t1;\n c1\n----\n 11\n 12\n(2 rows)\n\n-- Select data in subscriber\npostgres=# select * from t1; -- record with value 11 will be missing\nin subscriber\n c1\n----\n 12\n(1 row)\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BT-ETXeRM4DHWzGxBpKafLCp__5bPA_QZfFQp7-0wj4Q%40mail.gmail.com\n\nRegards,\nVignesh",
"msg_date": "Mon, 19 Feb 2024 12:48:42 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add an option to skip loading missing publication to avoid logical\n replication failure"
},
{
"msg_contents": "On Mon, 19 Feb 2024 at 12:48, vignesh C <[email protected]> wrote:\n>\n> Hi,\n>\n> Currently ALTER SUBSCRIPTION ... SET PUBLICATION will break the\n> logical replication in certain cases. This can happen as the apply\n> worker will get restarted after SET PUBLICATION, the apply worker will\n> use the existing slot and replication origin corresponding to the\n> subscription. Now, it is possible that before restart the origin has\n> not been updated and the WAL start location points to a location prior\n> to where PUBLICATION pub exists which can lead to such an error. Once\n> this error occurs, apply worker will never be able to proceed and will\n> always return the same error.\n>\n> There was discussion on this and Amit had posted a patch to handle\n> this at [2]. Amit's patch does continue using a historic snapshot but\n> ignores publications that are not found for the purpose of computing\n> RelSyncEntry attributes. We won't mark such an entry as valid till all\n> the publications are loaded without anything missing. This means we\n> won't publish operations on tables corresponding to that publication\n> till we found such a publication and that seems okay.\n> I have added an option skip_not_exist_publication to enable this\n> operation only when skip_not_exist_publication is specified as true.\n> There is no change in default behavior when skip_not_exist_publication\n> is specified as false.\n\nI have updated the patch to now include changes for pg_dump, added few\ntests, describe changes and added documentation changes. The attached\nv2 version patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Mon, 19 Feb 2024 17:11:25 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add an option to skip loading missing publication to avoid\n logical replication failure"
}
] |
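A minimal SQL sketch of how the option proposed in the thread above might be specified on the subscriber side. The thread only names the option (skip_not_exist_publication, defaulting to false) and does not show its exact grammar, so the WITH-clause placement below is an assumption; the connection string and publication names are the ones from the reproduction steps quoted in the first message.

    -- Hypothetical syntax: skip_not_exist_publication exposed as a subscription option
    CREATE SUBSCRIPTION test1
        CONNECTION 'dbname=postgres host=localhost port=5432'
        PUBLICATION pub1, pub2
        WITH (skip_not_exist_publication = true);

With the option enabled, a publication that is dropped and later re-created on the publisher would no longer leave the apply worker stuck on the same error; as the walkthrough in the thread shows, changes made while the publication was missing are skipped rather than replicated.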
[
{
"msg_contents": "Hi hackers,\n\nWhile working on [1] it has been mentioned that this page\nhttps://www.postgresql.org/docs/current/auth-username-maps.html is switching\nbetween system-user and system-username.\n\nPlease find attached a tiny patch to clean that up.\n\n[1]: https://www.postgresql.org/message-id/CAOBaU_Yp08MQOK7_k4QVyxL6sf7TURGpjX3tn1Z%2BWxJo2x7%2BGQ%40mail.gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 19 Feb 2024 08:52:11 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoid switching between system-user and system-username in the doc"
},
{
"msg_contents": "On Mon, 19 Feb 2024 at 09:52, Bertrand Drouvot\n<[email protected]> wrote:\n> Please find attached a tiny patch to clean that up.\n\nLGTM\n\n\n",
"msg_date": "Mon, 19 Feb 2024 18:00:11 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid switching between system-user and system-username in the\n doc"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 06:00:11PM +0100, Jelte Fennema-Nio wrote:\n> On Mon, 19 Feb 2024 at 09:52, Bertrand Drouvot\n> <[email protected]> wrote:\n>> Please find attached a tiny patch to clean that up.\n> \n> LGTM\n\nLooks like a mistake from me in efb6f4a4f9b6, will fix and backpatch\nfor consistency.\n--\nMichael",
"msg_date": "Tue, 20 Feb 2024 08:06:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid switching between system-user and system-username in the\n doc"
}
] |
[
{
"msg_contents": "numeric_big has been left out of parallel_schedule, requiring EXTRA_TESTS to\nrun it, since going in back in 1999 (AFAICT it was even the reason EXTRA_TESTS\nwas invented). The original commit states that it's huge, and it probably was.\nToday it runs faster than many tests we have in parallel_schedule, even on slow\nhardware like my ~5 year old laptop. Repeated runs in CI at various parallel\ngroups place it at 50% the runtime of triggers.sql and 25% of brin.sql.\n\nTo make sure it's executed and not silently breaks, is it time to add this to\nthe regular make check?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 19 Feb 2024 10:27:58 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "numeric_big in make check?"
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> numeric_big has been left out of parallel_schedule, requiring EXTRA_TESTS to\n> run it, since going in back in 1999 (AFAICT it was even the reason EXTRA_TESTS\n> was invented). The original commit states that it's huge, and it probably was.\n> Today it runs faster than many tests we have in parallel_schedule, even on slow\n> hardware like my ~5 year old laptop. Repeated runs in CI at various parallel\n> groups place it at 50% the runtime of triggers.sql and 25% of brin.sql.\n\n> To make sure it's executed and not silently breaks, is it time to add this to\n> the regular make check?\n\nOr we could just flush it. It's never detected a bug, and I think\nyou'd find that it adds zero code coverage (or if not, we could\nfix that in a far more surgical and less expensive manner).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Feb 2024 06:48:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numeric_big in make check?"
},
{
"msg_contents": "> On 19 Feb 2024, at 12:48, Tom Lane <[email protected]> wrote:\n> Daniel Gustafsson <[email protected]> writes:\n\n>> To make sure it's executed and not silently breaks, is it time to add this to\n>> the regular make check?\n> \n> Or we could just flush it. It's never detected a bug, and I think\n> you'd find that it adds zero code coverage (or if not, we could\n> fix that in a far more surgical and less expensive manner).\n\nI don't have a problem with that, there isn't much value in keeping it\n(especially when not connected to make check so that it actually runs). That\nalso means we can remove two make targets which hadn't been ported to meson to\nget us a hair closer to parity.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 19 Feb 2024 13:31:03 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: numeric_big in make check?"
},
{
"msg_contents": "> > On 19 Feb 2024, at 12:48, Tom Lane <[email protected]> wrote:\n> >\n> > Or we could just flush it. It's never detected a bug, and I think\n> > you'd find that it adds zero code coverage (or if not, we could\n> > fix that in a far more surgical and less expensive manner).\n>\n\nOff the top of my head, I can't say to what extent that's true, but it\nwouldn't surprise me if at least some of the tests added in the last 4\ncommits to touch that file aren't covered by tests elsewhere. Indeed\nthat certainly looks like the case for 18a02ad2a5. I'm sure those\ntests could be pared down though.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 19 Feb 2024 14:03:21 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numeric_big in make check?"
},
{
"msg_contents": "Dean Rasheed <[email protected]> writes:\n> On 19 Feb 2024, at 12:48, Tom Lane <[email protected]> wrote:\n>> Or we could just flush it. It's never detected a bug, and I think\n>> you'd find that it adds zero code coverage (or if not, we could\n>> fix that in a far more surgical and less expensive manner).\n\n> Off the top of my head, I can't say to what extent that's true, but it\n> wouldn't surprise me if at least some of the tests added in the last 4\n> commits to touch that file aren't covered by tests elsewhere. Indeed\n> that certainly looks like the case for 18a02ad2a5. I'm sure those\n> tests could be pared down though.\n\nI thought I'd try to acquire some actual facts here, so I compared\nthe code coverage shown by \"make check\" as of HEAD, versus \"make\ncheck\" after adding numeric_big to parallel_schedule. I saw the\nfollowing lines of numeric.c as being covered in the second run\nand not the first:\n\nnumeric():\n1285 || !NUMERIC_IS_SHORT(num)))\n1293 new->choice.n_long.n_sign_dscale = NUMERIC_SIGN(new) |\n1294 ((uint16) dscale & NUMERIC_DSCALE_MASK);\ndiv_var_fast():\n9185 idivisor = idivisor * NBASE + var2->digits[1];\n9186 idivisor_weight--;\nsqrt_var():\n10073 res_ndigits = res_weight + 1 - (-rscale - 1) / DEC_DIGITS;\n\nPretty poor return on investment for the runtime consumed. I don't\nobject to adding something to numeric.sql that's targeted at adding\ncoverage for these (or anyplace else that's not covered), but let's\nnot just throw cycles at the problem.\n\nOddly, there were a few lines in numeric_poly_combine and\nint8_avg_combine that were hit in the first run and not the second.\nApparently our tests of parallel aggregation aren't as reproducible as\none could wish.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Feb 2024 10:35:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numeric_big in make check?"
},
{
"msg_contents": "On Mon, 19 Feb 2024 at 15:35, Tom Lane <[email protected]> wrote:\n>\n> I thought I'd try to acquire some actual facts here, so I compared\n> the code coverage shown by \"make check\" as of HEAD, versus \"make\n> check\" after adding numeric_big to parallel_schedule. I saw the\n> following lines of numeric.c as being covered in the second run\n> and not the first:\n>\n\nI don't think that tells the whole story. Code coverage only tells you\nwhether a particular line of code has been hit, not whether it has\nbeen properly tested with all the values that might lead to different\ncases. For example:\n\n> sqrt_var():\n> 10073 res_ndigits = res_weight + 1 - (-rscale - 1) / DEC_DIGITS;\n>\n\nTo get code coverage of this line, all you need to do is ensure that\nsqrt_var() is called with rscale < -1 (which can only happen from the\nrange-reduction code in ln_var()). You could do that by computing\nln(1e50), which results in calling sqrt_var() with rscale = -2,\ncausing that line of code in sqrt_var() to be hit. That would satisfy\ncode coverage, but I would argue that you've only really tested that\nline of code properly if you also hit it with rscale = -3, and\nprobably a few more values, to check that the round-down division is\nworking as intended.\n\nSimilarly, looking at commit 18a02ad2a5, the crucial code change was\nthe following in power_var():\n\n- val = Max(val, -NUMERIC_MAX_RESULT_SCALE);\n- val = Min(val, NUMERIC_MAX_RESULT_SCALE);\n val *= 0.434294481903252; /* approximate decimal result weight */\n\nAny test that calls numeric_power() is going to hit those lines of\ncode, but to see a failure, it was necessary to hit them with the\nabsolute value of \"val\" greater than NUMERIC_MAX_RESULT_SCALE, which\nis why that commit added 2 new test cases to numeric_big, calling\npower_var() with \"val\" outside that range. Code coverage is never\ngoing to tell you whether or not that is being tested, since the code\nchange was to delete lines. Even if that weren't the case, any line of\ncode involving Min() or Max() has 2 branches, and code coverage won't\ntell you if you've hit both of them.\n\n> Pretty poor return on investment for the runtime consumed. I don't\n> object to adding something to numeric.sql that's targeted at adding\n> coverage for these (or anyplace else that's not covered), but let's\n> not just throw cycles at the problem.\n>\n\nI agree that blindly performing a bunch of large computations (just\nthrowing cycles at the problem) is not a good approach to testing. And\nmaybe that's a fair characterisation of parts of numeric_big. However,\nit also contains some fairly well-targeted tests intended to test\nspecific edge cases that only occur with particular ranges of inputs,\nwhich don't necessarily show up as increased code coverage.\n\nSo I think this requires more careful analysis. Simply deleting\nnumeric_big and adding tests to numeric.sql until the same level of\ncode coverage is achieved will not give the same level of testing.\n\nIt's also worth noting that the cost of running numeric_big has come\ndown very significantly over the last few years, as can be seen by\nrunning the current numeric_big script against old backends. 
There's a\nlot of random variation in the timings, but the trend is very clear:\n\n9.5 1.641s\n9.6 0.856s\n10 0.845s\n11 0.750s\n12 0.760s\n13 0.672s\n14 0.430s\n15 0.347s\n16 0.336s\n\nArguably, this is a useful benchmark to spot performance regressions\nand test proposed performance-improving patches.\n\nIf I run \"EXTRA_TESTS=numeric_big make check | grep 'ok ' | sort\n-nrk5\", numeric_big is not in the top 10 most expensive tests (it's\nusually down at around 15'th place).\n\nLooking at the script itself, the addition, subtraction,\nmultiplication and division tests at the top are probably pointless,\nsince I would expect those operations to be tested adequately (and\nprobably more thoroughly) by the transcendental test cases. In fact, I\nthink it would probably be OK to delete everything above line 650, and\njust keep the bottom half of the script -- the pow(), exp(), ln() and\nlog() tests, which cover various edge cases, as well as exercising\nbasic arithmetic operations internally. We might want to check that\nI/O of large numerics is still being tested properly though.\n\nIf we did that, numeric_big would be even further down the list of\nexpensive tests, and I'd say it should be run by default.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 20 Feb 2024 13:23:31 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numeric_big in make check?"
},
{
"msg_contents": "> On 20 Feb 2024, at 14:23, Dean Rasheed <[email protected]> wrote:\n\n> If we did that, numeric_big would be even further down the list of\n> expensive tests, and I'd say it should be run by default.\n\nMy motivation for raising this was to get a test which is executed as part of\nparallel_schedule to make failures aren't missed. If we get there by slimming\ndown numeric_big to keep the unique coverage then that sounds like a good plan\nto me.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 14:46:16 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: numeric_big in make check?"
},
{
"msg_contents": "Dean Rasheed <[email protected]> writes:\n> Looking at the script itself, the addition, subtraction,\n> multiplication and division tests at the top are probably pointless,\n> since I would expect those operations to be tested adequately (and\n> probably more thoroughly) by the transcendental test cases. In fact, I\n> think it would probably be OK to delete everything above line 650, and\n> just keep the bottom half of the script -- the pow(), exp(), ln() and\n> log() tests, which cover various edge cases, as well as exercising\n> basic arithmetic operations internally.\n\nI could go with that, but let's just transpose those into numeric.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Feb 2024 10:16:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numeric_big in make check?"
},
{
"msg_contents": "On Tue, 20 Feb 2024 at 15:16, Tom Lane <[email protected]> wrote:\n>\n> Dean Rasheed <[email protected]> writes:\n> > Looking at the script itself, the addition, subtraction,\n> > multiplication and division tests at the top are probably pointless,\n> > since I would expect those operations to be tested adequately (and\n> > probably more thoroughly) by the transcendental test cases. In fact, I\n> > think it would probably be OK to delete everything above line 650, and\n> > just keep the bottom half of the script -- the pow(), exp(), ln() and\n> > log() tests, which cover various edge cases, as well as exercising\n> > basic arithmetic operations internally.\n>\n> I could go with that, but let's just transpose those into numeric.\n>\n\nWorks for me.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 20 Feb 2024 15:29:51 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numeric_big in make check?"
}
] |
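To make the coverage argument in the thread above concrete, here is a short SQL sketch of the kind of targeted edge-case queries being discussed. The ln(1e50) case is taken directly from Dean's message (it forces sqrt_var() to be reached with a negative rscale via the range reduction in ln_var()); the remaining inputs are illustrative assumptions, not the actual numeric_big test cases.

    -- Range reduction in ln_var() ends up calling sqrt_var() with rscale < -1
    SELECT ln(1e50::numeric);

    -- A large result weight exercises the decimal weight estimate in power_var()
    SELECT power(1.01::numeric, 100000);

    -- exp()/ln() round-trips exercise basic numeric arithmetic internally
    SELECT exp(ln(9.9e99::numeric));

Queries like these are cheap enough to live in numeric.sql, which is in line with the suggestion to transpose the unique pow()/exp()/ln()/log() cases there rather than keep the whole numeric_big schedule.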
[
{
"msg_contents": "Hi,\n\nI worked on using the currently proposed streaming read API [1] in ANALYZE.\nThe patch is attached. 0001 is the not yet merged streaming read API code\nchanges that can be applied to the master, 0002 is the actual code.\n\nThe blocks to analyze are obtained by using the streaming read API now.\n\n- Since streaming read API is already doing prefetch, I removed the #ifdef\nUSE_PREFETCH code from acquire_sample_rows().\n\n- Changed 'while (BlockSampler_HasMore(&bs))' to 'while (nblocks)' because\nthe prefetch mechanism in the streaming read API will advance 'bs' before\nreturning buffers.\n\n- Removed BlockNumber and BufferAccessStrategy from the declaration of\nscan_analyze_next_block(), passing pgsr (PgStreamingRead) instead of them.\n\nI counted syscalls of analyzing ~5GB table. It can be seen that the patched\nversion did ~1300 less read calls.\n\nPatched:\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 39.67 0.012128 0 29809 pwrite64\n 36.96 0.011299 0 28594 pread64\n 23.24 0.007104 0 27611 fadvise64\n\nMaster (21a71648d3):\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 38.94 0.016457 0 29816 pwrite64\n 36.79 0.015549 0 29850 pread64\n 23.91 0.010106 0 29848 fadvise64\n\n\nAny kind of feedback would be appreciated.\n\n[1]:\nhttps://www.postgresql.org/message-id/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Mon, 19 Feb 2024 18:13:23 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use streaming read API in ANALYZE"
},
{
"msg_contents": "Hi,\n\nOn Mon, 19 Feb 2024 at 18:13, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> I worked on using the currently proposed streaming read API [1] in ANALYZE. The patch is attached. 0001 is the not yet merged streaming read API code changes that can be applied to the master, 0002 is the actual code.\n>\n> The blocks to analyze are obtained by using the streaming read API now.\n>\n> - Since streaming read API is already doing prefetch, I removed the #ifdef USE_PREFETCH code from acquire_sample_rows().\n>\n> - Changed 'while (BlockSampler_HasMore(&bs))' to 'while (nblocks)' because the prefetch mechanism in the streaming read API will advance 'bs' before returning buffers.\n>\n> - Removed BlockNumber and BufferAccessStrategy from the declaration of scan_analyze_next_block(), passing pgsr (PgStreamingRead) instead of them.\n>\n> I counted syscalls of analyzing ~5GB table. It can be seen that the patched version did ~1300 less read calls.\n>\n> Patched:\n>\n> % time seconds usecs/call calls errors syscall\n> ------ ----------- ----------- --------- --------- ----------------\n> 39.67 0.012128 0 29809 pwrite64\n> 36.96 0.011299 0 28594 pread64\n> 23.24 0.007104 0 27611 fadvise64\n>\n> Master (21a71648d3):\n>\n> % time seconds usecs/call calls errors syscall\n> ------ ----------- ----------- --------- --------- ----------------\n> 38.94 0.016457 0 29816 pwrite64\n> 36.79 0.015549 0 29850 pread64\n> 23.91 0.010106 0 29848 fadvise64\n>\n>\n> Any kind of feedback would be appreciated.\n>\n> [1]: https://www.postgresql.org/message-id/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com\n\nThe new version of the streaming read API [1] is posted. I updated the\nstreaming read API changes patch (0001), using the streaming read API\nin ANALYZE patch (0002) remains the same. This should make it easier\nto review as it can be applied on top of master\n\n[1]: https://www.postgresql.org/message-id/CA%2BhUKGJtLyxcAEvLhVUhgD4fMQkOu3PDaj8Qb9SR_UsmzgsBpQ%40mail.gmail.com\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Wed, 28 Feb 2024 14:42:34 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "Hi,\r\n\r\nOn Wed, 28 Feb 2024 at 14:42, Nazir Bilal Yavuz <[email protected]> wrote:\r\n>\r\n>\r\n> The new version of the streaming read API [1] is posted. I updated the\r\n> streaming read API changes patch (0001), using the streaming read API\r\n> in ANALYZE patch (0002) remains the same. This should make it easier\r\n> to review as it can be applied on top of master\r\n>\r\n>\r\n\r\nThe new version of the streaming read API is posted [1]. I rebased the\r\npatch on top of master and v9 of the streaming read API.\r\n\r\nThere is a minimal change in the 'using the streaming read API in ANALYZE\r\npatch (0002)', I changed STREAMING_READ_FULL to STREAMING_READ_MAINTENANCE\r\nto copy exactly the same behavior as before. Also, some benchmarking\r\nresults:\r\n\r\nI created a 22 GB table and set the size of shared buffers to 30GB, the\r\nrest is default.\r\n\r\n╔═══════════════════════════╦═════════════════════╦════════════╗\r\n║ ║ Avg Timings in ms ║ ║\r\n╠═══════════════════════════╬══════════╦══════════╬════════════╣\r\n║ ║ master ║ patched ║ percentage ║\r\n╠═══════════════════════════╬══════════╬══════════╬════════════╣\r\n║ Both OS cache and ║ ║ ║ ║\r\n║ shared buffers are clear ║ 513.9247 ║ 463.1019 ║ %9.9 ║\r\n╠═══════════════════════════╬══════════╬══════════╬════════════╣\r\n║ OS cache is loaded but ║ ║ ║ ║\r\n║ shared buffers are clear ║ 423.1097 ║ 354.3277 ║ %16.3 ║\r\n╠═══════════════════════════╬══════════╬══════════╬════════════╣\r\n║ Shared buffers are loaded ║ ║ ║ ║\r\n║ ║ 89.2846 ║ 84.6952 ║ %5.1 ║\r\n╚═══════════════════════════╩══════════╩══════════╩════════════╝\r\n\r\nAny kind of feedback would be appreciated.\r\n\r\n[1]:\r\nhttps://www.postgresql.org/message-id/CA%2BhUKGL-ONQnnnp-SONCFfLJzqcpAheuzZ%2B-yTrD9WBM-GmAcg%40mail.gmail.com\r\n\r\n-- \r\nRegards,\r\nNazir Bilal Yavuz\r\nMicrosoft",
"msg_date": "Tue, 26 Mar 2024 14:51:27 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 02:51:27PM +0300, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> On Wed, 28 Feb 2024 at 14:42, Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> >\n> > The new version of the streaming read API [1] is posted. I updated the\n> > streaming read API changes patch (0001), using the streaming read API\n> > in ANALYZE patch (0002) remains the same. This should make it easier\n> > to review as it can be applied on top of master\n> >\n> >\n> \n> The new version of the streaming read API is posted [1]. I rebased the\n> patch on top of master and v9 of the streaming read API.\n> \n> There is a minimal change in the 'using the streaming read API in ANALYZE\n> patch (0002)', I changed STREAMING_READ_FULL to STREAMING_READ_MAINTENANCE\n> to copy exactly the same behavior as before. Also, some benchmarking\n> results:\n> \n> I created a 22 GB table and set the size of shared buffers to 30GB, the\n> rest is default.\n> \n> ╔═══════════════════════════╦═════════════════════╦════════════╗\n> ║ ║ Avg Timings in ms ║ ║\n> ╠═══════════════════════════╬══════════╦══════════╬════════════╣\n> ║ ║ master ║ patched ║ percentage ║\n> ╠═══════════════════════════╬══════════╬══════════╬════════════╣\n> ║ Both OS cache and ║ ║ ║ ║\n> ║ shared buffers are clear ║ 513.9247 ║ 463.1019 ║ %9.9 ║\n> ╠═══════════════════════════╬══════════╬══════════╬════════════╣\n> ║ OS cache is loaded but ║ ║ ║ ║\n> ║ shared buffers are clear ║ 423.1097 ║ 354.3277 ║ %16.3 ║\n> ╠═══════════════════════════╬══════════╬══════════╬════════════╣\n> ║ Shared buffers are loaded ║ ║ ║ ║\n> ║ ║ 89.2846 ║ 84.6952 ║ %5.1 ║\n> ╚═══════════════════════════╩══════════╩══════════╩════════════╝\n> \n> Any kind of feedback would be appreciated.\n\nThanks for working on this!\n\nA general review comment: I noticed you have the old streaming read\n(pgsr) naming still in a few places (including comments) -- so I would\njust make sure and update everywhere when you rebase in Thomas' latest\nversion of the read stream API.\n\n> From c7500cc1b9068ff0b704181442999cd8bed58658 Mon Sep 17 00:00:00 2001\n> From: Nazir Bilal Yavuz <[email protected]>\n> Date: Mon, 19 Feb 2024 14:30:47 +0300\n> Subject: [PATCH v3 2/2] Use streaming read API in ANALYZE\n>\n> --- a/src/backend/commands/analyze.c\n> +++ b/src/backend/commands/analyze.c\n> @@ -1102,6 +1102,26 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)\n> \treturn stats;\n> }\n> \n> +/*\n> + * Prefetch callback function to get next block number while using\n> + * BlockSampling algorithm\n> + */\n> +static BlockNumber\n> +pg_block_sampling_streaming_read_next(StreamingRead *stream,\n> +\t\t\t\t\t\t\t\t\t void *user_data,\n> +\t\t\t\t\t\t\t\t\t void *per_buffer_data)\n\nI don't think you need the pg_ prefix\n\n> +{\n> +\tBlockSamplerData *bs = user_data;\n> +\tBlockNumber *current_block = per_buffer_data;\n\nWhy can't you just do BufferGetBlockNumber() on the buffer returned from\nthe read stream API instead of allocating per_buffer_data for the block\nnumber?\n\n> +\n> +\tif (BlockSampler_HasMore(bs))\n> +\t\t*current_block = BlockSampler_Next(bs);\n> +\telse\n> +\t\t*current_block = InvalidBlockNumber;\n> +\n> +\treturn *current_block;\n\n\nI think we'd like to keep the read stream code in heapam-specific code.\nInstead of doing streaming_read_buffer_begin() here, you could put this\nin heap_beginscan() or initscan() guarded by\n\tscan->rs_base.rs_flags & SO_TYPE_ANALYZE\n\nsame with streaming_read_buffer_end()/heap_endscan().\n\nYou'd also then need to save the 
reference to the read stream in the\nHeapScanDescData.\n\n> +\tstream = streaming_read_buffer_begin(STREAMING_READ_MAINTENANCE,\n> +\t\t\t\t\t\t\t\t\t\t vac_strategy,\n> +\t\t\t\t\t\t\t\t\t\t BMR_REL(scan->rs_rd),\n> +\t\t\t\t\t\t\t\t\t\t MAIN_FORKNUM,\n> +\t\t\t\t\t\t\t\t\t\t pg_block_sampling_streaming_read_next,\n> +\t\t\t\t\t\t\t\t\t\t &bs,\n> +\t\t\t\t\t\t\t\t\t\t sizeof(BlockSamplerData));\n> \n> \t/* Outer loop over blocks to sample */\n\nIn fact, I think you could use this opportunity to get rid of the block\ndependency in acquire_sample_rows() altogether.\n\nLooking at the code now, it seems like you could just invoke\nheapam_scan_analyze_next_block() (maybe rename it to\nheapam_scan_analyze_next_buffer() or something) from\nheapam_scan_analyze_next_tuple() and remove\ntable_scan_analyze_next_block() entirely.\n\nThen table AMs can figure out how they want to return tuples from\ntable_scan_analyze_next_tuple().\n\nIf you do all this, note that you'll need to update the comments above\nacquire_sample_rows() accordingly.\n\n> -\twhile (BlockSampler_HasMore(&bs))\n> +\twhile (nblocks)\n> \t{\n> \t\tbool\t\tblock_accepted;\n> -\t\tBlockNumber targblock = BlockSampler_Next(&bs);\n> -#ifdef USE_PREFETCH\n> -\t\tBlockNumber prefetch_targblock = InvalidBlockNumber;\n> -\n> -\t\t/*\n> -\t\t * Make sure that every time the main BlockSampler is moved forward\n> -\t\t * that our prefetch BlockSampler also gets moved forward, so that we\n> -\t\t * always stay out ahead.\n> -\t\t */\n> -\t\tif (prefetch_maximum && BlockSampler_HasMore(&prefetch_bs))\n> -\t\t\tprefetch_targblock = BlockSampler_Next(&prefetch_bs);\n> -#endif\n> \n> \t\tvacuum_delay_point();\n> \n> -\t\tblock_accepted = table_scan_analyze_next_block(scan, targblock, vac_strategy);\n> +\t\tblock_accepted = table_scan_analyze_next_block(scan, stream);\n\n- Melanie\n\n\n",
"msg_date": "Wed, 27 Mar 2024 16:15:23 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "Hi,\r\n\r\nThanks for the review!\r\n\r\nOn Wed, 27 Mar 2024 at 23:15, Melanie Plageman\r\n<[email protected]> wrote:\r\n>\r\n> On Tue, Mar 26, 2024 at 02:51:27PM +0300, Nazir Bilal Yavuz wrote:\r\n> > Hi,\r\n> >\r\n> > On Wed, 28 Feb 2024 at 14:42, Nazir Bilal Yavuz <[email protected]> wrote:\r\n> > >\r\n> > >\r\n> > > The new version of the streaming read API [1] is posted. I updated the\r\n> > > streaming read API changes patch (0001), using the streaming read API\r\n> > > in ANALYZE patch (0002) remains the same. This should make it easier\r\n> > > to review as it can be applied on top of master\r\n> > >\r\n> > >\r\n> >\r\n> > The new version of the streaming read API is posted [1]. I rebased the\r\n> > patch on top of master and v9 of the streaming read API.\r\n> >\r\n> > There is a minimal change in the 'using the streaming read API in ANALYZE\r\n> > patch (0002)', I changed STREAMING_READ_FULL to STREAMING_READ_MAINTENANCE\r\n> > to copy exactly the same behavior as before. Also, some benchmarking\r\n> > results:\r\n> >\r\n> > I created a 22 GB table and set the size of shared buffers to 30GB, the\r\n> > rest is default.\r\n> >\r\n> > ╔═══════════════════════════╦═════════════════════╦════════════╗\r\n> > ║ ║ Avg Timings in ms ║ ║\r\n> > ╠═══════════════════════════╬══════════╦══════════╬════════════╣\r\n> > ║ ║ master ║ patched ║ percentage ║\r\n> > ╠═══════════════════════════╬══════════╬══════════╬════════════╣\r\n> > ║ Both OS cache and ║ ║ ║ ║\r\n> > ║ shared buffers are clear ║ 513.9247 ║ 463.1019 ║ %9.9 ║\r\n> > ╠═══════════════════════════╬══════════╬══════════╬════════════╣\r\n> > ║ OS cache is loaded but ║ ║ ║ ║\r\n> > ║ shared buffers are clear ║ 423.1097 ║ 354.3277 ║ %16.3 ║\r\n> > ╠═══════════════════════════╬══════════╬══════════╬════════════╣\r\n> > ║ Shared buffers are loaded ║ ║ ║ ║\r\n> > ║ ║ 89.2846 ║ 84.6952 ║ %5.1 ║\r\n> > ╚═══════════════════════════╩══════════╩══════════╩════════════╝\r\n> >\r\n> > Any kind of feedback would be appreciated.\r\n>\r\n> Thanks for working on this!\r\n>\r\n> A general review comment: I noticed you have the old streaming read\r\n> (pgsr) naming still in a few places (including comments) -- so I would\r\n> just make sure and update everywhere when you rebase in Thomas' latest\r\n> version of the read stream API.\r\n\r\nDone.\r\n\r\n>\r\n> > From c7500cc1b9068ff0b704181442999cd8bed58658 Mon Sep 17 00:00:00 2001\r\n> > From: Nazir Bilal Yavuz <[email protected]>\r\n> > Date: Mon, 19 Feb 2024 14:30:47 +0300\r\n> > Subject: [PATCH v3 2/2] Use streaming read API in ANALYZE\r\n> >\r\n> > --- a/src/backend/commands/analyze.c\r\n> > +++ b/src/backend/commands/analyze.c\r\n> > @@ -1102,6 +1102,26 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)\r\n> > return stats;\r\n> > }\r\n> >\r\n> > +/*\r\n> > + * Prefetch callback function to get next block number while using\r\n> > + * BlockSampling algorithm\r\n> > + */\r\n> > +static BlockNumber\r\n> > +pg_block_sampling_streaming_read_next(StreamingRead *stream,\r\n> > + void *user_data,\r\n> > + void *per_buffer_data)\r\n>\r\n> I don't think you need the pg_ prefix\r\n\r\nDone.\r\n\r\n>\r\n> > +{\r\n> > + BlockSamplerData *bs = user_data;\r\n> > + BlockNumber *current_block = per_buffer_data;\r\n>\r\n> Why can't you just do BufferGetBlockNumber() on the buffer returned from\r\n> the read stream API instead of allocating per_buffer_data for the block\r\n> number?\r\n\r\nDone.\r\n\r\n>\r\n> > +\r\n> > + if (BlockSampler_HasMore(bs))\r\n> > + *current_block = 
BlockSampler_Next(bs);\r\n> > + else\r\n> > + *current_block = InvalidBlockNumber;\r\n> > +\r\n> > + return *current_block;\r\n>\r\n>\r\n> I think we'd like to keep the read stream code in heapam-specific code.\r\n> Instead of doing streaming_read_buffer_begin() here, you could put this\r\n> in heap_beginscan() or initscan() guarded by\r\n> scan->rs_base.rs_flags & SO_TYPE_ANALYZE\r\n\r\nIn the recent changes [1], heapam_scan_analyze_next_[block | tuple]\r\nare removed from tableam. They are directly called from\r\nheapam-specific code now. So, IMO, no need to do this now.\r\n\r\nv4 is rebased on top of v14 streaming read API changes.\r\n\r\n[1] 27bc1772fc814946918a5ac8ccb9b5c5ad0380aa\r\n\r\n-- \r\nRegards,\r\nNazir Bilal Yavuz\r\nMicrosoft",
"msg_date": "Tue, 2 Apr 2024 10:23:55 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Tue, Apr 2, 2024 at 9:24 AM Nazir Bilal Yavuz <[email protected]> wrote:\n[..]\n> v4 is rebased on top of v14 streaming read API changes.\n\nHi Nazir, so with streaming API committed, I gave a try to this patch.\nWith autovacuum=off and 30GB table on NVMe (with standard readahead of\n256kb and ext4, Debian 12, kernel 6.1.0, shared_buffers = 128MB\ndefault) created using: create table t as select repeat('a', 100) || i\n|| repeat('b', 500) as filler from generate_series(1, 45000000) as i;\n\non master, effect of mainteance_io_concurency [default 10] is like\nthat (when resetting the fs cache after each ANALYZE):\n\n m_io_c = 0:\n Time: 3137.914 ms (00:03.138)\n Time: 3094.540 ms (00:03.095)\n Time: 3452.513 ms (00:03.453)\n\n m_io_c = 1:\n Time: 2972.751 ms (00:02.973)\n Time: 2939.551 ms (00:02.940)\n Time: 2904.428 ms (00:02.904)\n\n m_io_c = 2:\n Time: 1580.260 ms (00:01.580)\n Time: 1572.132 ms (00:01.572)\n Time: 1558.334 ms (00:01.558)\n\n m_io_c = 4:\n Time: 938.304 ms\n Time: 931.772 ms\n Time: 920.044 ms\n\n m_io_c = 8:\n Time: 666.025 ms\n Time: 660.241 ms\n Time: 648.848 ms\n\n m_io_c = 16:\n Time: 542.450 ms\n Time: 561.155 ms\n Time: 539.683 ms\n\n m_io_c = 32:\n Time: 538.487 ms\n Time: 541.705 ms\n Time: 538.101 ms\n\nwith patch applied:\n\n m_io_c = 0:\n Time: 3106.469 ms (00:03.106)\n Time: 3140.343 ms (00:03.140)\n Time: 3044.133 ms (00:03.044)\n\n m_io_c = 1:\n Time: 2959.817 ms (00:02.960)\n Time: 2920.265 ms (00:02.920)\n Time: 2911.745 ms (00:02.912)\n\n m_io_c = 2:\n Time: 1581.912 ms (00:01.582)\n Time: 1561.444 ms (00:01.561)\n Time: 1558.251 ms (00:01.558)\n\n m_io_c = 4:\n Time: 908.116 ms\n Time: 901.245 ms\n Time: 901.071 ms\n\n m_io_c = 8:\n Time: 619.870 ms\n Time: 620.327 ms\n Time: 614.266 ms\n\n m_io_c = 16:\n Time: 529.885 ms\n Time: 526.958 ms\n Time: 528.474 ms\n\n m_io_c = 32:\n Time: 521.185 ms\n Time: 520.713 ms\n Time: 517.729 ms\n\nNo difference to me, which seems to be good. I've double checked and\npatch used the new way\n\nacquire_sample_rows -> heapam_scan_analyze_next_block ->\nReadBufferExtended -> ReadBuffer_common (inlined) -> WaitReadBuffers\n-> mdreadv -> FileReadV -> pg_preadv (inlined)\nacquire_sample_rows -> heapam_scan_analyze_next_block ->\nReadBufferExtended -> ReadBuffer_common (inlined) -> StartReadBuffer\n-> ...\n\nI gave also io_combine_limit to 32 (max, 256kB) a try and got those\nslightly better results:\n\n[..]\nm_io_c = 16:\nTime: 494.599 ms\nTime: 496.345 ms\nTime: 973.500 ms\n\nm_io_c = 32:\nTime: 461.031 ms\nTime: 449.037 ms\nTime: 443.375 ms\n\nand that (last one) apparently was able to push it to ~50-60k still\nrandom IOPS range, the rareq-sz was still ~8 (9.9) kB as analyze was\nstill reading random , so I assume no merging was done:\n\nDevice r/s rMB/s rrqm/s %rrqm r_await rareq-sz\nw/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s\ndrqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util\nnvme0n1 61212.00 591.82 0.00 0.00 0.10 9.90\n2.00 0.02 0.00 0.00 0.00 12.00 0.00 0.00\n0.00 0.00 0.00 0.00 0.00 0.00 6.28 85.20\n\nSo in short it looks good to me.\n\n-Jakub Wartak.\n\n\n",
"msg_date": "Wed, 3 Apr 2024 10:41:42 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
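A compact way to re-run the kind of comparison in Jakub's benchmark message above, using only settings that appear in this thread; it assumes the 30GB table t created by the generate_series() statement quoted there, and dropping the OS page cache between runs (as done for the quoted timings) has to happen outside SQL.

    -- Two of the measured points: maintenance_io_concurrency and io_combine_limit
    -- (32 is the maximum, i.e. 256kB, per the measurements above)
    SET maintenance_io_concurrency = 16;
    SET io_combine_limit = 32;
    \timing on
    ANALYZE t;

Comparing the timing with and without the patch at a few maintenance_io_concurrency values reproduces the kind of averages table shown earlier in the thread.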
{
"msg_contents": "Hi,\n\nOn Tue, 2 Apr 2024 at 10:23, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> v4 is rebased on top of v14 streaming read API changes.\n\nStreaming API has been committed but the committed version has a minor\nchange, the read_stream_begin_relation function takes Relation instead\nof BufferManagerRelation now. So, here is a v5 which addresses this\nchange.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Wed, 3 Apr 2024 13:31:00 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On 03/04/2024 13:31, Nazir Bilal Yavuz wrote:\n> Streaming API has been committed but the committed version has a minor\n> change, the read_stream_begin_relation function takes Relation instead\n> of BufferManagerRelation now. So, here is a v5 which addresses this\n> change.\n\nI'm getting a repeatable segfault / assertion failure with this:\n\npostgres=# CREATE TABLE tengiga (i int, filler text) with (fillfactor=10);\nCREATE TABLE\npostgres=# insert into tengiga select g, repeat('x', 900) from \ngenerate_series(1, 1400000) g;\nINSERT 0 1400000\npostgres=# set default_statistics_target = 10; ANALYZE tengiga;\nSET\nANALYZE\npostgres=# set default_statistics_target = 100; ANALYZE tengiga;\nSET\nANALYZE\npostgres=# set default_statistics_target =1000; ANALYZE tengiga;\nSET\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\n\nTRAP: failed Assert(\"BufferIsValid(hscan->rs_cbuf)\"), File: \n\"heapam_handler.c\", Line: 1079, PID: 262232\npostgres: heikki postgres [local] \nANALYZE(ExceptionalCondition+0xa8)[0x56488a0de9d8]\npostgres: heikki postgres [local] \nANALYZE(heapam_scan_analyze_next_block+0x63)[0x5648899ece34]\npostgres: heikki postgres [local] ANALYZE(+0x2d3f34)[0x564889b6af34]\npostgres: heikki postgres [local] ANALYZE(+0x2d2a3a)[0x564889b69a3a]\npostgres: heikki postgres [local] ANALYZE(analyze_rel+0x33e)[0x564889b68fa9]\npostgres: heikki postgres [local] ANALYZE(vacuum+0x4b3)[0x564889c2dcc0]\npostgres: heikki postgres [local] ANALYZE(ExecVacuum+0xd6f)[0x564889c2d7fe]\npostgres: heikki postgres [local] \nANALYZE(standard_ProcessUtility+0x901)[0x564889f0b8b9]\npostgres: heikki postgres [local] \nANALYZE(ProcessUtility+0x136)[0x564889f0afb1]\npostgres: heikki postgres [local] ANALYZE(+0x6728c8)[0x564889f098c8]\npostgres: heikki postgres [local] ANALYZE(+0x672b3b)[0x564889f09b3b]\npostgres: heikki postgres [local] ANALYZE(PortalRun+0x320)[0x564889f09015]\npostgres: heikki postgres [local] ANALYZE(+0x66b2c6)[0x564889f022c6]\npostgres: heikki postgres [local] \nANALYZE(PostgresMain+0x80c)[0x564889f06fd7]\npostgres: heikki postgres [local] ANALYZE(+0x667876)[0x564889efe876]\npostgres: heikki postgres [local] \nANALYZE(postmaster_child_launch+0xe6)[0x564889e1f4b3]\npostgres: heikki postgres [local] ANALYZE(+0x58e68e)[0x564889e2568e]\npostgres: heikki postgres [local] ANALYZE(+0x58b7f0)[0x564889e227f0]\npostgres: heikki postgres [local] \nANALYZE(PostmasterMain+0x152b)[0x564889e2214d]\npostgres: heikki postgres [local] ANALYZE(+0x4444b4)[0x564889cdb4b4]\n/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7f7d83b6724a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f7d83b67305]\npostgres: heikki postgres [local] ANALYZE(_start+0x21)[0x564889971a61]\n2024-04-03 20:15:49.157 EEST [262101] LOG: server process (PID 262232) \nwas terminated by signal 6: Aborted\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 3 Apr 2024 20:17:31 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "Hi Jakub,\n\nThank you for looking into this and doing a performance analysis.\n\nOn Wed, 3 Apr 2024 at 11:42, Jakub Wartak <[email protected]> wrote:\n>\n> On Tue, Apr 2, 2024 at 9:24 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> [..]\n> > v4 is rebased on top of v14 streaming read API changes.\n>\n> Hi Nazir, so with streaming API committed, I gave a try to this patch.\n> With autovacuum=off and 30GB table on NVMe (with standard readahead of\n> 256kb and ext4, Debian 12, kernel 6.1.0, shared_buffers = 128MB\n> default) created using: create table t as select repeat('a', 100) || i\n> || repeat('b', 500) as filler from generate_series(1, 45000000) as i;\n>\n> on master, effect of mainteance_io_concurency [default 10] is like\n> that (when resetting the fs cache after each ANALYZE):\n>\n> m_io_c = 0:\n> Time: 3137.914 ms (00:03.138)\n> Time: 3094.540 ms (00:03.095)\n> Time: 3452.513 ms (00:03.453)\n>\n> m_io_c = 1:\n> Time: 2972.751 ms (00:02.973)\n> Time: 2939.551 ms (00:02.940)\n> Time: 2904.428 ms (00:02.904)\n>\n> m_io_c = 2:\n> Time: 1580.260 ms (00:01.580)\n> Time: 1572.132 ms (00:01.572)\n> Time: 1558.334 ms (00:01.558)\n>\n> m_io_c = 4:\n> Time: 938.304 ms\n> Time: 931.772 ms\n> Time: 920.044 ms\n>\n> m_io_c = 8:\n> Time: 666.025 ms\n> Time: 660.241 ms\n> Time: 648.848 ms\n>\n> m_io_c = 16:\n> Time: 542.450 ms\n> Time: 561.155 ms\n> Time: 539.683 ms\n>\n> m_io_c = 32:\n> Time: 538.487 ms\n> Time: 541.705 ms\n> Time: 538.101 ms\n>\n> with patch applied:\n>\n> m_io_c = 0:\n> Time: 3106.469 ms (00:03.106)\n> Time: 3140.343 ms (00:03.140)\n> Time: 3044.133 ms (00:03.044)\n>\n> m_io_c = 1:\n> Time: 2959.817 ms (00:02.960)\n> Time: 2920.265 ms (00:02.920)\n> Time: 2911.745 ms (00:02.912)\n>\n> m_io_c = 2:\n> Time: 1581.912 ms (00:01.582)\n> Time: 1561.444 ms (00:01.561)\n> Time: 1558.251 ms (00:01.558)\n>\n> m_io_c = 4:\n> Time: 908.116 ms\n> Time: 901.245 ms\n> Time: 901.071 ms\n>\n> m_io_c = 8:\n> Time: 619.870 ms\n> Time: 620.327 ms\n> Time: 614.266 ms\n>\n> m_io_c = 16:\n> Time: 529.885 ms\n> Time: 526.958 ms\n> Time: 528.474 ms\n>\n> m_io_c = 32:\n> Time: 521.185 ms\n> Time: 520.713 ms\n> Time: 517.729 ms\n>\n> No difference to me, which seems to be good. I've double checked and\n> patch used the new way\n>\n> acquire_sample_rows -> heapam_scan_analyze_next_block ->\n> ReadBufferExtended -> ReadBuffer_common (inlined) -> WaitReadBuffers\n> -> mdreadv -> FileReadV -> pg_preadv (inlined)\n> acquire_sample_rows -> heapam_scan_analyze_next_block ->\n> ReadBufferExtended -> ReadBuffer_common (inlined) -> StartReadBuffer\n> -> ...\n>\n> I gave also io_combine_limit to 32 (max, 256kB) a try and got those\n> slightly better results:\n>\n> [..]\n> m_io_c = 16:\n> Time: 494.599 ms\n> Time: 496.345 ms\n> Time: 973.500 ms\n>\n> m_io_c = 32:\n> Time: 461.031 ms\n> Time: 449.037 ms\n> Time: 443.375 ms\n>\n> and that (last one) apparently was able to push it to ~50-60k still\n> random IOPS range, the rareq-sz was still ~8 (9.9) kB as analyze was\n> still reading random , so I assume no merging was done:\n>\n> Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz\n> w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s\n> drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util\n> nvme0n1 61212.00 591.82 0.00 0.00 0.10 9.90\n> 2.00 0.02 0.00 0.00 0.00 12.00 0.00 0.00\n> 0.00 0.00 0.00 0.00 0.00 0.00 6.28 85.20\n>\n> So in short it looks good to me.\n\nMy results are similar to yours, also I realized a bug while working\non your benchmarking cases. 
I will share the cause and the fix soon.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Wed, 3 Apr 2024 21:59:32 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "Hi,\n\nThank you for looking into this!\n\nOn Wed, 3 Apr 2024 at 20:17, Heikki Linnakangas <[email protected]> wrote:\n>\n> On 03/04/2024 13:31, Nazir Bilal Yavuz wrote:\n> > Streaming API has been committed but the committed version has a minor\n> > change, the read_stream_begin_relation function takes Relation instead\n> > of BufferManagerRelation now. So, here is a v5 which addresses this\n> > change.\n>\n> I'm getting a repeatable segfault / assertion failure with this:\n>\n> postgres=# CREATE TABLE tengiga (i int, filler text) with (fillfactor=10);\n> CREATE TABLE\n> postgres=# insert into tengiga select g, repeat('x', 900) from\n> generate_series(1, 1400000) g;\n> INSERT 0 1400000\n> postgres=# set default_statistics_target = 10; ANALYZE tengiga;\n> SET\n> ANALYZE\n> postgres=# set default_statistics_target = 100; ANALYZE tengiga;\n> SET\n> ANALYZE\n> postgres=# set default_statistics_target =1000; ANALYZE tengiga;\n> SET\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n>\n> TRAP: failed Assert(\"BufferIsValid(hscan->rs_cbuf)\"), File:\n> \"heapam_handler.c\", Line: 1079, PID: 262232\n> postgres: heikki postgres [local]\n> ANALYZE(ExceptionalCondition+0xa8)[0x56488a0de9d8]\n> postgres: heikki postgres [local]\n> ANALYZE(heapam_scan_analyze_next_block+0x63)[0x5648899ece34]\n> postgres: heikki postgres [local] ANALYZE(+0x2d3f34)[0x564889b6af34]\n> postgres: heikki postgres [local] ANALYZE(+0x2d2a3a)[0x564889b69a3a]\n> postgres: heikki postgres [local] ANALYZE(analyze_rel+0x33e)[0x564889b68fa9]\n> postgres: heikki postgres [local] ANALYZE(vacuum+0x4b3)[0x564889c2dcc0]\n> postgres: heikki postgres [local] ANALYZE(ExecVacuum+0xd6f)[0x564889c2d7fe]\n> postgres: heikki postgres [local]\n> ANALYZE(standard_ProcessUtility+0x901)[0x564889f0b8b9]\n> postgres: heikki postgres [local]\n> ANALYZE(ProcessUtility+0x136)[0x564889f0afb1]\n> postgres: heikki postgres [local] ANALYZE(+0x6728c8)[0x564889f098c8]\n> postgres: heikki postgres [local] ANALYZE(+0x672b3b)[0x564889f09b3b]\n> postgres: heikki postgres [local] ANALYZE(PortalRun+0x320)[0x564889f09015]\n> postgres: heikki postgres [local] ANALYZE(+0x66b2c6)[0x564889f022c6]\n> postgres: heikki postgres [local]\n> ANALYZE(PostgresMain+0x80c)[0x564889f06fd7]\n> postgres: heikki postgres [local] ANALYZE(+0x667876)[0x564889efe876]\n> postgres: heikki postgres [local]\n> ANALYZE(postmaster_child_launch+0xe6)[0x564889e1f4b3]\n> postgres: heikki postgres [local] ANALYZE(+0x58e68e)[0x564889e2568e]\n> postgres: heikki postgres [local] ANALYZE(+0x58b7f0)[0x564889e227f0]\n> postgres: heikki postgres [local]\n> ANALYZE(PostmasterMain+0x152b)[0x564889e2214d]\n> postgres: heikki postgres [local] ANALYZE(+0x4444b4)[0x564889cdb4b4]\n> /lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7f7d83b6724a]\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f7d83b67305]\n> postgres: heikki postgres [local] ANALYZE(_start+0x21)[0x564889971a61]\n> 2024-04-03 20:15:49.157 EEST [262101] LOG: server process (PID 262232)\n> was terminated by signal 6: Aborted\n\nI realized the same error while working on Jakub's benchmarking results.\n\nCause: I was using the nblocks variable to check how many blocks will\nbe returned from the streaming API. 
But I realized that sometimes the\nnumber returned from BlockSampler_Init() is not equal to the number of\nblocks that BlockSampler_Next() will return as BlockSampling algorithm\ndecides how many blocks to return on the fly by using some random\nseeds.\n\nThere are a couple of solutions I thought of:\n\n1- Use BlockSampler_HasMore() instead of nblocks in the main loop in\nthe acquire_sample_rows():\n\nStreaming API uses this function to prefetch block numbers.\nBlockSampler_HasMore() will reach to the end first as it is used while\nprefetching, so it will start to return false while there are still\nbuffers to return from the streaming API. That will cause some buffers\nat the end to not be processed.\n\n2- Expose something (function, variable etc.) from the streaming API\nto understand if the read is finished and there is no buffer to\nreturn:\n\nI think this works but I am not sure if the streaming API allows\nsomething like that.\n\n3- Check every buffer returned from the streaming API, if it is\ninvalid stop the main loop in the acquire_sample_rows():\n\nThis solves the problem but there will be two if checks for each\nbuffer returned,\n- in heapam_scan_analyze_next_block() to check if the returned buffer is invalid\n- to break main loop in acquire_sample_rows() if\nheapam_scan_analyze_next_block() returns false\nOne of the if cases can be bypassed by moving\nheapam_scan_analyze_next_block()'s code to the main loop in the\nacquire_sample_rows().\n\nI implemented the third solution, here is v6.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Wed, 3 Apr 2024 22:25:01 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Wed, Apr 03, 2024 at 10:25:01PM +0300, Nazir Bilal Yavuz wrote:\n>\n> I realized the same error while working on Jakub's benchmarking results.\n> \n> Cause: I was using the nblocks variable to check how many blocks will\n> be returned from the streaming API. But I realized that sometimes the\n> number returned from BlockSampler_Init() is not equal to the number of\n> blocks that BlockSampler_Next() will return as BlockSampling algorithm\n> decides how many blocks to return on the fly by using some random\n> seeds.\n> \n> There are a couple of solutions I thought of:\n> \n> 1- Use BlockSampler_HasMore() instead of nblocks in the main loop in\n> the acquire_sample_rows():\n> \n> Streaming API uses this function to prefetch block numbers.\n> BlockSampler_HasMore() will reach to the end first as it is used while\n> prefetching, so it will start to return false while there are still\n> buffers to return from the streaming API. That will cause some buffers\n> at the end to not be processed.\n> \n> 2- Expose something (function, variable etc.) from the streaming API\n> to understand if the read is finished and there is no buffer to\n> return:\n> \n> I think this works but I am not sure if the streaming API allows\n> something like that.\n> \n> 3- Check every buffer returned from the streaming API, if it is\n> invalid stop the main loop in the acquire_sample_rows():\n> \n> This solves the problem but there will be two if checks for each\n> buffer returned,\n> - in heapam_scan_analyze_next_block() to check if the returned buffer is invalid\n> - to break main loop in acquire_sample_rows() if\n> heapam_scan_analyze_next_block() returns false\n> One of the if cases can be bypassed by moving\n> heapam_scan_analyze_next_block()'s code to the main loop in the\n> acquire_sample_rows().\n> \n> I implemented the third solution, here is v6.\n\nI've reviewed the patches inline below and attached a patch that has\nsome of my ideas on top of your patch.\n\n> From 8d396a42186325f920d5a05e7092d8e1b66f3cdf Mon Sep 17 00:00:00 2001\n> From: Nazir Bilal Yavuz <[email protected]>\n> Date: Wed, 3 Apr 2024 15:14:15 +0300\n> Subject: [PATCH v6] Use streaming read API in ANALYZE\n> \n> ANALYZE command gets random tuples using BlockSampler algorithm. 
Use\n> streaming reads to get these tuples by using BlockSampler algorithm in\n> streaming read API prefetch logic.\n> ---\n> src/include/access/heapam.h | 6 +-\n> src/backend/access/heap/heapam_handler.c | 22 +++---\n> src/backend/commands/analyze.c | 85 ++++++++----------------\n> 3 files changed, 42 insertions(+), 71 deletions(-)\n> \n> diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h\n> index a307fb5f245..633caee9d95 100644\n> --- a/src/include/access/heapam.h\n> +++ b/src/include/access/heapam.h\n> @@ -25,6 +25,7 @@\n> #include \"storage/bufpage.h\"\n> #include \"storage/dsm.h\"\n> #include \"storage/lockdefs.h\"\n> +#include \"storage/read_stream.h\"\n> #include \"storage/shm_toc.h\"\n> #include \"utils/relcache.h\"\n> #include \"utils/snapshot.h\"\n> @@ -388,9 +389,8 @@ extern bool HeapTupleIsSurelyDead(HeapTuple htup,\n> \t\t\t\t\t\t\t\t struct GlobalVisState *vistest);\n> \n> /* in heap/heapam_handler.c*/\n> -extern void heapam_scan_analyze_next_block(TableScanDesc scan,\n> -\t\t\t\t\t\t\t\t\t\t BlockNumber blockno,\n> -\t\t\t\t\t\t\t\t\t\t BufferAccessStrategy bstrategy);\n> +extern bool heapam_scan_analyze_next_block(TableScanDesc scan,\n> +\t\t\t\t\t\t\t\t\t\t ReadStream *stream);\n> extern bool heapam_scan_analyze_next_tuple(TableScanDesc scan,\n> \t\t\t\t\t\t\t\t\t\t TransactionId OldestXmin,\n> \t\t\t\t\t\t\t\t\t\t double *liverows, double *deadrows,\n> diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c\n> index 0952d4a98eb..d83fbbe6af3 100644\n> --- a/src/backend/access/heap/heapam_handler.c\n> +++ b/src/backend/access/heap/heapam_handler.c\n> @@ -1054,16 +1054,16 @@ heapam_relation_copy_for_cluster(Relation OldHeap, Relation NewHeap,\n> }\n> \n> /*\n> - * Prepare to analyze block `blockno` of `scan`. The scan has been started\n> - * with SO_TYPE_ANALYZE option.\n> + * Prepare to analyze block returned from streaming object. If the block returned\n> + * from streaming object is valid, true is returned; otherwise false is returned.\n> + * The scan has been started with SO_TYPE_ANALYZE option.\n> *\n> * This routine holds a buffer pin and lock on the heap page. They are held\n> * until heapam_scan_analyze_next_tuple() returns false. That is until all the\n> * items of the heap page are analyzed.\n> */\n> -void\n> -heapam_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno,\n> -\t\t\t\t\t\t\t BufferAccessStrategy bstrategy)\n> +bool\n> +heapam_scan_analyze_next_block(TableScanDesc scan, ReadStream *stream)\n> {\n> \tHeapScanDesc hscan = (HeapScanDesc) scan;\n> \n> @@ -1076,11 +1076,15 @@ heapam_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno,\n> \t * doing much work per tuple, the extra lock traffic is probably better\n> \t * avoided.\n\nPersonally I think heapam_scan_analyze_next_block() should be inlined.\nIt only has a few lines. I would find it clearer inline. 
At the least,\nthere is no reason for it (or heapam_scan_analyze_next_tuple()) to take\na TableScanDesc instead of a HeapScanDesc.\n\n> \t */\n> -\thscan->rs_cblock = blockno;\n> -\thscan->rs_cindex = FirstOffsetNumber;\n> -\thscan->rs_cbuf = ReadBufferExtended(scan->rs_rd, MAIN_FORKNUM,\n> -\t\t\t\t\t\t\t\t\t\tblockno, RBM_NORMAL, bstrategy);\n> +\thscan->rs_cbuf = read_stream_next_buffer(stream, NULL);\n> +\tif (hscan->rs_cbuf == InvalidBuffer)\n> +\t\treturn false;\n> +\n> \tLockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE);\n> +\n> +\thscan->rs_cblock = BufferGetBlockNumber(hscan->rs_cbuf);\n> +\thscan->rs_cindex = FirstOffsetNumber;\n> +\treturn true;\n> }\n\n> /*\n> diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c\n> index 2fb39f3ede1..764520d5aa2 100644\n> --- a/src/backend/commands/analyze.c\n> +++ b/src/backend/commands/analyze.c\n> @@ -1102,6 +1102,20 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)\n> \treturn stats;\n> }\n> \n> +/*\n> + * Prefetch callback function to get next block number while using\n> + * BlockSampling algorithm\n> + */\n> +static BlockNumber\n> +block_sampling_streaming_read_next(ReadStream *stream,\n> +\t\t\t\t\t\t\t\t void *user_data,\n> +\t\t\t\t\t\t\t\t void *per_buffer_data)\n> +{\n> +\tBlockSamplerData *bs = user_data;\n> +\n> +\treturn BlockSampler_HasMore(bs) ? BlockSampler_Next(bs) : InvalidBlockNumber;\n\nI don't see the point of BlockSampler_HasMore() anymore. I removed it in\nthe attached and made BlockSampler_Next() return InvalidBlockNumber\nunder the same conditions. Is there a reason not to do this? There\naren't other callers. If the BlockSampler_Next() wasn't part of an API,\nwe could just make it the streaming read callback, but that might be\nweird as it is now.\n\nThat and my other ideas in attached. Let me know what you think.\n\n- Melanie",
"msg_date": "Wed, 3 Apr 2024 16:44:20 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
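A minimal sketch of how the pieces in the message above fit together inside acquire_sample_rows(): the block sampler is handed to the stream as callback state, and the stream hands back pinned buffers until the sample is exhausted. The wrapper name sample_blocks_with_stream, the READ_STREAM_MAINTENANCE flag choice and the surrounding structure are assumptions for illustration only; the callback body and the heapam_scan_analyze_next_block() shape come from the v6 patch quoted above.

#include "postgres.h"

#include "access/heapam.h"
#include "access/relscan.h"
#include "common/pg_prng.h"
#include "storage/bufmgr.h"
#include "storage/read_stream.h"
#include "utils/sampling.h"

/* Callback from the v6 patch: feed sampled block numbers to the stream */
static BlockNumber
block_sampling_streaming_read_next(ReadStream *stream,
                                   void *user_data,
                                   void *per_buffer_data)
{
    BlockSamplerData *bs = user_data;

    return BlockSampler_HasMore(bs) ? BlockSampler_Next(bs) : InvalidBlockNumber;
}

/*
 * Sketch of the sampling loop: the stream prefetches and pins the blocks
 * chosen by the sampler, and the scan consumes them one page at a time.
 */
static void
sample_blocks_with_stream(TableScanDesc scan, Relation onerel,
                          BlockNumber totalblocks, int targrows,
                          BufferAccessStrategy bstrategy)
{
    BlockSamplerData bs;
    ReadStream *stream;

    BlockSampler_Init(&bs, totalblocks, targrows,
                      pg_prng_uint32(&pg_global_prng_state));

    stream = read_stream_begin_relation(READ_STREAM_MAINTENANCE,
                                        bstrategy,
                                        onerel,
                                        MAIN_FORKNUM,
                                        block_sampling_streaming_read_next,
                                        &bs,
                                        0);

    /* Returns false once read_stream_next_buffer() yields InvalidBuffer */
    while (heapam_scan_analyze_next_block(scan, stream))
    {
        /* ... heapam_scan_analyze_next_tuple() over the pinned page ... */
    }

    read_stream_end(stream);
}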
{
"msg_contents": "Hi,\n\nOn Wed, 3 Apr 2024 at 23:44, Melanie Plageman <[email protected]> wrote:\n>\n>\n> I've reviewed the patches inline below and attached a patch that has\n> some of my ideas on top of your patch.\n\nThank you!\n\n>\n> > From 8d396a42186325f920d5a05e7092d8e1b66f3cdf Mon Sep 17 00:00:00 2001\n> > From: Nazir Bilal Yavuz <[email protected]>\n> > Date: Wed, 3 Apr 2024 15:14:15 +0300\n> > Subject: [PATCH v6] Use streaming read API in ANALYZE\n> >\n> > ANALYZE command gets random tuples using BlockSampler algorithm. Use\n> > streaming reads to get these tuples by using BlockSampler algorithm in\n> > streaming read API prefetch logic.\n> > ---\n> > src/include/access/heapam.h | 6 +-\n> > src/backend/access/heap/heapam_handler.c | 22 +++---\n> > src/backend/commands/analyze.c | 85 ++++++++----------------\n> > 3 files changed, 42 insertions(+), 71 deletions(-)\n> >\n> > diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h\n> > index a307fb5f245..633caee9d95 100644\n> > --- a/src/include/access/heapam.h\n> > +++ b/src/include/access/heapam.h\n> > @@ -25,6 +25,7 @@\n> > #include \"storage/bufpage.h\"\n> > #include \"storage/dsm.h\"\n> > #include \"storage/lockdefs.h\"\n> > +#include \"storage/read_stream.h\"\n> > #include \"storage/shm_toc.h\"\n> > #include \"utils/relcache.h\"\n> > #include \"utils/snapshot.h\"\n> > @@ -388,9 +389,8 @@ extern bool HeapTupleIsSurelyDead(HeapTuple htup,\n> > struct GlobalVisState *vistest);\n> >\n> > /* in heap/heapam_handler.c*/\n> > -extern void heapam_scan_analyze_next_block(TableScanDesc scan,\n> > - BlockNumber blockno,\n> > - BufferAccessStrategy bstrategy);\n> > +extern bool heapam_scan_analyze_next_block(TableScanDesc scan,\n> > + ReadStream *stream);\n> > extern bool heapam_scan_analyze_next_tuple(TableScanDesc scan,\n> > TransactionId OldestXmin,\n> > double *liverows, double *deadrows,\n> > diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c\n> > index 0952d4a98eb..d83fbbe6af3 100644\n> > --- a/src/backend/access/heap/heapam_handler.c\n> > +++ b/src/backend/access/heap/heapam_handler.c\n> > @@ -1054,16 +1054,16 @@ heapam_relation_copy_for_cluster(Relation OldHeap, Relation NewHeap,\n> > }\n> >\n> > /*\n> > - * Prepare to analyze block `blockno` of `scan`. The scan has been started\n> > - * with SO_TYPE_ANALYZE option.\n> > + * Prepare to analyze block returned from streaming object. If the block returned\n> > + * from streaming object is valid, true is returned; otherwise false is returned.\n> > + * The scan has been started with SO_TYPE_ANALYZE option.\n> > *\n> > * This routine holds a buffer pin and lock on the heap page. They are held\n> > * until heapam_scan_analyze_next_tuple() returns false. That is until all the\n> > * items of the heap page are analyzed.\n> > */\n> > -void\n> > -heapam_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno,\n> > - BufferAccessStrategy bstrategy)\n> > +bool\n> > +heapam_scan_analyze_next_block(TableScanDesc scan, ReadStream *stream)\n> > {\n> > HeapScanDesc hscan = (HeapScanDesc) scan;\n> >\n> > @@ -1076,11 +1076,15 @@ heapam_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno,\n> > * doing much work per tuple, the extra lock traffic is probably better\n> > * avoided.\n>\n> Personally I think heapam_scan_analyze_next_block() should be inlined.\n> It only has a few lines. I would find it clearer inline. 
At the least,\n> there is no reason for it (or heapam_scan_analyze_next_tuple()) to take\n> a TableScanDesc instead of a HeapScanDesc.\n\nI agree.\n\n>\n> > */\n> > - hscan->rs_cblock = blockno;\n> > - hscan->rs_cindex = FirstOffsetNumber;\n> > - hscan->rs_cbuf = ReadBufferExtended(scan->rs_rd, MAIN_FORKNUM,\n> > - blockno, RBM_NORMAL, bstrategy);\n> > + hscan->rs_cbuf = read_stream_next_buffer(stream, NULL);\n> > + if (hscan->rs_cbuf == InvalidBuffer)\n> > + return false;\n> > +\n> > LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE);\n> > +\n> > + hscan->rs_cblock = BufferGetBlockNumber(hscan->rs_cbuf);\n> > + hscan->rs_cindex = FirstOffsetNumber;\n> > + return true;\n> > }\n>\n> > /*\n> > diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c\n> > index 2fb39f3ede1..764520d5aa2 100644\n> > --- a/src/backend/commands/analyze.c\n> > +++ b/src/backend/commands/analyze.c\n> > @@ -1102,6 +1102,20 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)\n> > return stats;\n> > }\n> >\n> > +/*\n> > + * Prefetch callback function to get next block number while using\n> > + * BlockSampling algorithm\n> > + */\n> > +static BlockNumber\n> > +block_sampling_streaming_read_next(ReadStream *stream,\n> > + void *user_data,\n> > + void *per_buffer_data)\n> > +{\n> > + BlockSamplerData *bs = user_data;\n> > +\n> > + return BlockSampler_HasMore(bs) ? BlockSampler_Next(bs) : InvalidBlockNumber;\n>\n> I don't see the point of BlockSampler_HasMore() anymore. I removed it in\n> the attached and made BlockSampler_Next() return InvalidBlockNumber\n> under the same conditions. Is there a reason not to do this? There\n> aren't other callers. If the BlockSampler_Next() wasn't part of an API,\n> we could just make it the streaming read callback, but that might be\n> weird as it is now.\n\nI agree. There is no reason to have BlockSampler_HasMore() after\nstreaming read API changes.\n\n> That and my other ideas in attached. Let me know what you think.\n\nI agree with your changes but I am not sure if others agree with all\nthe changes you have proposed. So, I didn't merge 0001 and your ideas\nyet, instead I wrote a commit message, added some comments, changed ->\n'if (bs->t >= bs->N || bs->m >= bs->n)' to 'if (K <= 0 || k <= 0)' and\nattached it as 0002.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Thu, 4 Apr 2024 14:03:30 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
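A sketch of the simplification being agreed on here, assuming BlockSampler_Next() is changed to return InvalidBlockNumber once the sample is exhausted (the 'if (K <= 0 || k <= 0)' guard mentioned above). With that change there are no remaining callers of BlockSampler_HasMore(), and the read stream callback collapses to a thin wrapper:

#include "postgres.h"

#include "storage/read_stream.h"
#include "utils/sampling.h"

static BlockNumber
block_sampling_streaming_read_next(ReadStream *stream,
                                   void *user_data,
                                   void *per_buffer_data)
{
    BlockSamplerData *bs = user_data;

    /* InvalidBlockNumber tells the stream there is nothing left to read */
    return BlockSampler_Next(bs);
}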
{
"msg_contents": "On Thu, Apr 04, 2024 at 02:03:30PM +0300, Nazir Bilal Yavuz wrote:\n> \n> On Wed, 3 Apr 2024 at 23:44, Melanie Plageman <[email protected]> wrote:\n> >\n> > I don't see the point of BlockSampler_HasMore() anymore. I removed it in\n> > the attached and made BlockSampler_Next() return InvalidBlockNumber\n> > under the same conditions. Is there a reason not to do this? There\n> > aren't other callers. If the BlockSampler_Next() wasn't part of an API,\n> > we could just make it the streaming read callback, but that might be\n> > weird as it is now.\n> \n> I agree. There is no reason to have BlockSampler_HasMore() after\n> streaming read API changes.\n> \n> > That and my other ideas in attached. Let me know what you think.\n> \n> I agree with your changes but I am not sure if others agree with all\n> the changes you have proposed. So, I didn't merge 0001 and your ideas\n> yet, instead I wrote a commit message, added some comments, changed ->\n> 'if (bs->t >= bs->N || bs->m >= bs->n)' to 'if (K <= 0 || k <= 0)' and\n> attached it as 0002.\n\nI couldn't quite let go of those changes to acquire_sample_rows(), so\nattached v9 0001 implements them as a preliminary patch before your\nanalyze streaming read user. I inlined heapam_scan_analyze_next_block()\nentirely and made heapam_scan_analyze_next_tuple() a static function in\ncommands/analyze.c (and tweaked the name).\n\nI made a few tweaks to your patch since it is on top of those changes\ninstead of preceding them. Then 0003 is removing BlockSampler_HasMore()\nsince it doesn't make sense to remove it before the streaming read user\nwas added.\n\n- Melanie",
"msg_date": "Sun, 7 Apr 2024 15:57:08 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
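The inlining described above boils the consumer side down to a loop that pulls pinned buffers straight from the stream. This is a simplified sketch of that shape rather than the patch text; the per-page tuple visits and the reservoir logic are elided.

#include "postgres.h"

#include "storage/bufmgr.h"
#include "storage/read_stream.h"

static void
consume_sampled_blocks(ReadStream *stream)
{
    Buffer      buf;

    while ((buf = read_stream_next_buffer(stream, NULL)) != InvalidBuffer)
    {
        LockBuffer(buf, BUFFER_LOCK_SHARE);

        /* ... examine the tuples on BufferGetBlockNumber(buf) ... */

        UnlockReleaseBuffer(buf);
    }
}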
{
"msg_contents": "On Sun, Apr 7, 2024 at 3:57 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Thu, Apr 04, 2024 at 02:03:30PM +0300, Nazir Bilal Yavuz wrote:\n> >\n> > On Wed, 3 Apr 2024 at 23:44, Melanie Plageman <[email protected]> wrote:\n> > >\n> > > I don't see the point of BlockSampler_HasMore() anymore. I removed it in\n> > > the attached and made BlockSampler_Next() return InvalidBlockNumber\n> > > under the same conditions. Is there a reason not to do this? There\n> > > aren't other callers. If the BlockSampler_Next() wasn't part of an API,\n> > > we could just make it the streaming read callback, but that might be\n> > > weird as it is now.\n> >\n> > I agree. There is no reason to have BlockSampler_HasMore() after\n> > streaming read API changes.\n> >\n> > > That and my other ideas in attached. Let me know what you think.\n> >\n> > I agree with your changes but I am not sure if others agree with all\n> > the changes you have proposed. So, I didn't merge 0001 and your ideas\n> > yet, instead I wrote a commit message, added some comments, changed ->\n> > 'if (bs->t >= bs->N || bs->m >= bs->n)' to 'if (K <= 0 || k <= 0)' and\n> > attached it as 0002.\n>\n> I couldn't quite let go of those changes to acquire_sample_rows(), so\n> attached v9 0001 implements them as a preliminary patch before your\n> analyze streaming read user. I inlined heapam_scan_analyze_next_block()\n> entirely and made heapam_scan_analyze_next_tuple() a static function in\n> commands/analyze.c (and tweaked the name).\n>\n> I made a few tweaks to your patch since it is on top of those changes\n> instead of preceding them. Then 0003 is removing BlockSampler_HasMore()\n> since it doesn't make sense to remove it before the streaming read user\n> was added.\n\nI realized there were a few outdated comments. Fixed in attached v10.\n\n- Melanie",
"msg_date": "Sun, 7 Apr 2024 16:59:26 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-07 16:59:26 -0400, Melanie Plageman wrote:\n> From 1dc2343661f3edb3b1bc4307afb0e956397eb76c Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Sun, 7 Apr 2024 14:55:22 -0400\n> Subject: [PATCH v10 1/3] Make heapam_scan_analyze_next_[tuple|block] static.\n> \n> 27bc1772fc81 removed the table AM callbacks scan_analyze_next_block and\n> scan_analzye_next_tuple -- leaving their heap AM implementations only\n> called by acquire_sample_rows().\n\nUgh, I don't think 27bc1772fc81 makes much sense. But that's unrelated to this\nthread. I did raise that separately\nhttps://www.postgresql.org/message-id/20240407214001.jgpg5q3yv33ve6y3%40awork3.anarazel.de\n\nUnless I seriously missed something, I see no alternative to reverting that\ncommit.\n\n\n> @@ -1206,11 +1357,13 @@ acquire_sample_rows(Relation onerel, int elevel,\n> \t\t\t\tbreak;\n> \n> \t\t\tprefetch_block = BlockSampler_Next(&prefetch_bs);\n> -\t\t\tPrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, prefetch_block);\n> +\t\t\tPrefetchBuffer(scan->rs_base.rs_rd, MAIN_FORKNUM, prefetch_block);\n> \t\t}\n> \t}\n> #endif\n> \n> +\tscan->rs_cbuf = InvalidBuffer;\n> +\n> \t/* Outer loop over blocks to sample */\n> \twhile (BlockSampler_HasMore(&bs))\n> \t{\n\nI don't think it's good to move a lot of code *and* change how it is\nstructured in the same commit. Makes it much harder to actually see changes /\nmakes git blame harder to use / etc.\n\n\n\n> From 90d115c2401567be65bcf64393a6d3b39286779e Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Sun, 7 Apr 2024 15:28:32 -0400\n> Subject: [PATCH v10 2/3] Use streaming read API in ANALYZE\n>\n> The ANALYZE command prefetches and reads sample blocks chosen by a\n> BlockSampler algorithm. Instead of calling Prefetch|ReadBuffer() for\n> each block, ANALYZE now uses the streaming API introduced in b5a9b18cd0.\n>\n> Author: Nazir Bilal Yavuz\n> Reviewed-by: Melanie Plageman\n> Discussion: https://postgr.es/m/flat/CAN55FZ0UhXqk9v3y-zW_fp4-WCp43V8y0A72xPmLkOM%2B6M%2BmJg%40mail.gmail.com\n> ---\n> src/backend/commands/analyze.c | 89 ++++++++++------------------------\n> 1 file changed, 26 insertions(+), 63 deletions(-)\n\nThat's a very nice demonstration of how this makes good prefetching easier...\n\n\n\n\n> From 862b7ac81cdafcda7b525e02721da14e46265509 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Sun, 7 Apr 2024 15:38:41 -0400\n> Subject: [PATCH v10 3/3] Obsolete BlockSampler_HasMore()\n> \n> A previous commit stopped using BlockSampler_HasMore() for flow control\n> in acquire_sample_rows(). There seems little use now for\n> BlockSampler_HasMore(). It should be sufficient to return\n> InvalidBlockNumber from BlockSampler_Next() when BlockSample_HasMore()\n> would have returned false. Remove BlockSampler_HasMore().\n> \n> Author: Melanie Plageman, Nazir Bilal Yavuz\n> Discussion: https://postgr.es/m/flat/CAN55FZ0UhXqk9v3y-zW_fp4-WCp43V8y0A72xPmLkOM%2B6M%2BmJg%40mail.gmail.com\n\nThe justification here seems somewhat odd. Sure, the previous commit stopped\nusing BlockSampler_HasMore in acquire_sample_rows - but only because it was\nmoved to block_sampling_streaming_read_next()?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2024 15:00:00 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Sun, Apr 07, 2024 at 03:00:00PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-04-07 16:59:26 -0400, Melanie Plageman wrote:\n> > From 1dc2343661f3edb3b1bc4307afb0e956397eb76c Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Sun, 7 Apr 2024 14:55:22 -0400\n> > Subject: [PATCH v10 1/3] Make heapam_scan_analyze_next_[tuple|block] static.\n> > \n> > 27bc1772fc81 removed the table AM callbacks scan_analyze_next_block and\n> > scan_analzye_next_tuple -- leaving their heap AM implementations only\n> > called by acquire_sample_rows().\n> \n> Ugh, I don't think 27bc1772fc81 makes much sense. But that's unrelated to this\n> thread. I did raise that separately\n> https://www.postgresql.org/message-id/20240407214001.jgpg5q3yv33ve6y3%40awork3.anarazel.de\n> \n> Unless I seriously missed something, I see no alternative to reverting that\n> commit.\n\nNoted. I'll give up on this refactor then. Lots of churn for no gain.\nAttached v11 is just Bilal's v8 patch rebased to apply cleanly and with\na few tweaks (I changed one of the loop conditions. All other changes\nare to comments and commit message).\n\n> > @@ -1206,11 +1357,13 @@ acquire_sample_rows(Relation onerel, int elevel,\n> > \t\t\t\tbreak;\n> > \n> > \t\t\tprefetch_block = BlockSampler_Next(&prefetch_bs);\n> > -\t\t\tPrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, prefetch_block);\n> > +\t\t\tPrefetchBuffer(scan->rs_base.rs_rd, MAIN_FORKNUM, prefetch_block);\n> > \t\t}\n> > \t}\n> > #endif\n> > \n> > +\tscan->rs_cbuf = InvalidBuffer;\n> > +\n> > \t/* Outer loop over blocks to sample */\n> > \twhile (BlockSampler_HasMore(&bs))\n> > \t{\n> \n> I don't think it's good to move a lot of code *and* change how it is\n> structured in the same commit. Makes it much harder to actually see changes /\n> makes git blame harder to use / etc.\n\nYep.\n\n> > From 90d115c2401567be65bcf64393a6d3b39286779e Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Sun, 7 Apr 2024 15:28:32 -0400\n> > Subject: [PATCH v10 2/3] Use streaming read API in ANALYZE\n> >\n> > The ANALYZE command prefetches and reads sample blocks chosen by a\n> > BlockSampler algorithm. Instead of calling Prefetch|ReadBuffer() for\n> > each block, ANALYZE now uses the streaming API introduced in b5a9b18cd0.\n> >\n> > Author: Nazir Bilal Yavuz\n> > Reviewed-by: Melanie Plageman\n> > Discussion: https://postgr.es/m/flat/CAN55FZ0UhXqk9v3y-zW_fp4-WCp43V8y0A72xPmLkOM%2B6M%2BmJg%40mail.gmail.com\n> > ---\n> > src/backend/commands/analyze.c | 89 ++++++++++------------------------\n> > 1 file changed, 26 insertions(+), 63 deletions(-)\n> \n> That's a very nice demonstration of how this makes good prefetching easier...\n\nAgreed. Yay streaming read API and Bilal!\n\n> > From 862b7ac81cdafcda7b525e02721da14e46265509 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Sun, 7 Apr 2024 15:38:41 -0400\n> > Subject: [PATCH v10 3/3] Obsolete BlockSampler_HasMore()\n> > \n> > A previous commit stopped using BlockSampler_HasMore() for flow control\n> > in acquire_sample_rows(). There seems little use now for\n> > BlockSampler_HasMore(). It should be sufficient to return\n> > InvalidBlockNumber from BlockSampler_Next() when BlockSample_HasMore()\n> > would have returned false. 
Remove BlockSampler_HasMore().\n> > \n> > Author: Melanie Plageman, Nazir Bilal Yavuz\n> > Discussion: https://postgr.es/m/flat/CAN55FZ0UhXqk9v3y-zW_fp4-WCp43V8y0A72xPmLkOM%2B6M%2BmJg%40mail.gmail.com\n> \n> The justification here seems somewhat odd. Sure, the previous commit stopped\n> using BlockSampler_HasMore in acquire_sample_rows - but only because it was\n> moved to block_sampling_streaming_read_next()?\n\nIt didn't stop using it. It stopped being useful. The reason it existed,\nas far as I can tell, was to use it as the while() loop condition in\nacquire_sample_rows(). I think it makes much more sense for\nBlockSampler_Next() to return InvalidBlockNumber when there are no more\nblocks -- not to assert you don't call it when there aren't any more\nblocks.\n\nI didn't want to change BlockSampler_Next() in the same commit as the\nstreaming read user and we can't remove BlockSampler_HasMore() without\nchanging BlockSampler_Next().\n\n- Melanie",
"msg_date": "Sun, 7 Apr 2024 18:26:31 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 10:26 AM Melanie Plageman\n<[email protected]> wrote:\n> On Sun, Apr 07, 2024 at 03:00:00PM -0700, Andres Freund wrote:\n> > > src/backend/commands/analyze.c | 89 ++++++++++------------------------\n> > > 1 file changed, 26 insertions(+), 63 deletions(-)\n> >\n> > That's a very nice demonstration of how this makes good prefetching easier...\n>\n> Agreed. Yay streaming read API and Bilal!\n\n+1\n\nI found a few comments to tweak, just a couple of places that hadn't\ngot the memo after we renamed \"read stream\", and an obsolete mention\nof pinning buffers. I adjusted those directly.\n\nI ran some tests on a random basic Linux/ARM cloud box with a 7.6GB\ntable, and I got:\n\n cold hot\nmaster: 9025ms 199ms\npatched, io_combine_limit=1: 9025ms 191ms\npatched, io_combine_limit=default: 8729ms 191ms\n\nDespite being random, occasionally some I/Os must get merged, allowing\nslightly better random throughput when accessing disk blocks through a\n3000 IOPS drinking straw. Looking at strace, I see 29144 pread* calls\ninstead of 30071, which fits that theory. Let's see... if you roll a\nfair 973452-sided dice 30071 times, how many times do you expect to\nroll consecutive numbers? Each time you roll there is a 1/973452\nchance that you get the last number + 1, and we have 30071 tries\ngiving 30071/973452 = ~3%. 9025ms minus 3% is 8754ms. Seems about\nright.\n\nI am not sure why the hot number is faster exactly. (Anecdotally, I\ndid notice that in the cases that beat master semi-unexpectedly like\nthis, my software memory prefetch patch doesn't help or hurt, while in\nsome cases and on some CPUs there is little difference, and then that\npatch seems to get a speed-up like this, which might be a clue.\n*Shrug*, investigation needed.)\n\nPushed. Thanks Bilal and reviewers!\n\n\n",
"msg_date": "Mon, 8 Apr 2024 13:20:21 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 10:26 AM Melanie Plageman\n<[email protected]> wrote:\n> On Sun, Apr 07, 2024 at 03:00:00PM -0700, Andres Freund wrote:\n> > On 2024-04-07 16:59:26 -0400, Melanie Plageman wrote:\n> > > From 862b7ac81cdafcda7b525e02721da14e46265509 Mon Sep 17 00:00:00 2001\n> > > From: Melanie Plageman <[email protected]>\n> > > Date: Sun, 7 Apr 2024 15:38:41 -0400\n> > > Subject: [PATCH v10 3/3] Obsolete BlockSampler_HasMore()\n> > >\n> > > A previous commit stopped using BlockSampler_HasMore() for flow control\n> > > in acquire_sample_rows(). There seems little use now for\n> > > BlockSampler_HasMore(). It should be sufficient to return\n> > > InvalidBlockNumber from BlockSampler_Next() when BlockSample_HasMore()\n> > > would have returned false. Remove BlockSampler_HasMore().\n> > >\n> > > Author: Melanie Plageman, Nazir Bilal Yavuz\n> > > Discussion: https://postgr.es/m/flat/CAN55FZ0UhXqk9v3y-zW_fp4-WCp43V8y0A72xPmLkOM%2B6M%2BmJg%40mail.gmail.com\n> >\n> > The justification here seems somewhat odd. Sure, the previous commit stopped\n> > using BlockSampler_HasMore in acquire_sample_rows - but only because it was\n> > moved to block_sampling_streaming_read_next()?\n>\n> It didn't stop using it. It stopped being useful. The reason it existed,\n> as far as I can tell, was to use it as the while() loop condition in\n> acquire_sample_rows(). I think it makes much more sense for\n> BlockSampler_Next() to return InvalidBlockNumber when there are no more\n> blocks -- not to assert you don't call it when there aren't any more\n> blocks.\n>\n> I didn't want to change BlockSampler_Next() in the same commit as the\n> streaming read user and we can't remove BlockSampler_HasMore() without\n> changing BlockSampler_Next().\n\nI agree that the code looks useless if one condition implies the\nother, but isn't it good to keep that cross-check, perhaps\nreformulated as an assertion? I didn't look too hard at the maths, I\njust saw the words \"It is not obvious that this code matches Knuth's\nAlgorithm S ...\" and realised I'm not sure I have time to develop a\ngood opinion about this today. So I'll leave the 0002 change out for\nnow, as it's a tidy-up that can easily be applied in the next cycle.",
"msg_date": "Mon, 8 Apr 2024 15:46:42 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "Hi,\n\nOn Wed, 3 Apr 2024 at 22:25, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> Thank you for looking into this!\n>\n> On Wed, 3 Apr 2024 at 20:17, Heikki Linnakangas <[email protected]> wrote:\n> >\n> > On 03/04/2024 13:31, Nazir Bilal Yavuz wrote:\n> > > Streaming API has been committed but the committed version has a minor\n> > > change, the read_stream_begin_relation function takes Relation instead\n> > > of BufferManagerRelation now. So, here is a v5 which addresses this\n> > > change.\n> >\n> > I'm getting a repeatable segfault / assertion failure with this:\n> >\n> > postgres=# CREATE TABLE tengiga (i int, filler text) with (fillfactor=10);\n> > CREATE TABLE\n> > postgres=# insert into tengiga select g, repeat('x', 900) from\n> > generate_series(1, 1400000) g;\n> > INSERT 0 1400000\n> > postgres=# set default_statistics_target = 10; ANALYZE tengiga;\n> > SET\n> > ANALYZE\n> > postgres=# set default_statistics_target = 100; ANALYZE tengiga;\n> > SET\n> > ANALYZE\n> > postgres=# set default_statistics_target =1000; ANALYZE tengiga;\n> > SET\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> >\n> > TRAP: failed Assert(\"BufferIsValid(hscan->rs_cbuf)\"), File:\n> > \"heapam_handler.c\", Line: 1079, PID: 262232\n> > postgres: heikki postgres [local]\n> > ANALYZE(ExceptionalCondition+0xa8)[0x56488a0de9d8]\n> > postgres: heikki postgres [local]\n> > ANALYZE(heapam_scan_analyze_next_block+0x63)[0x5648899ece34]\n> > postgres: heikki postgres [local] ANALYZE(+0x2d3f34)[0x564889b6af34]\n> > postgres: heikki postgres [local] ANALYZE(+0x2d2a3a)[0x564889b69a3a]\n> > postgres: heikki postgres [local] ANALYZE(analyze_rel+0x33e)[0x564889b68fa9]\n> > postgres: heikki postgres [local] ANALYZE(vacuum+0x4b3)[0x564889c2dcc0]\n> > postgres: heikki postgres [local] ANALYZE(ExecVacuum+0xd6f)[0x564889c2d7fe]\n> > postgres: heikki postgres [local]\n> > ANALYZE(standard_ProcessUtility+0x901)[0x564889f0b8b9]\n> > postgres: heikki postgres [local]\n> > ANALYZE(ProcessUtility+0x136)[0x564889f0afb1]\n> > postgres: heikki postgres [local] ANALYZE(+0x6728c8)[0x564889f098c8]\n> > postgres: heikki postgres [local] ANALYZE(+0x672b3b)[0x564889f09b3b]\n> > postgres: heikki postgres [local] ANALYZE(PortalRun+0x320)[0x564889f09015]\n> > postgres: heikki postgres [local] ANALYZE(+0x66b2c6)[0x564889f022c6]\n> > postgres: heikki postgres [local]\n> > ANALYZE(PostgresMain+0x80c)[0x564889f06fd7]\n> > postgres: heikki postgres [local] ANALYZE(+0x667876)[0x564889efe876]\n> > postgres: heikki postgres [local]\n> > ANALYZE(postmaster_child_launch+0xe6)[0x564889e1f4b3]\n> > postgres: heikki postgres [local] ANALYZE(+0x58e68e)[0x564889e2568e]\n> > postgres: heikki postgres [local] ANALYZE(+0x58b7f0)[0x564889e227f0]\n> > postgres: heikki postgres [local]\n> > ANALYZE(PostmasterMain+0x152b)[0x564889e2214d]\n> > postgres: heikki postgres [local] ANALYZE(+0x4444b4)[0x564889cdb4b4]\n> > /lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7f7d83b6724a]\n> > /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f7d83b67305]\n> > postgres: heikki postgres [local] ANALYZE(_start+0x21)[0x564889971a61]\n> > 2024-04-03 20:15:49.157 EEST [262101] LOG: server process (PID 262232)\n> > was terminated by signal 6: Aborted\n>\n> I realized the same error while working on Jakub's benchmarking results.\n>\n> Cause: I was using the nblocks variable to check how many blocks will\n> be returned from the streaming API. 
But I realized that sometimes the\n> number returned from BlockSampler_Init() is not equal to the number of\n> blocks that BlockSampler_Next() will return as BlockSampling algorithm\n> decides how many blocks to return on the fly by using some random\n> seeds.\n\nI wanted to re-check this problem and I realized that I was wrong. I\ntried using nblocks again and this time there was no failure. I looked\nat block sampling logic and I am pretty sure that BlockSampler_Init()\nfunction correctly returns the number of blocks that\nBlockSampler_Next() will return. It seems 158f581923 fixed this issue\nas well.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 16 Apr 2024 18:10:04 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "Hi,\n\nOn Mon, 8 Apr 2024 at 04:21, Thomas Munro <[email protected]> wrote:\n>\n> Pushed. Thanks Bilal and reviewers!\n\nI wanted to discuss what will happen to this patch now that\n27bc1772fc8 is reverted. I am continuing this thread but I can create\nanother thread if you prefer so.\n\nAfter the revert of 27bc1772fc8, acquire_sample_rows() became table-AM\nagnostic again. So, read stream changes might have to be pushed down\nnow but there are a couple of roadblocks like Melanie mentioned [1]\nbefore.\n\nQuote from Melanie [1]:\n\nOn Thu, 11 Apr 2024 at 19:19, Melanie Plageman\n<[email protected]> wrote:\n>\n> I am working on pushing streaming ANALYZE into heap AM code, and I ran\n> into a few roadblocks.\n>\n> If we want ANALYZE to make the ReadStream object in heap_beginscan()\n> (like the read stream implementation of heap sequential and TID range\n> scans do), I don't see any way around changing the scan_begin table AM\n> callback to take a BufferAccessStrategy at the least (and perhaps also\n> the BlockSamplerData).\n>\n> read_stream_begin_relation() doesn't just save the\n> BufferAccessStrategy in the ReadStream, it uses it to set various\n> other things in the ReadStream object. callback_private_data (which in\n> ANALYZE's case is the BlockSamplerData) is simply saved in the\n> ReadStream, so it could be set later, but that doesn't sound very\n> clean to me.\n>\n> As such, it seems like a cleaner alternative would be to add a table\n> AM callback for creating a read stream object that takes the\n> parameters of read_stream_begin_relation(). But, perhaps it is a bit\n> late for such additions.\n\nIf we do not want to add a new table AM callback like Melanie\nmentioned, it is pretty much required to pass BufferAccessStrategy and\nBlockSamplerData to the initscan().\n\n> It also opens us up to the question of whether or not sequential scan\n> should use such a callback instead of making the read stream object in\n> heap_beginscan().\n>\n> I am happy to write a patch that does any of the above. But, I want to\n> raise these questions, because perhaps I am simply missing an obvious\n> alternative solution.\n\nI wonder the same, I could not think of any alternative solution to\nthis problem.\n\nAnother quote from Melanie [2] in the same thread:\n\nOn Thu, 11 Apr 2024 at 20:48, Melanie Plageman\n<[email protected]> wrote:\n>\n> I will also say that, had this been 6 months ago, I would probably\n> suggest we restructure ANALYZE's table AM interface to accommodate\n> read stream setup and to address a few other things I find odd about\n> the current code. For example, I think creating a scan descriptor for\n> the analyze scan in acquire_sample_rows() is quite odd. It seems like\n> it would be better done in the relation_analyze callback. The\n> relation_analyze callback saves some state like the callbacks for\n> acquire_sample_rows() and the Buffer Access Strategy. But at least in\n> the heap implementation, it just saves them in static variables in\n> analyze.c. It seems like it would be better to save them in a useful\n> data structure that could be accessed later. We have access to pretty\n> much everything we need at that point (in the relation_analyze\n> callback). 
I also think heap's implementation of\n> table_beginscan_analyze() doesn't need most of\n> heap_beginscan()/initscan(), so doing this instead of something\n> ANALYZE specific seems more confusing than helpful.\n\nIf we want to implement ANALYZE specific counterparts of\nheap_beginscan()/initscan(); we may think of passing\nBufferAccessStrategy and BlockSamplerData to them.\n\nAlso, there is an ongoing(?) discussion about a few problems /\nimprovements about the acquire_sample_rows() mentioned at the end of\nthe 'Table AM Interface Enhancements' thread [3]. Should we wait for\nthese discussions to be resolved or can we resume working on this\npatch?\n\nAny kind of feedback would be appreciated.\n\n[1] https://www.postgresql.org/message-id/CAAKRu_ZxU6hucckrT1SOJxKfyN7q-K4KU1y62GhDwLBZWG%2BROg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAAKRu_YkphAPNbBR2jcLqnxGhDEWTKhYfLFY%3D0R_oG5LHBH7Gw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/flat/CAPpHfdurb9ycV8udYqM%3Do0sPS66PJ4RCBM1g-bBpvzUfogY0EA%40mail.gmail.com\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Mon, 29 Apr 2024 18:41:09 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
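As a purely hypothetical sketch of the "table AM callback for creating a read stream object" floated above: the callback name and parameter list below are invented for illustration and exist in no PostgreSQL release; they simply bundle the ANALYZE-side state that would otherwise have to be pushed into scan_begin().

#include "postgres.h"

#include "storage/bufmgr.h"
#include "storage/read_stream.h"
#include "utils/relcache.h"
#include "utils/sampling.h"

/* Hypothetical table AM member: the AM builds (and owns) the stream, so
 * analyze.c never needs to know how the AM maps block numbers to storage. */
typedef ReadStream *(*scan_analyze_begin_stream_function) (Relation rel,
                                                           BufferAccessStrategy bstrategy,
                                                           BlockSampler bs);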
{
"msg_contents": "Hi,\n\nOn Mon, 29 Apr 2024 at 18:41, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> On Mon, 8 Apr 2024 at 04:21, Thomas Munro <[email protected]> wrote:\n> >\n> > Pushed. Thanks Bilal and reviewers!\n>\n> I wanted to discuss what will happen to this patch now that\n> 27bc1772fc8 is reverted. I am continuing this thread but I can create\n> another thread if you prefer so.\n\n041b96802ef is discussed in the 'Table AM Interface Enhancements'\nthread [1]. The main problems discussed about this commit is that the\nread stream API is not pushed to the heap-specific code and, because\nof that, the other AM implementations need to use read streams. To\npush read stream API to the heap-specific code, it is pretty much\nrequired to pass BufferAccessStrategy and BlockSamplerData to the\ninitscan().\n\nI am sharing the alternative version of this patch. The first patch\njust reverts 041b96802ef and the second patch is the alternative\nversion.\n\nIn this alternative version, the read stream API is not pushed to the\nheap-specific code, but it is controlled by the heap-specific code.\nThe SO_USE_READ_STREAMS_IN_ANALYZE flag is introduced and set in the\nheap-specific code if the scan type is 'ANALYZE'. This flag is used to\ndecide whether streaming API in ANALYZE will be used or not. If this\nflag is set, this means heap AMs and read stream API will be used. If\nit is not set, this means heap AMs will not be used and code falls\nback to the version before read streams.\n\nPros of the alternative version:\n\n* The existing AM implementations other than heap AM can continue to\nuse their AMs without any change.\n* AM implementations other than heap do not need to use read streams.\n* Upstream code uses the read stream API and benefits from that.\n\nCons of the alternative version:\n\n* 6 if cases are added to the acquire_sample_rows() function and 3 of\nthem are in the while loop.\n* Because of these changes, the code looks messy.\n\nAny kind of feedback would be appreciated.\n\n[1] https://www.postgresql.org/message-id/flat/CAPpHfdurb9ycV8udYqM%3Do0sPS66PJ4RCBM1g-bBpvzUfogY0EA%40mail.gmail.com\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Wed, 15 May 2024 21:18:12 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
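A heavily simplified sketch of the control flow described above. Only the SO_USE_READ_STREAMS_IN_ANALYZE flag name comes from the proposed patches; the helper, its parameters and the placement of the checks are assumptions, and the fallback branch uses the v16-style table_scan_analyze_next_block() signature that the first (revert) patch would restore.

#include "postgres.h"

#include "access/relscan.h"
#include "access/tableam.h"
#include "storage/bufmgr.h"
#include "storage/read_stream.h"
#include "utils/sampling.h"

/*
 * Pick the next sampled page.  If the heap AM set the proposed
 * SO_USE_READ_STREAMS_IN_ANALYZE flag, the stream does the choosing,
 * prefetching and pinning; otherwise fall back to the pre-streaming,
 * one-block-at-a-time path.  Returns false when the sample is done.
 */
static bool
analyze_next_sampled_block(TableScanDesc scan, ReadStream *stream,
                           BlockSampler bs, BufferAccessStrategy bstrategy,
                           Buffer *targbuffer)
{
    if (scan->rs_flags & SO_USE_READ_STREAMS_IN_ANALYZE)    /* proposed flag */
    {
        *targbuffer = read_stream_next_buffer(stream, NULL);
        return BufferIsValid(*targbuffer);
    }

    if (!BlockSampler_HasMore(bs))
        return false;

    return table_scan_analyze_next_block(scan, BlockSampler_Next(bs),
                                         bstrategy);
}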
{
"msg_contents": "On Wed, May 15, 2024 at 2:18 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> On Mon, 29 Apr 2024 at 18:41, Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > On Mon, 8 Apr 2024 at 04:21, Thomas Munro <[email protected]> wrote:\n> > I wanted to discuss what will happen to this patch now that\n> > 27bc1772fc8 is reverted. I am continuing this thread but I can create\n> > another thread if you prefer so.\n>\n> 041b96802ef is discussed in the 'Table AM Interface Enhancements'\n> thread [1]. The main problems discussed about this commit is that the\n> read stream API is not pushed to the heap-specific code and, because\n> of that, the other AM implementations need to use read streams. To\n> push read stream API to the heap-specific code, it is pretty much\n> required to pass BufferAccessStrategy and BlockSamplerData to the\n> initscan().\n>\n> I am sharing the alternative version of this patch. The first patch\n> just reverts 041b96802ef and the second patch is the alternative\n> version.\n>\n> In this alternative version, the read stream API is not pushed to the\n> heap-specific code, but it is controlled by the heap-specific code.\n> The SO_USE_READ_STREAMS_IN_ANALYZE flag is introduced and set in the\n> heap-specific code if the scan type is 'ANALYZE'. This flag is used to\n> decide whether streaming API in ANALYZE will be used or not. If this\n> flag is set, this means heap AMs and read stream API will be used. If\n> it is not set, this means heap AMs will not be used and code falls\n> back to the version before read streams.\n\nPersonally, I think the alternative version here is the best option\nother than leaving what is in master. However, I would vote for\nkeeping what is in master because 1) where we are in the release\ntimeline and 2) the acquire_sample_rows() code, before streaming read,\nwas totally block-based anyway.\n\nIf we kept what was in master, do we need to document for table AMs\nhow to use read_stream_next_buffer() or can we assume they will look\nat the heap AM implementation?\n\n- Melanie\n\n\n",
"msg_date": "Mon, 20 May 2024 16:46:35 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Mon, May 20, 2024 at 10:46 PM Melanie Plageman <[email protected]>\nwrote:\n\n> On Wed, May 15, 2024 at 2:18 PM Nazir Bilal Yavuz <[email protected]>\n> wrote:\n> >\n> > On Mon, 29 Apr 2024 at 18:41, Nazir Bilal Yavuz <[email protected]>\n> wrote:\n> > >\n> > > On Mon, 8 Apr 2024 at 04:21, Thomas Munro <[email protected]>\n> wrote:\n> > > I wanted to discuss what will happen to this patch now that\n> > > 27bc1772fc8 is reverted. I am continuing this thread but I can create\n> > > another thread if you prefer so.\n> >\n> > 041b96802ef is discussed in the 'Table AM Interface Enhancements'\n> > thread [1]. The main problems discussed about this commit is that the\n> > read stream API is not pushed to the heap-specific code and, because\n> > of that, the other AM implementations need to use read streams. To\n> > push read stream API to the heap-specific code, it is pretty much\n> > required to pass BufferAccessStrategy and BlockSamplerData to the\n> > initscan().\n> >\n> > I am sharing the alternative version of this patch. The first patch\n> > just reverts 041b96802ef and the second patch is the alternative\n> > version.\n> >\n> > In this alternative version, the read stream API is not pushed to the\n> > heap-specific code, but it is controlled by the heap-specific code.\n> > The SO_USE_READ_STREAMS_IN_ANALYZE flag is introduced and set in the\n> > heap-specific code if the scan type is 'ANALYZE'. This flag is used to\n> > decide whether streaming API in ANALYZE will be used or not. If this\n> > flag is set, this means heap AMs and read stream API will be used. If\n> > it is not set, this means heap AMs will not be used and code falls\n> > back to the version before read streams.\n>\n> Personally, I think the alternative version here is the best option\n> other than leaving what is in master. However, I would vote for\n> keeping what is in master because 1) where we are in the release\n> timeline and 2) the acquire_sample_rows() code, before streaming read,\n> was totally block-based anyway.\n>\n> If we kept what was in master, do we need to document for table AMs\n> how to use read_stream_next_buffer() or can we assume they will look\n> at the heap AM implementation?\n>\n\nHi all,\n\nI ran into this with the PG17 beta3 and for our use-case we need to set up\nanother stream (using a different relation and/or fork, but using the same\nstrategy) in addition to the one that is passed in to the\nscan_analyze_next_block(), so to be able to do that it is necessary to have\nthe block sampler and the strategy from the original stream. Given that\nthis makes it very difficult (see below) to set up a different ReadStream\ninside the TAM unless you have the BlockSampler and the BufferReadStrategy,\nand the old interface did not have this problem, I would consider this a\nregression.\n\nThis would be possible to solve in a few different ways:\n\n 1. The alternate version proposed by Nazir allows you to decide which\n interface to use.\n 2. Reverting the patch entirely would also solve the problem.\n 3. Passing down the block sampler and the strategy to scan_begin() and\n move the ReadStream setup in analyze.c into initscan() in heapam.c, but\n this requires adding new parameters to this function.\n 4. Having accessors that allow you to get the block sampler and strategy\n from the ReadStream object.\n\nThe proposed solution 1 above would still not solve the problem of allowing\na user to set up a different or extra ReadStream if they want to use the\nnew ReadStream interface. 
Reverting the ReadStream patch entirely would\nalso deal with the regression, but I find the ReadStream interface very\nelegant since it moves the block sampling into a separate abstraction and\nwould like to use it, but right now there are some limitations if you want\nto use it fully. The third solution above would allow that, but it requires\na change in the signature of scan_begin(), which might not be the best at\nthis stage of development. Proposal 4 would allow you to construct a new\nstream based on the old one and might be a simple alternative solution as\nwell with less changes to the current code.\n\nIt is possible to capture the information in ProcessUtility() and\nre-compute all the parameters, but that is quite a lot of work to get\nright, especially considering that these computations are all over the\nplace and part of different functions at different stages (For example,\nvariable ring_size, needed to set up the buffer access strategy is computed\nin ExecVacuum(); variable targrows, used to set up the buffer sampler, is\ncomputed inside acquire_sample_rows(), which in turn requires to decide\nwhat attributes to analyze, which is computed in do_analyze_rel().)\n\nIt would be great if this could be fixed before the PG17 release now that\n27bc1772fc8 was reverted.\n--\nBest wishes,\nMats Kindahl, Timescale",
"msg_date": "Thu, 22 Aug 2024 09:31:06 +0200",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
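A sketch of the v16-era workaround described above, for a TAM backed by two heaps: relation_size() reports N + M blocks, so the block sampler covers both, and the analyze-block callback forwards each request to whichever heap owns the block. The TwoHeapScanDesc type and the analyze_block_in() helper are hypothetical stand-ins; only the callback signature matches the v16 table AM interface.

#include "postgres.h"

#include "access/relscan.h"
#include "storage/bufmgr.h"

/* Hypothetical scan descriptor for a TAM that stores rows in two heaps */
typedef struct TwoHeapScanDescData
{
    TableScanDescData rs_base;
    TableScanDesc first_scan;      /* scan over the "default" heap */
    TableScanDesc second_scan;     /* scan over the auxiliary heap */
    BlockNumber nblocks_first;     /* N: number of blocks in the first heap */
} TwoHeapScanDescData;
typedef TwoHeapScanDescData *TwoHeapScanDesc;

/* Hypothetical stand-in for forwarding to the heap implementation */
static bool analyze_block_in(TableScanDesc scan, BlockNumber blockno,
                             BufferAccessStrategy bstrategy);

/* v16-style scan_analyze_next_block(): dispatch on the faked block number */
static bool
twoheap_scan_analyze_next_block(TableScanDesc scan, BlockNumber blockno,
                                BufferAccessStrategy bstrategy)
{
    TwoHeapScanDesc tscan = (TwoHeapScanDesc) scan;

    if (blockno < tscan->nblocks_first)
        return analyze_block_in(tscan->first_scan, blockno, bstrategy);

    return analyze_block_in(tscan->second_scan,
                            blockno - tscan->nblocks_first, bstrategy);
}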
{
"msg_contents": "On Thu, Aug 22, 2024 at 7:31 PM Mats Kindahl <[email protected]> wrote:\n> The alternate version proposed by Nazir allows you to decide which interface to use.\n> Reverting the patch entirely would also solve the problem.\n> Passing down the block sampler and the strategy to scan_begin() and move the ReadStream setup in analyze.c into initscan() in heapam.c, but this requires adding new parameters to this function.\n> Having accessors that allow you to get the block sampler and strategy from the ReadStream object.\n\nI'm a bit confused about how it can make sense to use the same\nBlockSampler with a side relation/fork. Could you point me at the\ncode?\n\n> It would be great if this could be fixed before the PG17 release now that 27bc1772fc8 was reverted.\n\nAck. Thinking...\n\nRandom thought: is there a wiki page or something where we can find\nout about all the table AM projects? For the successor to\n27bc1772fc8, I hope they'll be following along.\n\n\n",
"msg_date": "Sat, 24 Aug 2024 15:33:19 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Sat, Aug 24, 2024 at 5:31 AM Thomas Munro <[email protected]> wrote:\n\n> On Thu, Aug 22, 2024 at 7:31 PM Mats Kindahl <[email protected]> wrote:\n> > The alternate version proposed by Nazir allows you to deide which\n> interface to use.\n> > Reverting the patch entirely would also solve the problem.\n>\n\nAfter digging through the code a little more I discovered that\nthere actually is another one: move the ReadStream struct into\nread_stream.h.\n\n\n> > Passing down the block sampler and the strategy to scan_begin() and move\n> the ReadStream setup in analyze.c into initscan() in heapam.c, but this\n> requires adding new parameters to this function.\n> > Having accessors that allow you to get the block sampler and strategy\n> from the ReadStream object.\n>\n> I'm a bit confused about how it can make sense to use the same\n> BlockSampler with a side relation/fork. Could you point me at the\n> code?\n>\n\nSorry, that was a bit unclear. Intention was not to re-use the block\nsampler but to set a new one up with parameters from the original block\nsampler, which would require access to it. (The strategy is less of a\nproblem since only one is used.)\n\nTo elaborate on the situation:\n\nFor the TAM in question we have two different storage areas, both are\nheaps. Both relations use the same attributes \"publicly\" (they are\ninternally different, but we transform them to look the same). One of the\nrelations is the \"default\" one and is stored in rd_rel. In order to run\nANALYZE, we need to sample blocks from both relations, in slightly\ndifferent ways.\n\nWith the old interface, we faked the number of blocks in relation_size()\ncallback and claimed that there were N + M blocks. When then being asked\nabout a block by block number, we could easily pick the correct relation\nand just forward the call.\n\nWith the new ReadStream API, a read-stream is (automatically) set up on the\n\"default\" relation, but we can set up a separate read-stream inside the TAM\nfor the other relation. However, the difficulty is in setting it up\ncorrectly:\n\nWe cannot use the \"fake number of block\"-trick since the read stream does\nnot only compute the block number, but actually tries to read the buffer in\nthe relation provided when setting up the read stream, so a block number\noutside the range of this relation will not be found since it is in a\ndifferent relation.\n\nIf we could create our own read stream with both relations, that could be\nsolved and we could just implement the same logic, but direct it to the\ncorrect relations depending on where we want to read the block. Unless I am\nmistaken, there is already support for this since there is an array of\nin-progress I/O and it would be trivial to extend this with more\nrelations+forks, if you have access to the structure definition. The\nReadStream struct is, however, an opaque struct so it's hard to hack around\nwith it. Just making the struct declaration public would potentially solve\na lot of problems here. 
(See attached patch, which is close to the minimum\nof what is needed to allow extension writers to tweak the contents.)\n\nSince both relations are using the same attributes with the same\n\"analyzability\", having that information would be useful to compute the\ntargrows for setting up the additional stream, but it is computed in\ndo_analyze_rel() and not further propagated, so it needs to be re-computed\nif we want to set up a separate read-stream.\n\n\n> > It would be great if this could be fixed before the PG17 release now\n> that 27bc1772fc8 was reverted.\n>\n> Ack. Thinking...\n>\n\nRight now I think that just making the ReadStream struct available in the\nheader file is the best approach. It is a safe and low-risk fix (so\nsomething that can be added to a beta) and will allow extension writers to\nhack to their hearts' contents. In addition to that, being able to select\nwhat interface to use would also help.\n\n\n\n> Random thought: is there a wiki page or something where we can find\n> out about all the table AM projects? For the successor to\n> 27bc1772fc8, I hope they'll be following along.\n>\n\nAt this point, unfortunately not, we are quite early in this. Once I have\nsomething, I'll share.\n-- \nBest wishes,\nMats Kindahl, Timescale",
"msg_date": "Thu, 29 Aug 2024 15:15:40 +0200",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
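For concreteness, a sketch of the second stream the message above wants to build inside the TAM. read_stream_begin_relation() itself is public, so the construction is straightforward; the difficulty explained above is obtaining the strategy and sampling parameters, which are simply assumed here (aux_rel, aux_targrows, the aux_block_sampling_next callback and the caller-supplied BlockSamplerData are placeholders).

#include "postgres.h"

#include "common/pg_prng.h"
#include "storage/bufmgr.h"
#include "storage/read_stream.h"
#include "utils/relcache.h"
#include "utils/sampling.h"

/* Same kind of callback ANALYZE uses; assumed to be visible to the TAM */
static BlockNumber aux_block_sampling_next(ReadStream *stream,
                                           void *user_data,
                                           void *per_buffer_data);

static ReadStream *
begin_aux_analyze_stream(Relation aux_rel, int aux_targrows,
                         BufferAccessStrategy bstrategy,
                         BlockSamplerData *aux_bs)
{
    /* A separate sampler sized for the auxiliary relation */
    BlockSampler_Init(aux_bs, RelationGetNumberOfBlocks(aux_rel),
                      aux_targrows, pg_prng_uint32(&pg_global_prng_state));

    /* Reuse the strategy of the primary scan for the second stream */
    return read_stream_begin_relation(READ_STREAM_MAINTENANCE,
                                      bstrategy,
                                      aux_rel,
                                      MAIN_FORKNUM,
                                      aux_block_sampling_next,
                                      aux_bs,
                                      0);
}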
{
"msg_contents": "Thanks for the explanation. I think we should revert it. IMHO it was\na nice clean example of a streaming transformation, but unfortunately\nit transformed an API that nobody liked in the first place, and broke\nsome weird and wonderful workarounds. Let's try again in 18.\n\n\n",
"msg_date": "Wed, 4 Sep 2024 22:40:47 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Wed, Sep 4, 2024 at 6:38 AM Thomas Munro <[email protected]> wrote:\n> Thanks for the explanation. I think we should revert it. IMHO it was\n> a nice clean example of a streaming transformation, but unfortunately\n> it transformed an API that nobody liked in the first place, and broke\n> some weird and wonderful workarounds. Let's try again in 18.\n\nThe problem I have with this is that we just released RC1. I suppose\nif we have to make this change it's better to do it sooner than later,\nbut are we sure we want to whack this around this close to final\nrelease?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Sep 2024 11:36:21 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Thu, Sep 5, 2024 at 3:36 AM Robert Haas <[email protected]> wrote:\n> On Wed, Sep 4, 2024 at 6:38 AM Thomas Munro <[email protected]> wrote:\n> > Thanks for the explanation. I think we should revert it. IMHO it was\n> > a nice clean example of a streaming transformation, but unfortunately\n> > it transformed an API that nobody liked in the first place, and broke\n> > some weird and wonderful workarounds. Let's try again in 18.\n>\n> The problem I have with this is that we just released RC1. I suppose\n> if we have to make this change it's better to do it sooner than later,\n> but are we sure we want to whack this around this close to final\n> release?\n\nI hear you. But I definitely don't want to (and likely can't at this\npoint) make any of the other proposed changes, and I also don't want\nto break Timescale. That seems to leave only one option: go back to\nthe v16 API for RC2, and hope that the ongoing table AM discussions\nfor v18 (CF #4866) will fix all the problems for the people whose TAMs\ndon't quack like a \"heap\", and the people whose TAMs do and who would\nnot like to duplicate the code, and the people who want streaming I/O.\n\n\n",
"msg_date": "Thu, 5 Sep 2024 11:34:21 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Thu, Sep 5, 2024 at 1:34 AM Thomas Munro <[email protected]> wrote:\n\n> On Thu, Sep 5, 2024 at 3:36 AM Robert Haas <[email protected]> wrote:\n> > On Wed, Sep 4, 2024 at 6:38 AM Thomas Munro <[email protected]>\n> wrote:\n> > > Thanks for the explanation. I think we should revert it. IMHO it was\n> > > a nice clean example of a streaming transformation, but unfortunately\n> > > it transformed an API that nobody liked in the first place, and broke\n> > > some weird and wonderful workarounds. Let's try again in 18.\n> >\n> > The problem I have with this is that we just released RC1. I suppose\n> > if we have to make this change it's better to do it sooner than later,\n> > but are we sure we want to whack this around this close to final\n> > release?\n>\n> I hear you. But I definitely don't want to (and likely can't at this\n> point) make any of the other proposed changes, and I also don't want\n> to break Timescale. That seems to leave only one option: go back to\n> the v16 API for RC2, and hope that the ongoing table AM discussions\n> for v18 (CF #4866) will fix all the problems for the people whose TAMs\n> don't quack like a \"heap\", and the people whose TAMs do and who would\n> not like to duplicate the code, and the people who want streaming I/O.\n>\n\nForgive me for asking, but I am not entirely sure why the ReadStream struct\nis opaque. The usual reasons are:\n\n - You want to provide an ABI to allow extensions to work with new major\n versions without re-compiling. Right now it is necessary to recompile\n extensions anyway, this does not seem to apply. (Because there are a lot of\n other changes that you need when switching versions because of the lack of\n a stable ABI for other parts of the code. However, it might be that the\n goal is to support it eventually, and then it would make sense to start\n making structs opaque.)\n - You want to ensure that you can make modifications *inside* a major\n version without breaking ABIs and requiring a re-compile. In this case, you\n could still follow safe practice of adding new fields last, not relying on\n the size of the struct for anything (e.g., no arrays of these structures,\n just pointers to them), etc. However, if you want to be *very* safe and\n support very drastic changes inside a major version, it needs to be opaque,\n so this could be the reason.\n\nIs it either of these reasons, or is there another reason?\n\nMaking the ReadStream API non-opaque (that is, moving the definition to the\nheader file) would at least solve our problem (unless I am mistaken).\nHowever, I am ignorant about long-term plans which might affect this, so\nthere might be a good reason to revert it for reasons I am not aware of.\n-- \nBest wishes,\nMats Kindahl, Timescale\n\nOn Thu, Sep 5, 2024 at 1:34 AM Thomas Munro <[email protected]> wrote:On Thu, Sep 5, 2024 at 3:36 AM Robert Haas <[email protected]> wrote:\n> On Wed, Sep 4, 2024 at 6:38 AM Thomas Munro <[email protected]> wrote:\n> > Thanks for the explanation. I think we should revert it. IMHO it was\n> > a nice clean example of a streaming transformation, but unfortunately\n> > it transformed an API that nobody liked in the first place, and broke\n> > some weird and wonderful workarounds. Let's try again in 18.\n>\n> The problem I have with this is that we just released RC1. I suppose\n> if we have to make this change it's better to do it sooner than later,\n> but are we sure we want to whack this around this close to final\n> release?\n\nI hear you. 
But I definitely don't want to (and likely can't at this\npoint) make any of the other proposed changes, and I also don't want\nto break Timescale. That seems to leave only one option: go back to\nthe v16 API for RC2, and hope that the ongoing table AM discussions\nfor v18 (CF #4866) will fix all the problems for the people whose TAMs\ndon't quack like a \"heap\", and the people whose TAMs do and who would\nnot like to duplicate the code, and the people who want streaming I/O.\nForgive me for asking, but I am not entirely sure why the ReadStream struct\nis opaque. The usual reasons are:\n\n   - You want to provide an ABI to allow extensions to work with new major\n   versions without re-compiling. Right now it is necessary to recompile\n   extensions anyway, this does not seem to apply. (Because there are a lot of\n   other changes that you need when switching versions because of the lack of\n   a stable ABI for other parts of the code. However, it might be that the\n   goal is to support it eventually, and then it would make sense to start\n   making structs opaque.)\n   - You want to ensure that you can make modifications *inside* a major\n   version without breaking ABIs and requiring a re-compile. In this case, you\n   could still follow safe practice of adding new fields last, not relying on\n   the size of the struct for anything (e.g., no arrays of these structures,\n   just pointers to them), etc. However, if you want to be *very* safe and\n   support very drastic changes inside a major version, it needs to be opaque,\n   so this could be the reason.\n\nIs it either of these reasons, or is there another reason?\n\nMaking the ReadStream API non-opaque (that is, moving the definition to the\nheader file) would at least solve our problem (unless I am mistaken).\nHowever, I am ignorant about long-term plans which might affect this, so\nthere might be a good reason to revert it for reasons I am not aware of.\n-- \nBest wishes,\nMats Kindahl, Timescale",
"msg_date": "Thu, 5 Sep 2024 08:45:18 +0200",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Thu, Sep 5, 2024 at 6:45 PM Mats Kindahl <[email protected]> wrote:\n> Forgive me for asking, but I am not entirely sure why the ReadStream struct is opaque. The usual reasons are:\n>\n> You want to provide an ABI to allow extensions to work with new major versions without re-compiling. Right now it is necessary to recompile extensions anyway, this does not seem to apply. (Because there are a lot of other changes that you need when switching versions because of the lack of a stable ABI for other parts of the code. However, it might be that the goal is to support it eventually, and then it would make sense to start making structs opaque.)\n> You want to ensure that you can make modifications inside a major version without breaking ABIs and requiring a re-compile. In this case, you could still follow safe practice of adding new fields last, not relying on the size of the struct for anything (e.g., no arrays of these structures, just pointers to them), etc. However, if you want to be very safe and support very drastic changes inside a major version, it needs to be opaque, so this could be the reason.\n>\n> Is it either of these reasons, or is there another reason?\n>\n> Making the ReadStream API non-opaque (that is, moving the definition to the header file) would at least solve our problem (unless I am mistaken). However, I am ignorant about long-term plans which might affect this, so there might be a good reason to revert it for reasons I am not aware of.\n\nThe second thing. Also there are very active plans[1] to change the\ninternal design of ReadStream in 18, since the goal is to drive true\nasynchronous I/O, and the idea of ReadStream was to create a simple\nAPI to let many consumers start using it, so that we can drive\nefficient modern system interfaces below that API, so having people\ndepending on how it works would not be great.\n\nBut let's talk about how that would actually look, for example if we\nexposed the struct or you took a photocopy of it... I think your idea\nmust be something like: if you could access struct ReadStream's\ninternals, you could replace stream->callback with an interceptor\ncallback, and if the BlockSampler had been given the fake N + M\nrelation size, the interceptor could overwrite\nstream->ios[next_io_index].op.smgr and return x - N if the intercepted\ncallback returned x >= N. (Small detail: need to check\nstream->fast_path and use 0 instead or something like that, but maybe\nwe could change that.) One minor problem that jumps out is that\nread_stream.c could inappropriately merge blocks from the two\nrelations into one I/O. Hmm, I guess you'd have to teach the\ninterceptor not to allow that: if switching between the two relation,\nand if the block number would coincide with\nstream->pending_read_blocknum + stream->pending_read_nblocks, it would\nneed to pick a new block instead (interfering with the block sampling\nalgorithm, but only very rarely). Is this what you had in mind, or\nsomething else?\n\n(BTW I have a patch to teach read_stream.c about multi-smgr-relation\nstreams, by adding a different constructor with a different callback\nthat returns smgr, fork, block instead of just the block, but it\ndidn't make it into 17.)\n\n[1] https://www.postgresql.org/message-id/flat/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah@brqs62irg4dt\n\n\n",
"msg_date": "Thu, 5 Sep 2024 21:12:07 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "Hi,\n\nOn Thu, Sep 05, 2024 at 09:12:07PM +1200, Thomas Munro wrote:\n> On Thu, Sep 5, 2024 at 6:45 PM Mats Kindahl <[email protected]> wrote:\n> > Making the ReadStream API non-opaque (that is, moving the definition\n> > to the header file) would at least solve our problem (unless I am\n> > mistaken). However, I am ignorant about long-term plans which might\n> > affect this, so there might be a good reason to revert it for\n> > reasons I am not aware of.\n> \n> The second thing.\n\nI am a bit confused about the status of this thread. Robert mentioned\nRC1, so I guess it pertains to v17 but I don't see it on the open item\nwiki list?\n\nDoes the above mean you are going to revert it for v17, Thomas? And if\nso, what exactly? The ANALYZE changes on top of the streaming read API\nor something else about that API that is being discussed on this thread?\n\nI am also asking because this feature (i.e. Use streaming read API in\nANALYZE) is being mentioned in the release announcement and that was\njust frozen for translations.\n\n\nMichael\n\n\n",
"msg_date": "Mon, 9 Sep 2024 17:36:39 +0200",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 3:36 AM Michael Banck <[email protected]> wrote:\n> I am a bit confused about the status of this thread. Robert mentioned\n> RC1, so I guess it pertains to v17 but I don't see it on the open item\n> wiki list?\n\nYes, v17. Alight, I'll add an item.\n\n> Does the above mean you are going to revert it for v17, Thomas? And if\n> so, what exactly? The ANALYZE changes on top of the streaming read API\n> or something else about that API that is being discussed on this thread?\n\nI might have been a little pessimistic in that assessment. Another\nworkaround that seems an awful lot cleaner and less invasive would be\nto offer a new ReadStream API function that provides access to block\nnumbers and the strategy, ie the arguments of v16's\nscan_analyze_next_block() function. Mats, what do you think about\nthis? (I haven't tried to preserve the prefetching behaviour, which\nprobably didn't actually too work for you in v16 anyway at a guess,\nI'm just looking for the absolute simplest thing we can do to resolve\nthis API mismatch.) TimeScale could then continue to use its v16\ncoding to handle the two-relations-in-a-trenchcoat problem, and we\ncould continue discussing how to make v18 better.\n\nI looked briefly at another non-heap-like table AM, the Citus Columnar\nTAM. I am not familiar with that code and haven't studied it deeply\nthis morning, but its _next_block() currently just returns true, so I\nthink it will somehow need to change to counting calls and returning\nfalse when it thinks its been called enough times (otherwise the loop\nin acquire_sample_rows() won't terminate, I think?). I suppose an\neasy way to do that without generating extra I/O or having to think\nhard about how to preserve the loop cound from v16 would be to use\nthis function.\n\nI think there are broadly three categories of TAMs with respect to\nANALYZE block sampling: those that are very heap-like (blocks of one\nSMgrRelation) and can just use the stream directly, those that are not\nat all heap-like (doing something completely different to sample\ntuples and ignoring the block aspect but using _next_block() to\ncontrol the loop), and then Timescale's case which is sort of\nsomewhere in between: almost heap-like from the point of view of this\nsampling code, ie working with blocks, but fudging the meaning of\nblock numbers, which we didn't anticipate. (I wonder if it fails to\nsample fairly across the underlying relation boundary anyway because\ntheir data densities must surely be quite different, but that's not\nwhat we're here to talk about.)\n\n. o O { We need that wiki page listing TAMs with links to the open\nsource ones... }",
"msg_date": "Tue, 10 Sep 2024 10:27:43 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 10:27 AM Thomas Munro <[email protected]> wrote:\n> Mats, what do you think about\n> this? (I haven't tried to preserve the prefetching behaviour, which\n> probably didn't actually too work for you in v16 anyway at a guess,\n> I'm just looking for the absolute simplest thing we can do to resolve\n> this API mismatch.) TimeScale could then continue to use its v16\n> coding to handle the two-relations-in-a-trenchcoat problem, and we\n> could continue discussing how to make v18 better.\n\n. o O { Spitballing here: if we add that tiny function I showed to get\nyou unstuck for v17, then later in v18, if we add a multi-relation\nReadStream constructor/callback (I have a patch somewhere, I want to\npropose that as it is needed for streaming recovery), you could\nconstruct a new ReadSteam of your own that is daisy-chained from that\none. You could keep using your N + M block numbering scheme if you\nwant to, and the callback of the new stream could decode the block\nnumbers and redirect to the appropriate relation + real block number.\nThat way you'd get I/O concurrency for both relations (for now just\nread-ahead advice, but see Andres's AIO v2 thread). That'd\nessentially be a more supported version of the 'access the struct\ninternals' idea (or at least my understanding of what you had in\nmind), through daisy-chained streams. A little weird maybe, and maybe\nthe redesign work will result in something completely\ndifferent/better... just a thought... }\n\n\n",
"msg_date": "Tue, 10 Sep 2024 16:04:00 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Thu, Sep 5, 2024 at 11:12 AM Thomas Munro <[email protected]> wrote:\n\n> On Thu, Sep 5, 2024 at 6:45 PM Mats Kindahl <[email protected]> wrote:\n> > Forgive me for asking, but I am not entirely sure why the ReadStream\n> struct is opaque. The usual reasons are:\n> >\n> > You want to provide an ABI to allow extensions to work with new major\n> versions without re-compiling. Right now it is necessary to recompile\n> extensions anyway, this does not seem to apply. (Because there are a lot of\n> other changes that you need when switching versions because of the lack of\n> a stable ABI for other parts of the code. However, it might be that the\n> goal is to support it eventually, and then it would make sense to start\n> making structs opaque.)\n> > You want to ensure that you can make modifications inside a major\n> version without breaking ABIs and requiring a re-compile. In this case, you\n> could still follow safe practice of adding new fields last, not relying on\n> the size of the struct for anything (e.g., no arrays of these structures,\n> just pointers to them), etc. However, if you want to be very safe and\n> support very drastic changes inside a major version, it needs to be opaque,\n> so this could be the reason.\n> >\n> > Is it either of these reasons, or is there another reason?\n> >\n> > Making the ReadStream API non-opaque (that is, moving the definition to\n> the header file) would at least solve our problem (unless I am mistaken).\n> However, I am ignorant about long-term plans which might affect this, so\n> there might be a good reason to revert it for reasons I am not aware of.\n>\n> The second thing. Also there are very active plans[1] to change the\n> internal design of ReadStream in 18, since the goal is to drive true\n> asynchronous I/O, and the idea of ReadStream was to create a simple\n> API to let many consumers start using it, so that we can drive\n> efficient modern system interfaces below that API, so having people\n> depending on how it works would not be great.\n>\n\nThat is understandable, since you usually do not want to have to re-compile\nthe extension for different minor versions. However, it would be a rare\ncase with extensions that are meddling with this, so might not turn out to\nbe a big problem in reality, as long as it is very clear to all involved\nthat this might change and that you make an effort to avoid binary\nincompatibility by removing or changing types for fields.\n\n\n> But let's talk about how that would actually look, for example if we\n> exposed the struct or you took a photocopy of it... I think your idea\n> must be something like: if you could access struct ReadStream's\n> internals, you could replace stream->callback with an interceptor\n> callback, and if the BlockSampler had been given the fake N + M\n> relation size, the interceptor could overwrite\n> stream->ios[next_io_index].op.smgr and return x - N if the intercepted\n> callback returned x >= N. (Small detail: need to check\n> stream->fast_path and use 0 instead or something like that, but maybe\n> we could change that.)\n\n\nYes, this is what I had in mind, but I did not dig too deeply into the code.\n\n\n> One minor problem that jumps out is that\n> read_stream.c could inappropriately merge blocks from the two\n> relations into one I/O. 
Hmm, I guess you'd have to teach the\n> interceptor not to allow that: if switching between the two relation,\n> and if the block number would coincide with\n> stream->pending_read_blocknum + stream->pending_read_nblocks, it would\n> need to pick a new block instead (interfering with the block sampling\n> algorithm, but only very rarely). Is this what you had in mind, or\n> something else?\n>\n\nHmmm... I didn't look too closely at this. Since the block number comes\nfrom the callback, I guess we could make sure to have a \"padding\" block\nbetween the regions so that we \"break\" any suite of blocks, which I think\nis what you mean with \"teach the interceptor not to allow that\", but I\nwould have to write a patch to make sure.\n\n\n>\n> (BTW I have a patch to teach read_stream.c about multi-smgr-relation\n> streams, by adding a different constructor with a different callback\n> that returns smgr, fork, block instead of just the block, but it\n> didn't make it into 17.)\n>\n\nWithout having looked at the patch, this sounds like the correct way to do\nit.\n\n\n>\n> [1]\n>\n> https://www.postgresql.org/message-id/flat/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah@brqs62irg4dt\n>\n\n\n-- \nBest wishes,\nMats Kindahl, Timescale",
"msg_date": "Tue, 10 Sep 2024 19:07:18 +0200",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 12:28 AM Thomas Munro <[email protected]>\nwrote:\n\n> On Tue, Sep 10, 2024 at 3:36 AM Michael Banck <[email protected]> wrote:\n> > I am a bit confused about the status of this thread. Robert mentioned\n> > RC1, so I guess it pertains to v17 but I don't see it on the open item\n> > wiki list?\n>\n> Yes, v17. Alight, I'll add an item.\n>\n> > Does the above mean you are going to revert it for v17, Thomas? And if\n> > so, what exactly? The ANALYZE changes on top of the streaming read API\n> > or something else about that API that is being discussed on this thread?\n>\n> I might have been a little pessimistic in that assessment. Another\n> workaround that seems an awful lot cleaner and less invasive would be\n> to offer a new ReadStream API function that provides access to block\n> numbers and the strategy, ie the arguments of v16's\n> scan_analyze_next_block() function. Mats, what do you think about\n> this? (I haven't tried to preserve the prefetching behaviour, which\n> probably didn't actually too work for you in v16 anyway at a guess,\n> I'm just looking for the absolute simplest thing we can do to resolve\n> this API mismatch.) TimeScale could then continue to use its v16\n> coding to handle the two-relations-in-a-trenchcoat problem, and we\n> could continue discussing how to make v18 better.\n>\n\nIn the original code we could call the methods with an \"adjusted\" block\nnumber, so the entire logic worked as before because we could just\nrecursively forward the call with modified parameters. This is a little\ndifferent with the new API.\n\n\n> I looked briefly at another non-heap-like table AM, the Citus Columnar\n> TAM. I am not familiar with that code and haven't studied it deeply\n> this morning, but its _next_block() currently just returns true, so I\n> think it will somehow need to change to counting calls and returning\n> false when it thinks its been called enough times (otherwise the loop\n> in acquire_sample_rows() won't terminate, I think?). I suppose an\n> easy way to do that without generating extra I/O or having to think\n> hard about how to preserve the loop cound from v16 would be to use\n> this function.\n>\n\nYes, but we are re-using the heapam so forwarding the call to it, which not\nonly fetches the next block it also reads the buffer. Since you could just\npass in the block number before, it just worked.\n\nAs mentioned, we intended to set up a new ReadStream for the \"internal\"\nrelation ourselves (I think this is what you mean with \"daisy-chain\" in the\nfollowup to this mail), but then you need targrows, which is based on\nvacattrstats, which is computed with code that is currently either inline\n(the loop over the attributes in do_analyze_rel), or static (the\nexamine_attribute function). We can write our own code for this, it would\nhelp to have the code that does this work callable, or be able to extract\nparameters from the existing readstream to at least get a hint. This would\nallow us to just get the vacuum attribute stats for an arbitrary relation\nand then run the same computations as in do_analyze_rel. Being able to do\nthe same for the indexes is less important since this is an \"internal\"\nrelation and the \"public\" indexes are the ones that matter.\n\nI attached a tentative patch for this, just doing some refactorings, and\nwill see if that is sufficient for the current work by trying to use it. 
(I\nthought I would be able to verify this today, but am a little delayed so\nI'm sending this anyway.)\n\nA patch like this is a minimal refactoring so should be safe even in an RC.\nI have deliberately not tried to do a more serious refactoring although I\nsee that there are some duplications when doing the same work with the\nindexes and it would probably be possible to make a more generic function\nfor this.\n\n\n> I think there are broadly three categories of TAMs with respect to\n> ANALYZE block sampling: those that are very heap-like (blocks of one\n> SMgrRelation) and can just use the stream directly, those that are not\n> at all heap-like (doing something completely different to sample\n> tuples and ignoring the block aspect but using _next_block() to\n> control the loop), and then Timescale's case which is sort of\n> somewhere in between: almost heap-like from the point of view of this\n> sampling code, ie working with blocks, but fudging the meaning of\n> block numbers, which we didn't anticipate.\n\n\nIn this case the block numbers are only from a different relation, so they\nare still valid blocks, just encoded in a funny way. The block numbers\ntrick is just a hack, but the gist is that we want to sample an\narbitrary number of relations/forks when running analysis, not just the\n\"front-facing\" one.\n\n\n> (I wonder if it fails to\n> sample fairly across the underlying relation boundary anyway because\n> their data densities must surely be quite different, but that's not\n> what we're here to talk about.)\n>\n\nYes, they are, so this is kind-of-a-hack-to-get-it-roughly-correct. The\nideal scenario would be to be able to run the same analysis that is done\nin do_analyze_rel on the \"hidden\" relation to get an accurate targetrows.\nThis is what I am trying now with the attached patch.\n\n\n>\n> . o O { We need that wiki page listing TAMs with links to the open\n> source ones... }\n\n-- \nBest wishes,\nMats Kindahl, Timescale",
"msg_date": "Fri, 13 Sep 2024 10:01:23 +0200",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 6:04 AM Thomas Munro <[email protected]> wrote:\n\n> On Tue, Sep 10, 2024 at 10:27 AM Thomas Munro <[email protected]>\n> wrote:\n> > Mats, what do you think about\n> > this? (I haven't tried to preserve the prefetching behaviour, which\n> > probably didn't actually too work for you in v16 anyway at a guess,\n> > I'm just looking for the absolute simplest thing we can do to resolve\n> > this API mismatch.) TimeScale could then continue to use its v16\n> > coding to handle the two-relations-in-a-trenchcoat problem, and we\n> > could continue discussing how to make v18 better.\n>\n> . o O { Spitballing here: if we add that tiny function I showed to get\n> you unstuck for v17, then later in v18, if we add a multi-relation\n> ReadStream constructor/callback (I have a patch somewhere, I want to\n> propose that as it is needed for streaming recovery), you could\n> construct a new ReadSteam of your own that is daisy-chained from that\n> one. You could keep using your N + M block numbering scheme if you\n> want to, and the callback of the new stream could decode the block\n> numbers and redirect to the appropriate relation + real block number.\n>\n\nI think it is good to make as small changes as possible to the RC, so agree\nwith this approach. Looking at the patch. I think it will work, but I'll do\nsome experimentation with the patch.\n\nJust asking, is there any particular reason why you do not want to *add*\nnew functions for opaque objects inside a major release? After all, that\nwas the reason they were opaque from the beginning and extending with new\nfunctions would not break any existing code, not even from the ABI\nperspective.\n\n\n> That way you'd get I/O concurrency for both relations (for now just\n> read-ahead advice, but see Andres's AIO v2 thread). That'd\n> essentially be a more supported version of the 'access the struct\n> internals' idea (or at least my understanding of what you had in\n> mind), through daisy-chained streams. A little weird maybe, and maybe\n> the redesign work will result in something completely\n> different/better... just a thought... }\n>\n\nI'll take a look at the thread. I really think the ReadStream abstraction\nis a good step in the right direction.\n-- \nBest wishes,\nMats Kindahl, Timescale\n\nOn Tue, Sep 10, 2024 at 6:04 AM Thomas Munro <[email protected]> wrote:On Tue, Sep 10, 2024 at 10:27 AM Thomas Munro <[email protected]> wrote:\n> Mats, what do you think about\n> this? (I haven't tried to preserve the prefetching behaviour, which\n> probably didn't actually too work for you in v16 anyway at a guess,\n> I'm just looking for the absolute simplest thing we can do to resolve\n> this API mismatch.) TimeScale could then continue to use its v16\n> coding to handle the two-relations-in-a-trenchcoat problem, and we\n> could continue discussing how to make v18 better.\n\n. o O { Spitballing here: if we add that tiny function I showed to get\nyou unstuck for v17, then later in v18, if we add a multi-relation\nReadStream constructor/callback (I have a patch somewhere, I want to\npropose that as it is needed for streaming recovery), you could\nconstruct a new ReadSteam of your own that is daisy-chained from that\none. You could keep using your N + M block numbering scheme if you\nwant to, and the callback of the new stream could decode the block\nnumbers and redirect to the appropriate relation + real block number.I think it is good to make as small changes as possible to the RC, so agree with this approach. 
Looking at the patch. I think it will work, but I'll do some experimentation with the patch.Just asking, is there any particular reason why you do not want to *add* new functions for opaque objects inside a major release? After all, that was the reason they were opaque from the beginning and extending with new functions would not break any existing code, not even from the ABI perspective. \nThat way you'd get I/O concurrency for both relations (for now just\nread-ahead advice, but see Andres's AIO v2 thread). That'd\nessentially be a more supported version of the 'access the struct\ninternals' idea (or at least my understanding of what you had in\nmind), through daisy-chained streams. A little weird maybe, and maybe\nthe redesign work will result in something completely\ndifferent/better... just a thought... }\nI'll take a look at the thread. I really think the ReadStream abstraction is a good step in the right direction.-- Best wishes,Mats Kindahl, Timescale",
"msg_date": "Fri, 13 Sep 2024 10:10:13 +0200",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Fri, Sep 13, 2024 at 10:10 AM Mats Kindahl <[email protected]> wrote:\n\n> On Tue, Sep 10, 2024 at 6:04 AM Thomas Munro <[email protected]>\n> wrote:\n>\n>> On Tue, Sep 10, 2024 at 10:27 AM Thomas Munro <[email protected]>\n>> wrote:\n>> > Mats, what do you think about\n>> > this? (I haven't tried to preserve the prefetching behaviour, which\n>> > probably didn't actually too work for you in v16 anyway at a guess,\n>> > I'm just looking for the absolute simplest thing we can do to resolve\n>> > this API mismatch.) TimeScale could then continue to use its v16\n>> > coding to handle the two-relations-in-a-trenchcoat problem, and we\n>> > could continue discussing how to make v18 better.\n>>\n>> . o O { Spitballing here: if we add that tiny function I showed to get\n>> you unstuck for v17, then later in v18, if we add a multi-relation\n>> ReadStream constructor/callback (I have a patch somewhere, I want to\n>> propose that as it is needed for streaming recovery), you could\n>> construct a new ReadSteam of your own that is daisy-chained from that\n>> one. You could keep using your N + M block numbering scheme if you\n>> want to, and the callback of the new stream could decode the block\n>> numbers and redirect to the appropriate relation + real block number.\n>>\n>\n> I think it is good to make as small changes as possible to the RC, so\n> agree with this approach. Looking at the patch. I think it will work, but\n> I'll do some experimentation with the patch.\n>\n> Just asking, is there any particular reason why you do not want to *add*\n> new functions for opaque objects inside a major release? After all, that\n> was the reason they were opaque from the beginning and extending with new\n> functions would not break any existing code, not even from the ABI\n> perspective.\n>\n>\n>> That way you'd get I/O concurrency for both relations (for now just\n>> read-ahead advice, but see Andres's AIO v2 thread). That'd\n>> essentially be a more supported version of the 'access the struct\n>> internals' idea (or at least my understanding of what you had in\n>> mind), through daisy-chained streams. A little weird maybe, and maybe\n>> the redesign work will result in something completely\n>> different/better... just a thought... }\n>>\n>\n> I'll take a look at the thread. I really think the ReadStream abstraction\n> is a good step in the right direction.\n> --\n> Best wishes,\n> Mats Kindahl, Timescale\n>\n\nHi Thomas,\n\nI used the combination of your patch and making the computation of\nvacattrstats for a relation available through the API and managed to\nimplement something that I think does the right thing. (I just sampled a\nfew different statistics to check if they seem reasonable, like most common\nvals and most common freqs.) See attached patch.\n\nI need the vacattrstats to set up the two streams for the internal\nrelations. I can just re-implement them in the same way as is already done,\nbut this seems like a small change that avoids unnecessary code\nduplication.\n-- \nBest wishes,\nMats Kindahl, Timescale",
"msg_date": "Sat, 14 Sep 2024 14:14:29 +0200",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Sun, Sep 15, 2024 at 12:14 AM Mats Kindahl <[email protected]> wrote:\n> I used the combination of your patch and making the computation of vacattrstats for a relation available through the API and managed to implement something that I think does the right thing. (I just sampled a few different statistics to check if they seem reasonable, like most common vals and most common freqs.) See attached patch.\n\nCool. I went ahead and committed that small new function and will\nmark the open item closed.\n\n> I need the vacattrstats to set up the two streams for the internal relations. I can just re-implement them in the same way as is already done, but this seems like a small change that avoids unnecessary code duplication.\n\nUnfortunately we're not in a phase where we can make non-essential\nchanges, we're right about to release and we're only committing fixes,\nand it seems like you have a way forward (albeit with some\nduplication). We can keep talking about that for v18.\n\n From your earlier email:\n> I'll take a look at the thread. I really think the ReadStream abstraction is a good step in the right direction.\n\nHere's something you or your colleagues might be interested in: I was\nlooking around for a fun extension to streamify as a demo of the\ntechnology, and I finished up writing a quick patch to streamify\npgvector's HNSW index scan, which worked well enough to share[1] (I\nthink it should in principle be able to scale with the number of graph\nconnections, at least 16x), but then people told me that it's of\nlimited interest because everybody knows that HNSW indexes have to fit\nin memory (I think there may also be memory prefetch streaming\nopportunities, unexamined for now). But that made me wonder what the\npeople with the REALLY big indexes do for hyperdimensional graph\nsearch on a scale required to build Skynet, and that led me back to\nTimescale pgvectorscale[2]. I see two obvious signs that this thing\nis eminently and profitably streamifiable: (1) The stated aim is\noptimising for indexes that don't fit in memory, hence \"Disk\" in the\nname of the research project it is inspired by, (2) I see that\nDIskANN[3] is aggressively using libaio (Linux) and overlapped/IOCP\n(Windows). So now I am waiting patiently for a Rustacean to show up\nwith patches for pgvectorscale to use ReadStream, which would already\nget read-ahead advice and vectored I/O (Linux, macOS, FreeBSD soon\nhopefully), and hopefully also provide a nice test case for the AIO\npatch set which redirects buffer reads through io_uring (Linux,\nbasically the newer better libaio) or background I/O workers (other\nOSes, which works surprisingly competitively). Just BTW for\ncomparison with DiskANN we have also had early POC-quality patches\nthat drive AIO with overlapped/IOCP (Windows) which will eventually be\nrebased and proposed (Windows isn't really a primary target but we\nwanted to validate that the stuff we're working on has abstractions\nthat will map to the obvious system APIs found in the systems\nPostgreSQL targets). For completeness, I've also had it mostly\nworking on the POSIX AIO of FreeBSD, HP-UX and AIX (though we dropped\nsupport for those last two so that was a bit of a dead end).\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJ_7NKd46nx1wbyXWriuZSNzsTfm%2BrhEuvU6nxZi3-KVw%40mail.gmail.com\n[2] https://github.com/timescale/pgvectorscale\n[3] https://github.com/microsoft/DiskANN\n\n\n",
"msg_date": "Wed, 18 Sep 2024 15:13:20 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 5:13 AM Thomas Munro <[email protected]> wrote:\n\n> On Sun, Sep 15, 2024 at 12:14 AM Mats Kindahl <[email protected]> wrote:\n> > I used the combination of your patch and making the computation of\n> vacattrstats for a relation available through the API and managed to\n> implement something that I think does the right thing. (I just sampled a\n> few different statistics to check if they seem reasonable, like most common\n> vals and most common freqs.) See attached patch.\n>\n> Cool. I went ahead and committed that small new function and will\n> mark the open item closed.\n>\n\nThank you Thomas, this will help a lot.\n\n\n> > I need the vacattrstats to set up the two streams for the internal\n> relations. I can just re-implement them in the same way as is already done,\n> but this seems like a small change that avoids unnecessary code duplication.\n>\n> Unfortunately we're not in a phase where we can make non-essential\n> changes, we're right about to release and we're only committing fixes,\n> and it seems like you have a way forward (albeit with some\n> duplication). We can keep talking about that for v18.\n>\n\nYes, I can work around this by re-implementing the same code that is\npresent in PostgreSQL.\n\n\n>\n> From your earlier email:\n> > I'll take a look at the thread. I really think the ReadStream\n> abstraction is a good step in the right direction.\n>\n> Here's something you or your colleagues might be interested in: I was\n> looking around for a fun extension to streamify as a demo of the\n> technology, and I finished up writing a quick patch to streamify\n> pgvector's HNSW index scan, which worked well enough to share[1] (I\n> think it should in principle be able to scale with the number of graph\n> connections, at least 16x), but then people told me that it's of\n> limited interest because everybody knows that HNSW indexes have to fit\n> in memory (I think there may also be memory prefetch streaming\n> opportunities, unexamined for now). But that made me wonder what the\n> people with the REALLY big indexes do for hyperdimensional graph\n> search on a scale required to build Skynet, and that led me back to\n> Timescale pgvectorscale[2]. I see two obvious signs that this thing\n> is eminently and profitably streamifiable: (1) The stated aim is\n> optimising for indexes that don't fit in memory, hence \"Disk\" in the\n> name of the research project it is inspired by, (2) I see that\n> DIskANN[3] is aggressively using libaio (Linux) and overlapped/IOCP\n> (Windows). So now I am waiting patiently for a Rustacean to show up\n> with patches for pgvectorscale to use ReadStream, which would already\n> get read-ahead advice and vectored I/O (Linux, macOS, FreeBSD soon\n> hopefully), and hopefully also provide a nice test case for the AIO\n> patch set which redirects buffer reads through io_uring (Linux,\n> basically the newer better libaio) or background I/O workers (other\n> OSes, which works surprisingly competitively). Just BTW for\n> comparison with DiskANN we have also had early POC-quality patches\n> that drive AIO with overlapped/IOCP (Windows) which will eventually be\n> rebased and proposed (Windows isn't really a primary target but we\n> wanted to validate that the stuff we're working on has abstractions\n> that will map to the obvious system APIs found in the systems\n> PostgreSQL targets). 
For completeness, I've also had it mostly\n> working on the POSIX AIO of FreeBSD, HP-UX and AIX (though we dropped\n> support for those last two so that was a bit of a dead end).\n\n\n\n\n> [1]\n>\n> https://www.postgresql.org/message-id/flat/CA%2BhUKGJ_7NKd46nx1wbyXWriuZSNzsTfm%2BrhEuvU6nxZi3-KVw%40mail.gmail.com\n> [2] https://github.com/timescale/pgvectorscale\n> [3] https://github.com/microsoft/DiskANN\n>\n\nThanks Thomas, this looks really interesting. I've forwarded it to the\npgvectorscale team.\n-- \nBest wishes,\nMats Kindahl, Timescale",
"msg_date": "Fri, 20 Sep 2024 08:36:42 +0200",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use streaming read API in ANALYZE"
}
] |
[
{
"msg_contents": "Is it possible to launch an autovacuum from within an extension?\n\nI'm developing an index access method. After the index gets built it needs\nsome cleanup and optimization. I'd prefer to do this in the\namvacuumcleanup() method so it can happen periodically and asynchronously.\n\nI could fire up a background worker to do the job, but it would be a lot\nsimpler to call please_launch_autovacuum_right_now();\n\n\n-- \nChris Cleveland\n312-339-2677 mobile\n\nIs it possible to launch an autovacuum from within an extension?I'm developing an index access method. After the index gets built it needs some cleanup and optimization. I'd prefer to do this in the amvacuumcleanup() method so it can happen periodically and asynchronously.I could fire up a background worker to do the job, but it would be a lot simpler to call please_launch_autovacuum_right_now();-- Chris Cleveland312-339-2677 mobile",
"msg_date": "Mon, 19 Feb 2024 15:15:29 -0600",
"msg_from": "Chris Cleveland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Possible to trigger autovacuum?"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 03:15:29PM -0600, Chris Cleveland wrote:\n> Is it possible to launch an autovacuum from within an extension?\n> \n> I'm developing an index access method. After the index gets built it needs\n> some cleanup and optimization. I'd prefer to do this in the\n> amvacuumcleanup() method so it can happen periodically and asynchronously.\n> \n> I could fire up a background worker to do the job, but it would be a lot\n> simpler to call please_launch_autovacuum_right_now();\n\nThe autovacuum launcher can be stopped in its nap with signals, like a\nSIGHUP. So you could rely on that to force a job to happen on a given\ndatabase based on the timing you're aiming for.\n--\nMichael",
"msg_date": "Tue, 20 Feb 2024 08:02:34 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible to trigger autovacuum?"
},
{
"msg_contents": "On 2024-Feb-19, Chris Cleveland wrote:\n\n> Is it possible to launch an autovacuum from within an extension?\n> \n> I'm developing an index access method. After the index gets built it\n> needs some cleanup and optimization. I'd prefer to do this in the\n> amvacuumcleanup() method so it can happen periodically and\n> asynchronously.\n\nAutovacuum has a mechanism to be requested work -- grep the tree for\nAutoVacuumRequestWork and AutoVacuumWorkItemType. Currently its only\nuse is BRIN autosummarization, but it's possible to add others by\npatching the core code. If you want to propose the idea of making it\nextensible, I think it would serve not only your present use case but\nplenty of others, too.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 20 Feb 2024 09:59:08 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible to trigger autovacuum?"
}
] |
[
{
"msg_contents": "Hi\n\nHi\n\n\nWith the addition of \"pg_sync_replication_slots()\", there is now a use-case for\nincluding \"dbname\" in \"primary_conninfo\" and the docs have changed from\nstating [1]:\n\n Do not specify a database name in the primary_conninfo string.\n\nto [2]:\n\n For replication slot synchronization (see Section 48.2.3), it is also\n necessary to specify a valid dbname in the primary_conninfo string. This will\n only be used for slot synchronization. It is ignored for streaming.\n\n[1] https://www.postgresql.org/docs/16/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY\n[2] https://www.postgresql.org/docs/devel/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY\n\nHowever, when setting up a standby (with the intent of creating a logical\nstandby) with pg_basebackup, providing the -R/-write-recovery-conf option\nresults in a \"primary_conninfo\" string being generated without a \"dbname\"\nparameter, which requires a manual change should one intend to use\n\"pg_sync_replication_slots()\".\n\nI can't see any reason for continuing to omit \"dbname\", so suggest it should\nonly continue to be omitted for 16 and earlier; see attached patch.\n\nNote that this does mean that if the conninfo string provided to pg_basebackup\ndoes not contain \"dbname\", the generated \"primary_conninfo\" GUC will default to\n\"dbname=replication\". It would be easy enough to suppress this, but AFAICS\nthere's no way to tell if it was explicitly supplied by the user, in which case\nit would be surprising if it were omitted from the final \"primary_conninfo\"\nstring.\n\nThe only other place where GenerateRecoveryConfig() is called is pg_rewind;\nI can't see any adverse affects from the proposed change. But it's perfectly\npossible there's something blindingly obvious I'm overlooking.\n\n\n\nRegards\n\nIan Barwick",
"msg_date": "Tue, 20 Feb 2024 08:34:22 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Tue, 20 Feb 2024 at 00:34, Ian Lawrence Barwick <[email protected]> wrote:\n> With the addition of \"pg_sync_replication_slots()\", there is now a use-case for\n> including \"dbname\" in \"primary_conninfo\" and the docs have changed from\n> stating [1]:\n>\n> Do not specify a database name in the primary_conninfo string.\n>\n> to [2]:\n>\n> For replication slot synchronization (see Section 48.2.3), it is also\n> necessary to specify a valid dbname in the primary_conninfo string. This will\n> only be used for slot synchronization. It is ignored for streaming.\n>\n> [1] https://www.postgresql.org/docs/16/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY\n> [2] https://www.postgresql.org/docs/devel/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY\n\nSounds like that documentation should be updated in the same way as\nwas done for pg_basebackup/pg_receivewal in commit cca97ce6a665. When\nconsidering middleware/proxies having dbname in there can be useful\neven for older PG versions.\n\n> I can't see any reason for continuing to omit \"dbname\", so suggest it should\n> only continue to be omitted for 16 and earlier; see attached patch.\n\nYeah, that seems like a good change. Though, I'm wondering if the\nversion check is actually necessary.\n\n\n",
"msg_date": "Tue, 20 Feb 2024 01:21:56 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 5:04 AM Ian Lawrence Barwick <[email protected]> wrote:\n>\n>\n> With the addition of \"pg_sync_replication_slots()\", there is now a use-case for\n> including \"dbname\" in \"primary_conninfo\" and the docs have changed from\n> stating [1]:\n>\n> Do not specify a database name in the primary_conninfo string.\n>\n> to [2]:\n>\n> For replication slot synchronization (see Section 48.2.3), it is also\n> necessary to specify a valid dbname in the primary_conninfo string. This will\n> only be used for slot synchronization. It is ignored for streaming.\n>\n> [1] https://www.postgresql.org/docs/16/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY\n> [2] https://www.postgresql.org/docs/devel/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY\n>\n> However, when setting up a standby (with the intent of creating a logical\n> standby) with pg_basebackup, providing the -R/-write-recovery-conf option\n> results in a \"primary_conninfo\" string being generated without a \"dbname\"\n> parameter, which requires a manual change should one intend to use\n> \"pg_sync_replication_slots()\".\n>\n> I can't see any reason for continuing to omit \"dbname\", so suggest it should\n> only continue to be omitted for 16 and earlier; see attached patch.\n>\n> Note that this does mean that if the conninfo string provided to pg_basebackup\n> does not contain \"dbname\", the generated \"primary_conninfo\" GUC will default to\n> \"dbname=replication\".\n>\n\nWhen I tried, it defaulted to user name:\nDefault case connection string:\nprimary_conninfo = 'user=KapilaAm\npassfile=''C:\\\\\\\\Users\\\\\\\\kapilaam\\\\\\\\AppData\\\\\\\\Roaming/postgresql/pgpass.conf''\nchannel_binding=disable dbname=KapilaAm port=5432 sslmode=disable\nsslcompression=0 sslcertmode=disable sslsni=1\nssl_min_protocol_version=TLSv1.2 gssencmode=disable\nkrbsrvname=postgres gssdelegation=0 target_session_attrs=any\nload_balance_hosts=disable'\n\nWhen I specified -d \"dbname = postgres\" during backup:\nprimary_conninfo = 'user=KapilaAm\npassfile=''C:\\\\\\\\Users\\\\\\\\kapilaam\\\\\\\\AppData\\\\\\\\Roaming/postgresql/pgpass.conf''\nchannel_binding=disable dbname=KapilaAm port=5432 sslmode=disable\nsslcompression=0 sslcertmode=disable sslsni=1\nssl_min_protocol_version=TLSv1.2 gssencmode=disable\nkrbsrvname=postgres gssdelegation=0 target_session_attrs=any\nload_balance_hosts=disable'\n\nI think it makes sense to include dbname in the connection string\nprogrammatically (with some option) for slot sync functionality but it\nis not clear to me if there is any impact of the same when the standby\nis not set up to sync up logical slots. I haven't checked but if there\nis a way to distinguish the case where we write dbname only when\nspecified by the user then that would be great but this is worth\nconsidering even without that.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Feb 2024 10:27:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "Dear Ian,\r\n\r\nThanks for making the patch.\r\n\r\n> With the addition of \"pg_sync_replication_slots()\", there is now a use-case for\r\n> including \"dbname\" in \"primary_conninfo\" and the docs have changed from\r\n> stating [1]:\r\n> \r\n> Do not specify a database name in the primary_conninfo string.\r\n> \r\n> to [2]:\r\n> \r\n> For replication slot synchronization (see Section 48.2.3), it is also\r\n> necessary to specify a valid dbname in the primary_conninfo string. This will\r\n> only be used for slot synchronization. It is ignored for streaming.\r\n> \r\n> [1]\r\n> https://www.postgresql.org/docs/16/runtime-config-replication.html#RUNTIME\r\n> -CONFIG-REPLICATION-STANDBY\r\n> [2]\r\n> https://www.postgresql.org/docs/devel/runtime-config-replication.html#RUNTI\r\n> ME-CONFIG-REPLICATION-STANDBY\r\n> \r\n> However, when setting up a standby (with the intent of creating a logical\r\n> standby) with pg_basebackup, providing the -R/-write-recovery-conf option\r\n> results in a \"primary_conninfo\" string being generated without a \"dbname\"\r\n> parameter, which requires a manual change should one intend to use\r\n> \"pg_sync_replication_slots()\".\r\n> \r\n> I can't see any reason for continuing to omit \"dbname\", so suggest it should\r\n> only continue to be omitted for 16 and earlier; see attached patch.\r\n\r\nHmm, I also cannot find a reason, but we can document the change.\r\n\r\n> Note that this does mean that if the conninfo string provided to pg_basebackup\r\n> does not contain \"dbname\", the generated \"primary_conninfo\" GUC will default to\r\n> \"dbname=replication\". It would be easy enough to suppress this, but AFAICS\r\n> there's no way to tell if it was explicitly supplied by the user, in which case\r\n> it would be surprising if it were omitted from the final \"primary_conninfo\"\r\n> string.\r\n\r\nI found an inconsistency. When I ran ` pg_basebackup -D data_N2 -U postgres -R`,\r\ndbname would be set as username.\r\n\r\n```\r\nprimary_conninfo = 'user=postgres ... dbname=postgres\r\n```\r\n\r\nHowever, when I ran `pg_basebackup -D data_N2 -d \"user=postgres\" -R`,\r\ndbname would be set as \"replication\". Is it an intentional item?\r\n\r\n```\r\nprimary_conninfo = 'user=postgres ... dbname=replication...\r\n```\r\n\r\n> The only other place where GenerateRecoveryConfig() is called is pg_rewind;\r\n> I can't see any adverse affects from the proposed change. But it's perfectly\r\n> possible there's something blindingly obvious I'm overlooking.\r\n\r\nOn-going feature pg_createsubscriber[1] also uses GenerateRecoveryConfig(), but\r\nI can't see any bad effects. The output is being used to make consistent standby\r\nfrom the primary. Even if dbname is set in the primary_conninfo, it would be ignored.\r\n\r\n[1]: https://commitfest.postgresql.org/47/4637/\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Tue, 20 Feb 2024 10:48:32 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 4:18 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n> I found an inconsistency. When I ran ` pg_basebackup -D data_N2 -U postgres -R`,\n> dbname would be set as username.\n>\n> ```\n> primary_conninfo = 'user=postgres ... dbname=postgres\n> ```\n>\n> However, when I ran `pg_basebackup -D data_N2 -d \"user=postgres\" -R`,\n> dbname would be set as \"replication\". Is it an intentional item?\n>\n> ```\n> primary_conninfo = 'user=postgres ... dbname=replication...\n> ```\n\nSeems weird to me. You don't use dbname=replication to ask for a\nreplication connection, so why would we ever end up with that\nanywhere? And especially in only one of two such closely related\ncases?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 16:36:09 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "Dear Robert,\r\n\r\n> Seems weird to me. You don't use dbname=replication to ask for a\r\n> replication connection, so why would we ever end up with that\r\n> anywhere? And especially in only one of two such closely related\r\n> cases?\r\n\r\nJust FYI - here is an extreme case. And note that I have applied proposed patch.\r\n\r\nWhen `pg_basebackup -D data_N2 -R` is used:\r\n```\r\nprimary_conninfo = 'user=hayato ... dbname=hayato ...\r\n```\r\n\r\nBut when `pg_basebackup -d \"\" -D data_N2 -R` is used:\r\n```\r\nprimary_conninfo = 'user=hayato ... dbname=replication\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Tue, 20 Feb 2024 12:28:29 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 5:58 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n> > Seems weird to me. You don't use dbname=replication to ask for a\n> > replication connection, so why would we ever end up with that\n> > anywhere? And especially in only one of two such closely related\n> > cases?\n>\n> Just FYI - here is an extreme case. And note that I have applied proposed patch.\n>\n> When `pg_basebackup -D data_N2 -R` is used:\n> ```\n> primary_conninfo = 'user=hayato ... dbname=hayato ...\n> ```\n>\n> But when `pg_basebackup -d \"\" -D data_N2 -R` is used:\n> ```\n> primary_conninfo = 'user=hayato ... dbname=replication\n> ```\n\nIt seems like maybe somebody should look into why this is happening,\nand perhaps fix it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 19:56:10 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "Dear Robert,\r\n\r\n> > Just FYI - here is an extreme case. And note that I have applied proposed patch.\r\n> >\r\n> > When `pg_basebackup -D data_N2 -R` is used:\r\n> > ```\r\n> > primary_conninfo = 'user=hayato ... dbname=hayato ...\r\n> > ```\r\n> >\r\n> > But when `pg_basebackup -d \"\" -D data_N2 -R` is used:\r\n> > ```\r\n> > primary_conninfo = 'user=hayato ... dbname=replication\r\n> > ```\r\n> \r\n> It seems like maybe somebody should look into why this is happening,\r\n> and perhaps fix it.\r\n\r\nI think this caused from below part [1] in GetConnection().\r\n\r\nIf both dbname and connection_string are the NULL, we will enter the else part\r\nand NULL would be substituted - {\"dbnmae\", NULL} key-value pair is generated\r\nonly here.\r\n\r\nThen, in PQconnectdbParams()->PQconnectStartParams->pqConnectOptions2(),\r\nthe strange part would be found and replaced to the username [2].\r\n\r\nI think if both the connection string and the dbname are NULL, it should be\r\nconsidered as the physical replication connection. here is a patch to fix it.\r\nAfter the application, below two examples can output \"dbname=replication\".\r\nYou can also confirm.\r\n\r\n```\r\npg_basebackup -D data_N2 -U postgres\r\npg_basebackup -D data_N2 -R -v\r\n\r\n-> primary_conninfo = 'user=postgres ... dbname=replication ...\r\n```\r\n\r\n[1]\r\n```\r\n\telse\r\n\t{\r\n\t\tkeywords = pg_malloc0((argcount + 1) * sizeof(*keywords));\r\n\t\tvalues = pg_malloc0((argcount + 1) * sizeof(*values));\r\n\t\tkeywords[i] = \"dbname\";\r\n\t\tvalues[i] = dbname;\r\n\t\ti++;\r\n\t}\r\n```\r\n\r\n[2]\r\n```\r\n\t/*\r\n\t * If database name was not given, default it to equal user name\r\n\t */\r\n\tif (conn->dbName == NULL || conn->dbName[0] == '\\0')\r\n\t{\r\n\t\tfree(conn->dbName);\r\n\t\tconn->dbName = strdup(conn->pguser);\r\n\t\tif (!conn->dbName)\r\n\t\t\tgoto oom_error;\r\n\t}\r\n```\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Wed, 21 Feb 2024 02:15:57 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "At Tue, 20 Feb 2024 19:56:10 +0530, Robert Haas <[email protected]> wrote in \n> It seems like maybe somebody should look into why this is happening,\n> and perhaps fix it.\n\nGetConnection()@streamutil.c wants to ensure conninfo has a fallback\ndatabase name (\"replication\"). However, the function seems to be\nignoring the case where neither dbname nor connection string is given,\nwhich is the first case Kuroda-san raised. The second case is the\nintended behavior of the function.\n\n>\t/* pg_recvlogical uses dbname only; others use connection_string only. */\n>\tAssert(dbname == NULL || connection_string == NULL);\n\nAnd the function incorrectly assumes that the connection string\nrequires \"dbname=replication\".\n\n>\t * Merge the connection info inputs given in form of connection string,\n>\t * options and default values (dbname=replication, replication=true, etc.)\n\nBut the name is a pseudo database name only used by pg_hba.conf\n(maybe) , which cannot be used as an actual database name (without\nquotation marks, or unless it is actually created). The function\nshould not add the fallback database name because the connection\nstring for physical replication connection doesn't require the dbname\nparameter. (attached patch)\n\nAbout the proposed patch, pg_basebackup cannot verify the validity of\nthe dbname. It could be problematic.\n\nAlthough I haven't looked the original thread, it seems that the\ndbname is used only by pg_sync_replication_slots(). If it is true,\ncouldn't we make the SQL function require a database name to make a\nconnection, instead of requiring it in physical-replication conninfo?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 21 Feb 2024 12:04:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 2:04 PM Kyotaro Horiguchi <[email protected]>\nwrote:\n\n>\n> About the proposed patch, pg_basebackup cannot verify the validity of\n> the dbname. It could be problematic.\n>\n> Although I haven't looked the original thread, it seems that the\n> dbname is used only by pg_sync_replication_slots(). If it is true,\n> couldn't we make the SQL function require a database name to make a\n> connection, instead of requiring it in physical-replication conninfo?\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\nI agree. If the intention is to meet the new requirement of the sync-slot\npatch which requires a dbname in the primary_conninfo, then pseudo dbnames\nwill not work, whether it be the username or just \"replication\". I feel if\nthe user does not specify dbname explicitly in pg_basebackup it should be\nleft blank in the generated primary_conninfo string as well.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Wed, Feb 21, 2024 at 2:04 PM Kyotaro Horiguchi <[email protected]> wrote:\nAbout the proposed patch, pg_basebackup cannot verify the validity of\nthe dbname. It could be problematic.\n\nAlthough I haven't looked the original thread, it seems that the\ndbname is used only by pg_sync_replication_slots(). If it is true,\ncouldn't we make the SQL function require a database name to make a\nconnection, instead of requiring it in physical-replication conninfo?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software CenterI agree. If the intention is to meet the new requirement of the sync-slot patch which requires a dbname in the primary_conninfo, then pseudo dbnames will not work, whether it be the username or just \"replication\". I feel if the user does not specify dbname explicitly in pg_basebackup it should be left blank in the generated primary_conninfo string as well. regards,Ajin CherianFujitsu Australia",
"msg_date": "Wed, 21 Feb 2024 14:37:21 +1100",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 2:04 PM Kyotaro Horiguchi <[email protected]>\nwrote:\n\n>\n> Although I haven't looked the original thread, it seems that the\n> dbname is used only by pg_sync_replication_slots(). If it is true,\n> couldn't we make the SQL function require a database name to make a\n> connection, instead of requiring it in physical-replication conninfo?\n>\n>\n>\nIn the original thread, the intention is to not just provide this\nfunctionality using the function pg_sync_replication_slots(), but provide\na GUC option on standbys to sync logical replication slots periodically\neven without calling that function. This requires connecting to a database.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Wed, Feb 21, 2024 at 2:04 PM Kyotaro Horiguchi <[email protected]> wrote:\nAlthough I haven't looked the original thread, it seems that the\ndbname is used only by pg_sync_replication_slots(). If it is true,\ncouldn't we make the SQL function require a database name to make a\nconnection, instead of requiring it in physical-replication conninfo?\nIn the original thread, the intention is to not just provide this functionality using the function pg_sync_replication_slots(), but provide a GUC option on standbys to sync logical replication slots periodically even without calling that function. This requires connecting to a database.regards,Ajin CherianFujitsu Australia",
"msg_date": "Wed, 21 Feb 2024 14:41:59 +1100",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "Dear Horiguchi-san,\n\n> GetConnection()@streamutil.c wants to ensure conninfo has a fallback\n> database name (\"replication\"). However, the function seems to be\n> ignoring the case where neither dbname nor connection string is given,\n> which is the first case Kuroda-san raised. The second case is the\n> intended behavior of the function.\n> \n> >\t/* pg_recvlogical uses dbname only; others use connection_string only.\n> */\n> >\tAssert(dbname == NULL || connection_string == NULL);\n> \n> And the function incorrectly assumes that the connection string\n> requires \"dbname=replication\".\n> \n> >\t * Merge the connection info inputs given in form of connection string,\n> >\t * options and default values (dbname=replication, replication=true,\n> etc.)\n> \n> But the name is a pseudo database name only used by pg_hba.conf\n> (maybe) , which cannot be used as an actual database name (without\n> quotation marks, or unless it is actually created). The function\n> should not add the fallback database name because the connection\n> string for physical replication connection doesn't require the dbname\n> parameter. (attached patch)\n\nI was also missing, but the requirement was that the dbname should be included\nonly when the dbname option was explicitly specified [1]. Even mine and yours\ncannot handle like that. Libpq function PQconnectdbParams()->pqConnectOptions2()\nfills all the parameter to PGconn, at that time the information whether it is\nintentionally specified or not is discarded. Then, GenerateRecoveryConfig() would\njust write down all the connection parameters from PGconn, they cannot recognize.\n\nOne approach is that based on Horiguchi-san's one and initial patch, we can\navoid writing when the dbname is same as the username.\n\n[1]: https://www.postgresql.org/message-id/CAA4eK1KH1d1J5giPMZVOtMe0iqncf1CpNwkBKoYAmXdC-kEGZg%40mail.gmail.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/ \n\n\n\n",
"msg_date": "Wed, 21 Feb 2024 05:08:51 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 4:09 PM Hayato Kuroda (Fujitsu) <\[email protected]> wrote:\n\n> Dear Horiguchi-san,\n>\n> > GetConnection()@streamutil.c wants to ensure conninfo has a fallback\n> > database name (\"replication\"). However, the function seems to be\n> > ignoring the case where neither dbname nor connection string is given,\n> > which is the first case Kuroda-san raised. The second case is the\n> > intended behavior of the function.\n> >\n> > > /* pg_recvlogical uses dbname only; others use connection_string\n> only.\n> > */\n> > > Assert(dbname == NULL || connection_string == NULL);\n> >\n> > And the function incorrectly assumes that the connection string\n> > requires \"dbname=replication\".\n> >\n> > > * Merge the connection info inputs given in form of connection\n> string,\n> > > * options and default values (dbname=replication,\n> replication=true,\n> > etc.)\n> >\n> > But the name is a pseudo database name only used by pg_hba.conf\n> > (maybe) , which cannot be used as an actual database name (without\n> > quotation marks, or unless it is actually created). The function\n> > should not add the fallback database name because the connection\n> > string for physical replication connection doesn't require the dbname\n> > parameter. (attached patch)\n>\n> I was also missing, but the requirement was that the dbname should be\n> included\n> only when the dbname option was explicitly specified [1]. Even mine and\n> yours\n> cannot handle like that. Libpq function\n> PQconnectdbParams()->pqConnectOptions2()\n> fills all the parameter to PGconn, at that time the information whether it\n> is\n> intentionally specified or not is discarded. Then,\n> GenerateRecoveryConfig() would\n> just write down all the connection parameters from PGconn, they cannot\n> recognize.\n>\n> Well, one option is that if a default dbname should be written to the\nconfiguration file, then \"postgres' is a better option than \"replication\"\nor \"username\" as the default option, at least a db of that name exists.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Wed, Feb 21, 2024 at 4:09 PM Hayato Kuroda (Fujitsu) <[email protected]> wrote:Dear Horiguchi-san,\n\n> GetConnection()@streamutil.c wants to ensure conninfo has a fallback\n> database name (\"replication\"). However, the function seems to be\n> ignoring the case where neither dbname nor connection string is given,\n> which is the first case Kuroda-san raised. The second case is the\n> intended behavior of the function.\n> \n> > /* pg_recvlogical uses dbname only; others use connection_string only.\n> */\n> > Assert(dbname == NULL || connection_string == NULL);\n> \n> And the function incorrectly assumes that the connection string\n> requires \"dbname=replication\".\n> \n> > * Merge the connection info inputs given in form of connection string,\n> > * options and default values (dbname=replication, replication=true,\n> etc.)\n> \n> But the name is a pseudo database name only used by pg_hba.conf\n> (maybe) , which cannot be used as an actual database name (without\n> quotation marks, or unless it is actually created). The function\n> should not add the fallback database name because the connection\n> string for physical replication connection doesn't require the dbname\n> parameter. (attached patch)\n\nI was also missing, but the requirement was that the dbname should be included\nonly when the dbname option was explicitly specified [1]. Even mine and yours\ncannot handle like that. 
Libpq function PQconnectdbParams()->pqConnectOptions2()\nfills all the parameter to PGconn, at that time the information whether it is\nintentionally specified or not is discarded. Then, GenerateRecoveryConfig() would\njust write down all the connection parameters from PGconn, they cannot recognize.\nWell, one option is that if a default dbname should be written to the configuration file, then \"postgres' is a better option than \"replication\" or \"username\" as the default option, at least a db of that name exists.regards,Ajin CherianFujitsu Australia",
"msg_date": "Wed, 21 Feb 2024 16:28:36 +1100",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 7:46 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > > Just FYI - here is an extreme case. And note that I have applied proposed patch.\n> > >\n> > > When `pg_basebackup -D data_N2 -R` is used:\n> > > ```\n> > > primary_conninfo = 'user=hayato ... dbname=hayato ...\n> > > ```\n> > >\n> > > But when `pg_basebackup -d \"\" -D data_N2 -R` is used:\n> > > ```\n> > > primary_conninfo = 'user=hayato ... dbname=replication\n> > > ```\n> >\n> > It seems like maybe somebody should look into why this is happening,\n> > and perhaps fix it.\n>\n> I think this caused from below part [1] in GetConnection().\n>\n> If both dbname and connection_string are the NULL, we will enter the else part\n> and NULL would be substituted - {\"dbnmae\", NULL} key-value pair is generated\n> only here.\n>\n> Then, in PQconnectdbParams()->PQconnectStartParams->pqConnectOptions2(),\n> the strange part would be found and replaced to the username [2].\n>\n> I think if both the connection string and the dbname are NULL, it should be\n> considered as the physical replication connection. here is a patch to fix it.\n>\n\nWhen dbname is NULL or not given, it defaults to username. This\nfollows the specs of the connection string. See (dbname #\nThe database name. Defaults to be the same as the user name...) [1].\nYour patch breaks that specs, so I don't think it is correct.\n\n[1] - https://www.postgresql.org/docs/devel/libpq-connect.html#LIBPQ-CONNSTRING\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Feb 2024 11:34:43 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 8:34 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Tue, 20 Feb 2024 19:56:10 +0530, Robert Haas <[email protected]> wrote in\n> > It seems like maybe somebody should look into why this is happening,\n> > and perhaps fix it.\n>\n> GetConnection()@streamutil.c wants to ensure conninfo has a fallback\n> database name (\"replication\"). However, the function seems to be\n> ignoring the case where neither dbname nor connection string is given,\n> which is the first case Kuroda-san raised. The second case is the\n> intended behavior of the function.\n>\n> > /* pg_recvlogical uses dbname only; others use connection_string only. */\n> > Assert(dbname == NULL || connection_string == NULL);\n>\n> And the function incorrectly assumes that the connection string\n> requires \"dbname=replication\".\n>\n> > * Merge the connection info inputs given in form of connection string,\n> > * options and default values (dbname=replication, replication=true, etc.)\n>\n> But the name is a pseudo database name only used by pg_hba.conf\n> (maybe) , which cannot be used as an actual database name (without\n> quotation marks, or unless it is actually created). The function\n> should not add the fallback database name because the connection\n> string for physical replication connection doesn't require the dbname\n> parameter. (attached patch)\n>\n\nWe do append dbname=replication even in libpqrcv_connect for .pgpass\nlookup as mentioned in comments. See below:\nlibpqrcv_connect()\n{\n....\nelse\n{\n/*\n* The database name is ignored by the server in replication mode,\n* but specify \"replication\" for .pgpass lookup.\n*/\nkeys[++i] = \"dbname\";\nvals[i] = \"replication\";\n}\n...\n}\n\nI think as part of this effort we should just add dbname to\nprimary_conninfo written in postgresql.auto.conf file. As above, the\nquestion is valid whether we should do it just for 17 or for prior\nversions. Let's discuss more on that. I am not sure of the use case\nfor versions before 17 but commit cca97ce6a665 mentioned that some\nmiddleware or proxies might however need to know the dbname to make\nthe correct routing decision for the connection. Does that apply here\nas well? If so, we can do it and update the docs, otherwise, let's do\nit just for 17.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Feb 2024 12:17:07 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> When dbname is NULL or not given, it defaults to username. This\r\n> follows the specs of the connection string. See (dbname #\r\n> The database name. Defaults to be the same as the user name...) [1].\r\n> Your patch breaks that specs, so I don't think it is correct.\r\n\r\nI have proposed the point because I thought pg_basebackup basically wanted to do\r\na physical replication. But if the general libpq rule is stronger than it, we\r\nshould not apply my add_NULL_check.txt. Let's forget it.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Tue, 27 Feb 2024 08:30:04 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> We do append dbname=replication even in libpqrcv_connect for .pgpass\r\n> lookup as mentioned in comments. See below:\r\n> libpqrcv_connect()\r\n> {\r\n> ....\r\n> else\r\n> {\r\n> /*\r\n> * The database name is ignored by the server in replication mode,\r\n> * but specify \"replication\" for .pgpass lookup.\r\n> */\r\n> keys[++i] = \"dbname\";\r\n> vals[i] = \"replication\";\r\n> }\r\n> ...\r\n> }\r\n\r\nOK. So we must add the value for the authorization, right?\r\nI think it should be described even in GetConnection().\r\n\r\n> I think as part of this effort we should just add dbname to\r\n> primary_conninfo written in postgresql.auto.conf file. As above, the\r\n> question is valid whether we should do it just for 17 or for prior\r\n> versions. Let's discuss more on that. I am not sure of the use case\r\n> for versions before 17 but commit cca97ce6a665 mentioned that some\r\n> middleware or proxies might however need to know the dbname to make\r\n> the correct routing decision for the connection. Does that apply here\r\n> as well? If so, we can do it and update the docs, otherwise, let's do\r\n> it just for 17.\r\n\r\nHmm, I might lose your requirements again. So, we must satisfy all of below\r\nones, right?\r\n1) add {\"dbname\", \"replication\"} key-value pair to look up .pgpass file correctly.\r\n2) If the -R is given, output dbname=xxx value to be used by slotsync worker.\r\n3) If the dbname is not given in the connection string, the same string as\r\n username must be used to keep the libpq connection rule.\r\n4) No need to add dbname=replication pair \r\n\r\nPSA the patch for implementing it. It is basically same as Ian's one.\r\nHowever, this patch still cannot satisfy the condition 3).\r\n\r\n`pg_basebackup -D data_N2 -d \"user=postgres\" -R`\r\n-> dbname would not be appeared in primary_conninfo.\r\n\r\nThis is because `if (connection_string)` case in GetConnection() explicy override\r\na dbname to \"replication\". I've tried to add a dummy entry {\"dbname\", NULL} pair\r\nbefore the overriding, but it is no-op. Because The replacement of the dbname in\r\npqConnectOptions2() would be done only for the valid (=lastly specified)\r\nconnection options.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Tue, 27 Feb 2024 08:30:19 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "> PSA the patch for implementing it. It is basically same as Ian's one.\r\n> However, this patch still cannot satisfy the condition 3).\r\n> \r\n> `pg_basebackup -D data_N2 -d \"user=postgres\" -R`\r\n> -> dbname would not be appeared in primary_conninfo.\r\n> \r\n> This is because `if (connection_string)` case in GetConnection() explicy override\r\n> a dbname to \"replication\". I've tried to add a dummy entry {\"dbname\", NULL} pair\r\n> before the overriding, but it is no-op. Because The replacement of the dbname in\r\n> pqConnectOptions2() would be done only for the valid (=lastly specified)\r\n> connection options.\r\n\r\nOh, this patch missed the straightforward case:\r\n\r\npg_basebackup -D data_N2 -d \"user=postgres dbname=replication\" -R\r\n-> dbname would not be appeared in primary_conninfo.\r\n\r\nSo I think it cannot be applied as-is. Sorry for sharing the bad item.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Tue, 27 Feb 2024 08:37:34 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 2:00 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > We do append dbname=replication even in libpqrcv_connect for .pgpass\n> > lookup as mentioned in comments. See below:\n> > libpqrcv_connect()\n> > {\n> > ....\n> > else\n> > {\n> > /*\n> > * The database name is ignored by the server in replication mode,\n> > * but specify \"replication\" for .pgpass lookup.\n> > */\n> > keys[++i] = \"dbname\";\n> > vals[i] = \"replication\";\n> > }\n> > ...\n> > }\n>\n> OK. So we must add the value for the authorization, right?\n> I think it should be described even in GetConnection().\n>\n> > I think as part of this effort we should just add dbname to\n> > primary_conninfo written in postgresql.auto.conf file. As above, the\n> > question is valid whether we should do it just for 17 or for prior\n> > versions. Let's discuss more on that. I am not sure of the use case\n> > for versions before 17 but commit cca97ce6a665 mentioned that some\n> > middleware or proxies might however need to know the dbname to make\n> > the correct routing decision for the connection. Does that apply here\n> > as well? If so, we can do it and update the docs, otherwise, let's do\n> > it just for 17.\n>\n> Hmm, I might lose your requirements again. So, we must satisfy all of below\n> ones, right?\n> 1) add {\"dbname\", \"replication\"} key-value pair to look up .pgpass file correctly.\n> 2) If the -R is given, output dbname=xxx value to be used by slotsync worker.\n> 3) If the dbname is not given in the connection string, the same string as\n> username must be used to keep the libpq connection rule.\n> 4) No need to add dbname=replication pair\n>\n\nPoint 1) and 4) seems contradictory or maybe I am missing something.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Feb 2024 18:37:08 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 2:07 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > PSA the patch for implementing it. It is basically same as Ian's one.\n> > However, this patch still cannot satisfy the condition 3).\n> >\n> > `pg_basebackup -D data_N2 -d \"user=postgres\" -R`\n> > -> dbname would not be appeared in primary_conninfo.\n> >\n> > This is because `if (connection_string)` case in GetConnection() explicy override\n> > a dbname to \"replication\". I've tried to add a dummy entry {\"dbname\", NULL} pair\n> > before the overriding, but it is no-op. Because The replacement of the dbname in\n> > pqConnectOptions2() would be done only for the valid (=lastly specified)\n> > connection options.\n>\n> Oh, this patch missed the straightforward case:\n>\n> pg_basebackup -D data_N2 -d \"user=postgres dbname=replication\" -R\n> -> dbname would not be appeared in primary_conninfo.\n>\n> So I think it cannot be applied as-is. Sorry for sharing the bad item.\n>\n\nCan you please share the patch that can be considered for review?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 11 Mar 2024 17:15:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Mon, 11 Mar 2024 at 17:16, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Feb 27, 2024 at 2:07 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > > PSA the patch for implementing it. It is basically same as Ian's one.\n> > > However, this patch still cannot satisfy the condition 3).\n> > >\n> > > `pg_basebackup -D data_N2 -d \"user=postgres\" -R`\n> > > -> dbname would not be appeared in primary_conninfo.\n> > >\n> > > This is because `if (connection_string)` case in GetConnection() explicy override\n> > > a dbname to \"replication\". I've tried to add a dummy entry {\"dbname\", NULL} pair\n> > > before the overriding, but it is no-op. Because The replacement of the dbname in\n> > > pqConnectOptions2() would be done only for the valid (=lastly specified)\n> > > connection options.\n> >\n> > Oh, this patch missed the straightforward case:\n> >\n> > pg_basebackup -D data_N2 -d \"user=postgres dbname=replication\" -R\n> > -> dbname would not be appeared in primary_conninfo.\n> >\n> > So I think it cannot be applied as-is. Sorry for sharing the bad item.\n> >\n>\n> Can you please share the patch that can be considered for review?\n\nHere is a patch with few changes: a) Removed the version check based\non discussion from [1] b) updated the documentation.\nI have tested various scenarios with the attached patch, the dbname\nthat is written in postgresql.auto.conf for each of the cases with\npg_basebackup is given below:\n1) ./pg_basebackup -U vignesh -R\n-> primary_conninfo = \"dbname=vignesh\" (In this case primary_conninfo\nwill have dbname as username specified)\n2) ./pg_basebackup -D data -R\n-> primary_conninfo = \"dbname=vignesh\" (In this case primary_conninfo\nwill have the dbname as username (which is the same as the OS user\nwhile setting defaults))\n3) ./pg_basebackup -d \"user=vignesh\" -D data -R\n-> primary_conninfo = \"dbname=replication\" (In this case\nprimary_conninfo will have dbname as replication which is the default\nvalue from GetConnection as connection string is specified)\n4) ./pg_basebackup -d \"user=vignesh dbname=postgres\" -D data -R\n-> primary_conninfo = \"dbname=postgres\" (In this case primary_conninfo\nwill have the dbname as the dbname specified)\n5) ./pg_basebackup -d \"\" -D data -R\n-> primary_conninfo = \"dbname=replication\" (In this case\nprimary_conninfo will have dbname as replication which is the default\nvalue from GetConnection as connection string is specified)\n\nI have mentioned the reasons as to why dbname is being set to\nreplication or username in each of the cases. How about replacing\nthese values in postgresql.auto.conf manually.\n\n[1] - https://www.postgresql.org/message-id/CAGECzQTh9oB3nu98DsHMpRaVaqXPDRgTDEikY82OAKYF0%3DhVMA%40mail.gmail.com\n\nRegards,\nVignesh",
"msg_date": "Tue, 12 Mar 2024 17:12:57 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 5:13 PM vignesh C <[email protected]> wrote:\n>\n> On Mon, 11 Mar 2024 at 17:16, Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Feb 27, 2024 at 2:07 PM Hayato Kuroda (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > > PSA the patch for implementing it. It is basically same as Ian's one.\n> > > > However, this patch still cannot satisfy the condition 3).\n> > > >\n> > > > `pg_basebackup -D data_N2 -d \"user=postgres\" -R`\n> > > > -> dbname would not be appeared in primary_conninfo.\n> > > >\n> > > > This is because `if (connection_string)` case in GetConnection() explicy override\n> > > > a dbname to \"replication\". I've tried to add a dummy entry {\"dbname\", NULL} pair\n> > > > before the overriding, but it is no-op. Because The replacement of the dbname in\n> > > > pqConnectOptions2() would be done only for the valid (=lastly specified)\n> > > > connection options.\n> > >\n> > > Oh, this patch missed the straightforward case:\n> > >\n> > > pg_basebackup -D data_N2 -d \"user=postgres dbname=replication\" -R\n> > > -> dbname would not be appeared in primary_conninfo.\n> > >\n> > > So I think it cannot be applied as-is. Sorry for sharing the bad item.\n> > >\n> >\n> > Can you please share the patch that can be considered for review?\n>\n> Here is a patch with few changes: a) Removed the version check based\n> on discussion from [1] b) updated the documentation.\n> I have tested various scenarios with the attached patch, the dbname\n> that is written in postgresql.auto.conf for each of the cases with\n> pg_basebackup is given below:\n> 1) ./pg_basebackup -U vignesh -R\n> -> primary_conninfo = \"dbname=vignesh\" (In this case primary_conninfo\n> will have dbname as username specified)\n> 2) ./pg_basebackup -D data -R\n> -> primary_conninfo = \"dbname=vignesh\" (In this case primary_conninfo\n> will have the dbname as username (which is the same as the OS user\n> while setting defaults))\n> 3) ./pg_basebackup -d \"user=vignesh\" -D data -R\n> -> primary_conninfo = \"dbname=replication\" (In this case\n> primary_conninfo will have dbname as replication which is the default\n> value from GetConnection as connection string is specified)\n> 4) ./pg_basebackup -d \"user=vignesh dbname=postgres\" -D data -R\n> -> primary_conninfo = \"dbname=postgres\" (In this case primary_conninfo\n> will have the dbname as the dbname specified)\n> 5) ./pg_basebackup -d \"\" -D data -R\n> -> primary_conninfo = \"dbname=replication\" (In this case\n> primary_conninfo will have dbname as replication which is the default\n> value from GetConnection as connection string is specified)\n>\n> I have mentioned the reasons as to why dbname is being set to\n> replication or username in each of the cases.\n>\n\nIIUC, the dbname is already allowed in connstring for pg_basebackup by\ncommit cca97ce6a6 and with this patch we are just writing it in\npostgresql.auto.conf if -R option is used to write recovery info. This\nwill help slot syncworker to automatically connect with database\nwithout user manually specifying the dbname and replication\nconnection, the same will still be ignored. I don't see any problem\nwith this. 
Does anyone else see any problem?\n\nThe <filename>postgresql.auto.conf</filename> file will record the connection\n settings and, if specified, the replication slot\n that <application>pg_basebackup</application> is using, so that\n- streaming replication will use the same settings later on.\n+ a) synchronization of logical replication slots on the primary to the\n+ hot_standby from <link linkend=\"pg-sync-replication-slots\">\n+ <function>pg_sync_replication_slots</function></link> or slot\nsync worker\n+ and b) streaming replication will use the same settings later on.\n\nWe can simply modify the last line as: \".. streaming replication or\nlogical replication slots synchronization will use the same settings\nlater on.\" Additionally, you can give a link reference to [1].\n\n[1] - https://www.postgresql.org/docs/devel/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS-SYNCHRONIZATION\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 13 Mar 2024 16:58:09 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Wed, 13 Mar 2024 at 16:58, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Mar 12, 2024 at 5:13 PM vignesh C <[email protected]> wrote:\n> >\n> > On Mon, 11 Mar 2024 at 17:16, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Tue, Feb 27, 2024 at 2:07 PM Hayato Kuroda (Fujitsu)\n> > > <[email protected]> wrote:\n> > > >\n> > > > > PSA the patch for implementing it. It is basically same as Ian's one.\n> > > > > However, this patch still cannot satisfy the condition 3).\n> > > > >\n> > > > > `pg_basebackup -D data_N2 -d \"user=postgres\" -R`\n> > > > > -> dbname would not be appeared in primary_conninfo.\n> > > > >\n> > > > > This is because `if (connection_string)` case in GetConnection() explicy override\n> > > > > a dbname to \"replication\". I've tried to add a dummy entry {\"dbname\", NULL} pair\n> > > > > before the overriding, but it is no-op. Because The replacement of the dbname in\n> > > > > pqConnectOptions2() would be done only for the valid (=lastly specified)\n> > > > > connection options.\n> > > >\n> > > > Oh, this patch missed the straightforward case:\n> > > >\n> > > > pg_basebackup -D data_N2 -d \"user=postgres dbname=replication\" -R\n> > > > -> dbname would not be appeared in primary_conninfo.\n> > > >\n> > > > So I think it cannot be applied as-is. Sorry for sharing the bad item.\n> > > >\n> > >\n> > > Can you please share the patch that can be considered for review?\n> >\n> > Here is a patch with few changes: a) Removed the version check based\n> > on discussion from [1] b) updated the documentation.\n> > I have tested various scenarios with the attached patch, the dbname\n> > that is written in postgresql.auto.conf for each of the cases with\n> > pg_basebackup is given below:\n> > 1) ./pg_basebackup -U vignesh -R\n> > -> primary_conninfo = \"dbname=vignesh\" (In this case primary_conninfo\n> > will have dbname as username specified)\n> > 2) ./pg_basebackup -D data -R\n> > -> primary_conninfo = \"dbname=vignesh\" (In this case primary_conninfo\n> > will have the dbname as username (which is the same as the OS user\n> > while setting defaults))\n> > 3) ./pg_basebackup -d \"user=vignesh\" -D data -R\n> > -> primary_conninfo = \"dbname=replication\" (In this case\n> > primary_conninfo will have dbname as replication which is the default\n> > value from GetConnection as connection string is specified)\n> > 4) ./pg_basebackup -d \"user=vignesh dbname=postgres\" -D data -R\n> > -> primary_conninfo = \"dbname=postgres\" (In this case primary_conninfo\n> > will have the dbname as the dbname specified)\n> > 5) ./pg_basebackup -d \"\" -D data -R\n> > -> primary_conninfo = \"dbname=replication\" (In this case\n> > primary_conninfo will have dbname as replication which is the default\n> > value from GetConnection as connection string is specified)\n> >\n> > I have mentioned the reasons as to why dbname is being set to\n> > replication or username in each of the cases.\n> >\n>\n> IIUC, the dbname is already allowed in connstring for pg_basebackup by\n> commit cca97ce6a6 and with this patch we are just writing it in\n> postgresql.auto.conf if -R option is used to write recovery info. This\n> will help slot syncworker to automatically connect with database\n> without user manually specifying the dbname and replication\n> connection, the same will still be ignored. I don't see any problem\n> with this. 
Does anyone else see any problem?\n>\n> The <filename>postgresql.auto.conf</filename> file will record the connection\n> settings and, if specified, the replication slot\n> that <application>pg_basebackup</application> is using, so that\n> - streaming replication will use the same settings later on.\n> + a) synchronization of logical replication slots on the primary to the\n> + hot_standby from <link linkend=\"pg-sync-replication-slots\">\n> + <function>pg_sync_replication_slots</function></link> or slot\n> sync worker\n> + and b) streaming replication will use the same settings later on.\n>\n> We can simply modify the last line as: \".. streaming replication or\n> logical replication slots synchronization will use the same settings\n> later on.\" Additionally, you can give a link reference to [1].\n\nThanks for the comments, the attached v4 version patch has the changes\nfor the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 13 Mar 2024 21:34:20 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 3:05 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Feb 21, 2024 at 7:46 AM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > > > Just FYI - here is an extreme case. And note that I have applied proposed patch.\n> > > >\n> > > > When `pg_basebackup -D data_N2 -R` is used:\n> > > > ```\n> > > > primary_conninfo = 'user=hayato ... dbname=hayato ...\n> > > > ```\n> > > >\n> > > > But when `pg_basebackup -d \"\" -D data_N2 -R` is used:\n> > > > ```\n> > > > primary_conninfo = 'user=hayato ... dbname=replication\n> > > > ```\n> > >\n> > > It seems like maybe somebody should look into why this is happening,\n> > > and perhaps fix it.\n> >\n> > I think this caused from below part [1] in GetConnection().\n> >\n> > If both dbname and connection_string are the NULL, we will enter the else part\n> > and NULL would be substituted - {\"dbnmae\", NULL} key-value pair is generated\n> > only here.\n> >\n> > Then, in PQconnectdbParams()->PQconnectStartParams->pqConnectOptions2(),\n> > the strange part would be found and replaced to the username [2].\n> >\n> > I think if both the connection string and the dbname are NULL, it should be\n> > considered as the physical replication connection. here is a patch to fix it.\n> >\n>\n> When dbname is NULL or not given, it defaults to username. This\n> follows the specs of the connection string.\n\nThis fact makes me think that the slotsync worker might be able to\naccept the primary_conninfo value even if there is no dbname in the\nvalue. That is, if there is no dbname in the primary_conninfo, it uses\nthe username in accordance with the specs of the connection string.\nCurrently, the slotsync worker connects to the local database first\nand then establishes the connection to the primary server. But if we\ncan reverse the two steps, it can get the dbname that has actually\nbeen used to establish the remote connection and use it for the local\nconnection too. That way, the primary_conninfo generated by\npg_basebackup could work even without the patch. For example, if the\nOS user executing pg_basebackup is 'postgres', the slotsync worker\nwould connect to the postgres database. Given the 'postgres' database\nis created by default and 'postgres' OS user is used in common, I\nguess it could cover many cases in practice actually.\n\nHaving said that, even with (or without) the above change, we might\nwant to change the pg_basebackup so that it writes the dbname to the\nprimary_conninfo if -R option is specified. Since the database where\nthe slotsync worker connects cannot be dropped while the slotsync\nworker is running, the user might want to change the database to\nconnect, and it would be useful if they can do that using\npg_basebackup instead of modifying the configuration file manually.\n\nWhile the current approach makes sense to me, I'm a bit concerned that\nwe might end up having the pg_basebackup search the actual database\nname (e.g. 'dbname=template1') from the .pgpass file instead of\n'dbname=replication'. As far as I tested on my environment, suppose\nthat I execute:\n\npg_basebackup -D tmp -d \"dbname=testdb\" -R\n\nThe pg_basebackup established a replication connection but looked for\nthe password of the 'testdb' database. 
This could be another\ninconvenience for the existing users who want to use the slot\nsynchronization.\n\nA random idea I came up with is, we add a new option to the\npg_basebackup to overwrite the full or some portion of the connection\nstring that is eventually written in the primary_conninfo in\npostgresql.auto.conf. For example, the command:\n\npg_basebackup -D tmp -d \"host=1.1.1.1 port=5555\" -R\n--primary-coninfo-ext \"host=2.2.2.2 dbname=postgres\"\n\nwill produce the connection string that is based on -d option value\nbut is overwritten by --primary-conninfo-ext option value, which will\nbe like:\n\nhost=2.2.2.2 dbname=postgres port=5555\n\nThis option might help not only for users who want to use the slotsync\nworker but also for users who want to take a basebackup from a standby\nbut have the new standby connect to the primary.\n\nBut it's still just an idea and I might be missing something. And\ngiven we're getting closer to the feature freeze, it would be a PG18\nitem.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
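A rough sketch of how such an override could be assembled with existing libpq facilities (--primary-coninfo-ext is only an idea, not an existing option, and this is not pg_basebackup code; PQconninfoParse() is simply used to let the second string win key by key):

```
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/* return the value of keyword kw if it appears in the parsed option list */
static const char *
lookup(PQconninfoOption *opts, const char *kw)
{
	for (PQconninfoOption *o = opts; o && o->keyword; o++)
		if (strcmp(o->keyword, kw) == 0 && o->val && o->val[0])
			return o->val;
	return NULL;
}

int
main(void)
{
	PQconninfoOption *base = PQconninfoParse("host=1.1.1.1 port=5555", NULL);
	PQconninfoOption *ext = PQconninfoParse("host=2.2.2.2 dbname=postgres", NULL);
	const char *const keys[] = {"host", "port", "dbname", NULL};

	for (int i = 0; keys[i]; i++)
	{
		const char *val = lookup(ext, keys[i]);

		if (val == NULL)
			val = lookup(base, keys[i]);
		if (val)
			printf("%s=%s ", keys[i], val);
	}
	printf("\n");				/* host=2.2.2.2 port=5555 dbname=postgres */

	PQconninfoFree(base);
	PQconninfoFree(ext);
	return 0;
}
```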
"msg_date": "Thu, 14 Mar 2024 09:26:28 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 5:57 AM Masahiko Sawada <[email protected]> wrote:\n>\n> This fact makes me think that the slotsync worker might be able to\n> accept the primary_conninfo value even if there is no dbname in the\n> value. That is, if there is no dbname in the primary_conninfo, it uses\n> the username in accordance with the specs of the connection string.\n> Currently, the slotsync worker connects to the local database first\n> and then establishes the connection to the primary server. But if we\n> can reverse the two steps, it can get the dbname that has actually\n> been used to establish the remote connection and use it for the local\n> connection too. That way, the primary_conninfo generated by\n> pg_basebackup could work even without the patch. For example, if the\n> OS user executing pg_basebackup is 'postgres', the slotsync worker\n> would connect to the postgres database. Given the 'postgres' database\n> is created by default and 'postgres' OS user is used in common, I\n> guess it could cover many cases in practice actually.\n>\n\nI think this is worth investigating but I suspect that in most cases\nusers will end up using a replication connection without specifying\nthe user name and we may not be able to give a meaningful error\nmessage when slotsync worker won't be able to connect. The same will\nbe true even when the dbname same as the username would be used.\n\n> Having said that, even with (or without) the above change, we might\n> want to change the pg_basebackup so that it writes the dbname to the\n> primary_conninfo if -R option is specified. Since the database where\n> the slotsync worker connects cannot be dropped while the slotsync\n> worker is running, the user might want to change the database to\n> connect, and it would be useful if they can do that using\n> pg_basebackup instead of modifying the configuration file manually.\n>\n> While the current approach makes sense to me, I'm a bit concerned that\n> we might end up having the pg_basebackup search the actual database\n> name (e.g. 'dbname=template1') from the .pgpass file instead of\n> 'dbname=replication'. As far as I tested on my environment, suppose\n> that I execute:\n>\n> pg_basebackup -D tmp -d \"dbname=testdb\" -R\n>\n> The pg_basebackup established a replication connection but looked for\n> the password of the 'testdb' database. This could be another\n> inconvenience for the existing users who want to use the slot\n> synchronization.\n>\n\nThis is true because it is internally using logical replication\nconnection (as we will set set replication=database). I feel the\nmentioned behavior is an expected one with or without slotsync worker\nusage. Anyway, this is an optional feature, so users can still choose\nto ignore specifying dbname in the connection string. The point is\nthat commit cca97ce6a6 allowed using dbname in the connection string\nin pg_basebackup and we are just extending to write that dbname along\nwith other things in connection_info.\n\n> A random idea I came up with is, we add a new option to the\n> pg_basebackup to overwrite the full or some portion of the connection\n> string that is eventually written in the primary_conninfo in\n> postgresql.auto.conf. 
For example, the command:\n>\n> pg_basebackup -D tmp -d \"host=1.1.1.1 port=5555\" -R\n> --primary-coninfo-ext \"host=2.2.2.2 dbname=postgres\"\n>\n> will produce the connection string that is based on -d option value\n> but is overwritten by --primary-conninfo-ext option value, which will\n> be like:\n>\n> host=2.2.2.2 dbname=postgres port=5555\n>\n> This option might help not only for users who want to use the slotsync\n> worker but also for users who want to take a basebackup from a standby\n> but have the new standby connect to the primary.\n>\n\nAgreed, this could be another way though it would be good to get some\ninputs from users or otherwise about the preferred way to specify\ndbname. One can also imagine using the Alter System for this purpose.\n\n> But it's still just an idea and I might be missing something. And\n> given we're getting closer to the feature freeze, it would be a PG18\n> item.\n>\n\n+1. At this stage, it is important to discuss whether we should allow\npg_baseback to write dbname (either a specified one or a default one)\nalong with other parameters in primary_conninfo?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 14 Mar 2024 10:56:56 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 10:57 AM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Mar 14, 2024 at 5:57 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > This fact makes me think that the slotsync worker might be able to\n> > accept the primary_conninfo value even if there is no dbname in the\n> > value. That is, if there is no dbname in the primary_conninfo, it uses\n> > the username in accordance with the specs of the connection string.\n> > Currently, the slotsync worker connects to the local database first\n> > and then establishes the connection to the primary server. But if we\n> > can reverse the two steps, it can get the dbname that has actually\n> > been used to establish the remote connection and use it for the local\n> > connection too. That way, the primary_conninfo generated by\n> > pg_basebackup could work even without the patch. For example, if the\n> > OS user executing pg_basebackup is 'postgres', the slotsync worker\n> > would connect to the postgres database. Given the 'postgres' database\n> > is created by default and 'postgres' OS user is used in common, I\n> > guess it could cover many cases in practice actually.\n> >\n>\n> I think this is worth investigating but I suspect that in most cases\n> users will end up using a replication connection without specifying\n> the user name and we may not be able to give a meaningful error\n> message when slotsync worker won't be able to connect. The same will\n> be true even when the dbname same as the username would be used.\n>\n\nI attempted the change as suggested by Swada-San. Attached the PoC\npatch .For it to work, I have to expose a new get api in libpq-fe\nwhich gets dbname from stream-connection. Please have a look.\n\nWithout this PoC patch, the errors in slot-sync worker:\n\n-----------------\na) If dbname is missing:\n[1230932] LOG: slot sync worker started\n[1230932] ERROR: slot synchronization requires dbname to be specified\nin primary_conninfo\n\nb) If specified db does not exist\n[1230913] LOG: slot sync worker started\n[1230913] FATAL: database \"postgres1\" does not exist\n-----------------\n\nNow with this patch:\n-----------------\na) If the dbname same as user does not exist:\n[1232473] LOG: slot sync worker started\n[1232473] ERROR: could not connect to the primary server: connection\nto server at \"127.0.0.1\", port 5433 failed: FATAL: database\n\"bckp_user\" does not exist\n\nb) If user itself is removed from primary_conninfo, libpq takes user\nwho has authenticated the system by default and gives error if db of\nsame name does not exist\nERROR: could not connect to the primary server: connection to server\nat \"127.0.0.1\", port 5433 failed: FATAL: database \"shveta\" does not\nexist\n-----------------\n\nThe errors in second case look slightly confusing to me.\n\nthanks\nShveta",
"msg_date": "Thu, 14 Mar 2024 12:33:57 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 2:27 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Mar 14, 2024 at 5:57 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > This fact makes me think that the slotsync worker might be able to\n> > accept the primary_conninfo value even if there is no dbname in the\n> > value. That is, if there is no dbname in the primary_conninfo, it uses\n> > the username in accordance with the specs of the connection string.\n> > Currently, the slotsync worker connects to the local database first\n> > and then establishes the connection to the primary server. But if we\n> > can reverse the two steps, it can get the dbname that has actually\n> > been used to establish the remote connection and use it for the local\n> > connection too. That way, the primary_conninfo generated by\n> > pg_basebackup could work even without the patch. For example, if the\n> > OS user executing pg_basebackup is 'postgres', the slotsync worker\n> > would connect to the postgres database. Given the 'postgres' database\n> > is created by default and 'postgres' OS user is used in common, I\n> > guess it could cover many cases in practice actually.\n> >\n>\n> I think this is worth investigating but I suspect that in most cases\n> users will end up using a replication connection without specifying\n> the user name and we may not be able to give a meaningful error\n> message when slotsync worker won't be able to connect. The same will\n> be true even when the dbname same as the username would be used.\n\nWhat do you mean by not being able to give a meaningful error message?\n\nIf the slotsync worker uses the user name as the dbname, and such a\ndatabase doesn't exist, the error message the user will get is\n\"database \"test_user\" does not exist\". ISTM the same is true when the\nuser specifies the wrong database in the primary_conninfo.\n\n>\n> > Having said that, even with (or without) the above change, we might\n> > want to change the pg_basebackup so that it writes the dbname to the\n> > primary_conninfo if -R option is specified. Since the database where\n> > the slotsync worker connects cannot be dropped while the slotsync\n> > worker is running, the user might want to change the database to\n> > connect, and it would be useful if they can do that using\n> > pg_basebackup instead of modifying the configuration file manually.\n> >\n> > While the current approach makes sense to me, I'm a bit concerned that\n> > we might end up having the pg_basebackup search the actual database\n> > name (e.g. 'dbname=template1') from the .pgpass file instead of\n> > 'dbname=replication'. As far as I tested on my environment, suppose\n> > that I execute:\n> >\n> > pg_basebackup -D tmp -d \"dbname=testdb\" -R\n> >\n> > The pg_basebackup established a replication connection but looked for\n> > the password of the 'testdb' database. This could be another\n> > inconvenience for the existing users who want to use the slot\n> > synchronization.\n> >\n>\n> This is true because it is internally using logical replication\n> connection (as we will set set replication=database).\n\nDid you mean the pg_basebackup is using a logical replication\nconnection in this case? As far as I tested, even if we specify dbname\nto the -d option of pg_basebackup, it uses a physical replication\nconnection. 
For example, it can take a backup even if I specify a\nnon-existing database name.\n\n> > A random idea I came up with is, we add a new option to the\n> > pg_basebackup to overwrite the full or some portion of the connection\n> > string that is eventually written in the primary_conninfo in\n> > postgresql.auto.conf. For example, the command:\n> >\n> > pg_basebackup -D tmp -d \"host=1.1.1.1 port=5555\" -R\n> > --primary-coninfo-ext \"host=2.2.2.2 dbname=postgres\"\n> >\n> > will produce the connection string that is based on -d option value\n> > but is overwritten by --primary-conninfo-ext option value, which will\n> > be like:\n> >\n> > host=2.2.2.2 dbname=postgres port=5555\n> >\n> > This option might help not only for users who want to use the slotsync\n> > worker but also for users who want to take a basebackup from a standby\n> > but have the new standby connect to the primary.\n> >\n>\n> Agreed, this could be another way though it would be good to get some\n> inputs from users or otherwise about the preferred way to specify\n> dbname. One can also imagine using the Alter System for this purpose.\n\nAgreed.\n\n>\n> > But it's still just an idea and I might be missing something. And\n> > given we're getting closer to the feature freeze, it would be a PG18\n> > item.\n> >\n>\n> +1. At this stage, it is important to discuss whether we should allow\n> pg_baseback to write dbname (either a specified one or a default one)\n> along with other parameters in primary_conninfo?\n>\n\nTrue. While I basically agree that pg_basebackup writes dbname in\nprimary_conninfo, I'm concerned that writing \"dbname=replication\"\ncould be problematic. Quoting the case 3) Vignesh summarized before:\n\n3) ./pg_basebackup -d \"user=vignesh\" -D data -R\n-> primary_conninfo = \"dbname=replication\" (In this case\nprimary_conninfo will have dbname as replication which is the default\nvalue from GetConnection as connection string is specified)\n\nThe primary_conninfo generated by pg_basebackup -R is now used by\neither a walreceiver (for physical replication connection) or a\nslotsync worker (for normal connection). The \"dbname=replication\" is\nokay for walreceiver. On the other hand, as for the slotsync worker,\nit can pass the CheckAndGetDbnameFromConninfo() check but it's very\nlikely that it cannot connect to the primary since most users won't\ncreate a database with \"replication\" name. The user will end up\ngetting an error message like 'database \"replication\" does not exist'\nbut I'm not sure it would be informative for users. Rather, the error\nmessage \"slot synchronization requires dbname to be specified in\nprimary_conninfo\" might be more informative for users. So I personally\nlike to omit the dbname if \"dbname=replication\", at this point.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 17:14:50 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 1:45 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Mar 14, 2024 at 2:27 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Mar 14, 2024 at 5:57 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > This fact makes me think that the slotsync worker might be able to\n> > > accept the primary_conninfo value even if there is no dbname in the\n> > > value. That is, if there is no dbname in the primary_conninfo, it uses\n> > > the username in accordance with the specs of the connection string.\n> > > Currently, the slotsync worker connects to the local database first\n> > > and then establishes the connection to the primary server. But if we\n> > > can reverse the two steps, it can get the dbname that has actually\n> > > been used to establish the remote connection and use it for the local\n> > > connection too. That way, the primary_conninfo generated by\n> > > pg_basebackup could work even without the patch. For example, if the\n> > > OS user executing pg_basebackup is 'postgres', the slotsync worker\n> > > would connect to the postgres database. Given the 'postgres' database\n> > > is created by default and 'postgres' OS user is used in common, I\n> > > guess it could cover many cases in practice actually.\n> > >\n> >\n> > I think this is worth investigating but I suspect that in most cases\n> > users will end up using a replication connection without specifying\n> > the user name and we may not be able to give a meaningful error\n> > message when slotsync worker won't be able to connect. The same will\n> > be true even when the dbname same as the username would be used.\n>\n> What do you mean by not being able to give a meaningful error message?\n>\n> If the slotsync worker uses the user name as the dbname, and such a\n> database doesn't exist, the error message the user will get is\n> \"database \"test_user\" does not exist\". ISTM the same is true when the\n> user specifies the wrong database in the primary_conninfo.\n>\n\nRight, the exact error message as mentioned by Shveta will be:\nERROR: could not connect to the primary server: connection to server\nat \"127.0.0.1\", port 5433 failed: FATAL: database \"bckp_user\" does not\nexist\n\nNow, without this idea, the ERROR message will be:\n ERROR: slot synchronization requires dbname to be specified in\nprimary_conninfo\n\nI am not sure how much this matters but the second message sounds more useful.\n\n> >\n> > > Having said that, even with (or without) the above change, we might\n> > > want to change the pg_basebackup so that it writes the dbname to the\n> > > primary_conninfo if -R option is specified. Since the database where\n> > > the slotsync worker connects cannot be dropped while the slotsync\n> > > worker is running, the user might want to change the database to\n> > > connect, and it would be useful if they can do that using\n> > > pg_basebackup instead of modifying the configuration file manually.\n> > >\n> > > While the current approach makes sense to me, I'm a bit concerned that\n> > > we might end up having the pg_basebackup search the actual database\n> > > name (e.g. 'dbname=template1') from the .pgpass file instead of\n> > > 'dbname=replication'. As far as I tested on my environment, suppose\n> > > that I execute:\n> > >\n> > > pg_basebackup -D tmp -d \"dbname=testdb\" -R\n> > >\n> > > The pg_basebackup established a replication connection but looked for\n> > > the password of the 'testdb' database. 
This could be another\n> > > inconvenience for the existing users who want to use the slot\n> > > synchronization.\n> > >\n> >\n> > This is true because it is internally using logical replication\n> > connection (as we will set set replication=database).\n>\n> Did you mean the pg_basebackup is using a logical replication\n> connection in this case? As far as I tested, even if we specify dbname\n> to the -d option of pg_basebackup, it uses a physical replication\n> connection. For example, it can take a backup even if I specify a\n> non-existing database name.\n>\n\nYou are right. I misunderstood some part of the code in GetConnection.\nHowever, I think my point is still valid that if the user has provided\ndbname in the connection string it means that she wants that database\nentry to be looked upon not \"replication\" entry.\n\n>\n> >\n> > > But it's still just an idea and I might be missing something. And\n> > > given we're getting closer to the feature freeze, it would be a PG18\n> > > item.\n> > >\n> >\n> > +1. At this stage, it is important to discuss whether we should allow\n> > pg_baseback to write dbname (either a specified one or a default one)\n> > along with other parameters in primary_conninfo?\n> >\n>\n> True. While I basically agree that pg_basebackup writes dbname in\n> primary_conninfo, I'm concerned that writing \"dbname=replication\"\n> could be problematic. Quoting the case 3) Vignesh summarized before:\n>\n> 3) ./pg_basebackup -d \"user=vignesh\" -D data -R\n> -> primary_conninfo = \"dbname=replication\" (In this case\n> primary_conninfo will have dbname as replication which is the default\n> value from GetConnection as connection string is specified)\n>\n> The primary_conninfo generated by pg_basebackup -R is now used by\n> either a walreceiver (for physical replication connection) or a\n> slotsync worker (for normal connection). The \"dbname=replication\" is\n> okay for walreceiver. On the other hand, as for the slotsync worker,\n> it can pass the CheckAndGetDbnameFromConninfo() check but it's very\n> likely that it cannot connect to the primary since most users won't\n> create a database with \"replication\" name. The user will end up\n> getting an error message like 'database \"replication\" does not exist'\n> but I'm not sure it would be informative for users. Rather, the error\n> message \"slot synchronization requires dbname to be specified in\n> primary_conninfo\" might be more informative for users. So I personally\n> like to omit the dbname if \"dbname=replication\", at this point.\n>\n\nHow about if we write dbname in primary_conninfo to\npostgresql.auto.conf file only when the user has explicitly specified\ndbname in the connection string? To achieve this we need to somehow\npass this information via PGconn (say by having a new bool variable\ndbname_specified) from GetConnection() or something like that?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 14 Mar 2024 15:48:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Thu, 14 Mar 2024 at 15:49, Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Mar 14, 2024 at 1:45 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Thu, Mar 14, 2024 at 2:27 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Thu, Mar 14, 2024 at 5:57 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > This fact makes me think that the slotsync worker might be able to\n> > > > accept the primary_conninfo value even if there is no dbname in the\n> > > > value. That is, if there is no dbname in the primary_conninfo, it uses\n> > > > the username in accordance with the specs of the connection string.\n> > > > Currently, the slotsync worker connects to the local database first\n> > > > and then establishes the connection to the primary server. But if we\n> > > > can reverse the two steps, it can get the dbname that has actually\n> > > > been used to establish the remote connection and use it for the local\n> > > > connection too. That way, the primary_conninfo generated by\n> > > > pg_basebackup could work even without the patch. For example, if the\n> > > > OS user executing pg_basebackup is 'postgres', the slotsync worker\n> > > > would connect to the postgres database. Given the 'postgres' database\n> > > > is created by default and 'postgres' OS user is used in common, I\n> > > > guess it could cover many cases in practice actually.\n> > > >\n> > >\n> > > I think this is worth investigating but I suspect that in most cases\n> > > users will end up using a replication connection without specifying\n> > > the user name and we may not be able to give a meaningful error\n> > > message when slotsync worker won't be able to connect. The same will\n> > > be true even when the dbname same as the username would be used.\n> >\n> > What do you mean by not being able to give a meaningful error message?\n> >\n> > If the slotsync worker uses the user name as the dbname, and such a\n> > database doesn't exist, the error message the user will get is\n> > \"database \"test_user\" does not exist\". ISTM the same is true when the\n> > user specifies the wrong database in the primary_conninfo.\n> >\n>\n> Right, the exact error message as mentioned by Shveta will be:\n> ERROR: could not connect to the primary server: connection to server\n> at \"127.0.0.1\", port 5433 failed: FATAL: database \"bckp_user\" does not\n> exist\n>\n> Now, without this idea, the ERROR message will be:\n> ERROR: slot synchronization requires dbname to be specified in\n> primary_conninfo\n>\n> I am not sure how much this matters but the second message sounds more useful.\n>\n> > >\n> > > > Having said that, even with (or without) the above change, we might\n> > > > want to change the pg_basebackup so that it writes the dbname to the\n> > > > primary_conninfo if -R option is specified. Since the database where\n> > > > the slotsync worker connects cannot be dropped while the slotsync\n> > > > worker is running, the user might want to change the database to\n> > > > connect, and it would be useful if they can do that using\n> > > > pg_basebackup instead of modifying the configuration file manually.\n> > > >\n> > > > While the current approach makes sense to me, I'm a bit concerned that\n> > > > we might end up having the pg_basebackup search the actual database\n> > > > name (e.g. 'dbname=template1') from the .pgpass file instead of\n> > > > 'dbname=replication'. 
As far as I tested on my environment, suppose\n> > > > that I execute:\n> > > >\n> > > > pg_basebackup -D tmp -d \"dbname=testdb\" -R\n> > > >\n> > > > The pg_basebackup established a replication connection but looked for\n> > > > the password of the 'testdb' database. This could be another\n> > > > inconvenience for the existing users who want to use the slot\n> > > > synchronization.\n> > > >\n> > >\n> > > This is true because it is internally using logical replication\n> > > connection (as we will set set replication=database).\n> >\n> > Did you mean the pg_basebackup is using a logical replication\n> > connection in this case? As far as I tested, even if we specify dbname\n> > to the -d option of pg_basebackup, it uses a physical replication\n> > connection. For example, it can take a backup even if I specify a\n> > non-existing database name.\n> >\n>\n> You are right. I misunderstood some part of the code in GetConnection.\n> However, I think my point is still valid that if the user has provided\n> dbname in the connection string it means that she wants that database\n> entry to be looked upon not \"replication\" entry.\n>\n> >\n> > >\n> > > > But it's still just an idea and I might be missing something. And\n> > > > given we're getting closer to the feature freeze, it would be a PG18\n> > > > item.\n> > > >\n> > >\n> > > +1. At this stage, it is important to discuss whether we should allow\n> > > pg_baseback to write dbname (either a specified one or a default one)\n> > > along with other parameters in primary_conninfo?\n> > >\n> >\n> > True. While I basically agree that pg_basebackup writes dbname in\n> > primary_conninfo, I'm concerned that writing \"dbname=replication\"\n> > could be problematic. Quoting the case 3) Vignesh summarized before:\n> >\n> > 3) ./pg_basebackup -d \"user=vignesh\" -D data -R\n> > -> primary_conninfo = \"dbname=replication\" (In this case\n> > primary_conninfo will have dbname as replication which is the default\n> > value from GetConnection as connection string is specified)\n> >\n> > The primary_conninfo generated by pg_basebackup -R is now used by\n> > either a walreceiver (for physical replication connection) or a\n> > slotsync worker (for normal connection). The \"dbname=replication\" is\n> > okay for walreceiver. On the other hand, as for the slotsync worker,\n> > it can pass the CheckAndGetDbnameFromConninfo() check but it's very\n> > likely that it cannot connect to the primary since most users won't\n> > create a database with \"replication\" name. The user will end up\n> > getting an error message like 'database \"replication\" does not exist'\n> > but I'm not sure it would be informative for users. Rather, the error\n> > message \"slot synchronization requires dbname to be specified in\n> > primary_conninfo\" might be more informative for users. So I personally\n> > like to omit the dbname if \"dbname=replication\", at this point.\n> >\n>\n> How about if we write dbname in primary_conninfo to\n> postgresql.auto.conf file only when the user has explicitly specified\n> dbname in the connection string? To achieve this we need to somehow\n> pass this information via PGconn (say by having a new bool variable\n> dbname_specified) from GetConnection() or something like that?\n\nHere is a patch which will write dbname in the primary_conninfo only\nif the database name is specified explicitly. 
I have added a new\nfunction GetDbnameFromConnectionString which will return the dbname\nspecified in the connection and GenerateRecoveryConfig will append\nthis database name.\nHere are the test results with the patch:\ncase 1:\npg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh dbname=postgres\"\nprimary_conninfo will have dbname=postgres\n\ncase 2:\npg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh dbname=replication\"\nprimary_conninfo will have dbname=replication\n\ncase 3:\npg_basebackup -D test10 -p 5431 -X s -P -R -U vignesh\nprimary_conninfo will not have dbname\n\ncase 4:\npg_basebackup -D test10 -p 5431 -X s -P -R\nprimary_conninfo will not have dbname\n\ncase 5:\npg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh\"\nprimary_conninfo will not have dbname\n\ncase 6:\npg_basebackup -D test10 -p 5431 -X s -P -R -d \"\"\nprimary_conninfo will not have dbname\n\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Thu, 14 Mar 2024 20:16:31 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 11:46 PM vignesh C <[email protected]> wrote:\n>\n> On Thu, 14 Mar 2024 at 15:49, Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Mar 14, 2024 at 1:45 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Thu, Mar 14, 2024 at 2:27 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Thu, Mar 14, 2024 at 5:57 AM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > This fact makes me think that the slotsync worker might be able to\n> > > > > accept the primary_conninfo value even if there is no dbname in the\n> > > > > value. That is, if there is no dbname in the primary_conninfo, it uses\n> > > > > the username in accordance with the specs of the connection string.\n> > > > > Currently, the slotsync worker connects to the local database first\n> > > > > and then establishes the connection to the primary server. But if we\n> > > > > can reverse the two steps, it can get the dbname that has actually\n> > > > > been used to establish the remote connection and use it for the local\n> > > > > connection too. That way, the primary_conninfo generated by\n> > > > > pg_basebackup could work even without the patch. For example, if the\n> > > > > OS user executing pg_basebackup is 'postgres', the slotsync worker\n> > > > > would connect to the postgres database. Given the 'postgres' database\n> > > > > is created by default and 'postgres' OS user is used in common, I\n> > > > > guess it could cover many cases in practice actually.\n> > > > >\n> > > >\n> > > > I think this is worth investigating but I suspect that in most cases\n> > > > users will end up using a replication connection without specifying\n> > > > the user name and we may not be able to give a meaningful error\n> > > > message when slotsync worker won't be able to connect. The same will\n> > > > be true even when the dbname same as the username would be used.\n> > >\n> > > What do you mean by not being able to give a meaningful error message?\n> > >\n> > > If the slotsync worker uses the user name as the dbname, and such a\n> > > database doesn't exist, the error message the user will get is\n> > > \"database \"test_user\" does not exist\". ISTM the same is true when the\n> > > user specifies the wrong database in the primary_conninfo.\n> > >\n> >\n> > Right, the exact error message as mentioned by Shveta will be:\n> > ERROR: could not connect to the primary server: connection to server\n> > at \"127.0.0.1\", port 5433 failed: FATAL: database \"bckp_user\" does not\n> > exist\n> >\n> > Now, without this idea, the ERROR message will be:\n> > ERROR: slot synchronization requires dbname to be specified in\n> > primary_conninfo\n> >\n> > I am not sure how much this matters but the second message sounds more useful.\n> >\n> > > >\n> > > > > Having said that, even with (or without) the above change, we might\n> > > > > want to change the pg_basebackup so that it writes the dbname to the\n> > > > > primary_conninfo if -R option is specified. Since the database where\n> > > > > the slotsync worker connects cannot be dropped while the slotsync\n> > > > > worker is running, the user might want to change the database to\n> > > > > connect, and it would be useful if they can do that using\n> > > > > pg_basebackup instead of modifying the configuration file manually.\n> > > > >\n> > > > > While the current approach makes sense to me, I'm a bit concerned that\n> > > > > we might end up having the pg_basebackup search the actual database\n> > > > > name (e.g. 
'dbname=template1') from the .pgpass file instead of\n> > > > > 'dbname=replication'. As far as I tested on my environment, suppose\n> > > > > that I execute:\n> > > > >\n> > > > > pg_basebackup -D tmp -d \"dbname=testdb\" -R\n> > > > >\n> > > > > The pg_basebackup established a replication connection but looked for\n> > > > > the password of the 'testdb' database. This could be another\n> > > > > inconvenience for the existing users who want to use the slot\n> > > > > synchronization.\n> > > > >\n> > > >\n> > > > This is true because it is internally using logical replication\n> > > > connection (as we will set set replication=database).\n> > >\n> > > Did you mean the pg_basebackup is using a logical replication\n> > > connection in this case? As far as I tested, even if we specify dbname\n> > > to the -d option of pg_basebackup, it uses a physical replication\n> > > connection. For example, it can take a backup even if I specify a\n> > > non-existing database name.\n> > >\n> >\n> > You are right. I misunderstood some part of the code in GetConnection.\n> > However, I think my point is still valid that if the user has provided\n> > dbname in the connection string it means that she wants that database\n> > entry to be looked upon not \"replication\" entry.\n> >\n> > >\n> > > >\n> > > > > But it's still just an idea and I might be missing something. And\n> > > > > given we're getting closer to the feature freeze, it would be a PG18\n> > > > > item.\n> > > > >\n> > > >\n> > > > +1. At this stage, it is important to discuss whether we should allow\n> > > > pg_baseback to write dbname (either a specified one or a default one)\n> > > > along with other parameters in primary_conninfo?\n> > > >\n> > >\n> > > True. While I basically agree that pg_basebackup writes dbname in\n> > > primary_conninfo, I'm concerned that writing \"dbname=replication\"\n> > > could be problematic. Quoting the case 3) Vignesh summarized before:\n> > >\n> > > 3) ./pg_basebackup -d \"user=vignesh\" -D data -R\n> > > -> primary_conninfo = \"dbname=replication\" (In this case\n> > > primary_conninfo will have dbname as replication which is the default\n> > > value from GetConnection as connection string is specified)\n> > >\n> > > The primary_conninfo generated by pg_basebackup -R is now used by\n> > > either a walreceiver (for physical replication connection) or a\n> > > slotsync worker (for normal connection). The \"dbname=replication\" is\n> > > okay for walreceiver. On the other hand, as for the slotsync worker,\n> > > it can pass the CheckAndGetDbnameFromConninfo() check but it's very\n> > > likely that it cannot connect to the primary since most users won't\n> > > create a database with \"replication\" name. The user will end up\n> > > getting an error message like 'database \"replication\" does not exist'\n> > > but I'm not sure it would be informative for users. Rather, the error\n> > > message \"slot synchronization requires dbname to be specified in\n> > > primary_conninfo\" might be more informative for users. So I personally\n> > > like to omit the dbname if \"dbname=replication\", at this point.\n> > >\n> >\n> > How about if we write dbname in primary_conninfo to\n> > postgresql.auto.conf file only when the user has explicitly specified\n> > dbname in the connection string? 
To achieve this we need to somehow\n> > pass this information via PGconn (say by having a new bool variable\n> > dbname_specified) from GetConnection() or something like that?\n>\n> Here is a patch which will write dbname in the primary_conninfo only\n> if the database name is specified explicitly. I have added a new\n> function GetDbnameFromConnectionString which will return the dbname\n> specified in the connection and GenerateRecoveryConfig will append\n> this database name.\n> Here are the test results with the patch:\n> case 1:\n> pg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh dbname=postgres\"\n> primary_conninfo will have dbname=postgres\n>\n> case 2:\n> pg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh dbname=replication\"\n> primary_conninfo will have dbname=replication\n>\n> case 3:\n> pg_basebackup -D test10 -p 5431 -X s -P -R -U vignesh\n> primary_conninfo will not have dbname\n>\n> case 4:\n> pg_basebackup -D test10 -p 5431 -X s -P -R\n> primary_conninfo will not have dbname\n>\n> case 5:\n> pg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh\"\n> primary_conninfo will not have dbname\n>\n> case 6:\n> pg_basebackup -D test10 -p 5431 -X s -P -R -d \"\"\n> primary_conninfo will not have dbname\n>\n> Thoughts?\n\nThank you for updating the patch!\n\nThis behavior makes sense to me. But do we want to handle the case of\nusing environment variables too? IIUC,\n\npg_basebackup -D tmp -d \"user=masahiko dbname=test_db\"\n\nis equivalent to:\n\nPGDATABASE=\"user=masahiko dbname=test_db\" pg_basebackup -D tmp\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Mar 2024 09:33:44 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "Dear Sawada-san,\r\n\r\nThanks for giving comments!\r\n\r\n> This behavior makes sense to me. But do we want to handle the case of\r\n> using environment variables too? \r\n\r\nYeah, v5 does not consider which libpq parameters are specified by environment\r\nvariables. Such a variable should be used when the dbname is not expressly written\r\nin the connection string.\r\nSuch a path was added in the v6 patch. If the dbname is not determined after\r\nparsing the connection string, we call PQconndefaults() to get settings from\r\nenvironment variables and service files [1], then start to search dbname again.\r\nBelow shows an example.\r\n\r\n```\r\nPGPORT=5431 PGUSER=kuroda PGDATABASE=postgres pg_basebackup -D data_N2 -R -v\r\n->\r\nprimary_conninfo = 'user=kuroda ... port=5431 ... dbname=postgres ... '\r\n```\r\n\r\n> IIUC,\r\n>\r\n> pg_basebackup -D tmp -d \"user=masahiko dbname=test_db\"\r\n>\r\n> is equivalent to:\r\n>\r\n> PGDATABASE=\"user=masahiko dbname=test_db\" pg_basebackup -D tmp\r\n\r\nThe case won't work. I think You assumed that expanded_dbname like\r\nPQconnectdbParams() [2] can be used for enviroment variables, but it is not correct\r\n- it won't parse as connection string again.\r\n\r\nIn the libpq layer, connection parameters are parsed in PQconnectStartParams()->conninfo_array_parse().\r\nWhen expand_dbname is specified, the entry \"dbname\" is firstly checked and\r\nparsed its value. They are done at fe-connect.c:5846.\r\n\r\nThe environment variables are checked and parsed in conninfo_add_defaults(), which\r\nis called from conninfo_array_parse(). However, it is done at fe-connect.c:5956 - the\r\nexpand_dbname has already been done at that time. This means there is no chance\r\nthat PGDATABASE is parsed as an expanded style.\r\n\r\nFor example, if the pg_basebackup runs like below:\r\n\r\nPGDATABASE=\"user=kuroda dbname=postgres\" pg_basebackup -D data_N2 -R -v\r\n\r\nThe primary_conninfo written in the file will be:\r\n\r\nprimary_conninfo = 'user=hayato ... dbname=''user=kuroda dbname=postgres'''\r\n\r\n[1]: https://www.postgresql.org/docs/devel/libpq-pgservice.html\r\n[2]: https://www.postgresql.org/docs/devel/libpq-connect.html#LIBPQ-PQCONNECTDBPARAMS\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Tue, 19 Mar 2024 11:47:56 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Tue, 19 Mar 2024 at 17:18, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Sawada-san,\n>\n> Thanks for giving comments!\n>\n> > This behavior makes sense to me. But do we want to handle the case of\n> > using environment variables too?\n>\n> Yeah, v5 does not consider which libpq parameters are specified by environment\n> variables. Such a variable should be used when the dbname is not expressly written\n> in the connection string.\n> Such a path was added in the v6 patch. If the dbname is not determined after\n> parsing the connection string, we call PQconndefaults() to get settings from\n> environment variables and service files [1], then start to search dbname again.\n> Below shows an example.\n>\n> ```\n> PGPORT=5431 PGUSER=kuroda PGDATABASE=postgres pg_basebackup -D data_N2 -R -v\n> ->\n> primary_conninfo = 'user=kuroda ... port=5431 ... dbname=postgres ... '\n> ```\n>\n> > IIUC,\n> >\n> > pg_basebackup -D tmp -d \"user=masahiko dbname=test_db\"\n> >\n> > is equivalent to:\n> >\n> > PGDATABASE=\"user=masahiko dbname=test_db\" pg_basebackup -D tmp\n>\n> The case won't work. I think You assumed that expanded_dbname like\n> PQconnectdbParams() [2] can be used for enviroment variables, but it is not correct\n> - it won't parse as connection string again.\n>\n> In the libpq layer, connection parameters are parsed in PQconnectStartParams()->conninfo_array_parse().\n> When expand_dbname is specified, the entry \"dbname\" is firstly checked and\n> parsed its value. They are done at fe-connect.c:5846.\n>\n> The environment variables are checked and parsed in conninfo_add_defaults(), which\n> is called from conninfo_array_parse(). However, it is done at fe-connect.c:5956 - the\n> expand_dbname has already been done at that time. This means there is no chance\n> that PGDATABASE is parsed as an expanded style.\n>\n> For example, if the pg_basebackup runs like below:\n>\n> PGDATABASE=\"user=kuroda dbname=postgres\" pg_basebackup -D data_N2 -R -v\n>\n> The primary_conninfo written in the file will be:\n>\n> primary_conninfo = 'user=hayato ... 
dbname=''user=kuroda dbname=postgres'''\n\nThanks for the patch.\nHere are the test results for various tests by specifying connection\nstring, environment variable, service file, and connection URIs with\nthe patch:\ncase 1:\npg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh dbname=db1\"\nprimary_conninfo will have dbname=db1\n\ncase 2:\npg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh dbname=replication\"\nprimary_conninfo will have dbname=replication\n\ncase 3:\npg_basebackup -D test10 -p 5431 -X s -P -R -U vignesh\nprimary_conninfo will not have dbname\n\ncase 4:\npg_basebackup -D test10 -p 5431 -X s -P -R\nprimary_conninfo will not have dbname\n\ncase 5:\npg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh\"\nprimary_conninfo will not have dbname\n\ncase 6:\npg_basebackup -D test10 -p 5431 -X s -P -R -d \"\"\nprimary_conninfo will not have dbname\n\n--- Testing through PGDATABASE environment variable\ncase 7:\nexport PGDATABASE=\"user=postgres dbname=test\"\n./pg_basebackup -D test10 -p 5431 -X s -P -R\nprimary_conninfo will have dbname=''user=postgres dbname=test'' like below:\nprimary_conninfo = 'user=vignesh passfile=''/home/vignesh/.pgpass''\nchannel_binding=prefer port=5431 sslmode=prefer sslcompression=0\nsslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2\ngssencmode=disable krbsrvname=postgres gssdelegation=0\ntarget_session_attrs=any load_balance_hosts=disable\ndbname=''user=postgres dbname=test'''\n\ncase 8:\nexport PGDATABASE=db1\n./pg_basebackup -D test10 -p 5431 -X s -P -R\nprimary_conninfo will have dbname=db1\n\n--- Testing through pg_service\ncase 9:\nCreate .pg_service.conf with the following info:\n[conn1]\ndbname=db2\n\nexport PGSERVICE=conn1\n\n./pg_basebackup -D test10 -p 5431 -X s -P -R\nprimary_conninfo will have dbname=db2\n\ncase 10:\nCreate .pg_service.conf with the following info, i.e. 
there is no\ndatabase specified:\n[conn1]\n\n./pg_basebackup -D test10 -p 5431 -X s -P -R\nprimary_conninfo will not have dbname\n\n--- Testing through Connection URIs\ncase 11:\n./pg_basebackup -D test10 -X s -P -R -d \"postgresql://localhost:5431\"\nprimary_conninfo will not have dbname\n\ncase 12:\n./pg_basebackup -D test10 -p 5431 -X s -P -R -d\n\"postgresql://localhost/db3:5431\"\nprimary_conninfo will have dbname=''db3:5431'' like below:\nprimary_conninfo = 'user=vignesh passfile=''/home/vignesh/.pgpass''\nchannel_binding=prefer host=localhost port=5431 sslmode=prefer\nsslcompression=0 sslcertmode=allow sslsni=1\nssl_min_protocol_version=TLSv1.2 gssencmode=disable\nkrbsrvname=postgres gssdelegation=0 target_session_attrs=any\nload_balance_hosts=disable dbname=''db3:5431'''\n\ncase 13:\n./pg_basebackup -D test10 -p 5431 -X s -P -R -d \"postgresql://localhost/db3\"\nprimary_conninfo will have dbname=db3\n\ncase 14:\n./pg_basebackup -D test10 -X s -P -R -d \"postgresql://localhost:5431/db3\"\nprimary_conninfo will have dbname=db3\n\ncase 15:\n./pg_basebackup -D test10 -X s -P -R -d\n\"postgresql://localhost:5431/db4,127.0.0.1:5431/db5\"\nprimary_conninfo will have dbname=''db4,127.0.0.1:5431/db5'' like below:\nprimary_conninfo = 'user=vignesh passfile=''/home/vignesh/.pgpass''\nchannel_binding=prefer host=localhost port=5431 sslmode=prefer\nsslcompression=0 sslcertmode=allow sslsni=1\nssl_min_protocol_version=TLSv1.2 gssencmode=disable\nkrbsrvname=postgres gssdelegation=0 target_session_attrs=any\nload_balance_hosts=disable dbname=''db4,127.0.0.1:5431/db5'''\n\ncase 16:\n./pg_basebackup -D test10 -X s -P -R -d\n\"postgresql://localhost:5431,127.0.0.1:5431/db5\"\nprimary_conninfo will have dbname=db5\n\ncase 17:\n./pg_basebackup -D test10 -X s -P -R -d\n\"postgresql:///db6?host=localhost&port=5431\"\nprimary_conninfo will have dbname=db6\n\ncase 18:\n ./pg_basebackup -D test10 -p 5431 -X s -P -R -d\n\"postgresql:///db7?host=/home/vignesh/postgres/inst/bin\"\n primary_conninfo will have dbname=db7\n\ncase 19:\n./pg_basebackup -D test10 -p 5431 -X s -P -R -d\n\"postgresql:///db8?host=%2Fhome%2Fvignesh%2Fpostgres%2Finst%2Fbin\"\n primary_conninfo will have dbname=db8\n\nIn these cases, the database name specified will be written to the\nconf file. The test results look good to me.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 20 Mar 2024 10:54:21 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 5:18 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Thanks for giving comments!\n>\n> > This behavior makes sense to me. But do we want to handle the case of\n> > using environment variables too?\n>\n> Yeah, v5 does not consider which libpq parameters are specified by environment\n> variables. Such a variable should be used when the dbname is not expressly written\n> in the connection string.\n> Such a path was added in the v6 patch. If the dbname is not determined after\n> parsing the connection string, we call PQconndefaults() to get settings from\n> environment variables and service files [1], then start to search dbname again.\n>\n\nThe functionality implemented by the patch looks good to me. I have\nmade minor modifications in the function names, error handling,\ncomments, and doc updates in the attached patch. Let me know what you\nthink of the attached.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 20 Mar 2024 17:09:21 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 8:48 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Sawada-san,\n>\n> Thanks for giving comments!\n>\n> > This behavior makes sense to me. But do we want to handle the case of\n> > using environment variables too?\n>\n> Yeah, v5 does not consider which libpq parameters are specified by environment\n> variables. Such a variable should be used when the dbname is not expressly written\n> in the connection string.\n> Such a path was added in the v6 patch. If the dbname is not determined after\n> parsing the connection string, we call PQconndefaults() to get settings from\n> environment variables and service files [1], then start to search dbname again.\n> Below shows an example.\n\nThank you for updating the patch!\n\n>\n> ```\n> PGPORT=5431 PGUSER=kuroda PGDATABASE=postgres pg_basebackup -D data_N2 -R -v\n> ->\n> primary_conninfo = 'user=kuroda ... port=5431 ... dbname=postgres ... '\n> ```\n>\n> > IIUC,\n> >\n> > pg_basebackup -D tmp -d \"user=masahiko dbname=test_db\"\n> >\n> > is equivalent to:\n> >\n> > PGDATABASE=\"user=masahiko dbname=test_db\" pg_basebackup -D tmp\n>\n> The case won't work. I think You assumed that expanded_dbname like\n> PQconnectdbParams() [2] can be used for enviroment variables, but it is not correct\n> - it won't parse as connection string again.\n>\n> In the libpq layer, connection parameters are parsed in PQconnectStartParams()->conninfo_array_parse().\n> When expand_dbname is specified, the entry \"dbname\" is firstly checked and\n> parsed its value. They are done at fe-connect.c:5846.\n>\n> The environment variables are checked and parsed in conninfo_add_defaults(), which\n> is called from conninfo_array_parse(). However, it is done at fe-connect.c:5956 - the\n> expand_dbname has already been done at that time. This means there is no chance\n> that PGDATABASE is parsed as an expanded style.\n>\n\nThank you for pointing it out. I tested the use of PGDATABASE with\npg_basebackup and somehow missed the fact you explained.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Mar 2024 23:25:22 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 2:24 PM vignesh C <[email protected]> wrote:\n>\n> On Tue, 19 Mar 2024 at 17:18, Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Dear Sawada-san,\n> >\n> > Thanks for giving comments!\n> >\n> > > This behavior makes sense to me. But do we want to handle the case of\n> > > using environment variables too?\n> >\n> > Yeah, v5 does not consider which libpq parameters are specified by environment\n> > variables. Such a variable should be used when the dbname is not expressly written\n> > in the connection string.\n> > Such a path was added in the v6 patch. If the dbname is not determined after\n> > parsing the connection string, we call PQconndefaults() to get settings from\n> > environment variables and service files [1], then start to search dbname again.\n> > Below shows an example.\n> >\n> > ```\n> > PGPORT=5431 PGUSER=kuroda PGDATABASE=postgres pg_basebackup -D data_N2 -R -v\n> > ->\n> > primary_conninfo = 'user=kuroda ... port=5431 ... dbname=postgres ... '\n> > ```\n> >\n> > > IIUC,\n> > >\n> > > pg_basebackup -D tmp -d \"user=masahiko dbname=test_db\"\n> > >\n> > > is equivalent to:\n> > >\n> > > PGDATABASE=\"user=masahiko dbname=test_db\" pg_basebackup -D tmp\n> >\n> > The case won't work. I think You assumed that expanded_dbname like\n> > PQconnectdbParams() [2] can be used for enviroment variables, but it is not correct\n> > - it won't parse as connection string again.\n> >\n> > In the libpq layer, connection parameters are parsed in PQconnectStartParams()->conninfo_array_parse().\n> > When expand_dbname is specified, the entry \"dbname\" is firstly checked and\n> > parsed its value. They are done at fe-connect.c:5846.\n> >\n> > The environment variables are checked and parsed in conninfo_add_defaults(), which\n> > is called from conninfo_array_parse(). However, it is done at fe-connect.c:5956 - the\n> > expand_dbname has already been done at that time. This means there is no chance\n> > that PGDATABASE is parsed as an expanded style.\n> >\n> > For example, if the pg_basebackup runs like below:\n> >\n> > PGDATABASE=\"user=kuroda dbname=postgres\" pg_basebackup -D data_N2 -R -v\n> >\n> > The primary_conninfo written in the file will be:\n> >\n> > primary_conninfo = 'user=hayato ... 
dbname=''user=kuroda dbname=postgres'''\n>\n> Thanks for the patch.\n> Here are the test results for various tests by specifying connection\n> string, environment variable, service file, and connection URIs with\n> the patch:\n> case 1:\n> pg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh dbname=db1\"\n> primary_conninfo will have dbname=db1\n>\n> case 2:\n> pg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh dbname=replication\"\n> primary_conninfo will have dbname=replication\n>\n> case 3:\n> pg_basebackup -D test10 -p 5431 -X s -P -R -U vignesh\n> primary_conninfo will not have dbname\n>\n> case 4:\n> pg_basebackup -D test10 -p 5431 -X s -P -R\n> primary_conninfo will not have dbname\n>\n> case 5:\n> pg_basebackup -D test10 -p 5431 -X s -P -R -d \"user=vignesh\"\n> primary_conninfo will not have dbname\n>\n> case 6:\n> pg_basebackup -D test10 -p 5431 -X s -P -R -d \"\"\n> primary_conninfo will not have dbname\n>\n> --- Testing through PGDATABASE environment variable\n> case 7:\n> export PGDATABASE=\"user=postgres dbname=test\"\n> ./pg_basebackup -D test10 -p 5431 -X s -P -R\n> primary_conninfo will have dbname=''user=postgres dbname=test'' like below:\n> primary_conninfo = 'user=vignesh passfile=''/home/vignesh/.pgpass''\n> channel_binding=prefer port=5431 sslmode=prefer sslcompression=0\n> sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2\n> gssencmode=disable krbsrvname=postgres gssdelegation=0\n> target_session_attrs=any load_balance_hosts=disable\n> dbname=''user=postgres dbname=test'''\n>\n> case 8:\n> export PGDATABASE=db1\n> ./pg_basebackup -D test10 -p 5431 -X s -P -R\n> primary_conninfo will have dbname=db1\n>\n> --- Testing through pg_service\n> case 9:\n> Create .pg_service.conf with the following info:\n> [conn1]\n> dbname=db2\n>\n> export PGSERVICE=conn1\n>\n> ./pg_basebackup -D test10 -p 5431 -X s -P -R\n> primary_conninfo will have dbname=db2\n>\n> case 10:\n> Create .pg_service.conf with the following info, i.e. 
there is no\n> database specified:\n> [conn1]\n>\n> ./pg_basebackup -D test10 -p 5431 -X s -P -R\n> primary_conninfo will not have dbname\n>\n> --- Testing through Connection URIs\n> case 11:\n> ./pg_basebackup -D test10 -X s -P -R -d \"postgresql://localhost:5431\"\n> primary_conninfo will not have dbname\n>\n> case 12:\n> ./pg_basebackup -D test10 -p 5431 -X s -P -R -d\n> \"postgresql://localhost/db3:5431\"\n> primary_conninfo will have dbname=''db3:5431'' like below:\n> primary_conninfo = 'user=vignesh passfile=''/home/vignesh/.pgpass''\n> channel_binding=prefer host=localhost port=5431 sslmode=prefer\n> sslcompression=0 sslcertmode=allow sslsni=1\n> ssl_min_protocol_version=TLSv1.2 gssencmode=disable\n> krbsrvname=postgres gssdelegation=0 target_session_attrs=any\n> load_balance_hosts=disable dbname=''db3:5431'''\n>\n> case 13:\n> ./pg_basebackup -D test10 -p 5431 -X s -P -R -d \"postgresql://localhost/db3\"\n> primary_conninfo will have dbname=db3\n>\n> case 14:\n> ./pg_basebackup -D test10 -X s -P -R -d \"postgresql://localhost:5431/db3\"\n> primary_conninfo will have dbname=db3\n>\n> case 15:\n> ./pg_basebackup -D test10 -X s -P -R -d\n> \"postgresql://localhost:5431/db4,127.0.0.1:5431/db5\"\n> primary_conninfo will have dbname=''db4,127.0.0.1:5431/db5'' like below:\n> primary_conninfo = 'user=vignesh passfile=''/home/vignesh/.pgpass''\n> channel_binding=prefer host=localhost port=5431 sslmode=prefer\n> sslcompression=0 sslcertmode=allow sslsni=1\n> ssl_min_protocol_version=TLSv1.2 gssencmode=disable\n> krbsrvname=postgres gssdelegation=0 target_session_attrs=any\n> load_balance_hosts=disable dbname=''db4,127.0.0.1:5431/db5'''\n>\n> case 16:\n> ./pg_basebackup -D test10 -X s -P -R -d\n> \"postgresql://localhost:5431,127.0.0.1:5431/db5\"\n> primary_conninfo will have dbname=db5\n>\n> case 17:\n> ./pg_basebackup -D test10 -X s -P -R -d\n> \"postgresql:///db6?host=localhost&port=5431\"\n> primary_conninfo will have dbname=db6\n>\n> case 18:\n> ./pg_basebackup -D test10 -p 5431 -X s -P -R -d\n> \"postgresql:///db7?host=/home/vignesh/postgres/inst/bin\"\n> primary_conninfo will have dbname=db7\n>\n> case 19:\n> ./pg_basebackup -D test10 -p 5431 -X s -P -R -d\n> \"postgresql:///db8?host=%2Fhome%2Fvignesh%2Fpostgres%2Finst%2Fbin\"\n> primary_conninfo will have dbname=db8\n>\n> In these cases, the database name specified will be written to the\n> conf file. The test results look good to me.\n\nThank you for the tests! These results look good to me too.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Mar 2024 00:08:10 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
},
{
"msg_contents": "On Wed, 20 Mar 2024 at 17:09, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Mar 19, 2024 at 5:18 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Thanks for giving comments!\n> >\n> > > This behavior makes sense to me. But do we want to handle the case of\n> > > using environment variables too?\n> >\n> > Yeah, v5 does not consider which libpq parameters are specified by environment\n> > variables. Such a variable should be used when the dbname is not expressly written\n> > in the connection string.\n> > Such a path was added in the v6 patch. If the dbname is not determined after\n> > parsing the connection string, we call PQconndefaults() to get settings from\n> > environment variables and service files [1], then start to search dbname again.\n> >\n>\n> The functionality implemented by the patch looks good to me. I have\n> made minor modifications in the function names, error handling,\n> comments, and doc updates in the attached patch. Let me know what you\n> think of the attached.\n\nWhile reviewing, I found the following changes could be done:\na) we can add one test in 010_pg_basebackup.pl to verify the change\nb) Here two different styles of linking is used in the document, we\ncan try to keep it same:\n+ streaming replication and <link\nlinkend=\"logicaldecoding-replication-slots-synchronization\">\n+ logical replication slot synchronization</link> will use the same\n+ settings later on. The dbname will be recorded only if the dbname was\n+ specified explicitly in the connection string or environment variable\n+ (see <xref linkend=\"libpq-envars\"/>).\n\nThe updated patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 21 Mar 2024 08:18:29 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?"
}
] |
[
{
"msg_contents": "While working on 4c2369ac5, I noticed there's close to as much code to\ndisallow BooleanTests in the form of \"IS UNKNOWN\" and \"IS NOT UNKNOWN\"\nin partition pruning as it would take to allow pruning to work for\nthese.\n\nThe attached makes it work.\n\nDavid",
"msg_date": "Tue, 20 Feb 2024 15:38:44 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Support boolcol IS [NOT] UNKNOWN in partition pruning"
},
{
"msg_contents": "On Tue, 20 Feb 2024 at 15:38, David Rowley <[email protected]> wrote:\n> While working on 4c2369ac5, I noticed there's close to as much code to\n> disallow BooleanTests in the form of \"IS UNKNOWN\" and \"IS NOT UNKNOWN\"\n> in partition pruning as it would take to allow pruning to work for\n> these.\n\nI looked at this again and reminded myself that it's quite trivial. I\npushed the patch after doing a bit more work on the comments.\n\nDavid\n\n\n",
"msg_date": "Mon, 4 Mar 2024 14:46:21 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support boolcol IS [NOT] UNKNOWN in partition pruning"
}
] |
[
{
"msg_contents": "Hi,\nThe --clean option of pg_restore allows you to replace an object before\nbeing imported. However, dependencies such as foreign keys or views prevent\nthe deletion of the object. Is there a way to add the cascade option to\nforce the deletion?\nThanks for helping\nFabrice\n\nHi,The --clean option of pg_restore allows you to replace an object before being imported. However, dependencies such as foreign keys or views prevent the deletion of the object. Is there a way to add the cascade option to force the deletion?Thanks for helpingFabrice",
"msg_date": "Tue, 20 Feb 2024 08:47:52 +0100",
"msg_from": "Fabrice Chapuis <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_restore option --clean"
},
{
"msg_contents": "Hi,\nThe --clean option of pg_restore allows you to replace an object before\nbeing imported. However, dependencies such as foreign keys or views prevent\nthe deletion of the object. Is there a way to add the cascade option to\nforce the deletion?\nThanks for helping\nFabrice\n\nHi,The --clean option of pg_restore allows you to replace an object before being imported. However, dependencies such as foreign keys or views prevent the deletion of the object. Is there a way to add the cascade option to force the deletion?Thanks for helpingFabrice",
"msg_date": "Wed, 21 Feb 2024 10:17:40 +0100",
"msg_from": "Fabrice Chapuis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: pg_restore option --clean"
},
{
"msg_contents": "Look around for\n\nALTER TABLE TABLE-NAME\n ADD constraint fk-name foreign key col-name refers to tab-name ( col-name )\n on UPDATE cascase\n on DELETE CASCADE\n;\nGood luck,\nSarwar\n\n________________________________\nFrom: Fabrice Chapuis <[email protected]>\nSent: Wednesday, February 21, 2024 4:17 AM\nTo: [email protected] <[email protected]>\nSubject: Fwd: pg_restore option --clean\n\n\n\nHi,\nThe --clean option of pg_restore allows you to replace an object before being imported. However, dependencies such as foreign keys or views prevent the deletion of the object. Is there a way to add the cascade option to force the deletion?\nThanks for helping\nFabrice\n\n\n\n\n\n\n\n\nLook around for\n\n\n\n\nALTER TABLE TABLE-NAME\n\n ADD constraint fk-name foreign key col-name refers to tab-name ( col-name )\n\n on UPDATE cascase\n\n on DELETE CASCADE\n\n;\n\nGood luck,\n\nSarwar\n\n\n\n\n\nFrom: Fabrice Chapuis <[email protected]>\nSent: Wednesday, February 21, 2024 4:17 AM\nTo: [email protected] <[email protected]>\nSubject: Fwd: pg_restore option --clean\n \n\n\n\n\n\n\nHi,\nThe --clean option of pg_restore allows you to replace an object before being imported. However, dependencies such as foreign keys or views prevent the deletion of the object. Is there a way to add the cascade option to force the deletion?\n\nThanks for helping\nFabrice",
"msg_date": "Wed, 21 Feb 2024 11:47:29 +0000",
"msg_from": "M Sarwar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore option --clean"
},
{
"msg_contents": "But it does not work for the structure\n# CONSTRAINT test FOREIGN KEY (id_tab_key) REFERENCES tab(id) ON DELETE\ncascade ON UPDATE CASCADE\n\nERROR: cannot drop table tab because other objects depend on it\n\nRegards,\n\nFabrice\n\nOn Wed, Feb 21, 2024 at 12:47 PM M Sarwar <[email protected]> wrote:\n\n> Look around for\n>\n> ALTER TABLE TABLE-NAME\n> ADD constraint fk-name foreign key col-name refers to tab-name (\n> col-name )\n> on UPDATE cascase\n> on DELETE CASCADE\n> ;\n> Good luck,\n> Sarwar\n>\n> ------------------------------\n> *From:* Fabrice Chapuis <[email protected]>\n> *Sent:* Wednesday, February 21, 2024 4:17 AM\n> *To:* [email protected] <[email protected]>\n> *Subject:* Fwd: pg_restore option --clean\n>\n>\n>\n> Hi,\n> The --clean option of pg_restore allows you to replace an object before\n> being imported. However, dependencies such as foreign keys or views prevent\n> the deletion of the object. Is there a way to add the cascade option to\n> force the deletion?\n> Thanks for helping\n> Fabrice\n>\n\nBut it does not work for the structure# CONSTRAINT test FOREIGN KEY (id_tab_key) REFERENCES tab(id) ON DELETE cascade ON UPDATE CASCADEERROR: cannot drop table tab because other objects depend on itRegards,FabriceOn Wed, Feb 21, 2024 at 12:47 PM M Sarwar <[email protected]> wrote:\n\n\nLook around for\n\n\n\n\nALTER TABLE TABLE-NAME\n\n ADD constraint fk-name foreign key col-name refers to tab-name ( col-name )\n\n on UPDATE cascase\n\n on DELETE CASCADE\n\n;\n\nGood luck,\n\nSarwar\n\n\n\n\n\nFrom: Fabrice Chapuis <[email protected]>\nSent: Wednesday, February 21, 2024 4:17 AM\nTo: [email protected] <[email protected]>\nSubject: Fwd: pg_restore option --clean\n \n\n\n\n\n\n\nHi,\nThe --clean option of pg_restore allows you to replace an object before being imported. However, dependencies such as foreign keys or views prevent the deletion of the object. Is there a way to add the cascade option to force the deletion?\n\nThanks for helping\nFabrice",
"msg_date": "Wed, 21 Feb 2024 15:00:37 +0100",
"msg_from": "Fabrice Chapuis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_restore option --clean"
},
{
"msg_contents": "Hi,\n\nLe mer. 21 févr. 2024 à 15:01, Fabrice Chapuis <[email protected]> a\nécrit :\n\n> But it does not work for the structure\n> # CONSTRAINT test FOREIGN KEY (id_tab_key) REFERENCES tab(id) ON DELETE\n> cascade ON UPDATE CASCADE\n>\n> ERROR: cannot drop table tab because other objects depend on it\n>\n>\nYeah, ON DELETE and ON CASCADE are not the answer to your question.\n\npg_restore won't drop objects in cascade. There's no option for that. I'd\nguess the reason is that --clean only cleans the object it will restore. If\nother objects depend on it, pg_restore has no way to know how to recreate\nthem, and you would end up with a not completely restored database.\n\nRegards.\n\n\n> Regards,\n>\n> Fabrice\n>\n> On Wed, Feb 21, 2024 at 12:47 PM M Sarwar <[email protected]> wrote:\n>\n>> Look around for\n>>\n>> ALTER TABLE TABLE-NAME\n>> ADD constraint fk-name foreign key col-name refers to tab-name (\n>> col-name )\n>> on UPDATE cascase\n>> on DELETE CASCADE\n>> ;\n>> Good luck,\n>> Sarwar\n>>\n>> ------------------------------\n>> *From:* Fabrice Chapuis <[email protected]>\n>> *Sent:* Wednesday, February 21, 2024 4:17 AM\n>> *To:* [email protected] <[email protected]>\n>> *Subject:* Fwd: pg_restore option --clean\n>>\n>>\n>>\n>> Hi,\n>> The --clean option of pg_restore allows you to replace an object before\n>> being imported. However, dependencies such as foreign keys or views prevent\n>> the deletion of the object. Is there a way to add the cascade option to\n>> force the deletion?\n>> Thanks for helping\n>> Fabrice\n>>\n>\n\n-- \nGuillaume.\n\nHi,Le mer. 21 févr. 2024 à 15:01, Fabrice Chapuis <[email protected]> a écrit :But it does not work for the structure# CONSTRAINT test FOREIGN KEY (id_tab_key) REFERENCES tab(id) ON DELETE cascade ON UPDATE CASCADEERROR: cannot drop table tab because other objects depend on itYeah, ON DELETE and ON CASCADE are not the answer to your question.pg_restore won't drop objects in cascade. There's no option for that. I'd guess the reason is that --clean only cleans the object it will restore. If other objects depend on it, pg_restore has no way to know how to recreate them, and you would end up with a not completely restored database.Regards. Regards,FabriceOn Wed, Feb 21, 2024 at 12:47 PM M Sarwar <[email protected]> wrote:\n\n\nLook around for\n\n\n\n\nALTER TABLE TABLE-NAME\n\n ADD constraint fk-name foreign key col-name refers to tab-name ( col-name )\n\n on UPDATE cascase\n\n on DELETE CASCADE\n\n;\n\nGood luck,\n\nSarwar\n\n\n\n\n\nFrom: Fabrice Chapuis <[email protected]>\nSent: Wednesday, February 21, 2024 4:17 AM\nTo: [email protected] <[email protected]>\nSubject: Fwd: pg_restore option --clean\n \n\n\n\n\n\n\nHi,\nThe --clean option of pg_restore allows you to replace an object before being imported. However, dependencies such as foreign keys or views prevent the deletion of the object. Is there a way to add the cascade option to force the deletion?\n\nThanks for helping\nFabrice\n\n\n\n\n\n\n-- Guillaume.",
"msg_date": "Wed, 21 Feb 2024 15:35:14 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore option --clean"
},
{
"msg_contents": "Guillaume Lelarge <[email protected]> writes:\n> pg_restore won't drop objects in cascade. There's no option for that. I'd\n> guess the reason is that --clean only cleans the object it will restore. If\n> other objects depend on it, pg_restore has no way to know how to recreate\n> them, and you would end up with a not completely restored database.\n\nYeah. The expectation is that --clean will issue the DROP commands\nin reverse dependency order, so that no step would require CASCADE.\nIf one did, it'd imply that pg_dump failed to catalog all the\ndependencies in the database, which would be a bug we'd want to know\nabout.\n\nNow, this theory does fail in at least two practical cases:\n\n* You're trying to use --clean with a selective restore.\n\n* You're trying to restore into a database that has more or\ndifferent objects than the source DB did.\n\nBut in both cases, blindly using CASCADE seems like a bad idea.\nYou'd end up with a database that's missing some objects, and\nyou won't know which ones or how to put them back.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Feb 2024 10:31:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore option --clean"
},
{
"msg_contents": "Effectivly, Tom, we are in the second case, we are carrying out a partial\nrestore, certain tables remain in place and are not replaced, currently I\nhave to do manually a DROP... CASCADE and recreate the dependencies\nbetween the objects which are imported and those which are in place.No\nchoice to continue with this approach.\n\nRegarde, Fabrice\n\nOn Wed, Feb 21, 2024 at 4:31 PM Tom Lane <[email protected]> wrote:\n\n> Guillaume Lelarge <[email protected]> writes:\n> > pg_restore won't drop objects in cascade. There's no option for that. I'd\n> > guess the reason is that --clean only cleans the object it will restore.\n> If\n> > other objects depend on it, pg_restore has no way to know how to recreate\n> > them, and you would end up with a not completely restored database.\n>\n> Yeah. The expectation is that --clean will issue the DROP commands\n> in reverse dependency order, so that no step would require CASCADE.\n> If one did, it'd imply that pg_dump failed to catalog all the\n> dependencies in the database, which would be a bug we'd want to know\n> about.\n>\n> Now, this theory does fail in at least two practical cases:\n>\n> * You're trying to use --clean with a selective restore.\n>\n> * You're trying to restore into a database that has more or\n> different objects than the source DB did.\n>\n> But in both cases, blindly using CASCADE seems like a bad idea.\n> You'd end up with a database that's missing some objects, and\n> you won't know which ones or how to put them back.\n>\n> regards, tom lane\n>\n\nEffectivly, Tom, we are in the second case, we are carrying out a partial restore, certain tables remain in place and are not replaced, currently I have to do manually a DROP... CASCADE and recreate the dependencies between the objects which are imported and those which are in place.No choice to continue with this approach.Regarde, FabriceOn Wed, Feb 21, 2024 at 4:31 PM Tom Lane <[email protected]> wrote:Guillaume Lelarge <[email protected]> writes:\n> pg_restore won't drop objects in cascade. There's no option for that. I'd\n> guess the reason is that --clean only cleans the object it will restore. If\n> other objects depend on it, pg_restore has no way to know how to recreate\n> them, and you would end up with a not completely restored database.\n\nYeah. The expectation is that --clean will issue the DROP commands\nin reverse dependency order, so that no step would require CASCADE.\nIf one did, it'd imply that pg_dump failed to catalog all the\ndependencies in the database, which would be a bug we'd want to know\nabout.\n\nNow, this theory does fail in at least two practical cases:\n\n* You're trying to use --clean with a selective restore.\n\n* You're trying to restore into a database that has more or\ndifferent objects than the source DB did.\n\nBut in both cases, blindly using CASCADE seems like a bad idea.\nYou'd end up with a database that's missing some objects, and\nyou won't know which ones or how to put them back.\n\n regards, tom lane",
"msg_date": "Thu, 22 Feb 2024 09:05:22 +0100",
"msg_from": "Fabrice Chapuis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_restore option --clean"
}
] |
[
{
"msg_contents": "The Query structure has an increasing number of bool attributes. This is \nlikely to increase in the future. And they have the same properties. \nWouldn't it be better to store them in bits? Common statements don't use \nthem, so they have little impact. This also saves memory space.\n\n--\nQuan Zongliang",
"msg_date": "Tue, 20 Feb 2024 18:07:46 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Change the bool member of the Query structure to bits"
},
{
"msg_contents": "Sorry. I forgot to save a file. This is the latest.\n\nOn 2024/2/20 18:07, Quan Zongliang wrote:\n> \n> The Query structure has an increasing number of bool attributes. This is \n> likely to increase in the future. And they have the same properties. \n> Wouldn't it be better to store them in bits? Common statements don't use \n> them, so they have little impact. This also saves memory space.\n> \n> -- \n> Quan Zongliang",
"msg_date": "Tue, 20 Feb 2024 18:11:55 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change the bool member of the Query structure to bits"
},
{
"msg_contents": "On 2/20/24 11:11, Quan Zongliang wrote:\n> \n> Sorry. I forgot to save a file. This is the latest.\n> \n> On 2024/2/20 18:07, Quan Zongliang wrote:\n>>\n>> The Query structure has an increasing number of bool attributes. This\n>> is likely to increase in the future. And they have the same\n>> properties. Wouldn't it be better to store them in bits? Common\n>> statements don't use them, so they have little impact. This also saves\n>> memory space.\n>>\n\nHi,\n\nAre we really adding bools to Query that often? A bit of git-blame says\nit's usually multiple years to add a single new flag, which is what I'd\nexpect. I doubt that'll change.\n\nAs for the memory savings, can you quantify how much memory this would save?\n\nI highly doubt that's actually true (or at least measurable). The Query\nstruct has ~256B, the patch cuts that to ~232B. But we allocate stuff in\npower-of-2, so we'll allocate 256B chunk anyway. And we allocate very\nfew of those objects anyway ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 20 Feb 2024 12:18:46 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change the bool member of the Query structure to bits"
},
{
"msg_contents": "Quan Zongliang <[email protected]> writes:\n> The Query structure has an increasing number of bool attributes. This is \n> likely to increase in the future. And they have the same properties. \n> Wouldn't it be better to store them in bits? Common statements don't use \n> them, so they have little impact. This also saves memory space.\n\nI'm -1 on that, for three reasons:\n\n* The amount of space saved is quite negligible. If queries had many\nQuery structs then it could matter, but they don't.\n\n* This causes enough code churn to create a headache for back-patching.\n\n* This'll completely destroy the readability of these flags in\npprint output.\n\nI'm not greatly in love with the macro layer you propose, either,\nbut those details don't matter because I think we should just\nleave well enough alone.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Feb 2024 10:45:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change the bool member of the Query structure to bits"
},
{
"msg_contents": "\n\nOn 2024/2/20 23:45, Tom Lane wrote:\n> Quan Zongliang <[email protected]> writes:\n>> The Query structure has an increasing number of bool attributes. This is\n>> likely to increase in the future. And they have the same properties.\n>> Wouldn't it be better to store them in bits? Common statements don't use\n>> them, so they have little impact. This also saves memory space.\n> \n> I'm -1 on that, for three reasons:\n> \n> * The amount of space saved is quite negligible. If queries had many\n> Query structs then it could matter, but they don't.\n> \n> * This causes enough code churn to create a headache for back-patching.\n> \n> * This'll completely destroy the readability of these flags in\n> pprint output.\n> \n> I'm not greatly in love with the macro layer you propose, either,\n> but those details don't matter because I think we should just\n> leave well enough alone.\n> \n> \t\t\tregards, tom lane\nI get it. Withdraw.\n\n\n\n",
"msg_date": "Wed, 21 Feb 2024 07:45:22 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change the bool member of the Query structure to bits"
},
{
"msg_contents": "\n\nOn 2024/2/20 19:18, Tomas Vondra wrote:\n> On 2/20/24 11:11, Quan Zongliang wrote:\n>>\n>> Sorry. I forgot to save a file. This is the latest.\n>>\n>> On 2024/2/20 18:07, Quan Zongliang wrote:\n>>>\n>>> The Query structure has an increasing number of bool attributes. This\n>>> is likely to increase in the future. And they have the same\n>>> properties. Wouldn't it be better to store them in bits? Common\n>>> statements don't use them, so they have little impact. This also saves\n>>> memory space.\n>>>\n> \n> Hi,\n> \n> Are we really adding bools to Query that often? A bit of git-blame says\n> it's usually multiple years to add a single new flag, which is what I'd\n> expect. I doubt that'll change.\n> \n> As for the memory savings, can you quantify how much memory this would save?\n> \n> I highly doubt that's actually true (or at least measurable). The Query\n> struct has ~256B, the patch cuts that to ~232B. But we allocate stuff in\n> power-of-2, so we'll allocate 256B chunk anyway. And we allocate very\n> few of those objects anyway ...\n> \n> regards\n> \nThat makes sense. Withdraw.\n\n\n\n",
"msg_date": "Wed, 21 Feb 2024 07:46:53 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change the bool member of the Query structure to bits"
}
] |
[
{
"msg_contents": "Hello,\n\nI noticed that, beginning with PG16, grouped aggregates are missing the\n\"Group Key\" in the EXPLAIN output.\n\nIt seems the Agg node has numCols (number of grouping cols) set to zero in\nqueries like\n\nSELECT foo, count(*) FROM bar WHERE foo=1 GROUP BY foo;\n\nIn PG15, the \"Group Key\" is shown and the Agg node has numCols set as\nexpected.\n\nIs this intentional or a bug?\n\nBest regards,\n\nErik\n\n-- \nDatabase Architect, Timescale\n\nHello,I noticed that, beginning with PG16, grouped aggregates are missing the \"Group Key\" in the EXPLAIN output.It seems the Agg node has numCols (number of grouping cols) set to zero in queries like SELECT foo, count(*) FROM bar WHERE foo=1 GROUP BY foo;In PG15, the \"Group Key\" is shown and the Agg node has numCols set as expected.Is this intentional or a bug?Best regards,Erik-- Database Architect, Timescale",
"msg_date": "Tue, 20 Feb 2024 11:31:21 +0100",
"msg_from": "=?UTF-8?Q?Erik_Nordstr=C3=B6m?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Missing Group Key in grouped aggregate"
},
{
"msg_contents": "=?UTF-8?Q?Erik_Nordstr=C3=B6m?= <[email protected]> writes:\n> I noticed that, beginning with PG16, grouped aggregates are missing the\n> \"Group Key\" in the EXPLAIN output.\n\n> It seems the Agg node has numCols (number of grouping cols) set to zero in\n> queries like\n\n> SELECT foo, count(*) FROM bar WHERE foo=1 GROUP BY foo;\n\n> In PG15, the \"Group Key\" is shown and the Agg node has numCols set as\n> expected.\n\nLooks sane to me: the planner now notices that there can only\nbe one group so it doesn't tell the GroupAgg node to worry about\nmaking groups. If it were missing in a case where there could be\nmultiple output groups, yes that'd be a bug.\n\nIf you want to run it to ground you could bisect to see where the\nbehavior changed, but you'd probably just find it was intentional.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Feb 2024 10:53:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing Group Key in grouped aggregate"
},
{
"msg_contents": "On 2/20/24 16:53, Tom Lane wrote:\n> =?UTF-8?Q?Erik_Nordstr=C3=B6m?= <[email protected]> writes:\n>> I noticed that, beginning with PG16, grouped aggregates are missing the\n>> \"Group Key\" in the EXPLAIN output.\n> \n>> It seems the Agg node has numCols (number of grouping cols) set to zero in\n>> queries like\n> \n>> SELECT foo, count(*) FROM bar WHERE foo=1 GROUP BY foo;\n> \n>> In PG15, the \"Group Key\" is shown and the Agg node has numCols set as\n>> expected.\n> \n> Looks sane to me: the planner now notices that there can only\n> be one group so it doesn't tell the GroupAgg node to worry about\n> making groups. If it were missing in a case where there could be\n> multiple output groups, yes that'd be a bug.\n> \n> If you want to run it to ground you could bisect to see where the\n> behavior changed, but you'd probably just find it was intentional.\n> \n\nI believe this changed in:\n\ncommit 8d83a5d0a2673174dc478e707de1f502935391a5\nAuthor: Tom Lane <[email protected]>\nDate: Wed Jan 18 12:37:57 2023 -0500\n\n Remove redundant grouping and DISTINCT columns.\n\n Avoid explicitly grouping by columns that we know are redundant\n for sorting, for example we need group by only one of x and y in\n SELECT ... WHERE x = y GROUP BY x, y\n This comes up more often than you might think, as shown by the\n changes in the regression tests. It's nearly free to detect too,\n since we are just piggybacking on the existing logic that detects\n redundant pathkeys. (In some of the existing plans that change,\n it's visible that a sort step preceding the grouping step already\n didn't bother to sort by the redundant column, making the old plan\n a bit silly-looking.)\n\n ...\n\nIt's not quite obvious from the commit message, but that's where git\nbisect says the behavior changed.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 20 Feb 2024 19:56:09 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing Group Key in grouped aggregate"
}
] |
[
{
"msg_contents": "Hi,\n\nPresently, replication slot invalidation causes and their text are\nscattered into ReplicationSlotInvalidationCause enum and a bunch of\nmacros. This is making the code to get invalidation cause text given\nthe cause as enum and vice-versa unreadable, longer and inextensible.\nThe attached patch adds a lookup table for all invalidation causes for\nbetter readability and extensibility. FWIW, another patch in\ndiscussion https://www.postgresql.org/message-id/CALj2ACWgACB4opnbqi=x7Hc4aqcgkXoLsh1VB+gfidXaDQNu_Q@mail.gmail.com\nadds a couple of other invalidation reasons, this lookup table makes\nthe life easier and code shorter.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 20 Feb 2024 16:40:44 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add lookup table for replication slot invalidation causes"
},
{
"msg_contents": "On Tue, 20 Feb 2024 at 12:11, Bharath Rupireddy\n<[email protected]> wrote:\n> Thoughts?\n\nSeems like a good improvement overall. But I'd prefer the definition\nof the lookup table to use this syntax:\n\nconst char *const SlotInvalidationCauses[] = {\n [RS_INVAL_NONE] = \"none\",\n [RS_INVAL_WAL_REMOVED] = \"wal_removed\",\n [RS_INVAL_HORIZON] = \"rows_removed\",\n [RS_INVAL_WAL_LEVEL] = \"wal_level_sufficient\",\n};\n\n\nRegarding the actual patch:\n\n- Assert(conflict_reason);\n\nProbably we should keep this Assert. As well as the Assert(0)\n\n\n+ for (cause = RS_INVAL_NONE; cause <= RS_INVAL_MAX_CAUSES; cause++)\n\nStrictly speaking this is a slight change in behaviour, since now\n\"none\" is also parsed. That seems fine to me though.\n\n\n",
"msg_date": "Tue, 20 Feb 2024 17:53:03 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 05:53:03PM +0100, Jelte Fennema-Nio wrote:\n> Seems like a good improvement overall. But I'd prefer the definition\n> of the lookup table to use this syntax:\n> \n> const char *const SlotInvalidationCauses[] = {\n> [RS_INVAL_NONE] = \"none\",\n> [RS_INVAL_WAL_REMOVED] = \"wal_removed\",\n> [RS_INVAL_HORIZON] = \"rows_removed\",\n> [RS_INVAL_WAL_LEVEL] = \"wal_level_sufficient\",\n> };\n\n+1.\n\n> Regarding the actual patch:\n> \n> - Assert(conflict_reason);\n> \n> Probably we should keep this Assert. As well as the Assert(0)\n\nThe assert(0) at the end of the routine, likely so. I don't see a\nhuge point for the assert on conflict_reason as we'd crash anyway on\nstrcmp, no?\n\n> + for (cause = RS_INVAL_NONE; cause <= RS_INVAL_MAX_CAUSES; cause++)\n> \n> Strictly speaking this is a slight change in behaviour, since now\n> \"none\" is also parsed. That seems fine to me though.\n\nYep. This does not strike me as an issue. We only use\nGetSlotInvalidationCause() in synchronize_slots(), mapping to NULL in\nthe case of \"none\".\n\nAgreed that this is an improvement.\n\n+/* Maximum number of invalidation causes */\n+#define RS_INVAL_MAX_CAUSES RS_INVAL_WAL_LEVEL\n\nThere is no need to add that to slot.h: it is only used in slot.c.\n--\nMichael",
"msg_date": "Wed, 21 Feb 2024 08:34:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 5:04 AM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Feb 20, 2024 at 05:53:03PM +0100, Jelte Fennema-Nio wrote:\n> > Seems like a good improvement overall. But I'd prefer the definition\n> > of the lookup table to use this syntax:\n> >\n> > const char *const SlotInvalidationCauses[] = {\n> > [RS_INVAL_NONE] = \"none\",\n> > [RS_INVAL_WAL_REMOVED] = \"wal_removed\",\n> > [RS_INVAL_HORIZON] = \"rows_removed\",\n> > [RS_INVAL_WAL_LEVEL] = \"wal_level_sufficient\",\n> > };\n>\n> +1.\n\nDone that way. I'm fine with the designated initialization [1] that an\nISO C99 compliant compiler offers. PostgreSQL installation guide\nhttps://www.postgresql.org/docs/current/install-requirements.html says\nthat we need an at least C99-compliant ISO/ANSI C compiler.\n\n[1] https://open-std.org/JTC1/SC22/WG14/www/docs/n494.pdf\nhttps://en.cppreference.com/w/c/99\nhttps://www.ibm.com/docs/en/zos/2.4.0?topic=initializers-designated-aggregate-types-c-only\n\n> > Regarding the actual patch:\n> >\n> > - Assert(conflict_reason);\n> >\n> > Probably we should keep this Assert. As well as the Assert(0)\n>\n> The assert(0) at the end of the routine, likely so. I don't see a\n> huge point for the assert on conflict_reason as we'd crash anyway on\n> strcmp, no?\n\nRight, but an assertion isn't a bad idea there as it can generate a\nbacktrace as opposed to the crash generating just SEGV note (and\nperhaps a crash dump) in server logs.\n\nWith these two asserts, the behavior (asserts on null and non-existent\ninputs) is the same as what GetSlotInvalidationCause has right now.\n\n> +/* Maximum number of invalidation causes */\n> +#define RS_INVAL_MAX_CAUSES RS_INVAL_WAL_LEVEL\n>\n> There is no need to add that to slot.h: it is only used in slot.c.\n\nRight, but it needs to be updated whenever a new cause is added to\nenum ReplicationSlotInvalidationCause. Therefore, I think it's better\nto be closer there in slot.h.\n\nPlease see the attached v2 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 21 Feb 2024 09:49:37 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 09:49:37AM +0530, Bharath Rupireddy wrote:\n> On Wed, Feb 21, 2024 at 5:04 AM Michael Paquier <[email protected]> wrote:\n> Done that way. I'm fine with the designated initialization [1] that an\n> ISO C99 compliant compiler offers. PostgreSQL installation guide\n> https://www.postgresql.org/docs/current/install-requirements.html says\n> that we need an at least C99-compliant ISO/ANSI C compiler.\n\nNote the recent commit 74a730631065 where Alvaro has changed for the\nlwlock tranche names. That's quite elegant.\n\n> Right, but an assertion isn't a bad idea there as it can generate a\n> backtrace as opposed to the crash generating just SEGV note (and\n> perhaps a crash dump) in server logs.\n> \n> With these two asserts, the behavior (asserts on null and non-existent\n> inputs) is the same as what GetSlotInvalidationCause has right now.\n\nWell, I won't fight you over that.\n\n>> +/* Maximum number of invalidation causes */\n>> +#define RS_INVAL_MAX_CAUSES RS_INVAL_WAL_LEVEL\n>>>> There is no need to add that to slot.h: it is only used in slot.c.\n\n> \n> Right, but it needs to be updated whenever a new cause is added to\n> enum ReplicationSlotInvalidationCause. Therefore, I think it's better\n> to be closer there in slot.h.\n\nA new cause would require an update of SlotInvalidationCause, so if\nyou keep RS_INVAL_MAX_CAUSES close to it that's impossible to miss.\nIMO, it makes just more sense to keep that in slot.c because of the\nstatic assert as well.\n\n+ * If you add a new invalidation cause here, remember to add its name in\n+ * SlotInvalidationCauses in the same order as that of the cause. \n\nThe order does not matter with the way v2 does things with\nSlotInvalidationCauses[], no?\n--\nMichael",
"msg_date": "Wed, 21 Feb 2024 15:26:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
},
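[Illustrative sketch, not part of the original thread: the v2/v3 patches are attachments and are not reproduced here. The sketch below only shows the general shape under discussion, a designated-initializer lookup table kept in sync with the enum by a static assertion; the strings mirror Jelte's earlier message rather than the code that was eventually committed, and the enum and helper macros from replication/slot.h and c.h are assumed.]

/* Sketch only; assumes the declarations in replication/slot.h. */
#include "postgres.h"
#include "replication/slot.h"

/* Lookup table of invalidation cause names, indexed by the enum value. */
static const char *const SlotInvalidationCauses[] = {
    [RS_INVAL_NONE] = "none",
    [RS_INVAL_WAL_REMOVED] = "wal_removed",
    [RS_INVAL_HORIZON] = "rows_removed",
    [RS_INVAL_WAL_LEVEL] = "wal_level_sufficient",
};

/* Highest enum value; must be updated when a new cause is added. */
#define RS_INVAL_MAX_CAUSES RS_INVAL_WAL_LEVEL

/* Catch a mismatch between the enum and the table at compile time. */
StaticAssertDecl(lengthof(SlotInvalidationCauses) == (RS_INVAL_MAX_CAUSES + 1),
                 "array length mismatch");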
{
"msg_contents": "On Wed, Feb 21, 2024 at 11:56 AM Michael Paquier <[email protected]> wrote:\n>\n> Note the recent commit 74a730631065 where Alvaro has changed for the\n> lwlock tranche names. That's quite elegant.\n\nYes, that's absolutely neat. FWIW, designated initializer syntax can\nbe used in a few more places though. I'm not sure how much worth it\nwill be but I'll see if I can quickly put up a patch for it.\n\n> > With these two asserts, the behavior (asserts on null and non-existent\n> > inputs) is the same as what GetSlotInvalidationCause has right now.\n>\n> Well, I won't fight you over that.\n\nHaha :)\n\n> A new cause would require an update of SlotInvalidationCause, so if\n> you keep RS_INVAL_MAX_CAUSES close to it that's impossible to miss.\n> IMO, it makes just more sense to keep that in slot.c because of the\n> static assert as well.\n\nHm, okay. Moved that to slot.c but left a note in the comment atop\nenum to update it.\n\n> + * If you add a new invalidation cause here, remember to add its name in\n> + * SlotInvalidationCauses in the same order as that of the cause.\n>\n> The order does not matter with the way v2 does things with\n> SlotInvalidationCauses[], no?\n\nUgh. Corrected that now.\n\nPlease see the attached v3 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 21 Feb 2024 12:50:00 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 12:50:00PM +0530, Bharath Rupireddy wrote:\n> Please see the attached v3 patch.\n\nSeems globally OK, so applied. I've simplified a bit the comments,\npainted some extra const, and kept variable name as conflict_reason as \nthe other routines of slot.h use \"name\" already to refer to the slot\nnames, and that was a bit confusing IMO.\n--\nMichael",
"msg_date": "Thu, 22 Feb 2024 08:59:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
},
{
"msg_contents": "Hi, Sorry for the late comment but isn't the pushed logic now\ndifferent to what it was there before?\n\nIIUC previously (in a non-debug build) if the specified\nconflict_reason was not found, it returned RS_INVAL_NONE -- now it\nseems to return whatever enum happens to be last.\n\nHow about something more like below:\n\n----------\nReplicationSlotInvalidationCause\nGetSlotInvalidationCause(const char *conflict_reason)\n{\n ReplicationSlotInvalidationCause cause;\n bool found = false;\n\n for (cause = 0; !found && cause <= RS_INVAL_MAX_CAUSES; cause++)\n found = strcmp(SlotInvalidationCauses[cause], conflict_reason) == 0;\n\n Assert(found);\n return found ? cause : RS_INVAL_NONE;\n}\n----------\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 22 Feb 2024 17:19:36 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 5:19 PM Peter Smith <[email protected]> wrote:\n>\n> Hi, Sorry for the late comment but isn't the pushed logic now\n> different to what it was there before?\n>\n> IIUC previously (in a non-debug build) if the specified\n> conflict_reason was not found, it returned RS_INVAL_NONE -- now it\n> seems to return whatever enum happens to be last.\n>\n> How about something more like below:\n>\n> ----------\n> ReplicationSlotInvalidationCause\n> GetSlotInvalidationCause(const char *conflict_reason)\n> {\n> ReplicationSlotInvalidationCause cause;\n> bool found = false;\n>\n> for (cause = 0; !found && cause <= RS_INVAL_MAX_CAUSES; cause++)\n> found = strcmp(SlotInvalidationCauses[cause], conflict_reason) == 0;\n>\n> Assert(found);\n> return found ? cause : RS_INVAL_NONE;\n> }\n> ----------\n>\n\nOops. Perhaps I meant more like below -- in any case, the point was\nthe same -- to ensure RS_INVAL_NONE is what returns if something\nunexpected happens.\n\nReplicationSlotInvalidationCause\nGetSlotInvalidationCause(const char *conflict_reason)\n{\n ReplicationSlotInvalidationCause cause;\n\n for (cause = 0; cause <= RS_INVAL_MAX_CAUSES; cause++)\n {\n if (strcmp(SlotInvalidationCauses[cause], conflict_reason) == 0)\n return cause;\n }\n\n Assert(0);\n return RS_INVAL_NONE;\n}\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 22 Feb 2024 17:30:08 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 05:30:08PM +1100, Peter Smith wrote:\n> Oops. Perhaps I meant more like below -- in any case, the point was\n> the same -- to ensure RS_INVAL_NONE is what returns if something\n> unexpected happens.\n\nYou are right that this could be a bit confusing, even if we should\nnever reach this state. How about avoiding to return the index of the\nloop as result, as of the attached? Would you find that cleaner?\n--\nMichael",
"msg_date": "Thu, 22 Feb 2024 15:56:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
},
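[Illustrative sketch, not part of the original thread: the patch attached to the message above is not reproduced here. One way to avoid returning the loop index, along the lines Michael describes, is sketched below; it assumes the SlotInvalidationCauses table and RS_INVAL_MAX_CAUSES macro discussed earlier in the thread and is not necessarily the code that was actually committed.]

ReplicationSlotInvalidationCause
GetSlotInvalidationCause(const char *conflict_reason)
{
    ReplicationSlotInvalidationCause result = RS_INVAL_NONE;
    bool        found PG_USED_FOR_ASSERTS_ONLY = false;

    Assert(conflict_reason);

    for (ReplicationSlotInvalidationCause cause = RS_INVAL_NONE;
         cause <= RS_INVAL_MAX_CAUSES; cause++)
    {
        if (strcmp(SlotInvalidationCauses[cause], conflict_reason) == 0)
        {
            found = true;
            result = cause;
            break;
        }
    }

    /* A valid conflict_reason should always be found. */
    Assert(found);
    return result;
}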
{
"msg_contents": "On Thu, Feb 22, 2024 at 12:26 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Feb 22, 2024 at 05:30:08PM +1100, Peter Smith wrote:\n> > Oops. Perhaps I meant more like below -- in any case, the point was\n> > the same -- to ensure RS_INVAL_NONE is what returns if something\n> > unexpected happens.\n>\n> You are right that this could be a bit confusing, even if we should\n> never reach this state. How about avoiding to return the index of the\n> loop as result, as of the attached? Would you find that cleaner?\n\nLooks neat!\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 22 Feb 2024 12:52:06 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 5:56 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Feb 22, 2024 at 05:30:08PM +1100, Peter Smith wrote:\n> > Oops. Perhaps I meant more like below -- in any case, the point was\n> > the same -- to ensure RS_INVAL_NONE is what returns if something\n> > unexpected happens.\n>\n> You are right that this could be a bit confusing, even if we should\n> never reach this state. How about avoiding to return the index of the\n> loop as result, as of the attached? Would you find that cleaner?\n> --\n\nHi, yes, it should never happen, but thanks for making the changes.\n\nI would've just removed every local variable instead of adding more of\nthem. I also felt the iteration starting from RS_INVAL_NONE instead of\n0 is asserting RS_INVAL_NONE must always be the first enum and can't\nbe rearranged. Probably it will never happen, but why require it?\n\n------\nReplicationSlotInvalidationCause\nGetSlotInvalidationCause(const char *conflict_reason)\n{\n for (ReplicationSlotInvalidationCause cause = 0; cause <=\nRS_INVAL_MAX_CAUSES; cause++)\n if (strcmp(SlotInvalidationCauses[cause], conflict_reason) == 0)\n return cause;\n\n Assert(0);\n return RS_INVAL_NONE;\n}\n------\n\nBut maybe those nits are a matter of personal choice. Your patch code\naddressed my main concern, so it LGTM.\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 23 Feb 2024 09:04:04 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 09:04:04AM +1100, Peter Smith wrote:\n> I would've just removed every local variable instead of adding more of\n> them. I also felt the iteration starting from RS_INVAL_NONE instead of\n> 0 is asserting RS_INVAL_NONE must always be the first enum and can't\n> be rearranged. Probably it will never happen, but why require it?\n\nFWIW, I think that the code is OK as-is, so I'd just let it be for\nnow.\n--\nMichael",
"msg_date": "Sat, 24 Feb 2024 08:39:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add lookup table for replication slot invalidation causes"
}
] |
[
{
"msg_contents": "Hello hackers,\r\n\r\nUsing Svace* I think I've found a little bug in src/backend/utils/mmgr/dsa.c.\r\nThis bug is presented in REL_12_STABLE, REL_13_STABLE, REL_14_STABLE,\r\nREL_15_STABLE, REL_16_STABLE and master. I see that it was introduced together\r\nwith dynamic shared memory areas in the commit 13df76a537cca3b8884911d8fdf7c89a457a8dd3.\r\nI also see that at least two people have encountered this fprintf output.\r\n(https://postgrespro.com/list/thread-id/2419512,\r\nhttps://www.postgresql.org/message-id/15e9501170d.e4b5a3858707.3339083113985275726%40zohocorp.com)\r\n\r\nfprintf(stderr,\r\n \" segment bin %zu (at least %d contiguous pages free):\\n\",\r\n i, 1 << (i - 1));\r\n\r\nIn case i equals zero user will get \"at least -2147483648 contiguous pages free\".\r\nI believe that this is a mistake, and fprintf should print \"at least 0 contiguous pages free\"\r\nin case i equals zero.\r\n\r\nThe patch that has a fix of this is attached.\r\n\r\n* - https://svace.pages.ispras.ru/svace-website/en/\r\n\r\nKind regards,\r\nIan Ilyasov.\r\n\r\nJuniour Software Developer at Postgres Professional",
"msg_date": "Tue, 20 Feb 2024 11:28:03 +0000",
"msg_from": "=?utf-8?B?0JjQu9GM0Y/RgdC+0LIg0K/QvQ==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Integer undeflow in fprintf in dsa.c"
},
{
"msg_contents": "> On 20 Feb 2024, at 12:28, Ильясов Ян <[email protected]> wrote:\n\n> fprintf(stderr,\n> \" segment bin %zu (at least %d contiguous pages free):\\n\",\n> i, 1 << (i - 1));\n> \n> In case i equals zero user will get \"at least -2147483648 contiguous pages free\".\n\nThat does indeed seem like an oversight.\n\n> I believe that this is a mistake, and fprintf should print \"at least 0 contiguous pages free\"\n> in case i equals zero.\n\nThe message \"at least 0 contiguous pages free\" reads a bit nonsensical though,\nwouldn't it be preferrable to check for i being zero and print a custom message\nfor that case? Something like the below untested sketch?\n\n+ if (i == 0)\n+ fprintf(stderr,\n+ \" segment bin %zu (no contiguous free pages):\\n\", i);\n+ else\n+ fprintf(stderr,\n+ \" segment bin %zu (at least %d contiguous pages free):\\n\",\n+ i, 1 << (i - 1));\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 13:00:19 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Integer undeflow in fprintf in dsa.c"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 5:30 PM Daniel Gustafsson <[email protected]> wrote:\n> The message \"at least 0 contiguous pages free\" reads a bit nonsensical though,\n> wouldn't it be preferrable to check for i being zero and print a custom message\n> for that case? Something like the below untested sketch?\n>\n> + if (i == 0)\n> + fprintf(stderr,\n> + \" segment bin %zu (no contiguous free pages):\\n\", i);\n> + else\n> + fprintf(stderr,\n> + \" segment bin %zu (at least %d contiguous pages free):\\n\",\n> + i, 1 << (i - 1));\n\nThat does seem reasonable. However, this is just debugging code, so it\nalso probably isn't necessary to sweat anything too much.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 17:52:48 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Integer undeflow in fprintf in dsa.c"
},
{
"msg_contents": "Sorry for not answering quickly.\n\nThank you for your comments.\n\nI attached a patch to the letter with changes to take into account Daniel Gustafsson's comment.\n\n\nKind regards,\nIan Ilyasov.\n\nJuniour Software Developer at Postgres Professional",
"msg_date": "Tue, 20 Feb 2024 16:13:41 +0000",
"msg_from": "Ilyasov Ian <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Integer undeflow in fprintf in dsa.c"
},
{
"msg_contents": "> On 20 Feb 2024, at 17:13, Ilyasov Ian <[email protected]> wrote:\n\n> Sorry for not answering quickly.\n\nThere is no need for any apology, there is no obligation to answer within any\nspecific timeframe.\n\n> I attached a patch to the letter with changes to take into account Daniel Gustafsson's comment.\n\nLooks good on a quick skim, I'll take care of this shortly.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 18:00:07 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Integer undeflow in fprintf in dsa.c"
}
] |
[
{
"msg_contents": "Hi,\nWhen a table is reloaded wit pg_restore, it is recreated without indexes or\nconstraints. There are automatically skipped. Is there a reason for this?\n\ng_restore -j 8 -v -d zof /shared/pgdump/aq/backup/dbtest/shtest --no-owner\n--role=test -t mytable 2>&1 | tee -a dbest.log\n\npg_restore: skipping item 7727 SEQUENCE SET xxx_seq\npg_restore: skipping item 5110 INDEX xxxxx_cons\npg_restore: skipping item 5143 CONSTRAINT xxx\npg_restore: skipping item 5670 FK CONSTRAINT xxx\n\nThanks for your feedback\n\nFabrice\n\nHi,When a table is reloaded wit pg_restore, it is recreated without indexes or constraints. There are automatically skipped. Is there a reason for this?g_restore -j 8 -v -d zof /shared/pgdump/aq/backup/dbtest/shtest --no-owner --role=test -t mytable 2>&1 | tee -a dbest.log pg_restore: skipping item 7727 SEQUENCE SET xxx_seqpg_restore: skipping item 5110 INDEX xxxxx_conspg_restore: skipping item 5143 CONSTRAINT xxx pg_restore: skipping item 5670 FK CONSTRAINT xxxThanks for your feedbackFabrice",
"msg_date": "Tue, 20 Feb 2024 15:44:47 +0100",
"msg_from": "Fabrice Chapuis <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_restore problem to load constraints with tables"
},
{
"msg_contents": "Fabrice Chapuis <[email protected]> writes:\n> When a table is reloaded wit pg_restore, it is recreated without indexes or\n> constraints. There are automatically skipped. Is there a reason for this?\n\n[ shrug ] That's how the -t switch is defined. If you want something\nelse, you can use the -l and -L switches to pick out a custom\ncollection of objects to restore.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Feb 2024 10:58:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore problem to load constraints with tables"
}
] |
[
{
"msg_contents": "This blog, and the blogs it links to, explains the complexities of using\nmmap() for database data/index file I/O.\n\n\thttps://www.symas.com/post/are-you-sure-you-want-to-use-mmap-in-your-dbms\n\nThe blog starts by stating:\n\n\tThere are, however, severe correctness and performance issues\n\twith mmap that are not immediately apparent. Such problems make it\n\tdifficult, if not impossible, to use mmap correctly and efficiently\n\tin a modern DBMS.\n\nThe remainder of the article makes various arguments that such mmap use\nis _possible_, but ends with a reasonable conclusion:\n\n\tUltimately, the answer to the question \"are you sure you want\n\tto use mmap in your DBMS?\" should be rephrased - do you really\n\twant to reimplement everything the OS already does for you? Do\n\tyou really believe you can do it correctly, better than the OS\n\talready does? The DBMS world is littered with projects whose\n\tauthors believed, incorrectly, that they could.\n\nI think we have come to the same conclusion in the past, but I thought\nit would be good to share someone else's research, and it might be\nhelpful if we ever revisit this idea.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 20 Feb 2024 18:21:32 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lessons from using mmap()"
},
{
"msg_contents": "Hi!\n\nOn Wed, Feb 21, 2024 at 1:21 AM Bruce Momjian <[email protected]> wrote:\n> I think we have come to the same conclusion in the past, but I thought\n> it would be good to share someone else's research, and it might be\n> helpful if we ever revisit this idea.\n\nI read this blog post before. In my personal opinion it is an example\nof awful speculation. If your DBMS needs WAL, then your first\nquestion about using mmap() for your data is how to enforce WAL to be\nreally ahead of writing the data pages. As I know there is no\nsolution for that with mmap() (or at least a solution with acceptable\nportability). The blog post advertises LMDB, and LMDB is really good.\nBut LMDB uses copy-on-write instead of WAL to ensure durability. And\nit supports a single writer at a time. This is just another niche of\nsolutions!\nThe blog post makes an impression that developers of non-mmap()\nDBMS'es are idiots who didn't manage to use mmap() properly. We're\nnot idiots, we develop DBMS of high concurrency! :-)\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 21 Feb 2024 01:58:50 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lessons from using mmap()"
}
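[Illustrative sketch, not part of the original thread: to make the WAL-ordering argument above concrete, an explicit buffer manager can flush WAL up to a dirty page's LSN before writing that page, while with mmap() the kernel may write the page back at any time and there is no portable way to hold it back. Every name in the snippet below is a placeholder invented for the illustration; this is not the PostgreSQL bufmgr/smgr API.]

/* Placeholder names; not the real PostgreSQL bufmgr/smgr API. */
typedef unsigned long long WalPosition;

struct dirty_page
{
    WalPosition page_lsn;       /* WAL position of the last change to this page */
    char        data[8192];
};

/* Assumed primitives: make WAL durable up to a position, write one block. */
extern void wal_flush_upto(WalPosition pos);
extern void write_block(unsigned int blkno, const void *data);

static void
flush_dirty_page(const struct dirty_page *page, unsigned int blkno)
{
    /* 1. WAL describing every change to this page must be durable first. */
    wal_flush_upto(page->page_lsn);

    /* 2. Only then may the data page itself be written out. */
    write_block(blkno, page->data);

    /*
     * With mmap()'d data files there is no hook like this one: the kernel
     * can write the page back before the corresponding WAL is flushed.
     */
}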
] |
[
{
"msg_contents": "Hi hackers,\n\nI would like to know that why we have 'Shutdown <= SmartShutdown'\ncheck before launching few processes (WalReceiver, WalSummarizer,\nAutoVacuum worker) while rest of the processes (BGWriter, WalWriter,\nCheckpointer, Archiver etc) do not have any such check. If I have to\nlaunch a new process, what shall be the criteria to decide if I need\nthis check?\n\nLooking forward to your expert advice.\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 21 Feb 2024 08:57:46 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "'Shutdown <= SmartShutdown' check while launching processes in\n postmaster."
},
{
"msg_contents": "shveta malik <[email protected]> writes:\n> I would like to know that why we have 'Shutdown <= SmartShutdown'\n> check before launching few processes (WalReceiver, WalSummarizer,\n> AutoVacuum worker) while rest of the processes (BGWriter, WalWriter,\n> Checkpointer, Archiver etc) do not have any such check. If I have to\n> launch a new process, what shall be the criteria to decide if I need\n> this check?\n\nChildren that are stopped by the \"if (pmState == PM_STOP_BACKENDS)\"\nstanza in PostmasterStateMachine should not be allowed to start\nagain later if we are trying to shut down. (But \"smart\" shutdown\ndoesn't enforce that, since it's a very weak state that only\nprohibits new client sessions.) The processes that are allowed\nto continue beyond that point are ones that are needed to perform\nthe shutdown checkpoint, or useful to make it finish faster.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Feb 2024 23:31:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'Shutdown <= SmartShutdown' check while launching processes in\n postmaster."
},
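[Illustrative sketch, not part of the original thread: a heavily simplified rendering of the pattern Tom describes. Children stopped at PM_STOP_BACKENDS carry an extra Shutdown <= SmartShutdown condition before being (re)launched, while children needed for the shutdown checkpoint do not. All declarations below are placeholders so the sketch stands alone; the real logic in postmaster.c is more involved.]

/* Self-contained placeholder declarations; the real symbols live in
 * postmaster.c and differ in detail. */
typedef enum
{
    WalReceiverProcess,
    CheckpointerProcess
} ChildKind;

enum
{
    NoShutdown = 0,
    SmartShutdown = 1,
    FastShutdown = 2,
    ImmediateShutdown = 3
};

static int  Shutdown = NoShutdown;
static int  WalReceiverRequested = 1;
static int  WalReceiverPID = 0;
static int  CheckpointerPID = 0;

static int
start_child(ChildKind kind)
{
    (void) kind;
    return 1;                   /* pretend a child was forked */
}

static void
maybe_start_children(void)
{
    /*
     * The walreceiver (like autovacuum workers) is stopped at
     * PM_STOP_BACKENDS and must not come back once a fast or immediate
     * shutdown is underway, hence the extra condition on Shutdown.
     */
    if (WalReceiverRequested && WalReceiverPID == 0 &&
        Shutdown <= SmartShutdown)
        WalReceiverPID = start_child(WalReceiverProcess);

    /*
     * The checkpointer is needed to perform the shutdown checkpoint, so no
     * such condition is applied when it has to be (re)started.
     */
    if (CheckpointerPID == 0)
        CheckpointerPID = start_child(CheckpointerProcess);
}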
{
"msg_contents": "On Wed, Feb 21, 2024 at 10:01 AM Tom Lane <[email protected]> wrote:\n>\n> shveta malik <[email protected]> writes:\n> > I would like to know that why we have 'Shutdown <= SmartShutdown'\n> > check before launching few processes (WalReceiver, WalSummarizer,\n> > AutoVacuum worker) while rest of the processes (BGWriter, WalWriter,\n> > Checkpointer, Archiver etc) do not have any such check. If I have to\n> > launch a new process, what shall be the criteria to decide if I need\n> > this check?\n>\n> Children that are stopped by the \"if (pmState == PM_STOP_BACKENDS)\"\n> stanza in PostmasterStateMachine should not be allowed to start\n> again later if we are trying to shut down. (But \"smart\" shutdown\n> doesn't enforce that, since it's a very weak state that only\n> prohibits new client sessions.) The processes that are allowed\n> to continue beyond that point are ones that are needed to perform\n> the shutdown checkpoint, or useful to make it finish faster.\n\nThank you for providing the details. It clarifies the situation. Do\nyou think it would be beneficial to include this as a code comment in\npostmaster.c to simplify understanding for future readers?\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 21 Feb 2024 15:38:10 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 'Shutdown <= SmartShutdown' check while launching processes in\n postmaster."
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 3:38 PM shveta malik <[email protected]> wrote:\n>\n> > Children that are stopped by the \"if (pmState == PM_STOP_BACKENDS)\"\n> > stanza in PostmasterStateMachine should not be allowed to start\n> > again later if we are trying to shut down. (But \"smart\" shutdown\n> > doesn't enforce that, since it's a very weak state that only\n> > prohibits new client sessions.) The processes that are allowed\n> > to continue beyond that point are ones that are needed to perform\n> > the shutdown checkpoint, or useful to make it finish faster.\n>\n> Thank you for providing the details. It clarifies the situation. Do\n> you think it would be beneficial to include this as a code comment in\n> postmaster.c to simplify understanding for future readers?\n\n+1 for a note either before the StartChildProcess() or before the\nPMState enum definition.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Feb 2024 17:31:43 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'Shutdown <= SmartShutdown' check while launching processes in\n postmaster."
}
] |
[
{
"msg_contents": "Greetings, everyone!\n\nWhile analyzing output of Svace static analyzer [1] I've found a bug\n\nFunction bringetbitmap that is used in BRIN's IndexAmRoutine should \nreturn an\nint64 value, but the actual return value is int, since totalpages is int \nand\ntotalpages * 10 is also int. This could lead to integer overflow\n\nI suggest to change totalpages to be int64 to avoid potential overflow.\nAlso in all other \"amgetbitmap functions\" (such as hashgetbitmap, \ngistgetbitmap,\ngingetbitmap, blgetbitmap) the return value is of correct int64 type\n\nThe proposed patch is attached\n\n[1] - https://svace.pages.ispras.ru/svace-website/en/\n\nOleg Tselebrovskiy, Postgres Pro",
"msg_date": "Wed, 21 Feb 2024 12:40:59 +0700",
"msg_from": "Oleg Tselebrovskiy <[email protected]>",
"msg_from_op": true,
"msg_subject": "BRIN integer overflow"
},
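[Illustrative sketch, not part of the original thread: the attached patch is not reproduced here. The standalone program below only demonstrates the arithmetic being described: with a plain int, totalpages * 10 overflows once totalpages exceeds INT_MAX / 10 (signed overflow is undefined behavior in C and in practice typically wraps to a negative value), while doing the multiplication in a 64-bit type, as the patch proposes for totalpages, does not.]

/* Standalone demonstration of the overflow; not PostgreSQL code. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    /* ~250 million pages, i.e. roughly 1.9 TiB at the default 8 kB page size */
    int     totalpages_int = 250000000;
    int64_t totalpages_64 = 250000000;

    /* Undefined behavior: 2.5 billion does not fit in a 32-bit int. */
    printf("int:   %d\n", totalpages_int * 10);

    /* Safe: the multiplication is carried out in 64 bits. */
    printf("int64: %" PRId64 "\n", totalpages_64 * 10);

    return 0;
}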
{
"msg_contents": "> On 21 Feb 2024, at 06:40, Oleg Tselebrovskiy <[email protected]> wrote:\n\n> Function bringetbitmap that is used in BRIN's IndexAmRoutine should return an\n> int64 value, but the actual return value is int, since totalpages is int and\n> totalpages * 10 is also int. This could lead to integer overflow\n\n(totalpages * 10) overflowing an int seems like a quite theoretical risk which\nwould be hard to hit in practice.\n\n> I suggest to change totalpages to be int64 to avoid potential overflow.\n> Also in all other \"amgetbitmap functions\" (such as hashgetbitmap, gistgetbitmap,\n> gingetbitmap, blgetbitmap) the return value is of correct int64 type\n\nThat being said, changing it like this seems reasonable since the API is\ndefined as int64, and it will keep static analyzers quiet.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 21 Feb 2024 11:31:36 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BRIN integer overflow"
}
] |
[
{
"msg_contents": "Hi All,\nIn [1] we found that having a test to dump and restore objects left\nbehind by regression test is missing. Such a test would cover many\ndump restore scenarios without much effort. It will also help identity\nproblems described in the same thread [2] during development itself.\n\nI am starting a new thread to discuss such a test. Attached is a WIP\nversion of the test. The test does fail at the restore step when\ncommit 74563f6b90216180fc13649725179fc119dddeb5 is reverted\nreintroducing the problem.\n\nAttached WIP test is inspired from\nsrc/bin/pg_upgrade/t/002_pg_upgrade.pl which tests binary-upgrade\ndumps. Attached test tests the non-binary-upgrade dumps.\n\nSimilar to 0002_pg_upgrade.pl the test uses SQL dumps before and after\ndump and restore to make sure that the objects are restored correctly.\nThe test has some shortcomings\n1. Objects which are not dumped at all are never tested.\n2. Since the rows are dumped in varying order by the two clusters, the\ntest only tests schema dump and restore.\n3. The order of columns of the inheritance child table differs\ndepending upon the DDLs used to reach a given state. This introduces\ndiffs in the SQL dumps before and after restore. The test ignores\nthese diffs by hardcoding the diff in the test.\n\nEven with 1 and 2 the test is useful to detect dump/restore anomalies.\nI think we should improve 3, but I don't have a good and simpler\nsolution. I didn't find any way to compare two given clusters in our\nTAP test framework. Building it will be a lot of work. Not sure if\nit's worth it.\n\nSuggestions welcome.\n\n[1] https://www.postgresql.org/message-id/CAExHW5vyqv%3DXLTcNMzCNccOrHiun_XhYPjcRqeV6dLvZSamriQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/3462358.1708107856%40sss.pgh.pa.us\n\n--\nBest Wishes,\nAshutosh Bapat",
"msg_date": "Wed, 21 Feb 2024 12:18:45 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 12:18:45PM +0530, Ashutosh Bapat wrote:\n> Even with 1 and 2 the test is useful to detect dump/restore anomalies.\n> I think we should improve 3, but I don't have a good and simpler\n> solution. I didn't find any way to compare two given clusters in our\n> TAP test framework. Building it will be a lot of work. Not sure if\n> it's worth it.\n\n+\tmy $rc =\n+\t system($ENV{PG_REGRESS}\n+\t\t . \" $extra_opts \"\n+\t\t . \"--dlpath=\\\"$dlpath\\\" \"\n+\t\t . \"--bindir= \"\n+\t\t . \"--host=\"\n+\t\t . $node->host . \" \"\n+\t\t . \"--port=\"\n+\t\t . $node->port . \" \"\n+\t\t . \"--schedule=$srcdir/src/test/regress/parallel_schedule \"\n+\t\t . \"--max-concurrent-tests=20 \"\n+\t\t . \"--inputdir=\\\"$inputdir\\\" \"\n+\t\t . \"--outputdir=\\\"$outputdir\\\"\");\n\nI am not sure that it is a good idea to add a full regression test\ncycle while we have already 027_stream_regress.pl that would be enough\nto test some dump scenarios. These are very expensive and easy to\nnotice even with a high level of parallelization of the tests.\n--\nMichael",
"msg_date": "Thu, 22 Feb 2024 10:01:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 6:32 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Feb 21, 2024 at 12:18:45PM +0530, Ashutosh Bapat wrote:\n> > Even with 1 and 2 the test is useful to detect dump/restore anomalies.\n> > I think we should improve 3, but I don't have a good and simpler\n> > solution. I didn't find any way to compare two given clusters in our\n> > TAP test framework. Building it will be a lot of work. Not sure if\n> > it's worth it.\n>\n> + my $rc =\n> + system($ENV{PG_REGRESS}\n> + . \" $extra_opts \"\n> + . \"--dlpath=\\\"$dlpath\\\" \"\n> + . \"--bindir= \"\n> + . \"--host=\"\n> + . $node->host . \" \"\n> + . \"--port=\"\n> + . $node->port . \" \"\n> + . \"--schedule=$srcdir/src/test/regress/parallel_schedule \"\n> + . \"--max-concurrent-tests=20 \"\n> + . \"--inputdir=\\\"$inputdir\\\" \"\n> + . \"--outputdir=\\\"$outputdir\\\"\");\n>\n> I am not sure that it is a good idea to add a full regression test\n> cycle while we have already 027_stream_regress.pl that would be enough\n> to test some dump scenarios.\n\nThat test *uses* pg_dump as a way to test whether the two clusters are\nin sync. The test might change in future to use some other method to\nmake sure the two clusters are consistent. Adding the test here to\nthat test will make that change much harder.\n\nIt's not the dump, but restore, we are interested in here. No test\nthat runs PG_REGRESS also runs pg_restore in non-binary mode.\n\nAlso we need to keep this test near other pg_dump tests, not far from them.\n\n> These are very expensive and easy to\n> notice even with a high level of parallelization of the tests.\n\nI agree, but I didn't find a suitable test to ride on.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 22 Feb 2024 14:23:05 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On 22.02.24 02:01, Michael Paquier wrote:\n> On Wed, Feb 21, 2024 at 12:18:45PM +0530, Ashutosh Bapat wrote:\n>> Even with 1 and 2 the test is useful to detect dump/restore anomalies.\n>> I think we should improve 3, but I don't have a good and simpler\n>> solution. I didn't find any way to compare two given clusters in our\n>> TAP test framework. Building it will be a lot of work. Not sure if\n>> it's worth it.\n> \n> +\tmy $rc =\n> +\t system($ENV{PG_REGRESS}\n> +\t\t . \" $extra_opts \"\n> +\t\t . \"--dlpath=\\\"$dlpath\\\" \"\n> +\t\t . \"--bindir= \"\n> +\t\t . \"--host=\"\n> +\t\t . $node->host . \" \"\n> +\t\t . \"--port=\"\n> +\t\t . $node->port . \" \"\n> +\t\t . \"--schedule=$srcdir/src/test/regress/parallel_schedule \"\n> +\t\t . \"--max-concurrent-tests=20 \"\n> +\t\t . \"--inputdir=\\\"$inputdir\\\" \"\n> +\t\t . \"--outputdir=\\\"$outputdir\\\"\");\n> \n> I am not sure that it is a good idea to add a full regression test\n> cycle while we have already 027_stream_regress.pl that would be enough\n> to test some dump scenarios. These are very expensive and easy to\n> notice even with a high level of parallelization of the tests.\n\nThe problem is, we don't really have any end-to-end coverage of\n\ndump\nrestore\ndump again\ncompare the two dumps\n\nwith a database with lots of interesting objects in it.\n\nNote that each of these steps could fail.\n\nWe have somewhat relied on the pg_upgrade test to provide this testing, \nbut we have recently discovered that the dumps in binary-upgrade mode \nare different enough to not test the normal dumps well.\n\nYes, this test is a bit expensive. We could save some time by doing the \nfirst dump at the end of the normal regress test and have the pg_dump \ntest reuse that, but then that would make the regress test run a bit \nlonger. Is that a better tradeoff?\n\nI have done some timing tests:\n\nmaster:\n\npg_dump check: 22s\npg_dump check -j8: 8s\ncheck-world -j8: 2min44s\n\npatched:\n\npg_dump check: 34s\npg_dump check -j8: 13s\ncheck-world -j8: 2min46s\n\nSo overall it doesn't seem that bad.\n\n\n\n",
"msg_date": "Thu, 22 Feb 2024 10:16:50 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "> On 22 Feb 2024, at 10:16, Peter Eisentraut <[email protected]> wrote:\n\n> We have somewhat relied on the pg_upgrade test to provide this testing, but we have recently discovered that the dumps in binary-upgrade mode are different enough to not test the normal dumps well.\n> \n> Yes, this test is a bit expensive. We could save some time by doing the first dump at the end of the normal regress test and have the pg_dump test reuse that, but then that would make the regress test run a bit longer. Is that a better tradeoff?\n\nSomething this expensive seems like what PG_TEST_EXTRA is intended for, we\nalready have important test suites there.\n\nBut. We know that the cluster has an interesting state when the pg_upgrade\ntest starts, could we use that to make a dump/restore test before continuing\nwith testing pg_upgrade? It can be argued that pg_upgrade shouldn't be\nresponsible for testing pg_dump, but it's already now a pretty important\ntestcase for pg_dump in binary upgrade mode so it's that far off. If pg_dump\nhas bugs then pg_upgrade risks subtly breaking.\n\nWhen upgrading to the same version, we could perhaps also use this to test a\nscenario like: Dump A, restore into B, upgrade B into C, dump C and compare C\nto A.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 22 Feb 2024 10:33:04 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 3:03 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 22 Feb 2024, at 10:16, Peter Eisentraut <[email protected]> wrote:\n>\n> > We have somewhat relied on the pg_upgrade test to provide this testing, but we have recently discovered that the dumps in binary-upgrade mode are different enough to not test the normal dumps well.\n> >\n> > Yes, this test is a bit expensive. We could save some time by doing the first dump at the end of the normal regress test and have the pg_dump test reuse that, but then that would make the regress test run a bit longer. Is that a better tradeoff?\n>\n> Something this expensive seems like what PG_TEST_EXTRA is intended for, we\n> already have important test suites there.\n\nThat's ok with me.\n\n>\n> But. We know that the cluster has an interesting state when the pg_upgrade\n> test starts, could we use that to make a dump/restore test before continuing\n> with testing pg_upgrade? It can be argued that pg_upgrade shouldn't be\n> responsible for testing pg_dump, but it's already now a pretty important\n> testcase for pg_dump in binary upgrade mode so it's that far off. If pg_dump\n> has bugs then pg_upgrade risks subtly breaking.\n\nSomebody looking for dump/restore tests wouldn't search\nsrc/bin/pg_upgrade, I think. However if more people think we should\njust add this test 002_pg_upgrade.pl, I am fine with it.\n\n>\n> When upgrading to the same version, we could perhaps also use this to test a\n> scenario like: Dump A, restore into B, upgrade B into C, dump C and compare C\n> to A.\n\nIf comparison of C to A fails, we wouldn't know which step fails. I\nwould rather compare outputs of each step separately.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 22 Feb 2024 15:25:42 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "> On 22 Feb 2024, at 10:55, Ashutosh Bapat <[email protected]> wrote:\n> On Thu, Feb 22, 2024 at 3:03 PM Daniel Gustafsson <[email protected]> wrote:\n\n> Somebody looking for dump/restore tests wouldn't search\n> src/bin/pg_upgrade, I think.\n\nQuite possibly not, but pg_upgrade is already today an important testsuite for\ntesting pg_dump in binary-upgrade mode so maybe more developers touching\npg_dump should?\n\n>> When upgrading to the same version, we could perhaps also use this to test a\n>> scenario like: Dump A, restore into B, upgrade B into C, dump C and compare C\n>> to A.\n> \n> If comparison of C to A fails, we wouldn't know which step fails. I\n> would rather compare outputs of each step separately.\n\nTo be clear, this wasn't intended to replace what you are proposing, but an\nidea for using it to test *more* scenarios.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 22 Feb 2024 11:00:58 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On 22.02.24 11:00, Daniel Gustafsson wrote:\n>> On 22 Feb 2024, at 10:55, Ashutosh Bapat <[email protected]> wrote:\n>> On Thu, Feb 22, 2024 at 3:03 PM Daniel Gustafsson <[email protected]> wrote:\n> \n>> Somebody looking for dump/restore tests wouldn't search\n>> src/bin/pg_upgrade, I think.\n> \n> Quite possibly not, but pg_upgrade is already today an important testsuite for\n> testing pg_dump in binary-upgrade mode so maybe more developers touching\n> pg_dump should?\n\nYeah, I think attaching this to the existing pg_upgrade test would be a \ngood idea. Not only would it save test run time, it would probably also \nreduce code duplication.\n\n\n\n",
"msg_date": "Thu, 22 Feb 2024 11:20:29 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The problem is, we don't really have any end-to-end coverage of\n\n> dump\n> restore\n> dump again\n> compare the two dumps\n\n> with a database with lots of interesting objects in it.\n\nI'm very much against adding another full run of the core regression\ntests to support this. But beyond the problem of not bloating the\ncheck-world test runtime, there is the question of what this would\nactually buy us. I doubt that it is worth very much, because\nit would not detect bugs-of-omission in pg_dump. As I remarked in\nthe other thread, if pg_dump is blind to the existence of some\nfeature or field, testing that the dumps compare equal will fail\nto reveal that it didn't restore that property.\n\nI'm not sure what we could do about that. One could imagine writing\nsome test infrastructure that dumps out the contents of the system\ncatalogs directly, and comparing that instead of pg_dump output.\nBut that'd be a lot of infrastructure to write and maintain ...\nand it's not real clear why it wouldn't *also* suffer from\nI-forgot-to-add-this hazards.\n\nOn balance, I think there are good reasons that we've not added\nsuch a test, and I don't believe those tradeoffs have changed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Feb 2024 10:05:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 3:50 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 22.02.24 11:00, Daniel Gustafsson wrote:\n> >> On 22 Feb 2024, at 10:55, Ashutosh Bapat <[email protected]> wrote:\n> >> On Thu, Feb 22, 2024 at 3:03 PM Daniel Gustafsson <[email protected]> wrote:\n> >\n> >> Somebody looking for dump/restore tests wouldn't search\n> >> src/bin/pg_upgrade, I think.\n> >\n> > Quite possibly not, but pg_upgrade is already today an important testsuite for\n> > testing pg_dump in binary-upgrade mode so maybe more developers touching\n> > pg_dump should?\n>\n> Yeah, I think attaching this to the existing pg_upgrade test would be a\n> good idea. Not only would it save test run time, it would probably also\n> reduce code duplication.\n>\n\nThat's more than one vote for adding the test to 002_pg_ugprade.pl.\nSeems fine to me.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 23 Feb 2024 10:33:42 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 8:35 PM Tom Lane <[email protected]> wrote:\n>\n> Peter Eisentraut <[email protected]> writes:\n> > The problem is, we don't really have any end-to-end coverage of\n>\n> > dump\n> > restore\n> > dump again\n> > compare the two dumps\n>\n> > with a database with lots of interesting objects in it.\n>\n> I'm very much against adding another full run of the core regression\n> tests to support this.\n\nThis will be taken care of by Peter's latest idea of augmenting\nexisting 002_pg_upgrade.pl.\n\n> But beyond the problem of not bloating the\n> check-world test runtime, there is the question of what this would\n> actually buy us. I doubt that it is worth very much, because\n> it would not detect bugs-of-omission in pg_dump. As I remarked in\n> the other thread, if pg_dump is blind to the existence of some\n> feature or field, testing that the dumps compare equal will fail\n> to reveal that it didn't restore that property.\n>\n> I'm not sure what we could do about that. One could imagine writing\n> some test infrastructure that dumps out the contents of the system\n> catalogs directly, and comparing that instead of pg_dump output.\n> But that'd be a lot of infrastructure to write and maintain ...\n> and it's not real clear why it wouldn't *also* suffer from\n> I-forgot-to-add-this hazards.\n\nIf a developer forgets to add logic to dump objects that their patch\nadds, it's hard to detect it, through testing alone, in every possible\ncase. We need reviewers to take care of that. I don't think that's the\nobjective of this test case or of pg_upgrade test either.\n\n>\n> On balance, I think there are good reasons that we've not added\n> such a test, and I don't believe those tradeoffs have changed.\n>\n\nI am not aware of those reasons. Are they documented somewhere? Any\npointers to the previous discussion on this topic? Googling \"pg_dump\nregression pgsql-hackers\" returns threads about performance\nregressions.\n\nOn the flip side, the test I wrote reproduces the COMPRESSION/STORAGE\nbug you reported along with a few other bugs in that area which I will\nreport soon on that thread. I think, that shows that we need such a\ntest.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 23 Feb 2024 10:46:01 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 10:46 AM Ashutosh Bapat <\[email protected]> wrote:\n\n> On Thu, Feb 22, 2024 at 8:35 PM Tom Lane <[email protected]> wrote:\n> >\n> > Peter Eisentraut <[email protected]> writes:\n> > > The problem is, we don't really have any end-to-end coverage of\n> >\n> > > dump\n> > > restore\n> > > dump again\n> > > compare the two dumps\n> >\n> > > with a database with lots of interesting objects in it.\n> >\n> > I'm very much against adding another full run of the core regression\n> > tests to support this.\n>\n> This will be taken care of by Peter's latest idea of augmenting\n> existing 002_pg_upgrade.pl.\n>\n>\nIncorporated the test to 002_pg_ugprade.pl.\n\nSome points for discussion:\n1. The test still hardcodes the diffs between two dumps. Haven't found a\nbetter way to do it. I did consider removing the problematic objects from\nthe regression database but thought against it since we would lose some\ncoverage.\n\n2. The new code tests dump and restore of just the regression database and\ndoes not use pg_dumpall like pg_upgrade. Should it instead perform\npg_dumpall? I decided against it since a. we are interested in dumping and\nrestoring objects left behind by regression, b. I didn't find a way to\nprovide the format option to pg_dumpall. The test could be enhanced to use\ndifferent dump formats.\n\nI have added it to the next commitfest.\nhttps://commitfest.postgresql.org/48/4956/\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Fri, 26 Apr 2024 18:38:22 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Fri, Apr 26, 2024 at 06:38:22PM +0530, Ashutosh Bapat wrote:\n> Some points for discussion:\n> 1. The test still hardcodes the diffs between two dumps. Haven't found a\n> better way to do it. I did consider removing the problematic objects from\n> the regression database but thought against it since we would lose some\n> coverage.\n> \n> 2. The new code tests dump and restore of just the regression database and\n> does not use pg_dumpall like pg_upgrade. Should it instead perform\n> pg_dumpall? I decided against it since a. we are interested in dumping and\n> restoring objects left behind by regression, b. I didn't find a way to\n> provide the format option to pg_dumpall. The test could be enhanced to use\n> different dump formats.\n> \n> I have added it to the next commitfest.\n> https://commitfest.postgresql.org/48/4956/\n\nAshutosh and I have discussed this patch a bit last week. Here is a\nshort summary of my input, after I understood what is going on.\n \n+\t# We could avoid this by dumping the database loaded from original dump.\n+\t# But that would change the state of the objects as left behind by the\n+\t# regression.\n+\tmy $expected_diff = \" --\n+ CREATE TABLE public.gtestxx_4 (\n+- b integer,\n+- a integer NOT NULL\n++ a integer NOT NULL,\n++ b integer\n+ )\n[...]\n+\tmy ($stdout, $stderr) =\n+\t\trun_command([ 'diff', '-u', $dump4_file, $dump5_file]);\n+\t# Clear file names, line numbers from the diffs; those are not going to\n+\t# remain the same always. Also clear empty lines and normalize new line\n+\t# characters across platforms.\n+\t$stdout =~ s/^\\@\\@.*$//mg;\n+\t$stdout =~ s/^.*$dump4_file.*$//mg;\n+\t$stdout =~ s/^.*$dump5_file.*$//mg;\n+\t$stdout =~ s/^\\s*\\n//mg;\n+\t$stdout =~ s/\\r\\n/\\n/g;\n+\t$expected_diff =~ s/\\r\\n/\\n/g;\n+\tis($stdout, $expected_diff, 'old and new dumps match after dump and restore');\n+}\n\nI am not a fan of what this patch does, adding the knowledge related\nto the dump filtering within 002_pg_upgrade.pl. Please do not take\nme wrong, I am not against the idea of adding that within this\npg_upgrade test to save from one full cycle of `make check` to check\nthe consistency of the dump. My issue is that this logic should be\nexternalized, and it should be in fewer lines of code.\n\nFor the externalization part, Ashutosh and I considered a few ideas,\nbut one that we found tempting is to create a small .pm, say named\nAdjustDump.pm. This would share some rules with the existing\nAdjustUpgrade.pm, which would be fine IMO even if there is a small\noverlap, documenting the dependency between each module. That makes\nthe integration with the buildfarm much simpler by not creating more\ndependencies with the modules shared between core and the buildfarm\ncode. For the \"shorter\" part, one idea that I had is to apply to the\ndump a regexp that wipes out the column definitions within the\nparenthesis, keeping around the CREATE TABLE and any other attributes\nnot impacted by the reordering. All that should be documented in the\nmodule, of course.\n\nAnother thing would be to improve the backend so as we are able to\na better support for physical column ordering, which would, I assume\n(and correct me if I'm wrong!), prevent the reordering of the\nattributes like in this inheritance case. But that would not address\nthe case of dumps taken from older versions with a new version of\npg_dump, which is something that may be interesting to have more tests\nfor in the long-term. 
Overall a module sounds like a better solution.\n--\nMichael",
"msg_date": "Tue, 4 Jun 2024 07:58:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Tue, Jun 4, 2024 at 4:28 AM Michael Paquier <[email protected]> wrote:\n\n> On Fri, Apr 26, 2024 at 06:38:22PM +0530, Ashutosh Bapat wrote:\n> > Some points for discussion:\n> > 1. The test still hardcodes the diffs between two dumps. Haven't found a\n> > better way to do it. I did consider removing the problematic objects from\n> > the regression database but thought against it since we would lose some\n> > coverage.\n> >\n> > 2. The new code tests dump and restore of just the regression database\n> and\n> > does not use pg_dumpall like pg_upgrade. Should it instead perform\n> > pg_dumpall? I decided against it since a. we are interested in dumping\n> and\n> > restoring objects left behind by regression, b. I didn't find a way to\n> > provide the format option to pg_dumpall. The test could be enhanced to\n> use\n> > different dump formats.\n> >\n> > I have added it to the next commitfest.\n> > https://commitfest.postgresql.org/48/4956/\n>\n> Ashutosh and I have discussed this patch a bit last week. Here is a\n> short summary of my input, after I understood what is going on.\n>\n> + # We could avoid this by dumping the database loaded from original\n> dump.\n> + # But that would change the state of the objects as left behind by\n> the\n> + # regression.\n> + my $expected_diff = \" --\n> + CREATE TABLE public.gtestxx_4 (\n> +- b integer,\n> +- a integer NOT NULL\n> ++ a integer NOT NULL,\n> ++ b integer\n> + )\n> [...]\n> + my ($stdout, $stderr) =\n> + run_command([ 'diff', '-u', $dump4_file, $dump5_file]);\n> + # Clear file names, line numbers from the diffs; those are not\n> going to\n> + # remain the same always. Also clear empty lines and normalize new\n> line\n> + # characters across platforms.\n> + $stdout =~ s/^\\@\\@.*$//mg;\n> + $stdout =~ s/^.*$dump4_file.*$//mg;\n> + $stdout =~ s/^.*$dump5_file.*$//mg;\n> + $stdout =~ s/^\\s*\\n//mg;\n> + $stdout =~ s/\\r\\n/\\n/g;\n> + $expected_diff =~ s/\\r\\n/\\n/g;\n> + is($stdout, $expected_diff, 'old and new dumps match after dump\n> and restore');\n> +}\n>\n> I am not a fan of what this patch does, adding the knowledge related\n> to the dump filtering within 002_pg_upgrade.pl. Please do not take\n> me wrong, I am not against the idea of adding that within this\n> pg_upgrade test to save from one full cycle of `make check` to check\n> the consistency of the dump. My issue is that this logic should be\n> externalized, and it should be in fewer lines of code.\n\n\n> For the externalization part, Ashutosh and I considered a few ideas,\n> but one that we found tempting is to create a small .pm, say named\n> AdjustDump.pm. This would share some rules with the existing\n> AdjustUpgrade.pm, which would be fine IMO even if there is a small\n> overlap, documenting the dependency between each module. That makes\n> the integration with the buildfarm much simpler by not creating more\n> dependencies with the modules shared between core and the buildfarm\n> code. For the \"shorter\" part, one idea that I had is to apply to the\n> dump a regexp that wipes out the column definitions within the\n> parenthesis, keeping around the CREATE TABLE and any other attributes\n> not impacted by the reordering. All that should be documented in the\n> module, of course.\n>\n\nThanks for the suggestion. I didn't understand the dependency with the\nbuildfarm module. Will the new module be used in buildfarm separately? I\nwill work on this soon. 
Thanks for changing CF entry to WoA.\n\n\n>\n> Another thing would be to improve the backend so as we are able to\n> a better support for physical column ordering, which would, I assume\n> (and correct me if I'm wrong!), prevent the reordering of the\n> attributes like in this inheritance case. But that would not address\n> the case of dumps taken from older versions with a new version of\n> pg_dump, which is something that may be interesting to have more tests\n> for in the long-term. Overall a module sounds like a better solution.\n>\n\nChanging the physical order of column of a child table based on the\ninherited table seems intentional as per MergeAttributes(). That logic\nlooks sane by itself. In binary mode pg_dump works very hard to retain the\ncolumn order by issuing UPDATE commands against catalog tables. I don't\nthink mimicking that behaviour is the right choice for non-binary dump. I\nagree with your conclusion that we fix it in by fixing the diffs. The code\nto do that will be part of a separate module.\n\n--\nBest Wishes,\nAshutosh Bapat",
"msg_date": "Wed, 5 Jun 2024 17:09:58 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Wed, Jun 05, 2024 at 05:09:58PM +0530, Ashutosh Bapat wrote:\n> Thanks for the suggestion. I didn't understand the dependency with the\n> buildfarm module. Will the new module be used in buildfarm separately? I\n> will work on this soon. Thanks for changing CF entry to WoA.\n\nI had some doubts about PGBuild/Modules/TestUpgradeXversion.pm, but\nafter double-checking it loads dynamically AdjustUpgrade from the core\ntree based on the base path where all the modules are:\n # load helper module from source tree\n unshift(@INC, \"$srcdir/src/test/perl\");\n require PostgreSQL::Test::AdjustUpgrade;\n PostgreSQL::Test::AdjustUpgrade->import;\n shift(@INC);\n\nIt would be annoying to tweak the buildfarm code more to have a\ndifferent behavior depending on the branch of Postgres tested.\nAnyway, from what I can see, you could create a new module with the\ndump filtering rules that AdjustUpgrade requires without having to\nupdate the buildfarm code.\n\n> Changing the physical order of column of a child table based on the\n> inherited table seems intentional as per MergeAttributes(). That logic\n> looks sane by itself. In binary mode pg_dump works very hard to retain the\n> column order by issuing UPDATE commands against catalog tables. I don't\n> think mimicking that behaviour is the right choice for non-binary dump. I\n> agree with your conclusion that we fix it in by fixing the diffs. The code\n> to do that will be part of a separate module.\n\nThanks.\n--\nMichael",
"msg_date": "Thu, 6 Jun 2024 08:37:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "Sorry for delay, but here's next version of the patchset for review.\n\nOn Thu, Jun 6, 2024 at 5:07 AM Michael Paquier <[email protected]> wrote:\n\n> On Wed, Jun 05, 2024 at 05:09:58PM +0530, Ashutosh Bapat wrote:\n> > Thanks for the suggestion. I didn't understand the dependency with the\n> > buildfarm module. Will the new module be used in buildfarm separately? I\n> > will work on this soon. Thanks for changing CF entry to WoA.\n>\n> I had some doubts about PGBuild/Modules/TestUpgradeXversion.pm, but\n> after double-checking it loads dynamically AdjustUpgrade from the core\n> tree based on the base path where all the modules are:\n> # load helper module from source tree\n> unshift(@INC, \"$srcdir/src/test/perl\");\n> require PostgreSQL::Test::AdjustUpgrade;\n> PostgreSQL::Test::AdjustUpgrade->import;\n> shift(@INC);\n\n\n> It would be annoying to tweak the buildfarm code more to have a\n> different behavior depending on the branch of Postgres tested.\n> Anyway, from what I can see, you could create a new module with the\n> dump filtering rules that AdjustUpgrade requires without having to\n> update the buildfarm code.\n>\n\nThe two filtering rules that I picked from AdjustUpgrade() are a. use unix\nstyle newline b. eliminate blank lines. I think we could copy those rule\ninto the new module (as done in the patch) without creating any dependency\nbetween modules. There's little gained by creating another perl function\njust for those two sed commands. There's no way to do that otherwise. If we\nkeep those two modules independent, we will be free to change each module\nas required in future. Do we need to change buildfarm code to load the\nAdjustDump module like above? I am not familiar with the buildfarm code.\n\nHere's a description of patches and some notes\n0001\n-------\n1. Per your suggestion the logic to handle dump output differences is\nexternalized in PostgreSQL::Test::AdjustDump. Instead of eliminating those\ndifferences altogether from both the dump outputs, the corresponding DDL in\nthe original dump output is adjusted to look like that from the restored\ndatabase. Thus we retain full knowledge of what differences to expect.\n2. I have changed the name filter_dump to filter_dump_for_upgrade so as to\ndifferentiate between two adjustments 1. for upgrade and 2. for\ndump/restore. Ideally the name should have been adjust_dump_for_ugprade() .\nIt's more of an adjustment than filtering as indicated by the function it\ncalls. But I haven't changed that. The new function to adjust dumps for\ndump and restore tests is named adjust_dump_for_restore() however.\n3. As suggested by Daniel upthread, the test for dump and restore happens\nbefore upgrade which might change the old cluster thus changing the state\nof objects left behind by regression. The test is not executed if\nregression is not used to create the old cluster.\n4. The code to compare two dumps and report differences if any is moved to\nits own function compare_dumps() which is used for both upgrade and\ndump/restore tests.\nThe test uses the custom dump format for dumping and restoring the database.\n\n0002\n------\nThis commit expands the previous test to test all dump formats. But as\nexpected that increases the time taken by this test. On my laptop 0001\ntakes approx 28 seconds to run the test and with 0002 it takes approx 35\nseconds. But there's not much impact on the duration of running all the\ntests (2m30s vs 2m40s). 
The code which creates the DDL statements in the\ndump is independent of the dump format. So usually we shouldn't require to\ntest all the formats in this test. But each format stores the dependencies\nbetween dumped objects in a different manner which would be tested with the\nchanges in this patch. I think this patch is also useful. If we decide to\nkeep this test, the patch is intended to be merged into 0001.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Fri, 28 Jun 2024 18:00:07 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Fri, Jun 28, 2024 at 06:00:07PM +0530, Ashutosh Bapat wrote:\n> Here's a description of patches and some notes\n> 0001\n> -------\n> 1. Per your suggestion the logic to handle dump output differences is\n> externalized in PostgreSQL::Test::AdjustDump. Instead of eliminating those\n> differences altogether from both the dump outputs, the corresponding DDL in\n> the original dump output is adjusted to look like that from the restored\n> database. Thus we retain full knowledge of what differences to expect.\n> 2. I have changed the name filter_dump to filter_dump_for_upgrade so as to\n> differentiate between two adjustments 1. for upgrade and 2. for\n> dump/restore. Ideally the name should have been adjust_dump_for_ugprade() .\n> It's more of an adjustment than filtering as indicated by the function it\n> calls. But I haven't changed that. The new function to adjust dumps for\n> dump and restore tests is named adjust_dump_for_restore() however.\n> 3. As suggested by Daniel upthread, the test for dump and restore happens\n> before upgrade which might change the old cluster thus changing the state\n> of objects left behind by regression. The test is not executed if\n> regression is not used to create the old cluster.\n> 4. The code to compare two dumps and report differences if any is moved to\n> its own function compare_dumps() which is used for both upgrade and\n> dump/restore tests.\n> The test uses the custom dump format for dumping and restoring the\n> database.\n\nAt quick glance, that seems to be going in the right direction. Note\nthat you have forgotten install and uninstall rules for the new .pm\nfile.\n\n0002 increases more the runtime of a test that's already one of the\nlongest ones in the tree is not really appealing, I am afraid.\n--\nMichael",
"msg_date": "Fri, 5 Jul 2024 14:29:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Fri, Jul 5, 2024 at 10:59 AM Michael Paquier <[email protected]> wrote:\n\n> On Fri, Jun 28, 2024 at 06:00:07PM +0530, Ashutosh Bapat wrote:\n> > Here's a description of patches and some notes\n> > 0001\n> > -------\n> > 1. Per your suggestion the logic to handle dump output differences is\n> > externalized in PostgreSQL::Test::AdjustDump. Instead of eliminating\n> those\n> > differences altogether from both the dump outputs, the corresponding DDL\n> in\n> > the original dump output is adjusted to look like that from the restored\n> > database. Thus we retain full knowledge of what differences to expect.\n> > 2. I have changed the name filter_dump to filter_dump_for_upgrade so as\n> to\n> > differentiate between two adjustments 1. for upgrade and 2. for\n> > dump/restore. Ideally the name should have been\n> adjust_dump_for_ugprade() .\n> > It's more of an adjustment than filtering as indicated by the function it\n> > calls. But I haven't changed that. The new function to adjust dumps for\n> > dump and restore tests is named adjust_dump_for_restore() however.\n> > 3. As suggested by Daniel upthread, the test for dump and restore happens\n> > before upgrade which might change the old cluster thus changing the state\n> > of objects left behind by regression. The test is not executed if\n> > regression is not used to create the old cluster.\n> > 4. The code to compare two dumps and report differences if any is moved\n> to\n> > its own function compare_dumps() which is used for both upgrade and\n> > dump/restore tests.\n> > The test uses the custom dump format for dumping and restoring the\n> > database.\n>\n> At quick glance, that seems to be going in the right direction. Note\n> that you have forgotten install and uninstall rules for the new .pm\n> file.\n>\n\nBefore submitting the patch, I looked for all the places which mention\nAdjustUpgrade or AdjustUpgrade.pm to find places where the new module needs\nto be mentioned. But I didn't find any. AdjustUpgrade is not mentioned\nin src/test/perl/Makefile or src/test/perl/meson.build. Do we want to also\nadd AdjustUpgrade.pm in those files?\n\n\n>\n> 0002 increases more the runtime of a test that's already one of the\n> longest ones in the tree is not really appealing, I am afraid.\n>\n\nWe could forget 0002. I am fine with that. But I can change the code such\nthat formats other than \"plain\" are tested when PG_TEST_EXTRAS contains\n\"regress_dump_formats\". Would that be acceptable?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, Jul 5, 2024 at 10:59 AM Michael Paquier <[email protected]> wrote:On Fri, Jun 28, 2024 at 06:00:07PM +0530, Ashutosh Bapat wrote:\n> Here's a description of patches and some notes\n> 0001\n> -------\n> 1. Per your suggestion the logic to handle dump output differences is\n> externalized in PostgreSQL::Test::AdjustDump. Instead of eliminating those\n> differences altogether from both the dump outputs, the corresponding DDL in\n> the original dump output is adjusted to look like that from the restored\n> database. Thus we retain full knowledge of what differences to expect.\n> 2. I have changed the name filter_dump to filter_dump_for_upgrade so as to\n> differentiate between two adjustments 1. for upgrade and 2. for\n> dump/restore. Ideally the name should have been adjust_dump_for_ugprade() .\n> It's more of an adjustment than filtering as indicated by the function it\n> calls. But I haven't changed that. 
The new function to adjust dumps for\n> dump and restore tests is named adjust_dump_for_restore() however.\n> 3. As suggested by Daniel upthread, the test for dump and restore happens\n> before upgrade which might change the old cluster thus changing the state\n> of objects left behind by regression. The test is not executed if\n> regression is not used to create the old cluster.\n> 4. The code to compare two dumps and report differences if any is moved to\n> its own function compare_dumps() which is used for both upgrade and\n> dump/restore tests.\n> The test uses the custom dump format for dumping and restoring the\n> database.\n\nAt quick glance, that seems to be going in the right direction. Note\nthat you have forgotten install and uninstall rules for the new .pm\nfile.Before submitting the patch, I looked for all the places which mention AdjustUpgrade or AdjustUpgrade.pm to find places where the new module needs to be mentioned. But I didn't find any. AdjustUpgrade is not mentioned in src/test/perl/Makefile or src/test/perl/meson.build. Do we want to also add AdjustUpgrade.pm in those files? \n\n0002 increases more the runtime of a test that's already one of the\nlongest ones in the tree is not really appealing, I am afraid.We could forget 0002. I am fine with that. But I can change the code such that formats other than \"plain\" are tested when PG_TEST_EXTRAS contains \"regress_dump_formats\". Would that be acceptable?-- Best Wishes,Ashutosh Bapat",
"msg_date": "Mon, 8 Jul 2024 15:59:30 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Mon, Jul 08, 2024 at 03:59:30PM +0530, Ashutosh Bapat wrote:\n> Before submitting the patch, I looked for all the places which mention\n> AdjustUpgrade or AdjustUpgrade.pm to find places where the new module needs\n> to be mentioned. But I didn't find any. AdjustUpgrade is not mentioned\n> in src/test/perl/Makefile or src/test/perl/meson.build. Do we want to also\n> add AdjustUpgrade.pm in those files?\n\nGood question. This has not been mentioned on the thread that added\nthe module:\nhttps://www.postgresql.org/message-id/891521.1673657296%40sss.pgh.pa.us\n\nAnd I could see it as being useful if installed. The same applies to\nKerberos.pm, actually. I'll ping that on a new thread.\n\n> We could forget 0002. I am fine with that. But I can change the code such\n> that formats other than \"plain\" are tested when PG_TEST_EXTRAS contains\n> \"regress_dump_formats\". Would that be acceptable?\n\nInteresting idea. That may be acceptable, under the same arguments as\nthe xid_wraparound one.\n--\nMichael",
"msg_date": "Tue, 9 Jul 2024 16:36:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Tue, Jul 9, 2024 at 1:07 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Jul 08, 2024 at 03:59:30PM +0530, Ashutosh Bapat wrote:\n> > Before submitting the patch, I looked for all the places which mention\n> > AdjustUpgrade or AdjustUpgrade.pm to find places where the new module needs\n> > to be mentioned. But I didn't find any. AdjustUpgrade is not mentioned\n> > in src/test/perl/Makefile or src/test/perl/meson.build. Do we want to also\n> > add AdjustUpgrade.pm in those files?\n>\n> Good question. This has not been mentioned on the thread that added\n> the module:\n> https://www.postgresql.org/message-id/891521.1673657296%40sss.pgh.pa.us\n>\n> And I could see it as being useful if installed. The same applies to\n> Kerberos.pm, actually. I'll ping that on a new thread.\n\nFor now, it may be better to maintain status-quo. If we see a need to\nuse these modules in future by say extensions or tests outside core\ntree, we will add them to meson and make files.\n\n>\n> > We could forget 0002. I am fine with that. But I can change the code such\n> > that formats other than \"plain\" are tested when PG_TEST_EXTRAS contains\n> > \"regress_dump_formats\". Would that be acceptable?\n>\n> Interesting idea. That may be acceptable, under the same arguments as\n> the xid_wraparound one.\n\nDone. Added a new entry in PG_TEST_EXTRA documentation.\n\nI have merged the two patches now.\n\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Fri, 12 Jul 2024 10:42:35 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
},
{
"msg_contents": "On Fri, Jul 12, 2024 at 10:42 AM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> I have merged the two patches now.\n>\n\n894be11adfa60ad1ce5f74534cf5f04e66d51c30 changed the schema in which\nobjects in test genereated_stored.sql are created. Because of this the\nnew test added by the patch was failing. Fixed the failure in the\nattached.\n\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 9 Sep 2024 15:43:58 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test to dump and restore objects left behind by regression"
}
] |
[
{
"msg_contents": "Usage of designated initializers came up in:\nhttps://www.postgresql.org/message-id/flat/ZdWXhAt9Tz4d-lut%40paquier.xyz#9dc17e604e58569ad35643672bf74acc\n\nThis converts all arrays that I could find that could clearly benefit\nfrom this without any other code changes being necessary.\n\nThere were a few arrays that I didn't convert that seemed like they\ncould be useful to convert, but where the variables started counting\nat 1. So by converting them elements the array would grow and elements\nwould be shifted by one. Changing those might be nice, but would\nrequire some more code changes so I didn't want to combine it with\nthese simpler refactors. The arrays I'm talking about were\nspecifically tsearch_op_priority, BT_implies_table, BT_refutes_table,\nand BT_implic_table.",
"msg_date": "Wed, 21 Feb 2024 16:03:31 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improve readability by using designated initializers when possible"
},
{
"msg_contents": "On Wed, 2024-02-21 at 16:03 +0100, Jelte Fennema-Nio wrote:\n> Usage of designated initializers came up in:\n> https://www.postgresql.org/message-id/flat/ZdWXhAt9Tz4d-lut%40paquier.xyz#9dc17e604e58569ad35643672bf74acc\n> \n> This converts all arrays that I could find that could clearly benefit\n> from this without any other code changes being necessary.\n\nLooking at the object_classes array and the ObjectClass enum, I don't\nquite understand the point. It seems like a way to write OCLASS_OPCLASS\ninstead of OperatorClassRelationId, and similar?\n\nAm I missing something?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 22 Feb 2024 14:46:51 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Thu, Feb 22, 2024, 23:46 Jeff Davis <[email protected]> wrote:\n\n>\n> Am I missing something?\n\n\nThe main benefits it has are:\n1. The order of the array doesn't have to exactly match the order of the\nenum for the arrays to contain the correct mapping.\n2. Typos in the enum variant names are caught by the compiler because\nactual symbols are used, not comments.\n3. The left-to-right order reads more natural imho for such key-value\npairs, e.g. OCLASS_PROC maps to ProcedureRelationId.\n\nOn Thu, Feb 22, 2024, 23:46 Jeff Davis <[email protected]> wrote:\n\nAm I missing something?The main benefits it has are:1. The order of the array doesn't have to exactly match the order of the enum for the arrays to contain the correct mapping. 2. Typos in the enum variant names are caught by the compiler because actual symbols are used, not comments. 3. The left-to-right order reads more natural imho for such key-value pairs, e.g. OCLASS_PROC maps to ProcedureRelationId.",
"msg_date": "Fri, 23 Feb 2024 01:35:36 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Fri, 2024-02-23 at 01:35 +0100, Jelte Fennema-Nio wrote:\n> On Thu, Feb 22, 2024, 23:46 Jeff Davis <[email protected]> wrote:\n> > \n> > Am I missing something?\n> \n> The main benefits it has are:\n\nSorry, I was unclear. I was asking a question about the reason the\nObjectClass and the object_classes[] array exist in the current code,\nit wasn't a direct question about your patch.\n\nObjectClass is only used in a couple places, and I don't immediately\nsee why those places can't just use the OID of the class (like \nOperatorClassRelationId).\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 22 Feb 2024 17:57:14 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Fri, 23 Feb 2024 at 02:57, Jeff Davis <[email protected]> wrote:\n> Sorry, I was unclear. I was asking a question about the reason the\n> ObjectClass and the object_classes[] array exist in the current code,\n> it wasn't a direct question about your patch.\n\nI did a bit of git spelunking and the reason seems to be that back in\n2002 when this was introduced not all relation ids were compile time\nconstants and thus an array was initialized once at bootup. I totally\nagree with you that these days there's no reason for the array. So I\nnow added a second patch that removes this array, instead of updating\nit to use the designated initializer syntax.",
"msg_date": "Fri, 23 Feb 2024 10:59:53 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "Hi. minor issues.\n\n@@ -2063,12 +2009,12 @@ find_expr_references_walker(Node *node,\n CoerceViaIO *iocoerce = (CoerceViaIO *) node;\n\n /* since there is no exposed function, need to depend on type */\n- add_object_address(OCLASS_TYPE, iocoerce->resulttype, 0,\n+ add_object_address(TypeRelationId iocoerce->resulttype, 0,\n context->addrs);\n\n@@ -2090,21 +2036,21 @@ find_expr_references_walker(Node *node,\n ConvertRowtypeExpr *cvt = (ConvertRowtypeExpr *) node;\n\n /* since there is no function dependency, need to depend on type */\n- add_object_address(OCLASS_TYPE, cvt->resulttype, 0,\n+ add_object_address(TypeRelationId cvt->resulttype, 0,\n context->addrs);\n\nobvious typo errors.\n\ndiff --git a/src/common/relpath.c b/src/common/relpath.c\nindex b16fe19dea6..d9214f915c9 100644\n--- a/src/common/relpath.c\n+++ b/src/common/relpath.c\n@@ -31,10 +31,10 @@\n * pg_relation_size().\n */\n const char *const forkNames[] = {\n- \"main\", /* MAIN_FORKNUM */\n- \"fsm\", /* FSM_FORKNUM */\n- \"vm\", /* VISIBILITYMAP_FORKNUM */\n- \"init\" /* INIT_FORKNUM */\n+ [MAIN_FORKNUM] = \"main\",\n+ [FSM_FORKNUM] = \"fsm\",\n+ [VISIBILITYMAP_FORKNUM] = \"vm\",\n+ [INIT_FORKNUM] = \"init\",\n };\n\n`+ [INIT_FORKNUM] = \"init\", ` no need for an extra comma?\n\n+ [PG_SJIS] = {0, 0, pg_sjis_mblen, pg_sjis_dsplen,\npg_sjis_verifychar, pg_sjis_verifystr, 2},\n+ [PG_BIG5] = {0, 0, pg_big5_mblen, pg_big5_dsplen,\npg_big5_verifychar, pg_big5_verifystr, 2},\n+ [PG_GBK] = {0, 0, pg_gbk_mblen, pg_gbk_dsplen, pg_gbk_verifychar,\npg_gbk_verifystr, 2},\n+ [PG_UHC] = {0, 0, pg_uhc_mblen, pg_uhc_dsplen, pg_uhc_verifychar,\npg_uhc_verifystr, 2},\n+ [PG_GB18030] = {0, 0, pg_gb18030_mblen, pg_gb18030_dsplen,\npg_gb18030_verifychar, pg_gb18030_verifystr, 4},\n+ [PG_JOHAB] = {0, 0, pg_johab_mblen, pg_johab_dsplen,\npg_johab_verifychar, pg_johab_verifystr, 3},\n+ [PG_SHIFT_JIS_2004] = {0, 0, pg_sjis_mblen, pg_sjis_dsplen,\npg_sjis_verifychar, pg_sjis_verifystr, 2},\n };\nsimilarly, last entry, no need an extra comma?\nalso other places last array entry no need extra comma.\n\n\n",
"msg_date": "Mon, 26 Feb 2024 16:41:17 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "\nOn Mon, 26 Feb 2024 at 16:41, jian he <[email protected]> wrote:\n> Hi. minor issues.\n>\n> @@ -2063,12 +2009,12 @@ find_expr_references_walker(Node *node,\n> CoerceViaIO *iocoerce = (CoerceViaIO *) node;\n>\n> /* since there is no exposed function, need to depend on type */\n> - add_object_address(OCLASS_TYPE, iocoerce->resulttype, 0,\n> + add_object_address(TypeRelationId iocoerce->resulttype, 0,\n> context->addrs);\n>\n> @@ -2090,21 +2036,21 @@ find_expr_references_walker(Node *node,\n> ConvertRowtypeExpr *cvt = (ConvertRowtypeExpr *) node;\n>\n> /* since there is no function dependency, need to depend on type */\n> - add_object_address(OCLASS_TYPE, cvt->resulttype, 0,\n> + add_object_address(TypeRelationId cvt->resulttype, 0,\n> context->addrs);\n>\n> obvious typo errors.\n>\n> diff --git a/src/common/relpath.c b/src/common/relpath.c\n> index b16fe19dea6..d9214f915c9 100644\n> --- a/src/common/relpath.c\n> +++ b/src/common/relpath.c\n> @@ -31,10 +31,10 @@\n> * pg_relation_size().\n> */\n> const char *const forkNames[] = {\n> - \"main\", /* MAIN_FORKNUM */\n> - \"fsm\", /* FSM_FORKNUM */\n> - \"vm\", /* VISIBILITYMAP_FORKNUM */\n> - \"init\" /* INIT_FORKNUM */\n> + [MAIN_FORKNUM] = \"main\",\n> + [FSM_FORKNUM] = \"fsm\",\n> + [VISIBILITYMAP_FORKNUM] = \"vm\",\n> + [INIT_FORKNUM] = \"init\",\n> };\n>\n> `+ [INIT_FORKNUM] = \"init\", ` no need for an extra comma?\n>\n> + [PG_SJIS] = {0, 0, pg_sjis_mblen, pg_sjis_dsplen,\n> pg_sjis_verifychar, pg_sjis_verifystr, 2},\n> + [PG_BIG5] = {0, 0, pg_big5_mblen, pg_big5_dsplen,\n> pg_big5_verifychar, pg_big5_verifystr, 2},\n> + [PG_GBK] = {0, 0, pg_gbk_mblen, pg_gbk_dsplen, pg_gbk_verifychar,\n> pg_gbk_verifystr, 2},\n> + [PG_UHC] = {0, 0, pg_uhc_mblen, pg_uhc_dsplen, pg_uhc_verifychar,\n> pg_uhc_verifystr, 2},\n> + [PG_GB18030] = {0, 0, pg_gb18030_mblen, pg_gb18030_dsplen,\n> pg_gb18030_verifychar, pg_gb18030_verifystr, 4},\n> + [PG_JOHAB] = {0, 0, pg_johab_mblen, pg_johab_dsplen,\n> pg_johab_verifychar, pg_johab_verifystr, 3},\n> + [PG_SHIFT_JIS_2004] = {0, 0, pg_sjis_mblen, pg_sjis_dsplen,\n> pg_sjis_verifychar, pg_sjis_verifystr, 2},\n> };\n> similarly, last entry, no need an extra comma?\n> also other places last array entry no need extra comma.\n\nFor last entry comma, see [1].\n\n[1] https://www.postgresql.org/message-id/386f8c45-c8ac-4681-8add-e3b0852c1620%40eisentraut.org\n\n\n",
"msg_date": "Mon, 26 Feb 2024 17:00:13 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 05:00:13PM +0800, Japin Li wrote:\n> On Mon, 26 Feb 2024 at 16:41, jian he <[email protected]> wrote:\n>> similarly, last entry, no need an extra comma?\n>> also other places last array entry no need extra comma.\n> \n> For last entry comma, see [1].\n> \n> [1] https://www.postgresql.org/message-id/386f8c45-c8ac-4681-8add-e3b0852c1620%40eisentraut.org\n\nAnd also see commit 611806cd726f. This makes the diffs more elegant\nto the eye when adding new elements at the end of these arrays.\n--\nMichael",
"msg_date": "Tue, 27 Feb 2024 14:20:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 05:00:13PM +0800, Japin Li wrote:\n> On Mon, 26 Feb 2024 at 16:41, jian he <[email protected]> wrote:\n>> obvious typo errors.\n\nThese would cause compilation failures. Saying that, this is a very\nnice cleanup, so I've fixed these and applied the patch after checking\nthat the one-one replacements were correct.\n\nAbout 0002, I can't help but notice pg_enc2icu_tbl and\npg_enc2gettext_tb. There are exceptions like MULE_INTERNAL, but is it\npossible to do better?\n--\nMichael",
"msg_date": "Tue, 27 Feb 2024 15:25:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On 2024-Feb-27, Michael Paquier wrote:\n\n> These would cause compilation failures. Saying that, this is a very\n> nice cleanup, so I've fixed these and applied the patch after checking\n> that the one-one replacements were correct.\n\nOh, I thought we were going to get rid of ObjectClass altogether -- I\nmean, have getObjectClass() return ObjectAddress->classId, and then\ndefine the OCLASS values for each catalog OID [... tries to ...] But\nthis(*) doesn't work for two reasons:\n\n1. some switches processing the OCLASS enum don't have \"default:\" cases.\nThis is so that the compiler tells us when we fail to add support for\nsome new object class (and it's been helpful). If we were to \n\n2. all users of getObjectClass would have to include the catalog header\nspecific to every catalog it wants to handle; so tablecmds.c and\ndependency.c would have to include almost all catalog includes, for\nexample.\n\nWhat this says to me is that ObjectClass is/was a somewhat useful\nabstraction layer on top of catalog definitions. I'm now not 100% that\npoking this hole in the abstraction (by expanding use of catalog OIDs at\nthe expense of ObjectClass) was such a great idea. Maybe we want to\nmake ObjectClass *more* useful/encompassing rather than the opposite.\n\n\n(*) I mean\n\nOid\ngetObjectClass(const ObjectAddress *object)\n{\n /* only pg_class entries can have nonzero objectSubId */\n if (object->classId != RelationRelationId &&\n object->objectSubId != 0)\n elog(ERROR, \"invalid non-zero objectSubId for object class %u\",\n object->classId);\n\n return object->classId;\n}\n\nplus\n\n#define OCLASS_CLASS RelationRelationId\n#define OCLASS_PROC ProcedureRelationId\n#define OCLASS_TYPE TypeRelationId\n\netc.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n",
"msg_date": "Tue, 27 Feb 2024 08:57:31 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Tue, 27 Feb 2024 at 07:25, Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Feb 26, 2024 at 05:00:13PM +0800, Japin Li wrote:\n> > On Mon, 26 Feb 2024 at 16:41, jian he <[email protected]> wrote:\n> >> obvious typo errors.\n>\n> These would cause compilation failures. Saying that, this is a very\n> nice cleanup, so I've fixed these and applied the patch after checking\n> that the one-one replacements were correct.\n\nSorry about those search/replace mistakes. Not sure how that happened.\nThanks for committing :)\n\n\n",
"msg_date": "Tue, 27 Feb 2024 12:07:58 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Tue, 27 Feb 2024 at 07:25, Michael Paquier <[email protected]> wrote:\n> About 0002, I can't help but notice pg_enc2icu_tbl and\n> pg_enc2gettext_tb. There are exceptions like MULE_INTERNAL, but is it\n> possible to do better?\n\nAttached is an updated patchset to also convert pg_enc2icu_tbl and\npg_enc2gettext_tbl. I converted pg_enc2gettext_tbl in a separate\ncommit, because it actually requires some codechanges too.",
"msg_date": "Tue, 27 Feb 2024 12:52:22 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Tue, 27 Feb 2024 at 12:52, Jelte Fennema-Nio <[email protected]> wrote:\n> Attached is an updated patchset to also convert pg_enc2icu_tbl and\n> pg_enc2gettext_tbl. I converted pg_enc2gettext_tbl in a separate\n> commit, because it actually requires some codechanges too.\n\nAnother small update to also make all arrays changed by this patch\nhave a trailing comma (to avoid future diff noise).",
"msg_date": "Tue, 27 Feb 2024 12:55:34 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Tue, 27 Feb 2024 at 08:57, Alvaro Herrera <[email protected]> wrote:\n> What this says to me is that ObjectClass is/was a somewhat useful\n> abstraction layer on top of catalog definitions. I'm now not 100% that\n> poking this hole in the abstraction (by expanding use of catalog OIDs at\n> the expense of ObjectClass) was such a great idea. Maybe we want to\n> make ObjectClass *more* useful/encompassing rather than the opposite.\n\nI agree that ObjectClass has some benefits over using the table OIDs,\nbut both the benefits you mention don't apply to add_object_address.\nSo I don't think using ObjectClass for was worth the extra effort to\nmaintain the\nobject_classes array, just for add_object_address.\n\nOne improvement that I think could be worth considering is to link\nObjectClass and the table OIDs more explicitly, by actually making\ntheir values the same:\nenum ObjectClass {\n OCLASS_PGCLASS = RelationRelationId,\n OCLASS_PGPROC = ProcedureRelationId,\n ...\n}\n\nBut that would effectively mean that anyone including dependency.h\nwould also be including all catalog headers. I'm not sure if that's\nconsidered problematic or not. If that is problematic then it would\nalso be possible to reverse the relationship and have each catalog\nheader include dependency.h (or some other header that we move\nObjectClass to), and go about it in the following way:\n\n/* dependency.h */\nenum ObjectClass {\n OCLASS_PGCLASS = 1259,\n OCLASS_PGPROC = 1255,\n ...\n}\n\n/* pg_class.h */\nCATALOG(pg_class,OCLASS_PGCLASS,RelationRelationId) BKI_BOOTSTRAP\nBKI_ROWTYPE_OID(83,RelationRelation_Rowtype_Id) BKI_SCHEMA_MACRO\n\n\n",
"msg_date": "Tue, 27 Feb 2024 13:29:57 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "\nOn Tue, 27 Feb 2024 at 19:55, Jelte Fennema-Nio <[email protected]> wrote:\n> On Tue, 27 Feb 2024 at 12:52, Jelte Fennema-Nio <[email protected]> wrote:\n>> Attached is an updated patchset to also convert pg_enc2icu_tbl and\n>> pg_enc2gettext_tbl. I converted pg_enc2gettext_tbl in a separate\n>> commit, because it actually requires some codechanges too.\n>\n> Another small update to also make all arrays changed by this patch\n> have a trailing comma (to avoid future diff noise).\n\nI see the config_group_names[] needs null-terminated because of help_config,\nhowever, I didn't find the reference in help_config.c. Is this comment\noutdated? Here is a patch to remove the null-terminated.\n\ndiff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c\nindex 59904fd007..df849f73fc 100644\n--- a/src/backend/utils/misc/guc_tables.c\n+++ b/src/backend/utils/misc/guc_tables.c\n@@ -715,11 +715,9 @@ const char *const config_group_names[] =\n \t[PRESET_OPTIONS] = gettext_noop(\"Preset Options\"),\n \t[CUSTOM_OPTIONS] = gettext_noop(\"Customized Options\"),\n \t[DEVELOPER_OPTIONS] = gettext_noop(\"Developer Options\"),\n-\t/* help_config wants this array to be null-terminated */\n-\tNULL\n };\n\n-StaticAssertDecl(lengthof(config_group_names) == (DEVELOPER_OPTIONS + 2),\n+StaticAssertDecl(lengthof(config_group_names) == (DEVELOPER_OPTIONS + 1),\n \t\t\t\t \"array length mismatch\");\n\n /*\n\n\n",
"msg_date": "Tue, 27 Feb 2024 23:04:34 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Tue, 27 Feb 2024 at 16:04, Japin Li <[email protected]> wrote:\n> I see the config_group_names[] needs null-terminated because of help_config,\n> however, I didn't find the reference in help_config.c. Is this comment\n> outdated?\n\nYeah, you're correct. That comment has been outdated for more than 20\nyears. The commit that made it unnecessary to null-terminate the array\nwas 9d77708d83ee.\n\nAttached is v5 of the patchset that also includes this change (with\nyou listed as author).",
"msg_date": "Tue, 27 Feb 2024 17:06:47 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "\nOn Wed, 28 Feb 2024 at 00:06, Jelte Fennema-Nio <[email protected]> wrote:\n> On Tue, 27 Feb 2024 at 16:04, Japin Li <[email protected]> wrote:\n>> I see the config_group_names[] needs null-terminated because of help_config,\n>> however, I didn't find the reference in help_config.c. Is this comment\n>> outdated?\n>\n> Yeah, you're correct. That comment has been outdated for more than 20\n> years. The commit that made it unnecessary to null-terminate the array\n> was 9d77708d83ee.\n>\n> Attached is v5 of the patchset that also includes this change (with\n> you listed as author).\n\nThanks for updating the patch!\n\nIt looks good to me except there is an outdated comment.\n\ndiff --git a/src/common/encnames.c b/src/common/encnames.c\nindex bd012fe3a0..dba6bd2c9e 100644\n--- a/src/common/encnames.c\n+++ b/src/common/encnames.c\n@@ -297,7 +297,6 @@ static const pg_encname pg_encname_tbl[] =\n\n /* ----------\n * These are \"official\" encoding names.\n- * XXX must be sorted by the same order as enum pg_enc (in mb/pg_wchar.h)\n * ----------\n */\n #ifndef WIN32\n\n\n",
"msg_date": "Wed, 28 Feb 2024 09:41:42 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 09:41:42AM +0800, Japin Li wrote:\n> On Wed, 28 Feb 2024 at 00:06, Jelte Fennema-Nio <[email protected]> wrote:\n>> Attached is v5 of the patchset that also includes this change (with\n>> you listed as author).\n> \n> Thanks for updating the patch!\n\nCool. I have applied 0004 and most of 0002. Attached is what\nremains, where I am wondering if it would be cleaner to do these bits\ntogether (did not look at the whole, yet).\n\n> It looks good to me except there is an outdated comment.\n\nYep, I've updated that in the attached for now.\n--\nMichael",
"msg_date": "Wed, 28 Feb 2024 12:59:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Wed, 28 Feb 2024 at 04:59, Michael Paquier <[email protected]> wrote:\n> Cool. I have applied 0004 and most of 0002. Attached is what\n> remains, where I am wondering if it would be cleaner to do these bits\n> together (did not look at the whole, yet).\n\nFeel free to squash them if you prefer that. I think now that patch\n0002 only includes encoding changes, I feel 50-50 about grouping them\ntogether. I mainly kept them separate, because 0002 were simple search\n+ replaces and 0003 required code changes. That's still the case, but\nnow I can also see the argument for grouping them together since that\nwould clean up all the encoding arrays in one single commit (without a\nton of other arrays also being modified).\n\n\n",
"msg_date": "Wed, 28 Feb 2024 05:37:22 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 05:37:22AM +0100, Jelte Fennema-Nio wrote:\n> On Wed, 28 Feb 2024 at 04:59, Michael Paquier <[email protected]> wrote:\n>> Cool. I have applied 0004 and most of 0002. Attached is what\n>> remains, where I am wondering if it would be cleaner to do these bits\n>> together (did not look at the whole, yet).\n> \n> Feel free to squash them if you prefer that. I think now that patch\n> 0002 only includes encoding changes, I feel 50-50 about grouping them\n> together. I mainly kept them separate, because 0002 were simple search\n> + replaces and 0003 required code changes. That's still the case, but\n> now I can also see the argument for grouping them together since that\n> would clean up all the encoding arrays in one single commit (without a\n> ton of other arrays also being modified).\n\nI have doubts about the changes in raw_pg_bind_textdomain_codeset(),\nas the encoding could come from the value in the pg_database tuples\nthemselves. The current coding is slightly safer from the perspective\nof bogus input values as we would loop over pg_enc2gettext_tbl looking\nfor a match. 0003 changes that so as we could point to incorrect\nmemory areas rather than fail safely for the NULL check.\n\nThat's not something that shows up as a problem for all the other\nstructures that have been changed afd8ef39094b or ef5e2e90859a.\nThat's not an issue for pg_enc2name_tbl, pg_enc2icu_tbl and\npg_wchar_table either thanks to PG_VALID(_{BE,FE})_ENCODING()\nthat offer protection with the index values used for the table\nlookups.\n\n- * WARNING: the order of this enum must be same as order of entries\n- * in the pg_enc2name_tbl[] array (in src/common/encnames.c), and\n- * in the pg_wchar_table[] array (in src/common/wchar.c)!\n- *\n- * If you add some encoding don't forget to check\n+ * WARNING: If you add some encoding don't forget to check\n * PG_ENCODING_BE_LAST macro.\n\nMentioning the updates to pg_enc2name_tbl[] and pg_wchar_table[] is\nstill important, IMO, because new encoding values added to the central\nenum would cause the lookups of the tables to fail while passing the\nPG_VALID checks, so updating them is mandatory and could be missed.\nI've tweaked the comment to mention both of them; the order does not\nmatter anymore. Applied 0002 with these adjustments.\n--\nMichael",
"msg_date": "Thu, 29 Feb 2024 09:56:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Thu, 29 Feb 2024 at 01:57, Michael Paquier <[email protected]> wrote:\n> I have doubts about the changes in raw_pg_bind_textdomain_codeset(),\n> as the encoding could come from the value in the pg_database tuples\n> themselves. The current coding is slightly safer from the perspective\n> of bogus input values as we would loop over pg_enc2gettext_tbl looking\n> for a match. 0003 changes that so as we could point to incorrect\n> memory areas rather than fail safely for the NULL check.\n\nThat's fair. Attached is a patch that adds a PG_VALID_ENCODING check\nto raw_pg_bind_textdomain_codeset to solve this regression.",
"msg_date": "Thu, 29 Feb 2024 04:01:47 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On 27.02.24 08:57, Alvaro Herrera wrote:\n> On 2024-Feb-27, Michael Paquier wrote:\n> \n>> These would cause compilation failures. Saying that, this is a very\n>> nice cleanup, so I've fixed these and applied the patch after checking\n>> that the one-one replacements were correct.\n> \n> Oh, I thought we were going to get rid of ObjectClass altogether -- I\n> mean, have getObjectClass() return ObjectAddress->classId, and then\n> define the OCLASS values for each catalog OID [... tries to ...] But\n> this(*) doesn't work for two reasons:\n\nI have long wondered what the point of ObjectClass is. I find the extra \nlayer of redirection, which is used only in small parts of the code, and \nthe similarity to ObjectType confusing. I happened to have a draft \npatch for its removal lying around, so I'll show it here, rebased over \nwhat has already been done in this thread.\n\n> 1. some switches processing the OCLASS enum don't have \"default:\" cases.\n> This is so that the compiler tells us when we fail to add support for\n> some new object class (and it's been helpful). If we were to\n\nI think you can also handle that with some assertions and proper test \ncoverage. It's not even clear how strong this benefit is. For example, \nin AlterObjectNamespace_oid(), you could still put a new OCLASS into the \n\"ignore object types that don't have schema-qualified names\" case, and \nit might or might not be wrong. Also, there are already various OCLASS \nswitches that do have a default case, so it's not even clear what the \npreferred coding style should be.\n\n> 2. all users of getObjectClass would have to include the catalog header\n> specific to every catalog it wants to handle; so tablecmds.c and\n> dependency.c would have to include almost all catalog includes, for\n> example.\n\nThis doesn't seem to be a problem in practice; see patch.",
"msg_date": "Thu, 29 Feb 2024 12:41:38 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
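A shape sketch of the dispatch-on-classId pattern discussed in the message above. This is not code from the patch in this thread: the function name is invented, only three catalogs are shown, and the erroring default case stands in for the old "no default:" compiler warning as the safety net for newly added catalogs.

#include "postgres.h"

#include "catalog/objectaddress.h"
#include "catalog/pg_class.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_type.h"

/* Hypothetical example, not part of the proposed patch. */
static const char *
sketch_object_kind(const ObjectAddress *object)
{
	switch (object->classId)
	{
		case RelationRelationId:
			return "relation";
		case ProcedureRelationId:
			return "function";
		case TypeRelationId:
			return "type";
		default:
			/* fail loudly so a newly added catalog cannot be missed */
			elog(ERROR, "unsupported object class: %u", object->classId);
	}

	return NULL;				/* keep compiler quiet */
}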
{
"msg_contents": "On Thu, Feb 29, 2024 at 12:41:38PM +0100, Peter Eisentraut wrote:\n> On 27.02.24 08:57, Alvaro Herrera wrote:\n>> On 2024-Feb-27, Michael Paquier wrote:\n>>> These would cause compilation failures. Saying that, this is a very\n>>> nice cleanup, so I've fixed these and applied the patch after checking\n>>> that the one-one replacements were correct.\n>> \n>> Oh, I thought we were going to get rid of ObjectClass altogether -- I\n>> mean, have getObjectClass() return ObjectAddress->classId, and then\n>> define the OCLASS values for each catalog OID [... tries to ...] But\n>> this(*) doesn't work for two reasons:\n> \n> I have long wondered what the point of ObjectClass is. I find the extra\n> layer of redirection, which is used only in small parts of the code, and the\n> similarity to ObjectType confusing. I happened to have a draft patch for\n> its removal lying around, so I'll show it here, rebased over what has\n> already been done in this thread.\n\nThe elimination of getObjectClass() seems like a good end goal IMO, so\nthe direction of the patch is interesting. Would object_type_map and\nObjectProperty follow the same idea of relying on the catalogs OID\ninstead of the OBJECT_*?\n\nNote that there are still two dependencies to getObjectClass() in\nevent_trigger.c and dependency.c.\n--\nMichael",
"msg_date": "Fri, 1 Mar 2024 13:08:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 04:01:47AM +0100, Jelte Fennema-Nio wrote:\n> On Thu, 29 Feb 2024 at 01:57, Michael Paquier <[email protected]> wrote:\n>> I have doubts about the changes in raw_pg_bind_textdomain_codeset(),\n>> as the encoding could come from the value in the pg_database tuples\n>> themselves. The current coding is slightly safer from the perspective\n>> of bogus input values as we would loop over pg_enc2gettext_tbl looking\n>> for a match. 0003 changes that so as we could point to incorrect\n>> memory areas rather than fail safely for the NULL check.\n> \n> That's fair. Attached is a patch that adds a PG_VALID_ENCODING check\n> to raw_pg_bind_textdomain_codeset to solve this regression.\n\n- for (i = 0; pg_enc2gettext_tbl[i].name != NULL; i++)\n+ if (!PG_VALID_ENCODING(encoding) || pg_enc2gettext_tbl[encoding] == NULL) { \n\nShouldn't PG_MULE_INTERNAL point to NULL in pg_enc2gettext_tbl[]?\nThat just seems safer to me, and more consistent because its values\nsatisfies PG_VALID_ENCODING().\n--\nMichael",
"msg_date": "Fri, 1 Mar 2024 13:12:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Fri, 1 Mar 2024 at 05:12, Michael Paquier <[email protected]> wrote:\n> Shouldn't PG_MULE_INTERNAL point to NULL in pg_enc2gettext_tbl[]?\n> That just seems safer to me, and more consistent because its values\n> satisfies PG_VALID_ENCODING().\n\nSafety wise it doesn't matter, because gaps in a designated\ninitializer array will be initialized with 0/NULL. But I agree it's\nmore consistent, so see attached.",
"msg_date": "Fri, 1 Mar 2024 05:34:05 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
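A small standalone C sketch of the two points just made: direct table lookup guarded by a validity check (instead of a linear scan), and gaps in a designated-initializer array defaulting to NULL. The DEMO_* names are invented; this is not the PostgreSQL source itself.

#include <stdio.h>

typedef enum DemoEncoding
{
	DEMO_SQL_ASCII = 0,
	DEMO_UTF8,
	DEMO_LATIN1,
	_DEMO_LAST_ENCODING			/* mark end of known encodings */
} DemoEncoding;

#define DEMO_VALID_ENCODING(e)	((e) >= 0 && (e) < _DEMO_LAST_ENCODING)

/* C99 designated initializers: entries no longer need to follow enum order */
static const char *const demo_codeset_tbl[_DEMO_LAST_ENCODING] = {
	[DEMO_UTF8] = "UTF-8",
	[DEMO_LATIN1] = "LATIN1",
	/* DEMO_SQL_ASCII left out on purpose; the gap is initialized to NULL */
};

static const char *
demo_get_codeset(int encoding)
{
	/* range check first, then a NULL check for encodings without a codeset */
	if (!DEMO_VALID_ENCODING(encoding) || demo_codeset_tbl[encoding] == NULL)
		return NULL;
	return demo_codeset_tbl[encoding];
}

int
main(void)
{
	const char *cs = demo_get_codeset(42);	/* out of range: fails safely */

	printf("%s\n", demo_get_codeset(DEMO_UTF8));	/* prints UTF-8 */
	printf("%s\n", cs ? cs : "no codeset");			/* prints no codeset */
	return 0;
}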
{
"msg_contents": "On Fri, Mar 01, 2024 at 05:34:05AM +0100, Jelte Fennema-Nio wrote:\n\n> diff --git a/src/include/mb/pg_wchar.h b/src/include/mb/pg_wchar.h\n> index fd91aefbcb7..32e25a1a6ea 100644\n> --- a/src/include/mb/pg_wchar.h\n> +++ b/src/include/mb/pg_wchar.h\n> @@ -225,7 +225,8 @@ typedef unsigned int pg_wchar;\n> * PostgreSQL encoding identifiers\n> *\n> * WARNING: If you add some encoding don't forget to update\n> - *\t\t\tthe pg_enc2name_tbl[] array (in src/common/encnames.c) and\n> + *\t\t\tthe pg_enc2name_tbl[] array (in src/common/encnames.c),\n> + *\t\t\tthe pg_enc2gettext_tbl[] array (in src/common/encnames.c) and\n> *\t\t\tthe pg_wchar_table[] array (in src/common/wchar.c) and to check\n> *\t\t\tPG_ENCODING_BE_LAST macro.\n\nMostly OK to me. Just note that this comment is incorrect because\npg_enc2gettext_tbl[] includes elements in the range\n[PG_SJIS,_PG_LAST_ENCODING_[ ;)\n--\nMichael",
"msg_date": "Fri, 1 Mar 2024 14:08:40 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Fri, 1 Mar 2024 at 06:08, Michael Paquier <[email protected]> wrote:\n> Mostly OK to me. Just note that this comment is incorrect because\n> pg_enc2gettext_tbl[] includes elements in the range\n> [PG_SJIS,_PG_LAST_ENCODING_[ ;)\n\nfixed",
"msg_date": "Fri, 1 Mar 2024 06:30:10 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Fri, Mar 1, 2024 at 12:08 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Feb 29, 2024 at 12:41:38PM +0100, Peter Eisentraut wrote:\n> > On 27.02.24 08:57, Alvaro Herrera wrote:\n> >> On 2024-Feb-27, Michael Paquier wrote:\n> >>> These would cause compilation failures. Saying that, this is a very\n> >>> nice cleanup, so I've fixed these and applied the patch after checking\n> >>> that the one-one replacements were correct.\n> >>\n> >> Oh, I thought we were going to get rid of ObjectClass altogether -- I\n> >> mean, have getObjectClass() return ObjectAddress->classId, and then\n> >> define the OCLASS values for each catalog OID [... tries to ...] But\n> >> this(*) doesn't work for two reasons:\n> >\n> > I have long wondered what the point of ObjectClass is. I find the extra\n> > layer of redirection, which is used only in small parts of the code, and the\n> > similarity to ObjectType confusing. I happened to have a draft patch for\n> > its removal lying around, so I'll show it here, rebased over what has\n> > already been done in this thread.\n>\n> The elimination of getObjectClass() seems like a good end goal IMO, so\n> the direction of the patch is interesting. Would object_type_map and\n> ObjectProperty follow the same idea of relying on the catalogs OID\n> instead of the OBJECT_*?\n>\n> Note that there are still two dependencies to getObjectClass() in\n> event_trigger.c and dependency.c.\n> --\n\nI refactored dependency.c, event_trigger.c based on\n0001-Remove-ObjectClass.patch.\ndependency.c already includes a bunch of catalog header files, but\nevent_trigger.c doesn't.\nNow we need to \"include\" around 30 header files in event_trigger.c,\nnot sure if it's ok or not.\n\n0001-Remove-ObjectClass.patch\nWe also need to refactor getObjectIdentityParts's below comments?\n/*\n* There's intentionally no default: case here; we want the\n* compiler to warn if a new OCLASS hasn't been handled above.\n*/\nsince OCLASS is removed.\n\n`bool EventTriggerSupportsObjectClass(ObjectClass objclass)`\nchange to\n`bool EventTriggerSupportsObjectClass(Oid classId)`\n\nI think the function name should also be refactored.\nI'm not sure of the new function name, so I didn't change.",
"msg_date": "Fri, 1 Mar 2024 15:03:37 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On 01.03.24 05:08, Michael Paquier wrote:\n> On Thu, Feb 29, 2024 at 12:41:38PM +0100, Peter Eisentraut wrote:\n>> On 27.02.24 08:57, Alvaro Herrera wrote:\n>>> On 2024-Feb-27, Michael Paquier wrote:\n>>>> These would cause compilation failures. Saying that, this is a very\n>>>> nice cleanup, so I've fixed these and applied the patch after checking\n>>>> that the one-one replacements were correct.\n>>>\n>>> Oh, I thought we were going to get rid of ObjectClass altogether -- I\n>>> mean, have getObjectClass() return ObjectAddress->classId, and then\n>>> define the OCLASS values for each catalog OID [... tries to ...] But\n>>> this(*) doesn't work for two reasons:\n>>\n>> I have long wondered what the point of ObjectClass is. I find the extra\n>> layer of redirection, which is used only in small parts of the code, and the\n>> similarity to ObjectType confusing. I happened to have a draft patch for\n>> its removal lying around, so I'll show it here, rebased over what has\n>> already been done in this thread.\n> \n> The elimination of getObjectClass() seems like a good end goal IMO, so\n> the direction of the patch is interesting. Would object_type_map and\n> ObjectProperty follow the same idea of relying on the catalogs OID\n> instead of the OBJECT_*?\n> \n> Note that there are still two dependencies to getObjectClass() in\n> event_trigger.c and dependency.c.\n\nOops, there was a second commit in my branch that I neglected to send \nin. Here is my complete patch set.",
"msg_date": "Fri, 1 Mar 2024 10:26:45 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Fri, Mar 01, 2024 at 06:30:10AM +0100, Jelte Fennema-Nio wrote:\n> On Fri, 1 Mar 2024 at 06:08, Michael Paquier <[email protected]> wrote:\n>> Mostly OK to me. Just note that this comment is incorrect because\n>> pg_enc2gettext_tbl[] includes elements in the range\n>> [PG_SJIS,_PG_LAST_ENCODING_[ ;)\n> \n> fixed\n\n(Forgot to update this thread.)\nThanks, applied this one. I went over a few versions of the comment\nin pg_wchar.h, and tweaked it to something that was of one of the\nprevious versions, I think. \n--\nMichael",
"msg_date": "Mon, 4 Mar 2024 08:46:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Fri, Mar 1, 2024 at 5:26 PM Peter Eisentraut <[email protected]> wrote:\n>\n> Oops, there was a second commit in my branch that I neglected to send\n> in. Here is my complete patch set.\n\nthere is a `OCLASS` at the end of getObjectIdentityParts.\nthere is a `ObjectClass` in typedefs.list\n\n\n",
"msg_date": "Mon, 4 Mar 2024 09:29:03 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Mon, 04 Mar 2024 at 07:46, Michael Paquier <[email protected]> wrote:\n> On Fri, Mar 01, 2024 at 06:30:10AM +0100, Jelte Fennema-Nio wrote:\n>> On Fri, 1 Mar 2024 at 06:08, Michael Paquier <[email protected]> wrote:\n>>> Mostly OK to me. Just note that this comment is incorrect because\n>>> pg_enc2gettext_tbl[] includes elements in the range\n>>> [PG_SJIS,_PG_LAST_ENCODING_[ ;)\n>>\n>> fixed\n>\n> (Forgot to update this thread.)\n> Thanks, applied this one. I went over a few versions of the comment\n> in pg_wchar.h, and tweaked it to something that was of one of the\n> previous versions, I think.\n\nHi,\n\nAttach a patch to rewrite dispatch_table array using C99-designated\ninitializer syntax.",
"msg_date": "Tue, 05 Mar 2024 21:50:05 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Tue, 5 Mar 2024 at 14:50, Japin Li <[email protected]> wrote:\n> Attach a patch to rewrite dispatch_table array using C99-designated\n> initializer syntax.\n\nLooks good. Two small things:\n\n+ [EEOP_LAST] = &&CASE_EEOP_LAST,\n\nIs EEOP_LAST actually needed in this array? It seems unused afaict. If\nindeed not needed, that would be good to remove in an additional\ncommit.\n\n- *\n- * The order of entries needs to be kept in sync with the dispatch_table[]\n- * array in execExprInterp.c:ExecInterpExpr().\n\nI think it would be good to at least keep the comment saying that this\narray should be updated (only the order doesn't need to be strictly\nkept in sync anymore).\n\n\n",
"msg_date": "Tue, 5 Mar 2024 15:03:54 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
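For readers unfamiliar with the pattern being reviewed here: execExprInterp.c's dispatch_table relies on GCC/Clang's "labels as values" extension, now combined with designated initializers. The toy interpreter below only illustrates that general technique; the opcode names are invented and it is not the PostgreSQL code.

#include <stdio.h>

typedef enum DemoOp
{
	DEMO_OP_CONST,
	DEMO_OP_ADD,
	DEMO_OP_DONE,
	DEMO_OP_LAST				/* not a real opcode */
} DemoOp;

static int
run(const DemoOp *program, const int *args)
{
	/* labels-as-values: &&label yields the address of a local label */
	static const void *const dispatch_table[] = {
		[DEMO_OP_CONST] = &&CASE_CONST,
		[DEMO_OP_ADD] = &&CASE_ADD,
		[DEMO_OP_DONE] = &&CASE_DONE,
	};

#define DISPATCH()	goto *dispatch_table[program[pc]]

	int			acc = 0;
	int			pc = 0;

	DISPATCH();

CASE_CONST:
	acc = args[pc];
	pc++;
	DISPATCH();

CASE_ADD:
	acc += args[pc];
	pc++;
	DISPATCH();

CASE_DONE:
	return acc;
}

int
main(void)
{
	DemoOp		program[] = {DEMO_OP_CONST, DEMO_OP_ADD, DEMO_OP_DONE};
	int			args[] = {40, 2, 0};

	printf("%d\n", run(program, args));	/* prints 42 */
	return 0;
}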
{
"msg_contents": "On Tue, 05 Mar 2024 at 22:03, Jelte Fennema-Nio <[email protected]> wrote:\n> On Tue, 5 Mar 2024 at 14:50, Japin Li <[email protected]> wrote:\n>> Attach a patch to rewrite dispatch_table array using C99-designated\n>> initializer syntax.\n>\n> Looks good. Two small things:\n\nThanks for the review.\n\n>\n> + [EEOP_LAST] = &&CASE_EEOP_LAST,\n>\n> Is EEOP_LAST actually needed in this array? It seems unused afaict. If\n> indeed not needed, that would be good to remove in an additional\n> commit.\n\nThere is a warning if remove it, so I keep it.\n\n/home/japin/Codes/postgres/build/../src/backend/executor/execExprInterp.c:118:33: warning: label ‘CASE_EEOP_LAST’ defined but not used [-Wunused-label]\n 118 | #define EEO_CASE(name) CASE_##name:\n | ^~~~~\n/home/japin/Codes/postgres/build/../src/backend/executor/execExprInterp.c:1845:17: note: in expansion of macro ‘EEO_CASE’\n 1845 | EEO_CASE(EEOP_LAST)\n | ^~~~~~~~\n\n>\n> - *\n> - * The order of entries needs to be kept in sync with the dispatch_table[]\n> - * array in execExprInterp.c:ExecInterpExpr().\n>\n> I think it would be good to at least keep the comment saying that this\n> array should be updated (only the order doesn't need to be strictly\n> kept in sync anymore).\n\nFixed.",
"msg_date": "Tue, 05 Mar 2024 22:30:34 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Tue, 5 Mar 2024 at 15:30, Japin Li <[email protected]> wrote:\n> There is a warning if remove it, so I keep it.\n>\n> /home/japin/Codes/postgres/build/../src/backend/executor/execExprInterp.c:118:33: warning: label ‘CASE_EEOP_LAST’ defined but not used [-Wunused-label]\n> 118 | #define EEO_CASE(name) CASE_##name:\n> | ^~~~~\n> /home/japin/Codes/postgres/build/../src/backend/executor/execExprInterp.c:1845:17: note: in expansion of macro ‘EEO_CASE’\n> 1845 | EEO_CASE(EEOP_LAST)\n> | ^~~~~~~~\n\nI think if you remove the EEO_CASE(EEOP_LAST) block the warning should\ngo away. That block is clearly marked as unreachable, so it doesn't\nreally serve a purpose.\n\n\n",
"msg_date": "Tue, 5 Mar 2024 18:53:16 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Wed, 06 Mar 2024 at 01:53, Jelte Fennema-Nio <[email protected]> wrote:\n> On Tue, 5 Mar 2024 at 15:30, Japin Li <[email protected]> wrote:\n>> There is a warning if remove it, so I keep it.\n>>\n>> /home/japin/Codes/postgres/build/../src/backend/executor/execExprInterp.c:118:33: warning: label ‘CASE_EEOP_LAST’ defined but not used [-Wunused-label]\n>> 118 | #define EEO_CASE(name) CASE_##name:\n>> | ^~~~~\n>> /home/japin/Codes/postgres/build/../src/backend/executor/execExprInterp.c:1845:17: note: in expansion of macro ‘EEO_CASE’\n>> 1845 | EEO_CASE(EEOP_LAST)\n>> | ^~~~~~~~\n>\n> I think if you remove the EEO_CASE(EEOP_LAST) block the warning should\n> go away. That block is clearly marked as unreachable, so it doesn't\n> really serve a purpose.\n\nThanks! Fixed as you suggested. Please see v3 patch.",
"msg_date": "Wed, 06 Mar 2024 08:24:09 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Wed, Mar 06, 2024 at 08:24:09AM +0800, Japin Li wrote:\n> On Wed, 06 Mar 2024 at 01:53, Jelte Fennema-Nio <[email protected]> wrote:\n>> I think if you remove the EEO_CASE(EEOP_LAST) block the warning should\n>> go away. That block is clearly marked as unreachable, so it doesn't\n>> really serve a purpose.\n> \n> Thanks! Fixed as you suggested. Please see v3 patch.\n\nHmm. I am not sure if this one is a good idea. This makes the code a\nbit more complicated to grasp under EEO_USE_COMPUTED_GOTO with the\nreverse dispatch table, to say the least.\n--\nMichael",
"msg_date": "Fri, 8 Mar 2024 14:21:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Mon, Mar 04, 2024 at 09:29:03AM +0800, jian he wrote:\n> On Fri, Mar 1, 2024 at 5:26 PM Peter Eisentraut <[email protected]> wrote:\n>> Oops, there was a second commit in my branch that I neglected to send\n>> in. Here is my complete patch set.\n\nThanks for the new patch set. The gains are neat, giving nice\nnumbers:\n 7 files changed, 200 insertions(+), 644 deletions(-)\n\n+ default:\n+ DropObjectById(object);\n+ break;\n\nHmm. I am not sure that this is a good idea. Wouldn't it be safer to\nuse as default path something that generates an ERROR so as this code\npath would complain immediately when adding a new catalog? My point\nis to make people consider what they should do on deletion when adding\na catalog that would take this code path, rather than assuming that a\ndeletion is OK to happen. So I would recommend to keep the list of\ncatalog OIDs for the DropObjectById case, keep the list for global\nobjects, and add a third path with a new ERROR.\n\n- /*\n- * There's intentionally no default: case here; we want the\n- * compiler to warn if a new OCLASS hasn't been handled above.\n- */\n\nIn getObjectDescription() and getObjectTypeDescription() this was a\nsafeguard, but we don't have that anymore. So it seems to me that\nthis should be replaced with a default with elog(ERROR)?\n\nThere is a third one in getObjectIdentityParts() that has not been\nremoved, though, but same remark at the two others.\n\nRememberAllDependentForRebuilding() uses a default, so this one looks\ngood to me.\n\n> there is a `OCLASS` at the end of getObjectIdentityParts.\n\nNice catch. A comment is not updated.\n\n> There is a `ObjectClass` in typedefs.list\n\nThis is usually taken care of by committers or updated automatically.\n--\nMichael",
"msg_date": "Fri, 8 Mar 2024 14:50:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "\nOn Fri, 08 Mar 2024 at 13:21, Michael Paquier <[email protected]> wrote:\n> On Wed, Mar 06, 2024 at 08:24:09AM +0800, Japin Li wrote:\n>> On Wed, 06 Mar 2024 at 01:53, Jelte Fennema-Nio <[email protected]> wrote:\n>>> I think if you remove the EEO_CASE(EEOP_LAST) block the warning should\n>>> go away. That block is clearly marked as unreachable, so it doesn't\n>>> really serve a purpose.\n>>\n>> Thanks! Fixed as you suggested. Please see v3 patch.\n>\n> Hmm. I am not sure if this one is a good idea.\n\nSorry for the late reply!\n\n> This makes the code a\n> bit more complicated to grasp under EEO_USE_COMPUTED_GOTO with the\n> reverse dispatch table, to say the least.\n\nI'm not get your mind. Could you explain in more detail?\n\n\n",
"msg_date": "Tue, 12 Mar 2024 09:28:32 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On 08.03.24 06:50, Michael Paquier wrote:\n> On Mon, Mar 04, 2024 at 09:29:03AM +0800, jian he wrote:\n>> On Fri, Mar 1, 2024 at 5:26 PM Peter Eisentraut <[email protected]> wrote:\n>>> Oops, there was a second commit in my branch that I neglected to send\n>>> in. Here is my complete patch set.\n> \n> Thanks for the new patch set. The gains are neat, giving nice\n> numbers:\n> 7 files changed, 200 insertions(+), 644 deletions(-)\n> \n> + default:\n> + DropObjectById(object);\n> + break;\n> \n> Hmm. I am not sure that this is a good idea. Wouldn't it be safer to\n> use as default path something that generates an ERROR so as this code\n> path would complain immediately when adding a new catalog?\n\nfixed in new patch\n\n> In getObjectDescription() and getObjectTypeDescription() this was a\n> safeguard, but we don't have that anymore. So it seems to me that\n> this should be replaced with a default with elog(ERROR)?\n\nfixed\n\n>> there is a `OCLASS` at the end of getObjectIdentityParts.\n> \n> Nice catch. A comment is not updated.\n> \n>> There is a `ObjectClass` in typedefs.list\n> \n> This is usually taken care of by committers or updated automatically.\n\nboth fixed",
"msg_date": "Wed, 13 Mar 2024 14:24:32 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 02:24:32PM +0100, Peter Eisentraut wrote:\n> On 08.03.24 06:50, Michael Paquier wrote:\n>> This is usually taken care of by committers or updated automatically.\n> \n> both fixed\n\nLooks mostly fine, thanks for the new version.\n\n-EventTriggerSupportsObjectClass(ObjectClass objclass)\n+EventTriggerSupportsObject(const ObjectAddress *object) \n\nThe shortcut introduced here is interesting, but it is inconsistent.\nHEAD treats OCLASS_SUBSCRIPTION as something supported by event\ntriggers, but as pg_subscription is a shared catalog it would be\ndiscarded with your change. Subscriptions are marked as supported in\nthe event trigger table:\nhttps://www.postgresql.org/docs/devel/event-trigger-matrix.html\n--\nMichael",
"msg_date": "Thu, 14 Mar 2024 09:26:25 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On 14.03.24 01:26, Michael Paquier wrote:\n> -EventTriggerSupportsObjectClass(ObjectClass objclass)\n> +EventTriggerSupportsObject(const ObjectAddress *object)\n> \n> The shortcut introduced here is interesting, but it is inconsistent.\n> HEAD treats OCLASS_SUBSCRIPTION as something supported by event\n> triggers, but as pg_subscription is a shared catalog it would be\n> discarded with your change. Subscriptions are marked as supported in\n> the event trigger table:\n> https://www.postgresql.org/docs/devel/event-trigger-matrix.html\n\nAh, good catch. Subscriptions are a little special there. Here is a \nnew patch that keeps the switch/case arrangement in that function. That \nalso makes it easier to keep the two EventTriggerSupports... functions \naligned. Also added a note about subscriptions and a reference to the \ndocumentation.",
"msg_date": "Mon, 18 Mar 2024 08:09:14 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
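A rough sketch of the consideration raised here (shared catalogs vs. the pg_subscription exception). This is not the committed function, which keeps an explicit switch over classId; the function name is invented and the logic is only an approximation.

#include "postgres.h"

#include "catalog/catalog.h"
#include "catalog/objectaddress.h"
#include "catalog/pg_subscription.h"

/* Hypothetical approximation, not the patch from this thread. */
static bool
sketch_event_trigger_supports_object(const ObjectAddress *object)
{
	/* pg_subscription is shared, but subscriptions are still supported */
	if (object->classId == SubscriptionRelationId)
		return true;

	/* other shared catalogs (roles, databases, tablespaces, ...) are not */
	if (IsSharedRelation(object->classId))
		return false;

	return true;
}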
{
"msg_contents": "On Mon, Mar 18, 2024 at 3:09 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 14.03.24 01:26, Michael Paquier wrote:\n> > -EventTriggerSupportsObjectClass(ObjectClass objclass)\n> > +EventTriggerSupportsObject(const ObjectAddress *object)\n> >\n> > The shortcut introduced here is interesting, but it is inconsistent.\n> > HEAD treats OCLASS_SUBSCRIPTION as something supported by event\n> > triggers, but as pg_subscription is a shared catalog it would be\n> > discarded with your change. Subscriptions are marked as supported in\n> > the event trigger table:\n> > https://www.postgresql.org/docs/devel/event-trigger-matrix.html\n>\n> Ah, good catch. Subscriptions are a little special there. Here is a\n> new patch that keeps the switch/case arrangement in that function. That\n> also makes it easier to keep the two EventTriggerSupports... functions\n> aligned. Also added a note about subscriptions and a reference to the\n> documentation.\n\nselect relname from pg_class where relisshared and relkind = 'r';\n relname\n-----------------------\n pg_authid\n pg_subscription\n pg_database\n pg_db_role_setting\n pg_tablespace\n pg_auth_members\n pg_shdepend\n pg_shdescription\n pg_replication_origin\n pg_shseclabel\n pg_parameter_acl\n(11 rows)\n\nEventTriggerSupportsObject should return false for the following:\nSharedSecLabelRelationId\nSharedDescriptionRelationId\nDbRoleSettingRelationId\nSharedDependRelationId\n\nbut I am not sure ReplicationOriginRelationId.\n\n\n",
"msg_date": "Mon, 18 Mar 2024 18:01:20 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 6:01 PM jian he <[email protected]> wrote:\n>\n> On Mon, Mar 18, 2024 at 3:09 PM Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 14.03.24 01:26, Michael Paquier wrote:\n> > > -EventTriggerSupportsObjectClass(ObjectClass objclass)\n> > > +EventTriggerSupportsObject(const ObjectAddress *object)\n> > >\n> > > The shortcut introduced here is interesting, but it is inconsistent.\n> > > HEAD treats OCLASS_SUBSCRIPTION as something supported by event\n> > > triggers, but as pg_subscription is a shared catalog it would be\n> > > discarded with your change. Subscriptions are marked as supported in\n> > > the event trigger table:\n> > > https://www.postgresql.org/docs/devel/event-trigger-matrix.html\n> >\n> > Ah, good catch. Subscriptions are a little special there. Here is a\n> > new patch that keeps the switch/case arrangement in that function. That\n> > also makes it easier to keep the two EventTriggerSupports... functions\n> > aligned. Also added a note about subscriptions and a reference to the\n> > documentation.\n>\n> select relname from pg_class where relisshared and relkind = 'r';\n> relname\n> -----------------------\n> pg_authid\n> pg_subscription\n> pg_database\n> pg_db_role_setting\n> pg_tablespace\n> pg_auth_members\n> pg_shdepend\n> pg_shdescription\n> pg_replication_origin\n> pg_shseclabel\n> pg_parameter_acl\n> (11 rows)\n>\n\nalso in function doDeletion\nwe have:\n\n/*\n* These global object types are not supported here.\n*/\ncase AuthIdRelationId:\ncase DatabaseRelationId:\ncase TableSpaceRelationId:\ncase SubscriptionRelationId:\ncase ParameterAclRelationId:\nelog(ERROR, \"global objects cannot be deleted by doDeletion\");\nbreak;\n\ndo we need to add other global objects?\n\nin the end, it does not matter since we have:\ndefault:\nelog(ERROR, \"unsupported object class: %u\", object->classId);\n\n\n",
"msg_date": "Mon, 18 Mar 2024 20:58:19 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On 18.03.24 11:01, jian he wrote:\n> select relname from pg_class where relisshared and relkind = 'r';\n> relname\n> -----------------------\n> pg_authid\n> pg_subscription\n> pg_database\n> pg_db_role_setting\n> pg_tablespace\n> pg_auth_members\n> pg_shdepend\n> pg_shdescription\n> pg_replication_origin\n> pg_shseclabel\n> pg_parameter_acl\n> (11 rows)\n> \n> EventTriggerSupportsObject should return false for the following:\n> SharedSecLabelRelationId\n> SharedDescriptionRelationId\n> DbRoleSettingRelationId\n> SharedDependRelationId\n> \n> but I am not sure ReplicationOriginRelationId.\n\nEventTriggerSupportsObject() (currently named \nEventTriggerSupportsObjectClass()) is only used by the deletion code, \nand these additional classes are not supported there anyway. Also, if \nthey happen to show up there for some reason, then \nEventTriggerSQLDropAddObject() would error out in \ngetObjectIdentityParts() or getObjectTypeDescription(). So you wouldn't \nget an event trigger firing on a previously unsupported class by \naccident. So I think this is robust enough.\n\n\n",
"msg_date": "Wed, 20 Mar 2024 15:08:39 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "looking through v4 again.\nv4 looks good to me.\n\n\n",
"msg_date": "Mon, 25 Mar 2024 13:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
},
{
"msg_contents": "On 25.03.24 06:00, jian he wrote:\n> looking through v4 again.\n> v4 looks good to me.\n\nThanks, I have committed this.\n\n\n\n",
"msg_date": "Tue, 26 Mar 2024 11:15:22 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve readability by using designated initializers when\n possible"
}
] |
[
{
"msg_contents": "Hello PostgreSQL Community,\n\nExcited to share how Apache AGE enhances PostgreSQL with smooth graph\nfeatures! Handles complex data, and supports SQL and Cypher. Join our\nawesome community, check tutorials, and let's dive into those data projects!\n\nMore info.: Apache AGE GitHub <https://github.com/apache/age> & Website\n<https://youtu.be/0-qMwpDh0CA>\n\nRegards,\nNandhini Jayakumar\n\nHello PostgreSQL Community,Excited to share how Apache AGE enhances PostgreSQL with smooth graph features! Handles complex data, and supports SQL and Cypher. Join our awesome community, check tutorials, and let's dive into those data projects!More info.: Apache AGE GitHub & Website Regards,Nandhini Jayakumar",
"msg_date": "Wed, 21 Feb 2024 14:41:57 -0800",
"msg_from": "Nandhini Jayakumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Discover PostgreSQL's Graph Power with Apache AGE!"
}
] |
[
{
"msg_contents": "Thanks your reply.\r\n\r\n\r\n I understand what you mean and have tried to correct this patch.\r\n According to the previous use case, the result obtained is as follows:\r\n\r\n\r\n\r\nid | name | year | xmax | xmin | ctid \r\n----+----------+------+------+------+-------\r\n 1 | liuwei | 20 | 0 | 859 | (0,1)\r\n 2 | zhangbin | 30 | 866 | 866 | (0,7)\r\n 3 | fuguo | 44 | 866 | 866 | (0,8)\r\n 4 | yihe | 33 | 0 | 865 | (0,6)\r\n 4 | yihe | 33 | 0 | 866 | (0,9)\r\n(5 rows)\r\n\r\n\r\n At present, the behavior of the number of rows for ‘id’ 2 and 3 appears to be normal, but there is duplicate data in the data for ‘id’ 4. \r\n According to what you said, this is a normal manifestation of transaction isolation level. \r\n\r\n But there are still differences between the results and those of Oracle(no duplicate data 'id' 4). \r\n\r\n\r\n\r\n After that I have tried several scenarios in Oracle and PG:\r\n 1、session1: insert, session2:merge into; duplicate data may also occur (pg and oracle consistent).\r\n 2、session1: update + insert ,session2: merge into; there will be no duplicate data in oracle ,pg has duplicate data.\r\n \r\n\r\n It looks like there is an exclusive lock between the update statement and merge statement in oracle. After submitting both update and insert, merge will proceed with locking and execution. \r\n (Of course, this is just my guess.)\r\n \r\n However, it seems that both PG and Oracle have no obvious issues, and their respective designs are reasonable.\r\n\r\n\r\n\r\n If I want to get the same results as Oracle, do I need to adjust the lock behavior of the update and merge statements?\r\n If I want to achieve the same results as Oracle, can I achieve exclusive locking by adjusting update and merge? Do you have any suggestions?\r\n\r\n\r\n\r\nRegards,\r\nwenjiang zhang\r\n\r\n\r\n\r\n------------------ 原始邮件 ------------------\r\n发件人: \"Dean Rasheed\" <[email protected]>;\r\n发送时间: 2024年2月22日(星期四) 凌晨1:00\r\n收件人: \"zwj\"<[email protected]>;\r\n抄送: \"pgsql-hackers\"<[email protected]>;\r\n主题: Re: bug report: some issues about pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)\r\n\r\n\r\n\r\nOn Tue, 20 Feb 2024 at 14:49, Dean Rasheed <[email protected]> wrote:\r\n>\r\n> On the face of it, the simplest fix is to tweak is_simple_union_all()\r\n> to prevent UNION ALL subquery pullup for MERGE, forcing a\r\n> subquery-scan plan. A quick test shows that that fixes the reported\r\n> issue.\r\n>\r\n> However, that leaves the question of whether we should do the same for\r\n> UPDATE and DELETE.\r\n>\r\n\r\nAttached is a patch that prevents UNION ALL subquery pullup in MERGE only.\r\n\r\nI've re-used and extended the isolation test cases added by\r\n1d5caec221, since it's clear that replacing the plain source relation\r\nin those tests with a UNION ALL subquery that returns the same results\r\nshould produce the same end result. 
(Without this patch, the UNION ALL\r\nsubquery is pulled up, EPQ rechecking fails to re-find the match, and\r\na WHEN NOT MATCHED THEN INSERT action is executed instead, resulting\r\nin a primary key violation.)\r\n\r\nIt's still not quite clear whether preventing UNION ALL subquery\r\npullup should also apply to UPDATE and DELETE, but I wasn't able to\r\nfind any live bug there, so I think they're best left alone.\r\n\r\nThis fixes the reported issue, though it's worth noting that\r\nconcurrent WHEN NOT MATCHED THEN INSERT actions will still lead to\r\nduplicate rows being inserted, which is a limitation that is already\r\ndocumented [1].\r\n\r\n[1] https://www.postgresql.org/docs/current/transaction-iso.html\r\n\r\nRegards,\r\nDean",
"msg_date": "Thu, 22 Feb 2024 11:45:15 +0800",
"msg_from": "\"=?gb18030?B?endq?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?gb18030?B?u9i4tKO6IGJ1ZyByZXBvcnQ6IHNvbWUgaXNzdWVz?=\n =?gb18030?B?IGFib3V0IHBnXzE1X3N0YWJsZSg4ZmE0YTFhYzYx?=\n =?gb18030?B?MTg5ZWZmZmI4Yjg1MWVlNzdlMWJjODczNjBjNDQ1?=\n =?gb18030?B?KQ==?="
},
{
"msg_contents": "On Thu, 22 Feb 2024 at 03:46, zwj <[email protected]> wrote:\n>\n> If I want to get the same results as Oracle, do I need to adjust the lock behavior of the update and merge statements?\n> If I want to achieve the same results as Oracle, can I achieve exclusive locking by adjusting update and merge? Do you have any suggestions?\n>\n\nI think that trying to get the same results in Oracle and Postgres may\nnot always be possible. Each has their own (probably quite different)\nimplementation of these features, that simply may not be compatible.\n\nIn Postgres, MERGE aims to make UPDATE and DELETE actions behave in\nthe same way as standalone UPDATE and DELETE commands under concurrent\nmodifications. However, it does not attempt to prevent INSERT actions\nfrom inserting duplicates.\n\nIn that context, the UNION ALL issue is a clear bug, and I'll aim to\nget that patch committed and back-patched sometime in the next few\ndays, if there are no objections from other hackers.\n\nHowever, the issue with INSERT actions inserting duplicates is a\ndesign choice, rather than something that we regard as a bug. It's\npossible that a future version of Postgres might improve MERGE,\nproviding some way round that issue, but there's no guarantee of that\never happening. Similarly, it sounds like Oracle also sometimes allows\nduplicates, as well as having other \"bugs\" like the one discussed in\n[1], that may be difficult for them to fix within their\nimplementation.\n\nIn Postgres, if the target table is subject to concurrent inserts (or\nprimary key updates), it might be better to use INSERT ... ON CONFLICT\nDO UPDATE [2] instead of MERGE. That would avoid inserting duplicates\n(though I can't say how compatible that is with anything in Oracle).\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/CAEZATCV_6t5E57q7HsWQBX6a5YOjN5o7K-HicZ8a73EPzfwo=A@mail.gmail.com\n\n[2] https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT\n\n\n",
"msg_date": "Thu, 22 Feb 2024 10:59:58 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug report: some issues about\n pg_15_stable(8fa4a1ac61189efffb8b851ee77e1bc87360c445)"
}
] |
[
{
"msg_contents": "Hello,\n\nI have been working on ubuntu 22.04 LTS with postgres in my applications \nand need to deploy that application on QNX710.\n\nI have a requirement to port postgresSQL 12.18 to QNX 7.1 ,is it \npossible to build/port postgreSQL libraries for QNX7.1 Intel and Aarch64 \narchitectures.\n\nHope my query is clear for you and expecting a resolution for this.\n\nThanks & Regards,\nRanjith Rao.B\n\n*******************************************************************************************\nDisclaimer:The information contained in this e-mail and/or attachments to it may contain confidential data (or) privileged information of Medha. If you are not the intended recipient, any dissemination, use in any manner, review, distribution, printing, copying of the information contained in this e-mail and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify the sender and immediately delete the message and attachments (if any) permanently.\n\"Please consider the environment before printing this message.\"\n*******************************************************************************************\n",
"msg_date": "Thu, 22 Feb 2024 10:42:28 +0530",
"msg_from": "\"Rajith Rao .B(App Software)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Porting PostgresSQL libraries for QNX710"
},
{
"msg_contents": "> On 22 Feb 2024, at 06:12, Rajith Rao .B(App Software) <[email protected]> wrote:\n\n> Hope my query is clear for you and expecting a resolution for this.\n\nThere is no official port of libpq to QNX, so the short answer is that you're\non your own. QNX support was removed in 8.2, so maybe looking at the code\nbefore that happened might give some insights on how to get started?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 22 Feb 2024 11:12:49 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Porting PostgresSQL libraries for QNX710"
},
{
"msg_contents": "> On 22 Feb 2024, at 11:35, Rajith Rao .B(App Software) <[email protected]> wrote:\n\n> I have been using the Qt IDE with C++ for database connection and query execution, and unfortunately, I cannot share the code with you.\n\nNo worries, I have no intention to work on this.\n\n> You mentioned that PostgreSQL support for QNX was removed starting from version 8.2. Are there any alternative methods to port or build PostgreSQL libraries for QNX 7.1.0?\n\nThere is no other way to build any software on a new architecture than rolling\nup the sleeves and getting started.\n\nI suggest looking at commits f55808828569, a1675649e402 and 6f84b2da75d3 in the\npostgres repo as a starting point for research. The QNX support which was\nremoved in 8.2 was targeting QNX4, so it may or may not be helpful.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 22 Feb 2024 13:15:25 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Porting PostgresSQL libraries for QNX710"
}
] |
[
{
"msg_contents": "Dear All,\nI'd like to present and talk about a problem when 2PC transactions are applied quite slowly on a replica during logical replication. There is a master and a replica with established logical replication from the master to the replica with twophase = true. With some load level on the master, the replica starts to lag behind the master, and the lag will be increasing. We have to significantly decrease the load on the master to allow replica to complete the catchup. Such problem may create significant difficulties in the production. The problem appears at least on REL_16_STABLE branch.\nTo reproduce the problem:\n * Setup logical replication from master to replica with subscription parameter twophase = true. * Create some intermediate load on the master (use pgbench with custom sql with prepare+commit) * Optionally switch off the replica for some time (keep load on master). * Switch on the replica and wait until it reaches the master.\nThe replica will never reach the master with even some low load on the master. If to remove the load, the replica will reach the master for much greater time, than expected. I tried the same for regular transactions, but such problem doesn't appear even with a decent load.\nI think, the main proplem of 2PC catchup bad performance - the lack of asynchronous commit support for 2PC. For regular transactions asynchronous commit is used on the replica by default (subscrition sycnronous_commit = off). It allows the replication worker process on the replica to avoid fsync (XLogFLush) and to utilize 100% CPU (the background wal writer or checkpointer will do fsync). I agree, 2PC are mostly used in multimaster configurations with two or more nodes which are performed synchronously, but when the node in catchup (node is not online in a multimaster cluster), asynchronous commit have to be used to speedup the catchup.\nThere is another thing that affects on the disbalance of the master and replica performance. When the master executes requestes from multiple clients, there is a fsync optimization takes place in XLogFlush. It allows to decrease the number of fsync in case when a number of parallel backends write to the WAL simultaneously. The replica applies received transactions in one thread sequentially, such optimization is not applied.\nI see some possible solutions:\n * Implement asyncronous commit for 2PC transactions. * Do some hacking with enableFsync when it is possible.\nI think, asynchronous commit support for 2PC transactions should significantly increase replica performance and help to solve this problem. I tried to implement it (like for usual transactions) but I've found another problem: 2PC state is stored in WAL on prepare, on commit we have to read 2PC state from WAL but the read is delayed until WAL is flushed by the background wal writer (read LSN should be less than flush LSN). 
Storing 2PC state in a shared memory (as it proposed earlier) may help.\n\nI used the following query to monitor the catchup progress on the master:SELECT sent_lsn, pg_current_wal_lsn() FROM pg_stat_replication;\nI used the following script for pgbench to the master:SELECT md5(random()::text) as mygid \\gset\nBEGIN;\nDELETE FROM test WHERE v = pg_backend_pid();\nINSERT INTO test(v) SELECT pg_backend_pid();\nPREPARE TRANSACTION $$:mygid$$;\nCOMMIT PREPARED $$:mygid$$;\n \nWhat do you think?\n \nWith best regards,\nVitaly Davydov",
"msg_date": "Thu, 22 Feb 2024 16:29:43 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow catchup of 2PC (twophase) transactions on replica in LR"
},
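A sketch of the reproduction setup described above, for anyone who wants to try it: the table definition, object names and connection string are placeholders (the report does not spell out the table schema), and synchronous_commit already defaults to off for subscriptions.

-- On the publisher (placeholder table; the primary key doubles as replica identity):
CREATE TABLE test (v int PRIMARY KEY);
CREATE PUBLICATION pub_test FOR TABLE test;

-- On the subscriber (the same table must exist there too):
CREATE TABLE test (v int PRIMARY KEY);
CREATE SUBSCRIPTION sub_test
    CONNECTION 'host=master dbname=postgres'  -- placeholder connection string
    PUBLICATION pub_test
    WITH (two_phase = true);

-- Drive the publisher with the prepare/commit pgbench script quoted above,
-- for example: pgbench -c 20 -j 4 -f test.sql -T 600 -P 5 postgres
-- and watch the lag from the publisher side:
SELECT application_name, sent_lsn, pg_current_wal_lsn(),
       pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn) AS lag_bytes
FROM pg_stat_replication;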
{
"msg_contents": "On Thu, Feb 22, 2024 at 6:59 PM Давыдов Виталий\n<[email protected]> wrote:\n>\n> I'd like to present and talk about a problem when 2PC transactions are applied quite slowly on a replica during logical replication. There is a master and a replica with established logical replication from the master to the replica with twophase = true. With some load level on the master, the replica starts to lag behind the master, and the lag will be increasing. We have to significantly decrease the load on the master to allow replica to complete the catchup. Such problem may create significant difficulties in the production. The problem appears at least on REL_16_STABLE branch.\n>\n> To reproduce the problem:\n>\n> Setup logical replication from master to replica with subscription parameter twophase = true.\n> Create some intermediate load on the master (use pgbench with custom sql with prepare+commit)\n> Optionally switch off the replica for some time (keep load on master).\n> Switch on the replica and wait until it reaches the master.\n>\n> The replica will never reach the master with even some low load on the master. If to remove the load, the replica will reach the master for much greater time, than expected. I tried the same for regular transactions, but such problem doesn't appear even with a decent load.\n>\n> I think, the main proplem of 2PC catchup bad performance - the lack of asynchronous commit support for 2PC. For regular transactions asynchronous commit is used on the replica by default (subscrition sycnronous_commit = off). It allows the replication worker process on the replica to avoid fsync (XLogFLush) and to utilize 100% CPU (the background wal writer or checkpointer will do fsync). I agree, 2PC are mostly used in multimaster configurations with two or more nodes which are performed synchronously, but when the node in catchup (node is not online in a multimaster cluster), asynchronous commit have to be used to speedup the catchup.\n>\n\nI don't see we do anything specific for 2PC transactions to make them\nbehave differently than regular transactions with respect to\nsynchronous_commit setting. What makes you think so? Can you pin point\nthe code you are referring to?\n\n> There is another thing that affects on the disbalance of the master and replica performance. When the master executes requestes from multiple clients, there is a fsync optimization takes place in XLogFlush. It allows to decrease the number of fsync in case when a number of parallel backends write to the WAL simultaneously. The replica applies received transactions in one thread sequentially, such optimization is not applied.\n>\n\nRight, I think for this we need to implement parallel apply.\n\n> I see some possible solutions:\n>\n> Implement asyncronous commit for 2PC transactions.\n> Do some hacking with enableFsync when it is possible.\n>\n\nCan you be a bit more specific about what exactly you have in mind to\nachieve the above solutions?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Feb 2024 08:53:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
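To follow along with the question above, the subscription's two-phase state and synchronous_commit setting can be inspected and changed directly; 'sub_test' is a placeholder subscription name.

-- Inspect the subscription's two-phase state and synchronous_commit setting:
SELECT subname, subtwophasestate, subsynccommit
FROM pg_subscription;

-- The apply worker's synchronous_commit can be changed without recreating
-- the subscription:
ALTER SUBSCRIPTION sub_test SET (synchronous_commit = off);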
{
"msg_contents": "On Fri, Feb 23, 2024 at 12:29 AM Давыдов Виталий <[email protected]>\nwrote:\n\n> Dear All,\n>\n> I'd like to present and talk about a problem when 2PC transactions are\n> applied quite slowly on a replica during logical replication. There is a\n> master and a replica with established logical replication from the master\n> to the replica with twophase = true. With some load level on the master,\n> the replica starts to lag behind the master, and the lag will be\n> increasing. We have to significantly decrease the load on the master to\n> allow replica to complete the catchup. Such problem may create significant\n> difficulties in the production. The problem appears at least on\n> REL_16_STABLE branch.\n>\n> To reproduce the problem:\n>\n> - Setup logical replication from master to replica with subscription\n> parameter twophase = true.\n> - Create some intermediate load on the master (use pgbench with custom\n> sql with prepare+commit)\n> - Optionally switch off the replica for some time (keep load on\n> master).\n> - Switch on the replica and wait until it reaches the master.\n>\n> The replica will never reach the master with even some low load on the\n> master. If to remove the load, the replica will reach the master for much\n> greater time, than expected. I tried the same for regular transactions, but\n> such problem doesn't appear even with a decent load.\n>\n>\n>\nI tried this setup and I do see that the logical subscriber does reach the\nmaster in a short time. I'm not sure what I'm missing. I stopped the\nlogical subscriber in between while pgbench was running and then started it\nagain and ran the following:\npostgres=# SELECT sent_lsn, pg_current_wal_lsn() FROM pg_stat_replication;\n sent_lsn | pg_current_wal_lsn\n-----------+--------------------\n 0/6793FA0 | 0/6793FA0 <=== caught up\n(1 row)\n\nMy pgbench command:\npgbench postgres -p 6972 -c 2 -j 3 -f /home/ajin/test.sql -T 200 -P 5\n\nmy custom sql file:\ncat test.sql\nSELECT md5(random()::text) as mygid \\gset\nBEGIN;\nDELETE FROM test WHERE v = pg_backend_pid();\nINSERT INTO test(v) SELECT pg_backend_pid();\nPREPARE TRANSACTION $$:mygid$$;\nCOMMIT PREPARED $$:mygid$$;\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Fri, Feb 23, 2024 at 12:29 AM Давыдов Виталий <[email protected]> wrote:Dear All,I'd like to present and talk about a problem when 2PC transactions are applied quite slowly on a replica during logical replication. There is a master and a replica with established logical replication from the master to the replica with twophase = true. With some load level on the master, the replica starts to lag behind the master, and the lag will be increasing. We have to significantly decrease the load on the master to allow replica to complete the catchup. Such problem may create significant difficulties in the production. The problem appears at least on REL_16_STABLE branch.To reproduce the problem:Setup logical replication from master to replica with subscription parameter twophase = true.Create some intermediate load on the master (use pgbench with custom sql with prepare+commit)Optionally switch off the replica for some time (keep load on master).Switch on the replica and wait until it reaches the master.The replica will never reach the master with even some low load on the master. If to remove the load, the replica will reach the master for much greater time, than expected. 
I tried the same for regular transactions, but such problem doesn't appear even with a decent load.I tried this setup and I do see that the logical subscriber does reach the master in a short time. I'm not sure what I'm missing. I stopped the logical subscriber in between while pgbench was running and then started it again and ran the following:postgres=# SELECT sent_lsn, pg_current_wal_lsn() FROM pg_stat_replication; sent_lsn | pg_current_wal_lsn -----------+-------------------- 0/6793FA0 | 0/6793FA0 <=== caught up(1 row)My pgbench command:pgbench postgres -p 6972 -c 2 -j 3 -f /home/ajin/test.sql -T 200 -P 5my custom sql file:cat test.sql SELECT md5(random()::text) as mygid \\gsetBEGIN;\tDELETE FROM test WHERE v = pg_backend_pid();\tINSERT INTO test(v) SELECT pg_backend_pid();\tPREPARE TRANSACTION $$:mygid$$;\tCOMMIT PREPARED $$:mygid$$;regards,Ajin CherianFujitsu Australia",
"msg_date": "Fri, 23 Feb 2024 15:52:11 +1100",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi Ajin,\n\nThank you for your feedback. Could you please try to increase the number of clients (-c pgbench option) up to 20 or more? It seems, I forgot to specify it.\n\nWith best regards,\nVitaly Davydov On Fri, Feb 23, 2024 at 12:29 AM Давыдов Виталий <[email protected]> wrote:\nDear All,\nI'd like to present and talk about a problem when 2PC transactions are applied quite slowly on a replica during logical replication. There is a master and a replica with established logical replication from the master to the replica with twophase = true. With some load level on the master, the replica starts to lag behind the master, and the lag will be increasing. We have to significantly decrease the load on the master to allow replica to complete the catchup. Such problem may create significant difficulties in the production. The problem appears at least on REL_16_STABLE branch.\nTo reproduce the problem:\n * Setup logical replication from master to replica with subscription parameter twophase = true. * Create some intermediate load on the master (use pgbench with custom sql with prepare+commit) * Optionally switch off the replica for some time (keep load on master). * Switch on the replica and wait until it reaches the master.\nThe replica will never reach the master with even some low load on the master. If to remove the load, the replica will reach the master for much greater time, than expected. I tried the same for regular transactions, but such problem doesn't appear even with a decent load.\n I tried this setup and I do see that the logical subscriber does reach the master in a short time. I'm not sure what I'm missing. I stopped the logical subscriber in between while pgbench was running and then started it again and ran the following:postgres=# SELECT sent_lsn, pg_current_wal_lsn() FROM pg_stat_replication;\n sent_lsn | pg_current_wal_lsn\n-----------+--------------------\n 0/6793FA0 | 0/6793FA0 <=== caught up\n(1 row)\n My pgbench command:pgbench postgres -p 6972 -c 2 -j 3 -f /home/ajin/test.sql -T 200 -P 5 my custom sql file:cat test.sql\nSELECT md5(random()::text) as mygid \\gset\nBEGIN;\nDELETE FROM test WHERE v = pg_backend_pid();\nINSERT INTO test(v) SELECT pg_backend_pid();\nPREPARE TRANSACTION $$:mygid$$;\nCOMMIT PREPARED $$:mygid$$; regards,Ajin CherianFujitsu Australia \n\n \n\nHi Ajin,Thank you for your feedback. Could you please try to increase the number of clients (-c pgbench option) up to 20 or more? It seems, I forgot to specify it.With best regards,Vitaly Davydov On Fri, Feb 23, 2024 at 12:29 AM Давыдов Виталий <[email protected]> wrote:Dear All,I'd like to present and talk about a problem when 2PC transactions are applied quite slowly on a replica during logical replication. There is a master and a replica with established logical replication from the master to the replica with twophase = true. With some load level on the master, the replica starts to lag behind the master, and the lag will be increasing. We have to significantly decrease the load on the master to allow replica to complete the catchup. Such problem may create significant difficulties in the production. 
The problem appears at least on REL_16_STABLE branch.To reproduce the problem:Setup logical replication from master to replica with subscription parameter twophase = true.Create some intermediate load on the master (use pgbench with custom sql with prepare+commit)Optionally switch off the replica for some time (keep load on master).Switch on the replica and wait until it reaches the master.The replica will never reach the master with even some low load on the master. If to remove the load, the replica will reach the master for much greater time, than expected. I tried the same for regular transactions, but such problem doesn't appear even with a decent load. I tried this setup and I do see that the logical subscriber does reach the master in a short time. I'm not sure what I'm missing. I stopped the logical subscriber in between while pgbench was running and then started it again and ran the following:postgres=# SELECT sent_lsn, pg_current_wal_lsn() FROM pg_stat_replication; sent_lsn | pg_current_wal_lsn-----------+-------------------- 0/6793FA0 | 0/6793FA0 <=== caught up(1 row) My pgbench command:pgbench postgres -p 6972 -c 2 -j 3 -f /home/ajin/test.sql -T 200 -P 5 my custom sql file:cat test.sqlSELECT md5(random()::text) as mygid \\gsetBEGIN;DELETE FROM test WHERE v = pg_backend_pid();INSERT INTO test(v) SELECT pg_backend_pid();PREPARE TRANSACTION $$:mygid$$;COMMIT PREPARED $$:mygid$$; regards,Ajin CherianFujitsu Australia",
"msg_date": "Fri, 23 Feb 2024 19:29:29 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= Slow catchup of 2PC (twophase) transactions on\n replica in\n LR"
},
{
"msg_contents": "Hi Amit,\nAmit Kapila <[email protected]> wrote:\nI don't see we do anything specific for 2PC transactions to make them behave differently than regular transactions with respect to synchronous_commit setting. What makes you think so? Can you pin point the code you are referring to?Yes, sure. The function RecordTransactionCommitPrepared is called on prepared transaction commit (twophase.c). It calls XLogFlush unconditionally. The function RecordTransactionCommit (for regular transactions, xact.c) calls XLogFlush if synchronous_commit > OFF, otherwise it calls XLogSetAsyncXactLSN.\n\nThere is some comment in RecordTransactionCommitPrepared (by Bruce Momjian) that shows that async commit is not supported yet:\n/*\n* We don't currently try to sleep before flush here ... nor is there any\n* support for async commit of a prepared xact (the very idea is probably\n* a contradiction)\n*/\n/* Flush XLOG to disk */\nXLogFlush(recptr);\nRight, I think for this we need to implement parallel apply.Yes, parallel apply is a good point. But, I believe, it will not work if asynchronous commit is not supported. You have only one receiver process which should dispatch incoming messages to parallel workers. I guess, you will never reach such rate of parallel execution on replica as on the master with multiple backends.\n \nCan you be a bit more specific about what exactly you have in mind to achieve the above solutions?My proposal is to implement async commit for 2PC transactions as it is for regular transactions. It should significantly speedup the catchup process. Then, think how to apply in parallel, which is much diffcult to do. The current problem is to get 2PC state from the WAL on commit prepared. At this moment, the WAL is not flushed yet, commit function waits until WAL with 2PC state is to be flushed. I just tried to do it in my sandbox and found such a problem. Inability to get 2PC state from unflushed WAL stops me right now. I think about possible solutions.\n\nThe idea with enableFsync is not a suitable solution, in general, I think. I just pointed it as an alternate idea. You just do enableFsync = false before prepare or commit prepared and do enableFsync = true after these functions. In this case, 2PC records will not be fsync-ed, but FlushPtr will be increased. Thus, 2PC state can be read from WAL on commit prepared without waiting. To make it work correctly, I guess, we have to do some additional work to keep more wal on the master and filter some duplicate transactions on the replica, if replica restarts during catchup.\n\nWith best regards,\nVitaly Davydov\n\n \n\nHi Amit,Amit Kapila <[email protected]> wrote:I don't see we do anything specific for 2PC transactions to make them behave differently than regular transactions with respect to synchronous_commit setting. What makes you think so? Can you pin point the code you are referring to?Yes, sure. The function RecordTransactionCommitPrepared is called on prepared transaction commit (twophase.c). It calls XLogFlush unconditionally. The function RecordTransactionCommit (for regular transactions, xact.c) calls XLogFlush if synchronous_commit > OFF, otherwise it calls XLogSetAsyncXactLSN.There is some comment in RecordTransactionCommitPrepared (by Bruce Momjian) that shows that async commit is not supported yet:/** We don't currently try to sleep before flush here ... 
nor is there any* support for async commit of a prepared xact (the very idea is probably* a contradiction)*//* Flush XLOG to disk */XLogFlush(recptr);Right, I think for this we need to implement parallel apply.Yes, parallel apply is a good point. But, I believe, it will not work if asynchronous commit is not supported. You have only one receiver process which should dispatch incoming messages to parallel workers. I guess, you will never reach such rate of parallel execution on replica as on the master with multiple backends. Can you be a bit more specific about what exactly you have in mind to achieve the above solutions?My proposal is to implement async commit for 2PC transactions as it is for regular transactions. It should significantly speedup the catchup process. Then, think how to apply in parallel, which is much diffcult to do. The current problem is to get 2PC state from the WAL on commit prepared. At this moment, the WAL is not flushed yet, commit function waits until WAL with 2PC state is to be flushed. I just tried to do it in my sandbox and found such a problem. Inability to get 2PC state from unflushed WAL stops me right now. I think about possible solutions.The idea with enableFsync is not a suitable solution, in general, I think. I just pointed it as an alternate idea. You just do enableFsync = false before prepare or commit prepared and do enableFsync = true after these functions. In this case, 2PC records will not be fsync-ed, but FlushPtr will be increased. Thus, 2PC state can be read from WAL on commit prepared without waiting. To make it work correctly, I guess, we have to do some additional work to keep more wal on the master and filter some duplicate transactions on the replica, if replica restarts during catchup.With best regards,Vitaly Davydov",
"msg_date": "Fri, 23 Feb 2024 20:11:46 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= Slow catchup of 2PC (twophase) transactions on\n replica in\n LR"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 10:41 PM Давыдов Виталий\n<[email protected]> wrote:\n>\n> Amit Kapila <[email protected]> wrote:\n>\n> I don't see we do anything specific for 2PC transactions to make them behave differently than regular transactions with respect to synchronous_commit setting. What makes you think so? Can you pin point the code you are referring to?\n>\n> Yes, sure. The function RecordTransactionCommitPrepared is called on prepared transaction commit (twophase.c). It calls XLogFlush unconditionally. The function RecordTransactionCommit (for regular transactions, xact.c) calls XLogFlush if synchronous_commit > OFF, otherwise it calls XLogSetAsyncXactLSN.\n>\n> There is some comment in RecordTransactionCommitPrepared (by Bruce Momjian) that shows that async commit is not supported yet:\n> /*\n> * We don't currently try to sleep before flush here ... nor is there any\n> * support for async commit of a prepared xact (the very idea is probably\n> * a contradiction)\n> */\n> /* Flush XLOG to disk */\n> XLogFlush(recptr);\n>\n\nIt seems this comment is added in the commit 4a78cdeb where we added\nasync commit support. I think the reason is probably that when the WAL\nrecord for prepared is already flushed then what will be the idea of\nasync commit here?\n\n> Right, I think for this we need to implement parallel apply.\n>\n> Yes, parallel apply is a good point. But, I believe, it will not work if asynchronous commit is not supported. You have only one receiver process which should dispatch incoming messages to parallel workers. I guess, you will never reach such rate of parallel execution on replica as on the master with multiple backends.\n>\n>\n> Can you be a bit more specific about what exactly you have in mind to achieve the above solutions?\n>\n> My proposal is to implement async commit for 2PC transactions as it is for regular transactions. It should significantly speedup the catchup process. Then, think how to apply in parallel, which is much diffcult to do. The current problem is to get 2PC state from the WAL on commit prepared. At this moment, the WAL is not flushed yet, commit function waits until WAL with 2PC state is to be flushed. I just tried to do it in my sandbox and found such a problem. Inability to get 2PC state from unflushed WAL stops me right now. I think about possible solutions.\n>\n\nAt commit prepared, it seems we read prepare's WAL record, right? If\nso, it is not clear to me do you see a problem with a flush of\ncommit_prepared or reading WAL for prepared or both of these.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 26 Feb 2024 18:54:55 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi Amit,\n\nThank you for your interest in the discussion!\n\nOn Monday, February 26, 2024 16:24 MSK, Amit Kapila <[email protected]> wrote:\n \nI think the reason is probably that when the WAL record for prepared is already flushed then what will be the idea of async commit here?I think, the idea of async commit should be applied for both transactions: PREPARE and COMMIT PREPARED, which are actually two separate local transactions. For both these transactions we may call XLogSetAsyncXactLSN on commit instead of XLogFlush when async commit is enabled. When I use async commit, I mean to apply async commit to local transactions, not to a twophase (prepared) transaction itself.\n \nAt commit prepared, it seems we read prepare's WAL record, right? If so, it is not clear to me do you see a problem with a flush of commit_prepared or reading WAL for prepared or both of these.The problem with reading WAL is due to async commit of PREPARE TRANSACTION which saves 2PC in the WAL. At the moment of COMMIT PREPARED the WAL with PREPARE TRANSACTION 2PC state may not be XLogFlush-ed yet. So, PREPARE TRANSACTION should wait until its 2PC state is flushed.\n\nI did some experiments with saving 2PC state in the local memory of logical replication worker and, I think, it worked and demonstrated much better performance. Logical replication worker utilized up to 100% CPU. I'm just concerned about possible problems with async commit for twophase transactions.\n\nTo be more specific, I've attached a patch to support async commit for twophase. It is not the final patch but it is presented only for discussion purposes. There were some attempts to save 2PC in memory in past but it was rejected. Now, there might be the second round to discuss it.\n\nWith best regards,\nVitaly",
"msg_date": "Tue, 27 Feb 2024 14:19:41 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= Slow catchup of 2PC (twophase) transactions on\n replica in\n LR"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 4:49 PM Давыдов Виталий\n<[email protected]> wrote:\n>\n> Thank you for your interest in the discussion!\n>\n> On Monday, February 26, 2024 16:24 MSK, Amit Kapila <[email protected]> wrote:\n>\n>\n> I think the reason is probably that when the WAL record for prepared is already flushed then what will be the idea of async commit here?\n>\n> I think, the idea of async commit should be applied for both transactions: PREPARE and COMMIT PREPARED, which are actually two separate local transactions. For both these transactions we may call XLogSetAsyncXactLSN on commit instead of XLogFlush when async commit is enabled. When I use async commit, I mean to apply async commit to local transactions, not to a twophase (prepared) transaction itself.\n>\n>\n> At commit prepared, it seems we read prepare's WAL record, right? If so, it is not clear to me do you see a problem with a flush of commit_prepared or reading WAL for prepared or both of these.\n>\n> The problem with reading WAL is due to async commit of PREPARE TRANSACTION which saves 2PC in the WAL. At the moment of COMMIT PREPARED the WAL with PREPARE TRANSACTION 2PC state may not be XLogFlush-ed yet.\n>\n\nAs we do XLogFlush() at the time of prepare then why it is not\navailable? OR are you talking about this state after your idea/patch\nwhere you are trying to make both Prepare and Commit_prepared records\nasync?\n\n So, PREPARE TRANSACTION should wait until its 2PC state is flushed.\n>\n> I did some experiments with saving 2PC state in the local memory of logical replication worker and, I think, it worked and demonstrated much better performance. Logical replication worker utilized up to 100% CPU. I'm just concerned about possible problems with async commit for twophase transactions.\n>\n> To be more specific, I've attached a patch to support async commit for twophase. It is not the final patch but it is presented only for discussion purposes. There were some attempts to save 2PC in memory in past but it was rejected.\n>\n\nIt would be good if you could link those threads.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Feb 2024 18:30:28 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi Amit,\n\nOn Tuesday, February 27, 2024 16:00 MSK, Amit Kapila <[email protected]> wrote:\nAs we do XLogFlush() at the time of prepare then why it is not available? OR are you talking about this state after your idea/patch where you are trying to make both Prepare and Commit_prepared records async?Right, I'm talking about my patch where async commit is implemented. There is no such problem with reading 2PC from not flushed WAL in the vanilla because XLogFlush is called unconditionally, as you've described. But an attempt to add some async stuff leads to the problem of reading not flushed WAL. It is why I store 2pc state in the local memory in my patch.\nIt would be good if you could link those threads.Sure, I will find and add some links to the discussions from past.\n\nThank you!\n\nWith best regards,\nVitaly\n On Tue, Feb 27, 2024 at 4:49 PM Давыдов Виталий\n<[email protected]> wrote:\n>\n> Thank you for your interest in the discussion!\n>\n> On Monday, February 26, 2024 16:24 MSK, Amit Kapila <[email protected]> wrote:\n>\n>\n> I think the reason is probably that when the WAL record for prepared is already flushed then what will be the idea of async commit here?\n>\n> I think, the idea of async commit should be applied for both transactions: PREPARE and COMMIT PREPARED, which are actually two separate local transactions. For both these transactions we may call XLogSetAsyncXactLSN on commit instead of XLogFlush when async commit is enabled. When I use async commit, I mean to apply async commit to local transactions, not to a twophase (prepared) transaction itself.\n>\n>\n> At commit prepared, it seems we read prepare's WAL record, right? If so, it is not clear to me do you see a problem with a flush of commit_prepared or reading WAL for prepared or both of these.\n>\n> The problem with reading WAL is due to async commit of PREPARE TRANSACTION which saves 2PC in the WAL. At the moment of COMMIT PREPARED the WAL with PREPARE TRANSACTION 2PC state may not be XLogFlush-ed yet.\n>\n\nAs we do XLogFlush() at the time of prepare then why it is not\navailable? OR are you talking about this state after your idea/patch\nwhere you are trying to make both Prepare and Commit_prepared records\nasync?\n\nSo, PREPARE TRANSACTION should wait until its 2PC state is flushed.\n>\n> I did some experiments with saving 2PC state in the local memory of logical replication worker and, I think, it worked and demonstrated much better performance. Logical replication worker utilized up to 100% CPU. I'm just concerned about possible problems with async commit for twophase transactions.\n>\n> To be more specific, I've attached a patch to support async commit for twophase. It is not the final patch but it is presented only for discussion purposes. There were some attempts to save 2PC in memory in past but it was rejected.\n>\n\nIt would be good if you could link those threads.\n\n--\nWith Regards,\nAmit Kapila.\n\n \n\n \n\nHi Amit,On Tuesday, February 27, 2024 16:00 MSK, Amit Kapila <[email protected]> wrote:As we do XLogFlush() at the time of prepare then why it is not available? OR are you talking about this state after your idea/patch where you are trying to make both Prepare and Commit_prepared records async?Right, I'm talking about my patch where async commit is implemented. There is no such problem with reading 2PC from not flushed WAL in the vanilla because XLogFlush is called unconditionally, as you've described. 
But an attempt to add some async stuff leads to the problem of reading not flushed WAL. It is why I store 2pc state in the local memory in my patch.It would be good if you could link those threads.Sure, I will find and add some links to the discussions from past.Thank you!With best regards,Vitaly On Tue, Feb 27, 2024 at 4:49 PM Давыдов Виталий<[email protected]> wrote:>> Thank you for your interest in the discussion!>> On Monday, February 26, 2024 16:24 MSK, Amit Kapila <[email protected]> wrote:>>> I think the reason is probably that when the WAL record for prepared is already flushed then what will be the idea of async commit here?>> I think, the idea of async commit should be applied for both transactions: PREPARE and COMMIT PREPARED, which are actually two separate local transactions. For both these transactions we may call XLogSetAsyncXactLSN on commit instead of XLogFlush when async commit is enabled. When I use async commit, I mean to apply async commit to local transactions, not to a twophase (prepared) transaction itself.>>> At commit prepared, it seems we read prepare's WAL record, right? If so, it is not clear to me do you see a problem with a flush of commit_prepared or reading WAL for prepared or both of these.>> The problem with reading WAL is due to async commit of PREPARE TRANSACTION which saves 2PC in the WAL. At the moment of COMMIT PREPARED the WAL with PREPARE TRANSACTION 2PC state may not be XLogFlush-ed yet.>As we do XLogFlush() at the time of prepare then why it is notavailable? OR are you talking about this state after your idea/patchwhere you are trying to make both Prepare and Commit_prepared recordsasync?So, PREPARE TRANSACTION should wait until its 2PC state is flushed.>> I did some experiments with saving 2PC state in the local memory of logical replication worker and, I think, it worked and demonstrated much better performance. Logical replication worker utilized up to 100% CPU. I'm just concerned about possible problems with async commit for twophase transactions.>> To be more specific, I've attached a patch to support async commit for twophase. It is not the final patch but it is presented only for discussion purposes. There were some attempts to save 2PC in memory in past but it was rejected.>It would be good if you could link those threads.--With Regards,Amit Kapila.",
"msg_date": "Tue, 27 Feb 2024 16:34:47 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= Slow catchup of 2PC (twophase) transactions on\n replica in\n LR"
},
{
"msg_contents": "Dear All,\n\nConsider, please, my patch for async commit for twophase transactions. It can be applicable when catchup performance is not enought with publication parameter twophase = on.\n\nThe key changes are:\n * Use XLogSetAsyncXactLSN instead of XLogFlush as it is for usual transactions. * In case of async commit only, save 2PC state in the pg_twophase file (but not fsync it) in addition to saving in the WAL. The file is used as an alternative to storing 2pc state in the memory. * On recovery, reject pg_twophase files with future xids.Probably, 2PC async commit should be enabled by a GUC (not implemented in the patch).\n\nWith best regards,\nVitaly",
"msg_date": "Thu, 29 Feb 2024 20:34:42 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= Slow catchup of 2PC (twophase) transactions on\n replica in\n LR"
},
{
"msg_contents": "On 29/02/2024 19:34, Давыдов Виталий wrote:\n> Dear All,\n> \n> Consider, please, my patch for async commit for twophase transactions. \n> It can be applicable when catchup performance is not enought with \n> publication parameter twophase = on.\n> \n> The key changes are:\n> \n> * Use XLogSetAsyncXactLSN instead of XLogFlush as it is for usual\n> transactions.\n> * In case of async commit only, save 2PC state in the pg_twophase file\n> (but not fsync it) in addition to saving in the WAL. The file is\n> used as an alternative to storing 2pc state in the memory.\n> * On recovery, reject pg_twophase files with future xids.\n> \n> Probably, 2PC async commit should be enabled by a GUC (not implemented \n> in the patch).\n\nIn a nutshell, this changes PREPARE TRANSACTION so that if \nsynchronous_commit is 'off', the PREPARE TRANSACTION is not fsync'd to \ndisk. So if you crash after the PREPARE TRANSACTION has returned, the \ntransaction might be lost. I think that's completely unacceptable.\n\nIf you're ok to lose the prepared state of twophase transactions on \ncrash, why don't you create the subscription with 'two_phase=off' to \nbegin with?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 5 Mar 2024 11:05:40 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi Heikki,\n\nThank you for the reply.\n\nOn Tuesday, March 05, 2024 12:05 MSK, Heikki Linnakangas <[email protected]> wrote:\n In a nutshell, this changes PREPARE TRANSACTION so that if\nsynchronous_commit is 'off', the PREPARE TRANSACTION is not fsync'd to\ndisk. So if you crash after the PREPARE TRANSACTION has returned, the\ntransaction might be lost. I think that's completely unacceptable.\nYou are right, the prepared transaction might be lost after crash. The same may happen with regular transactions that are not fsync-ed on replica in logical replication by default. The subscription parameter synchronous_commit is OFF by default. I'm not sure, is there some auto recovery for regular transactions? I think, the main difference between these two cases - how to manually recover when some PREPARE TRANSACTION or COMMIT PREPARED are lost. For regular transactions, some updates or deletes in tables on replica may be enough to fix the problem. For twophase transactions, it may be harder to fix it by hands, but it is possible, I believe. If you create a custom solution that is based on twophase transactions (like multimaster) such auto recovery may happen automatically. Another solution is to ignore errors on commit prepared if the corresponding prepared tx is missing. I don't know other risks that may happen with async commit of twophase transactions.\n If you're ok to lose the prepared state of twophase transactions on\ncrash, why don't you create the subscription with 'two_phase=off' to\nbegin with?In usual work, the subscription has two_phase = on. I have to change this option at catchup stage only, but this parameter can not be altered. There was a patch proposal in past to implement altering of two_phase option, but it was rejected. I think, the recreation of the subscription with two_phase = off will not work.\n\nI believe, async commit for twophase transactions on catchup will significantly improve the catchup performance. It is worth to think about such feature.\n\nP.S. We might introduce a GUC option to allow async commit for twophase transactions. By default, sync commit will be applied for twophase transactions, as it is now.\n\nWith best regards,\nVitaly Davydov\n\nHi Heikki,Thank you for the reply.On Tuesday, March 05, 2024 12:05 MSK, Heikki Linnakangas <[email protected]> wrote: In a nutshell, this changes PREPARE TRANSACTION so that ifsynchronous_commit is 'off', the PREPARE TRANSACTION is not fsync'd todisk. So if you crash after the PREPARE TRANSACTION has returned, thetransaction might be lost. I think that's completely unacceptable.You are right, the prepared transaction might be lost after crash. The same may happen with regular transactions that are not fsync-ed on replica in logical replication by default. The subscription parameter synchronous_commit is OFF by default. I'm not sure, is there some auto recovery for regular transactions? I think, the main difference between these two cases - how to manually recover when some PREPARE TRANSACTION or COMMIT PREPARED are lost. For regular transactions, some updates or deletes in tables on replica may be enough to fix the problem. For twophase transactions, it may be harder to fix it by hands, but it is possible, I believe. If you create a custom solution that is based on twophase transactions (like multimaster) such auto recovery may happen automatically. Another solution is to ignore errors on commit prepared if the corresponding prepared tx is missing. 
I don't know other risks that may happen with async commit of twophase transactions. If you're ok to lose the prepared state of twophase transactions oncrash, why don't you create the subscription with 'two_phase=off' tobegin with?In usual work, the subscription has two_phase = on. I have to change this option at catchup stage only, but this parameter can not be altered. There was a patch proposal in past to implement altering of two_phase option, but it was rejected. I think, the recreation of the subscription with two_phase = off will not work.I believe, async commit for twophase transactions on catchup will significantly improve the catchup performance. It is worth to think about such feature.P.S. We might introduce a GUC option to allow async commit for twophase transactions. By default, sync commit will be applied for twophase transactions, as it is now.With best regards,Vitaly Davydov",
"msg_date": "Tue, 05 Mar 2024 17:29:32 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= Slow catchup of 2PC (twophase) transactions on\n replica in\n LR"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 7:59 PM Давыдов Виталий <[email protected]> wrote:\n>\n> Thank you for the reply.\n>\n> On Tuesday, March 05, 2024 12:05 MSK, Heikki Linnakangas <[email protected]> wrote:\n>\n>\n> In a nutshell, this changes PREPARE TRANSACTION so that if\n> synchronous_commit is 'off', the PREPARE TRANSACTION is not fsync'd to\n> disk. So if you crash after the PREPARE TRANSACTION has returned, the\n> transaction might be lost. I think that's completely unacceptable.\n>\n>\n> You are right, the prepared transaction might be lost after crash. The same may happen with regular transactions that are not fsync-ed on replica in logical replication by default. The subscription parameter synchronous_commit is OFF by default. I'm not sure, is there some auto recovery for regular transactions?\n>\n\nUnless the commit WAL is not flushed, we wouldn't have updated the\nreplication origin's LSN and neither the walsender would increase the\nconfirmed_flush_lsn for the corresponding slot till the commit is\nflushed on subscriber. So, if the subscriber crashed before flushing\nthe commit record, server should send the same transaction again. The\nsame should be true for prepared transaction stuff as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Mar 2024 17:55:54 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 1:29 AM Давыдов Виталий <[email protected]>\nwrote:\n\n> In usual work, the subscription has two_phase = on. I have to change this\n> option at catchup stage only, but this parameter can not be altered. There\n> was a patch proposal in past to implement altering of two_phase option, but\n> it was rejected. I think, the recreation of the subscription with two_phase\n> = off will not work.\n>\n>\n>\nThe altering of two_phase was restricted because if there was a previously\nprepared transaction on the subscriber when the two_phase was on, and then\nit was turned off, the apply worker on the subscriber would re-apply the\ntransaction a second time and this might result in an inconsistent replica.\nHere's a patch that allows toggling two_phase option provided that there\nare no pending uncommitted prepared transactions on the subscriber for that\nsubscription.\n\nThanks to Kuroda-san for working on the patch.\n\nregards,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Thu, 4 Apr 2024 16:23:15 +1100",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 10:53 AM Ajin Cherian <[email protected]> wrote:\n>\n> On Wed, Mar 6, 2024 at 1:29 AM Давыдов Виталий <[email protected]> wrote:\n>>\n>> In usual work, the subscription has two_phase = on. I have to change this option at catchup stage only, but this parameter can not be altered. There was a patch proposal in past to implement altering of two_phase option, but it was rejected. I think, the recreation of the subscription with two_phase = off will not work.\n>>\n>>\n>\n> The altering of two_phase was restricted because if there was a previously prepared transaction on the subscriber when the two_phase was on, and then it was turned off, the apply worker on the subscriber would re-apply the transaction a second time and this might result in an inconsistent replica.\n> Here's a patch that allows toggling two_phase option provided that there are no pending uncommitted prepared transactions on the subscriber for that subscription.\n>\n\nI think this would probably be better than the current situation but\ncan we think of a solution to allow toggling the value of two_phase\neven when prepared transactions are present? Can you please summarize\nthe reason for the problems in doing that and the solutions, if any?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 4 Apr 2024 11:07:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 4:38 PM Amit Kapila <[email protected]> wrote:\n\n>\n> I think this would probably be better than the current situation but\n> can we think of a solution to allow toggling the value of two_phase\n> even when prepared transactions are present? Can you please summarize\n> the reason for the problems in doing that and the solutions, if any?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nUpdated the patch, as it wasn't addressing updating of two-phase in the\nremote slot.\n\n Currently the main issue that needs to be handled is the handling of\npending prepared transactions while the two_phase is altered. I see 3\nissues with the current approach.\n\n1. Uncommitted prepared transactions when toggling two_phase from true to\nfalse\nWhen two_phase was true, prepared transactions were decoded at PREPARE time\nand send to the subscriber, which is then prepared on the subscriber with a\nnew gid. Once the two_phase is toggled to false, then the COMMIT PREPARED\non the publisher is converted to commit and the entire transaction is\ndecoded and sent to the subscriber. This will leave the previously\nprepared transaction pending.\n\n2. Uncommitted prepared transactions when toggling two_phase form false to\ntrue\nWhen two_phase was false, prepared transactions were ignored and not\ndecoded at PREPARE time on the publisher. Once the two_phase is toggled to\ntrue, the apply worker and the walsender are restarted and a replication is\nrestarted from a new \"start_decoding_at\" LSN. Now, this new\n\"start_decoding_at\" could be past the LSN of the PREPARE record and if so,\nthe PREPARE record is skipped and not send to the subscriber. Look at\ncomments in DecodeTXNNeedSkip() for detail. Later when the user issues\nCOMMIT PREPARED, this is decoded and sent to the subscriber. but there is\nno prepared transaction on the subscriber, and this fails because the\ncorresponding gid of the transaction couldn't be found.\n\n3. While altering the two_phase of the subscription, it is required to also\nalter the two_phase field of the slot on the primary. The subscription\ncannot remotely alter the two_phase option of the slot when the\nsubscription is enabled, as the slot is owned by the walsender on the\npublisher side.\n\nPossible solutions for the 3 problems:\n\n1. While toggling two_phase from true to false, we could probably get list\nof prepared transactions for this subscriber id and rollback/abort the\nprepared transactions. This will allow the transactions to be re-applied\nlike a normal transaction when the commit comes. Alternatively, if this\nisn't appropriate doing it in the ALTER SUBSCRIPTION context, we could\nstore the xids of all prepared transactions of this subscription in a list\nand when the corresponding xid is being committed by the apply worker,\nprior to commit, we make sure the previously prepared transaction is rolled\nback. But this would add the overhead of checking this list every time a\ntransaction is committed by the apply worker.\n\n2. No solution yet.\n\n3. We could mandate that the altering of two_phase state only be done after\ndisabling the subscription, just like how it is handled for failover option.\nLet me know your thoughts.\n\nregards,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Fri, 5 Apr 2024 22:29:29 +1100",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 4:59 PM Ajin Cherian <[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 4:38 PM Amit Kapila <[email protected]> wrote:\n>>\n>>\n>> I think this would probably be better than the current situation but\n>> can we think of a solution to allow toggling the value of two_phase\n>> even when prepared transactions are present? Can you please summarize\n>> the reason for the problems in doing that and the solutions, if any?\n>>\n>\n>\n> Updated the patch, as it wasn't addressing updating of two-phase in the remote slot.\n>\n\nVitaly, does the minimal solution provided by the proposed patch\n(Allow to alter two_phase option of a subscriber provided no\nuncommitted\nprepared transactions are pending on that subscription.) address your use case?\n\n> Currently the main issue that needs to be handled is the handling of pending prepared transactions while the two_phase is altered. I see 3 issues with the current approach.\n>\n> 1. Uncommitted prepared transactions when toggling two_phase from true to false\n> When two_phase was true, prepared transactions were decoded at PREPARE time and send to the subscriber, which is then prepared on the subscriber with a new gid. Once the two_phase is toggled to false, then the COMMIT PREPARED on the publisher is converted to commit and the entire transaction is decoded and sent to the subscriber. This will leave the previously prepared transaction pending.\n>\n> 2. Uncommitted prepared transactions when toggling two_phase form false to true\n> When two_phase was false, prepared transactions were ignored and not decoded at PREPARE time on the publisher. Once the two_phase is toggled to true, the apply worker and the walsender are restarted and a replication is restarted from a new \"start_decoding_at\" LSN. Now, this new \"start_decoding_at\" could be past the LSN of the PREPARE record and if so, the PREPARE record is skipped and not send to the subscriber. Look at comments in DecodeTXNNeedSkip() for detail. Later when the user issues COMMIT PREPARED, this is decoded and sent to the subscriber. but there is no prepared transaction on the subscriber, and this fails because the corresponding gid of the transaction couldn't be found.\n>\n> 3. While altering the two_phase of the subscription, it is required to also alter the two_phase field of the slot on the primary. The subscription cannot remotely alter the two_phase option of the slot when the subscription is enabled, as the slot is owned by the walsender on the publisher side.\n>\n\nThanks for summarizing the reasons for not allowing altering the\ntwo_pc property for a subscription.\n\n> Possible solutions for the 3 problems:\n>\n> 1. While toggling two_phase from true to false, we could probably get a list of prepared transactions for this subscriber id and rollback/abort the prepared transactions. This will allow the transactions to be re-applied like a normal transaction when the commit comes. Alternatively, if this isn't appropriate doing it in the ALTER SUBSCRIPTION context, we could store the xids of all prepared transactions of this subscription in a list and when the corresponding xid is being committed by the apply worker, prior to commit, we make sure the previously prepared transaction is rolled back. 
But this would add the overhead of checking this list every time a transaction is committed by the apply worker.\n>\n\nIn the second solution, if you check at the time of commit whether\nthere exists a prior prepared transaction then won't we end up\napplying the changes twice? I think we can first try to achieve it at\nthe time of Alter Subscription because the other solution can add\noverhead at each commit?\n\n> 2. No solution yet.\n>\n\nOne naive idea is that on the publisher we can remember whether the\nprepare has been sent and if so then only send commit_prepared,\notherwise send the entire transaction. On the subscriber-side, we\nsomehow, need to ensure before applying the first change whether the\ncorresponding transaction is already prepared and if so then skip the\nchanges and just perform the commit prepared. One drawback of this\napproach is that after restart, the prepare flag wouldn't be saved in\nthe memory and we end up sending the entire transaction again. One way\nto avoid this overhead is that the publisher before sending the entire\ntransaction checks with subscriber whether it has a prepared\ntransaction corresponding to the current commit. I understand that\nthis is not a good idea even if it works but I don't have any better\nideas. What do you think?\n\n> 3. We could mandate that the altering of two_phase state only be done after disabling the subscription, just like how it is handled for failover option.\n>\n\nmakes sense.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 10 Apr 2024 16:48:26 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi Amit, Ajin, All\n\nThank you for the patch and the responses. I apologize for my delayed answer due to some curcumstances.\nOn Wednesday, April 10, 2024 14:18 MSK, Amit Kapila <[email protected]> wrote:\n\nVitaly, does the minimal solution provided by the proposed patch (Allow to alter two_phase option of a subscriber provided no uncommitted prepared transactions are pending on that subscription.) address your use case?In general, the idea behind the patch seems to be suitable for my case. Furthermore, the case of two_phase switch from false to true with uncommitted pending prepared transactions probably never happens in my case. The switch from false to true means that the replica completes the catchup from the master and switches to the normal mode when it participates in the multi-node configuration. There should be no uncommitted pending prepared transactions at the moment of the switch to the normal mode.\n\nI'm going to try this patch. Give me please some time to investigate the patch. I will come with some feedback a little bit later.\n\nThank you for your help!\n\nWith best regards,\nVitaly Davydov\n\n\n \n\nHi Amit, Ajin, AllThank you for the patch and the responses. I apologize for my delayed answer due to some curcumstances.On Wednesday, April 10, 2024 14:18 MSK, Amit Kapila <[email protected]> wrote:Vitaly, does the minimal solution provided by the proposed patch (Allow to alter two_phase option of a subscriber provided no uncommitted prepared transactions are pending on that subscription.) address your use case?In general, the idea behind the patch seems to be suitable for my case. Furthermore, the case of two_phase switch from false to true with uncommitted pending prepared transactions probably never happens in my case. The switch from false to true means that the replica completes the catchup from the master and switches to the normal mode when it participates in the multi-node configuration. There should be no uncommitted pending prepared transactions at the moment of the switch to the normal mode.I'm going to try this patch. Give me please some time to investigate the patch. I will come with some feedback a little bit later.Thank you for your help!With best regards,Vitaly Davydov",
"msg_date": "Wed, 10 Apr 2024 17:16:59 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= Slow catchup of 2PC (twophase) transactions on\n replica in\n LR"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> One naive idea is that on the publisher we can remember whether the\r\n> prepare has been sent and if so then only send commit_prepared,\r\n> otherwise send the entire transaction. On the subscriber-side, we\r\n> somehow, need to ensure before applying the first change whether the\r\n> corresponding transaction is already prepared and if so then skip the\r\n> changes and just perform the commit prepared. One drawback of this\r\n> approach is that after restart, the prepare flag wouldn't be saved in\r\n> the memory and we end up sending the entire transaction again. One way\r\n> to avoid this overhead is that the publisher before sending the entire\r\n> transaction checks with subscriber whether it has a prepared\r\n> transaction corresponding to the current commit. I understand that\r\n> this is not a good idea even if it works but I don't have any better\r\n> ideas. What do you think?\r\n\r\nAlternative idea is that worker pass a list of prepared transaction as new\r\noption in START_REPLICATION. This can reduce the number of inter-node\r\ncommunications, but sometimes the list may be huge.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n",
"msg_date": "Thu, 11 Apr 2024 02:07:33 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> Vitaly, does the minimal solution provided by the proposed patch\r\n> (Allow to alter two_phase option of a subscriber provided no\r\n> uncommitted\r\n> prepared transactions are pending on that subscription.) address your use case?\r\n\r\nI think we do not have to handle cases which there are prepared transactions on\r\npublisher/subscriber, as the first step. It leads additional complexity and we\r\ndo not have smarter solutions, especially for problem 2.\r\nIIUC it meets the Vitaly's condition, right?\r\n\r\n> > 1. While toggling two_phase from true to false, we could probably get a list of\r\n> prepared transactions for this subscriber id and rollback/abort the prepared\r\n> transactions. This will allow the transactions to be re-applied like a normal\r\n> transaction when the commit comes. Alternatively, if this isn't appropriate doing it\r\n> in the ALTER SUBSCRIPTION context, we could store the xids of all prepared\r\n> transactions of this subscription in a list and when the corresponding xid is being\r\n> committed by the apply worker, prior to commit, we make sure the previously\r\n> prepared transaction is rolled back. But this would add the overhead of checking\r\n> this list every time a transaction is committed by the apply worker.\r\n> >\r\n> \r\n> In the second solution, if you check at the time of commit whether\r\n> there exists a prior prepared transaction then won't we end up\r\n> applying the changes twice? I think we can first try to achieve it at\r\n> the time of Alter Subscription because the other solution can add\r\n> overhead at each commit?\r\n\r\nYeah, at least the second solution might be problematic. I prototyped\r\nthe first one and worked well. However, to make the feature more consistent,\r\nit is prohibit to exist prepared transactions on subscriber for now.\r\nWe can ease based on the requirement.\r\n\r\n> > 2. No solution yet.\r\n> >\r\n> \r\n> One naive idea is that on the publisher we can remember whether the\r\n> prepare has been sent and if so then only send commit_prepared,\r\n> otherwise send the entire transaction. On the subscriber-side, we\r\n> somehow, need to ensure before applying the first change whether the\r\n> corresponding transaction is already prepared and if so then skip the\r\n> changes and just perform the commit prepared. One drawback of this\r\n> approach is that after restart, the prepare flag wouldn't be saved in\r\n> the memory and we end up sending the entire transaction again. One way\r\n> to avoid this overhead is that the publisher before sending the entire\r\n> transaction checks with subscriber whether it has a prepared\r\n> transaction corresponding to the current commit. I understand that\r\n> this is not a good idea even if it works but I don't have any better\r\n> ideas. What do you think?\r\n\r\nI considered but not sure it is good to add such mechanism. Your idea requires\r\nadditional wait-loop, which might lead bugs and unexpected behavior. 
And it may\r\ndegrade the performance based on the network environment.\r\nAs for the another solution (worker sends a list of prepared transactions), it\r\nis also not so good because list of prepared transactions may be huge.\r\n\r\nBased on above, I think we can reject the case for now.\r\n\r\nFYI - We also considered the idea which walsender waits until all prepared transactions\r\nare resolved before decoding and sending changes, but it did not work well\r\n- the restarted walsender sent only COMMIT PREPARED record for transactions which\r\nhave been prepared before disabling the subscription. This happened because\r\n1) if the two_phase option of slots is false, the confirmed_flush can be ahead of\r\n PREPARE record, and\r\n2) after the altering and restarting, start_decoding_at becomes same as\r\n confirmed_flush and records behind this won't be decoded.\r\n\r\n> > 3. We could mandate that the altering of two_phase state only be done after\r\n> disabling the subscription, just like how it is handled for failover option.\r\n> >\r\n> \r\n> makes sense.\r\n\r\nOK, this spec was added.\r\n\r\nAccording to above, I updated the patch with Ajin.\r\n0001 - extends ALTER SUBSCRIPTION statement. A tab-completion was added.\r\n0002 - mandates the subscription has been disabled. Since no need to change \r\n AtEOXact_ApplyLauncher(), the change is reverted.\r\n If no objections, this can be included to 0001.\r\n0003 - checks whether there are transactions prepared by the worker. If found,\r\n rejects the ALTER SUBSCRIPTION command.\r\n0004 - checks whether there are transactions prepared on publisher. The backend\r\n connects to the publisher and confirms it. If found, rejects the ALTER\r\n SUBSCRIPTION command.\r\n0005 - adds TAP test for it.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Mon, 15 Apr 2024 07:57:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Mon, Apr 15, 2024 at 1:28 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > Vitaly, does the minimal solution provided by the proposed patch\n> > (Allow to alter two_phase option of a subscriber provided no\n> > uncommitted\n> > prepared transactions are pending on that subscription.) address your use case?\n>\n> I think we do not have to handle cases which there are prepared transactions on\n> publisher/subscriber, as the first step. It leads additional complexity and we\n> do not have smarter solutions, especially for problem 2.\n> IIUC it meets the Vitaly's condition, right?\n>\n> > > 1. While toggling two_phase from true to false, we could probably get a list of\n> > prepared transactions for this subscriber id and rollback/abort the prepared\n> > transactions. This will allow the transactions to be re-applied like a normal\n> > transaction when the commit comes. Alternatively, if this isn't appropriate doing it\n> > in the ALTER SUBSCRIPTION context, we could store the xids of all prepared\n> > transactions of this subscription in a list and when the corresponding xid is being\n> > committed by the apply worker, prior to commit, we make sure the previously\n> > prepared transaction is rolled back. But this would add the overhead of checking\n> > this list every time a transaction is committed by the apply worker.\n> > >\n> >\n> > In the second solution, if you check at the time of commit whether\n> > there exists a prior prepared transaction then won't we end up\n> > applying the changes twice? I think we can first try to achieve it at\n> > the time of Alter Subscription because the other solution can add\n> > overhead at each commit?\n>\n> Yeah, at least the second solution might be problematic. I prototyped\n> the first one and worked well. However, to make the feature more consistent,\n> it is prohibit to exist prepared transactions on subscriber for now.\n> We can ease based on the requirement.\n>\n> > > 2. No solution yet.\n> > >\n> >\n> > One naive idea is that on the publisher we can remember whether the\n> > prepare has been sent and if so then only send commit_prepared,\n> > otherwise send the entire transaction. On the subscriber-side, we\n> > somehow, need to ensure before applying the first change whether the\n> > corresponding transaction is already prepared and if so then skip the\n> > changes and just perform the commit prepared. One drawback of this\n> > approach is that after restart, the prepare flag wouldn't be saved in\n> > the memory and we end up sending the entire transaction again. One way\n> > to avoid this overhead is that the publisher before sending the entire\n> > transaction checks with subscriber whether it has a prepared\n> > transaction corresponding to the current commit. I understand that\n> > this is not a good idea even if it works but I don't have any better\n> > ideas. What do you think?\n>\n> I considered but not sure it is good to add such mechanism. Your idea requires\n> additional wait-loop, which might lead bugs and unexpected behavior. 
And it may\n> degrade the performance based on the network environment.\n> As for the another solution (worker sends a list of prepared transactions), it\n> is also not so good because list of prepared transactions may be huge.\n>\n> Based on above, I think we can reject the case for now.\n>\n> FYI - We also considered the idea which walsender waits until all prepared transactions\n> are resolved before decoding and sending changes, but it did not work well\n> - the restarted walsender sent only COMMIT PREPARED record for transactions which\n> have been prepared before disabling the subscription. This happened because\n> 1) if the two_phase option of slots is false, the confirmed_flush can be ahead of\n> PREPARE record, and\n> 2) after the altering and restarting, start_decoding_at becomes same as\n> confirmed_flush and records behind this won't be decoded.\n>\n\nI don't understand the exact problem you are facing. IIUC, if the\ncommit is after start_decoding_at point and prepare was before it, we\nexpect to send the entire transaction followed by a commit record. The\nrestart_lsn should be before the start of such a transaction and we\nshould have recorded the changes in the reorder buffer.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 15 Apr 2024 15:16:53 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear All,\n\nOn Wednesday, April 10, 2024 17:16 MSK, Давыдов Виталий <[email protected]> wrote:\n Hi Amit, Ajin, All\n\nThank you for the patch and the responses. I apologize for my delayed answer due to some curcumstances.\nOn Wednesday, April 10, 2024 14:18 MSK, Amit Kapila <[email protected]> wrote:\n\nVitaly, does the minimal solution provided by the proposed patch (Allow to alter two_phase option of a subscriber provided no uncommitted prepared transactions are pending on that subscription.) address your use case?In general, the idea behind the patch seems to be suitable for my case. Furthermore, the case of two_phase switch from false to true with uncommitted pending prepared transactions probably never happens in my case. The switch from false to true means that the replica completes the catchup from the master and switches to the normal mode when it participates in the multi-node configuration. There should be no uncommitted pending prepared transactions at the moment of the switch to the normal mode.\n\nI'm going to try this patch. Give me please some time to investigate the patch. I will come with some feedback a little bit later.\nI looked at the patch and realized that I can't try it easily in the near future because the solution I'm working on is based on PG16 or earlier. This patch is not easily applicable to the older releases. I have to port my solution to the master, which is not done yet. I apologize for that - so much work should be done before applying the patch. BTW, I tested the idea with async 2PC commit on my side and it seems to work fine in my case. Anyway, I agree, the idea with altering of subscription seems the best one but much harder to implement.\n\nTo summarise my case of a synchronous multimaster where twophase is used to implement global transactions:\n * Replica may have prepared but not committed transactions when I toggle subscription twophase from true to false. In this case, all prepared transactions may be aborted on the replica before altering the subscription. * Replica will not have prepared transactions when subscription is toggled from false to true. In this scenario, the replica completes the catchup (with twophase=off) and becomes the part of the multi-nodal cluster and is ready to accept new 2PC transactions. All the new pending transactions will wait until replica responds. But it may work differently for some other solutions. In general, it would be great to allow toggling for all scenarious.Just interested, does anyone tried to reproduce the problem with slow catchup of twophase transactions (pgbench should be used with big number of clients)? I haven't seen any messages from anyone other that me that the problem takes place.\n\nThank you for your help!\n\nWith best regards,\nVitaly\n\n\n\n\n\n\n\n\n\n\n \n\nDear All,On Wednesday, April 10, 2024 17:16 MSK, Давыдов Виталий <[email protected]> wrote: Hi Amit, Ajin, AllThank you for the patch and the responses. I apologize for my delayed answer due to some curcumstances.On Wednesday, April 10, 2024 14:18 MSK, Amit Kapila <[email protected]> wrote:Vitaly, does the minimal solution provided by the proposed patch (Allow to alter two_phase option of a subscriber provided no uncommitted prepared transactions are pending on that subscription.) address your use case?In general, the idea behind the patch seems to be suitable for my case. Furthermore, the case of two_phase switch from false to true with uncommitted pending prepared transactions probably never happens in my case. 
The switch from false to true means that the replica completes the catchup from the master and switches to the normal mode when it participates in the multi-node configuration. There should be no uncommitted pending prepared transactions at the moment of the switch to the normal mode.I'm going to try this patch. Give me please some time to investigate the patch. I will come with some feedback a little bit later.I looked at the patch and realized that I can't try it easily in the near future because the solution I'm working on is based on PG16 or earlier. This patch is not easily applicable to the older releases. I have to port my solution to the master, which is not done yet. I apologize for that - so much work should be done before applying the patch. BTW, I tested the idea with async 2PC commit on my side and it seems to work fine in my case. Anyway, I agree, the idea with altering of subscription seems the best one but much harder to implement.To summarise my case of a synchronous multimaster where twophase is used to implement global transactions:Replica may have prepared but not committed transactions when I toggle subscription twophase from true to false. In this case, all prepared transactions may be aborted on the replica before altering the subscription.Replica will not have prepared transactions when subscription is toggled from false to true. In this scenario, the replica completes the catchup (with twophase=off) and becomes the part of the multi-nodal cluster and is ready to accept new 2PC transactions. All the new pending transactions will wait until replica responds. But it may work differently for some other solutions. In general, it would be great to allow toggling for all scenarious.Just interested, does anyone tried to reproduce the problem with slow catchup of twophase transactions (pgbench should be used with big number of clients)? I haven't seen any messages from anyone other that me that the problem takes place.Thank you for your help!With best regards,Vitaly",
"msg_date": "Mon, 15 Apr 2024 18:31:55 +0300",
"msg_from": "\n =?utf-8?q?=D0=94=D0=B0=D0=B2=D1=8B=D0=B4=D0=BE=D0=B2_=D0=92=D0=B8=D1=82=D0=B0=D0=BB=D0=B8=D0=B9?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= Slow catchup of 2PC (twophase) transactions on\n replica in\n LR"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> > FYI - We also considered the idea which walsender waits until all prepared\r\n> transactions\r\n> > are resolved before decoding and sending changes, but it did not work well\r\n> > - the restarted walsender sent only COMMIT PREPARED record for\r\n> transactions which\r\n> > have been prepared before disabling the subscription. This happened because\r\n> > 1) if the two_phase option of slots is false, the confirmed_flush can be ahead of\r\n> > PREPARE record, and\r\n> > 2) after the altering and restarting, start_decoding_at becomes same as\r\n> > confirmed_flush and records behind this won't be decoded.\r\n> >\r\n> \r\n> I don't understand the exact problem you are facing. IIUC, if the\r\n> commit is after start_decoding_at point and prepare was before it, we\r\n> expect to send the entire transaction followed by a commit record. The\r\n> restart_lsn should be before the start of such a transaction and we\r\n> should have recorded the changes in the reorder buffer.\r\n\r\nThis behavior is right for two_phase = false case. But if the parameter is\r\naltered between PREPARE and COMMIT PREPARED, there is a possibility that only\r\nCOMMIT PREPARED is sent. As the first place, the executed workload is below.\r\n\r\n1. created a subscription with (two_phase = false)\r\n2. prepared a transaction on publisher\r\n3. disabled the subscription once\r\n4. altered the subscription to two_phase = true\r\n5. enabled the subscription again\r\n6. did COMMIT PREPARED on the publisher\r\n\r\n-> Apply worker would raise an ERROR while applying COMMIT PREPARED record:\r\nERROR: prepared transaction with identifier \"pg_gid_XXX_YYY\" does not exist\r\n\r\nBelow part describes why the ERROR occurred.\r\n\r\n======\r\n\r\n### Regarding 1) the confirmed_flush can be ahead of PREPARE record,\r\n\r\nIf two_phase is off, as you might know, confirmed_flush can be ahead of PREPARE\r\nrecord by keepalive mechanism.\r\n\r\nWalsender sometimes sends a keepalive message in WalSndKeepalive(). Here the LSN\r\nis written, which is lastly decoded record. Since the PREPARE record is skipped\r\n(just handled by ReorderBufferProcessXid()), sometimes the written LSN in the\r\nmessage can be ahead of PREPARE record. If the WAL records are aligned like below,\r\nthe LSN can point CHECKPOINT_ONLINE.\r\n\r\n...\r\nINSERT\r\nPREPARE txn1\r\nCHECKPOINT_ONLINE\r\n...\r\n\r\nOn worker side, when it receives the keepalive, it compares the LSN in the\r\nmessage and lastly received LSN, and advance last_received. Then, the worker replies\r\nto the walsender, and at that time it replies that last_recevied record has been\r\nflushed on the subscriber. See send_feedback().\r\n \r\nOn publisher, when the walsender receives the message from subscriber, it reads\r\nthe message and advance the confirmed_flush to the written value. If the walsender\r\nsends LSN which locates ahead PREPARE, the confirmed flush is updated as well.\r\n\r\n### Regarding 2) after the altering, records behind the confirmed_flush are not decoded\r\n\r\nThen, at decoding phase. The snapshot builder determines the point where decoding\r\nis resumed, as start_decoding_at. After the restart, the value is same as\r\nconfirmed_flush of the slot. Since the confiremed_fluish is ahead of PREPARE,\r\nthe start_decoding_at becomes ahead as well, so whole of prepared transactions\r\nare not decoded.\r\n\r\n======\r\n\r\nAttached zip file contains the PoC and used script. 
You can refer to it to see what I actually did.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Tue, 16 Apr 2024 02:18:29 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Tue, Apr 16, 2024 at 7:48 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > > FYI - We also considered the idea which walsender waits until all prepared\n> > transactions\n> > > are resolved before decoding and sending changes, but it did not work well\n> > > - the restarted walsender sent only COMMIT PREPARED record for\n> > transactions which\n> > > have been prepared before disabling the subscription. This happened because\n> > > 1) if the two_phase option of slots is false, the confirmed_flush can be ahead of\n> > > PREPARE record, and\n> > > 2) after the altering and restarting, start_decoding_at becomes same as\n> > > confirmed_flush and records behind this won't be decoded.\n> > >\n> >\n> > I don't understand the exact problem you are facing. IIUC, if the\n> > commit is after start_decoding_at point and prepare was before it, we\n> > expect to send the entire transaction followed by a commit record. The\n> > restart_lsn should be before the start of such a transaction and we\n> > should have recorded the changes in the reorder buffer.\n>\n> This behavior is right for two_phase = false case. But if the parameter is\n> altered between PREPARE and COMMIT PREPARED, there is a possibility that only\n> COMMIT PREPARED is sent.\n>\n\nCan you please once consider the idea shared by me at [1] (One naive\nidea is that on the publisher .....) to solve this problem?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1K1fSkeK%3Dkc26G5cq87vQG4%3D1qs_b%2Bno4%2Bep654SeBy1w%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 16 Apr 2024 11:55:11 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Tue, Apr 16, 2024 at 1:31 AM Давыдов Виталий <[email protected]>\nwrote:\n\n> Dear All,\n> Just interested, does anyone tried to reproduce the problem with slow\n> catchup of twophase transactions (pgbench should be used with big number of\n> clients)? I haven't seen any messages from anyone other that me that the\n> problem takes place.\n>\n>\n>\n\n Yes, I was able to reproduce the slow catchup of twophase transactions\nwith pgbench with 20 clients.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Tue, Apr 16, 2024 at 1:31 AM Давыдов Виталий <[email protected]> wrote:Dear All,Just interested, does anyone tried to reproduce the problem with slow catchup of twophase transactions (pgbench should be used with big number of clients)? I haven't seen any messages from anyone other that me that the problem takes place. Yes, I was able to reproduce the slow catchup of twophase transactions with pgbench with 20 clients.regards,Ajin CherianFujitsu Australia",
"msg_date": "Tue, 16 Apr 2024 18:12:11 +1000",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Tue, Apr 16, 2024 at 4:25 PM Amit Kapila <[email protected]> wrote:\n\n>\n> >\n>\n> Can you please once consider the idea shared by me at [1] (One naive\n> idea is that on the publisher .....) to solve this problem?\n>\n> [1] -\n> https://www.postgresql.org/message-id/CAA4eK1K1fSkeK%3Dkc26G5cq87vQG4%3D1qs_b%2Bno4%2Bep654SeBy1w%40mail.gmail.com\n>\n>\n>\nExpanding on Amit's idea, we found out that there is already a mechanism in\ncode to fully decode prepared transactions prior to a defined LSN where\ntwo_phase is enabled using the \"two_phase_at\" LSN in the slot. Look at\nReorderBufferFinishPrepared() on how this is done. This code was not\nworking as expected in our patch because\nwe were setting two_phase on the slot to true as soon as the alter command\nwas received. This was not the correct way, initially when two_phase is\nenabled, the two_phase changes to pending state and two_phase option on the\nslot should only be set to true when two_phase moves from pending to\nenabled. This will happen once the replication is restarted with two_phase\noption. Look at code in CreateDecodingContext() on how \"two_phase_at\" is\nset in the slot when done this way. So we changed the code to not remotely\nalter two_phase when toggling from false to true. With this change, now\neven if there are pending transactions on the publisher when toggling\ntwo_phase from false to true, these pending transactions will be fully\ndecoded and sent once the commit prepared is decoded as the pending\nprepared transactions are prior to the \"two_phase_at\" LSN. With this patch,\nnow we are able to handle both pending prepared transactions when altering\ntwo_phase from true to false as well as false to true.\n\nAttaching the patch for your review and comments. Big thanks to Kuroda-san\nfor also working on the patch.\n\nregards,\nAjin Cherian\nFujitsu Australia.",
"msg_date": "Thu, 18 Apr 2024 16:26:22 +1000",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
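The rule described in the mail above boils down to a single LSN comparison. The sketch below is a toy, self-contained rendering of it; lsn_t and the function name are illustrative stand-ins, not the reorderbuffer API (where the cut-off is recorded in the slot's two_phase_at and consulted around ReorderBufferFinishPrepared()).

```c
#include <stdio.h>
#include <stdint.h>

typedef uint64_t lsn_t;			/* illustrative stand-in for XLogRecPtr */

/*
 * What has to be sent when COMMIT PREPARED is decoded, depending on whether
 * the PREPARE happened before or after the point at which two_phase became
 * effective for the slot.
 */
static const char *
action_at_commit_prepared(lsn_t prepare_end_lsn, lsn_t two_phase_at)
{
	if (prepare_end_lsn < two_phase_at)
		return "PREPARE was skipped: send the whole transaction, then the commit";
	return "PREPARE was already sent: send only COMMIT PREPARED";
}

int
main(void)
{
	lsn_t		two_phase_at = 1000;	/* where two_phase became effective */

	printf("prepared at  900: %s\n", action_at_commit_prepared(900, two_phase_at));
	printf("prepared at 1100: %s\n", action_at_commit_prepared(1100, two_phase_at));
	return 0;
}
```

This is why not remotely altering the slot on a false-to-true toggle works: transactions prepared before two_phase_at simply fall into the first branch when their COMMIT PREPARED arrives.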
{
"msg_contents": "On Thu, Apr 18, 2024 at 4:26 PM Ajin Cherian <[email protected]> wrote:\n\n>\n> Attaching the patch for your review and comments. Big thanks to Kuroda-san\n> for also working on the patch.\n>\n>\nLooking at this a bit more, maybe rolling back all prepared transactions on\nthe subscriber when toggling two_phase from true to false might not be\ndesirable for the customer. Maybe we should have an option for customers to\ncontrol whether transactions should be rolled back or not. Maybe\ntransactions should only be rolled back if a \"force\" option is also set.\nWhat do people think?\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Thu, Apr 18, 2024 at 4:26 PM Ajin Cherian <[email protected]> wrote:Attaching the patch for your review and comments. Big thanks to Kuroda-san for also working on the patch.Looking at this a bit more, maybe rolling back all prepared transactions on the subscriber when toggling two_phase from true to false might not be desirable for the customer. Maybe we should have an option for customers to control whether transactions should be rolled back or not. Maybe transactions should only be rolled back if a \"force\" option is also set.What do people think?regards,Ajin CherianFujitsu Australia",
"msg_date": "Mon, 22 Apr 2024 17:34:58 +1000",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Vitaly,\r\n\r\n> I looked at the patch and realized that I can't try it easily in the near future\r\n> because the solution I'm working on is based on PG16 or earlier. This patch is\r\n> not easily applicable to the older releases. I have to port my solution to the\r\n> master, which is not done yet.\r\n\r\nWe also tried to port our patch for PG16, but the largest barrier was that a\r\nreplication command ALTER_SLOT is not supported. Since the slot option two_phase\r\ncan't be modified, it is difficult to skip decoding PREPARE command even when\r\naltering the option from true to false.\r\nIIUC, Adding a new feature (e.g., replication command) for minor updates is generally\r\nprohibited\r\n\r\nWe must consider another approach for backpatching, but we do not have solutions\r\nfor now.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/global/ \r\n\r\n \r\n",
"msg_date": "Mon, 22 Apr 2024 08:54:48 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear hackers,\r\n\r\n> Looking at this a bit more, maybe rolling back all prepared transactions on the\r\n> subscriber when toggling two_phase from true to false might not be desirable\r\n> for the customer. Maybe we should have an option for customers to control\r\n> whether transactions should be rolled back or not. Maybe transactions should\r\n> only be rolled back if a \"force\" option is also set. What do people think?\r\n\r\nAnd here is a patch for adds new option \"force_alter\" (better name is very welcome).\r\nIt could be used only when altering two_phase option. Let me share examples.\r\n\r\nAssuming that there are logical replication system with two_phase = on, and\r\nthere are prepared transactions:\r\n\r\n```\r\nsubscriber=# SELECT * FROM pg_prepared_xacts ;\r\n transaction | gid | prepared | owner | database \r\n-------------+------------------+-------------------------------+----------+----------\r\n 741 | pg_gid_16390_741 | 2024-04-22 08:02:34.727913+00 | postgres | postgres\r\n 742 | pg_gid_16390_742 | 2024-04-22 08:02:34.729486+00 | postgres | postgres\r\n(2 rows)\r\n```\r\n\r\nAt that time, altering two_phase to false alone will be failed:\r\n\r\n```\r\nsubscriber=# ALTER SUBSCRIPTION sub DISABLE ;\r\nALTER SUBSCRIPTION\r\nsubscriber=# ALTER SUBSCRIPTION sub SET (two_phase = off);\r\nERROR: cannot alter two_phase = false when there are prepared transactions\r\n```\r\n\r\nIt succeeds if force_alter is also expressly set. Prepared transactions will be\r\naborted at that time.\r\n\r\n```\r\nsubscriber=# ALTER SUBSCRIPTION sub SET (two_phase = off, force_alter = on);\r\nALTER SUBSCRIPTION\r\nsubscriber=# SELECT * FROM pg_prepared_xacts ;\r\n transaction | gid | prepared | owner | database \r\n-------------+-----+----------+-------+----------\r\n(0 rows)\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/global/",
"msg_date": "Mon, 22 Apr 2024 08:56:42 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
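The behaviour proposed above amounts to a small decision table. The following self-contained sketch spells it out; force_alter is the option name proposed in the mail and may still be renamed.

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Decision table for ALTER SUBSCRIPTION ... SET (two_phase = off) as
 * proposed above: prepared transactions block the change unless
 * force_alter is also given, in which case they are aborted first.
 */
static const char *
alter_two_phase_off(bool has_prepared_xacts, bool force_alter)
{
	if (!has_prepared_xacts)
		return "switch two_phase off";
	if (force_alter)
		return "abort this subscription's prepared transactions, then switch off";
	return "ERROR: cannot alter two_phase = false when there are prepared transactions";
}

int
main(void)
{
	printf("prepared + force_alter=off: %s\n", alter_two_phase_off(true, false));
	printf("prepared + force_alter=on:  %s\n", alter_two_phase_off(true, true));
	printf("no prepared transactions:   %s\n", alter_two_phase_off(false, false));
	return 0;
}
```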
{
"msg_contents": "> Dear Vitaly,\r\n> \r\n> > I looked at the patch and realized that I can't try it easily in the near future\r\n> > because the solution I'm working on is based on PG16 or earlier. This patch is\r\n> > not easily applicable to the older releases. I have to port my solution to the\r\n> > master, which is not done yet.\r\n> \r\n> We also tried to port our patch for PG16, but the largest barrier was that a\r\n> replication command ALTER_SLOT is not supported. Since the slot option\r\n> two_phase\r\n> can't be modified, it is difficult to skip decoding PREPARE command even when\r\n> altering the option from true to false.\r\n> IIUC, Adding a new feature (e.g., replication command) for minor updates is\r\n> generally\r\n> prohibited\r\n> \r\n> We must consider another approach for backpatching, but we do not have\r\n> solutions\r\n> for now.\r\n\r\nAttached patch set is a ported version for PG16, which breaks ABI. This can be used\r\nfor testing purpose, but it won't be pushed to REL_16_STABLE.\r\nAt least, this patchset can pass my github CI.\r\n\r\nCan you apply and check whether your issue is solved?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Mon, 22 Apr 2024 12:54:10 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Hayato,\n\nOn Monday, April 22, 2024 15:54 MSK, \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote:\n > Dear Vitaly,\n>\n> > I looked at the patch and realized that I can't try it easily in the near future\n> > because the solution I'm working on is based on PG16 or earlier. This patch is\n> > not easily applicable to the older releases. I have to port my solution to the\n> > master, which is not done yet.\n>\n> We also tried to port our patch for PG16, but the largest barrier was that a\n> replication command ALTER_SLOT is not supported. Since the slot option\n> two_phase\n> can't be modified, it is difficult to skip decoding PREPARE command even when\n> altering the option from true to false.\n> IIUC, Adding a new feature (e.g., replication command) for minor updates is\n> generally\n> prohibited\n>\n...\n\nAttached patch set is a ported version for PG16, which breaks ABI. This can be used\nfor testing purpose, but it won't be pushed to REL_16_STABLE.\nAt least, this patchset can pass my github CI.\n\nCan you apply and check whether your issue is solved?It is fantastic. Thank you for your help! I will definitely try your patch. I need some time to test and incorporate it. I also plan to port my stuff to the master branch to simplify testing of patches.\n\nWith best regards,\nVitaly Davydov\n\n \n\nDear Hayato,On Monday, April 22, 2024 15:54 MSK, \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote: > Dear Vitaly,>> > I looked at the patch and realized that I can't try it easily in the near future> > because the solution I'm working on is based on PG16 or earlier. This patch is> > not easily applicable to the older releases. I have to port my solution to the> > master, which is not done yet.>> We also tried to port our patch for PG16, but the largest barrier was that a> replication command ALTER_SLOT is not supported. Since the slot option> two_phase> can't be modified, it is difficult to skip decoding PREPARE command even when> altering the option from true to false.> IIUC, Adding a new feature (e.g., replication command) for minor updates is> generally> prohibited>...Attached patch set is a ported version for PG16, which breaks ABI. This can be usedfor testing purpose, but it won't be pushed to REL_16_STABLE.At least, this patchset can pass my github CI.Can you apply and check whether your issue is solved?It is fantastic. Thank you for your help! I will definitely try your patch. I need some time to test and incorporate it. I also plan to port my stuff to the master branch to simplify testing of patches.With best regards,Vitaly Davydov",
"msg_date": "Mon, 22 Apr 2024 17:22:14 +0300",
"msg_from": "\"Vitaly Davydov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?utf-8?q?RE=3A?= Slow catchup of 2PC (twophase) transactions on\n replica in\n LR"
},
{
"msg_contents": "Dear hackers,\r\n\r\nPer recent commit (b29cbd3da), our patch needed to be rebased.\r\nHere is an updated version.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Tue, 23 Apr 2024 12:15:32 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi, Here are some review comments for patch v6-0001\n\n======\nCommit message\n\n1.\nThis patch allows user to alter two_phase option\n\n/allows user/allows the user/\n\n/to alter two_phase option/to alter the 'two_phase' option/\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n2.\n<literal>two_phase</literal> can be altered only for disabled subscription.\n\nSUGGEST\nThe <literal>two_phase</literal> parameter can only be altered when\nthe subscription is disabled.\n\n======\nsrc/backend/access/transam/twophase.c\n\n3. checkGid\n+\n+/*\n+ * checkGid\n+ */\n+static bool\n+checkGid(char *gid, Oid subid)\n+{\n+ int ret;\n+ Oid subid_written,\n+ xid;\n+\n+ ret = sscanf(gid, \"pg_gid_%u_%u\", &subid_written, &xid);\n+\n+ if (ret != 2 || subid != subid_written)\n+ return false;\n+\n+ return true;\n+}\n\n3a.\nThe function comment should give more explanation of what it does. I\nthink this function is the counterpart of the TwoPhaseTransactionGid()\nfunction of worker.c so the comment can say that too.\n\n~\n\n3b.\nIndeed, perhaps the function name should be similar to\nTwoPhaseTransactionGid. e.g. call it like\nIsTwoPhaseTransactionGidForSubid?\n\n~\n\n3c.\nProbably 'xid' should be TransactionId instead of Oid.\n\n~\n\n3d.\nWhy not have a single return?\n\nSUGGESTION\nreturn (ret == 2 && subid = subid_written);\n\n~\n\n3e.\nI am wondering if the existing TwoPhaseTransactionGid function\ncurrently in worker.c should be moved here because IMO these 2\nfunctions belong together and twophase.c seems the right place to put\nthem.\n\n~~~\n\n4.\n+LookupGXactBySubid(Oid subid)\n+{\n+ bool found = false;\n+\n+ LWLockAcquire(TwoPhaseStateLock, LW_SHARED);\n+ for (int i = 0; i < TwoPhaseState->numPrepXacts; i++)\n+ {\n+ GlobalTransaction gxact = TwoPhaseState->prepXacts[i];\n+\n+ /* Ignore not-yet-valid GIDs. */\n+ if (gxact->valid && checkGid(gxact->gid, subid))\n+ {\n+ found = true;\n+ break;\n+ }\n+ }\n+ LWLockRelease(TwoPhaseStateLock);\n+ return found;\n+}\n\nAFAIK the gxact also has the 'xid' available, so won't it be better to\npass BOTH the 'xid' and 'subid' to the checkGid so you can do a full\ncomparison instead of comparing only the subid part of the gid?\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n5. AlterSubscription\n\n+ /* XXX */\n+ if (IsSet(opts.specified_opts, SUBOPT_TWOPHASE_COMMIT))\n+ {\n\nThe \"XXX\" comment looks like it is meant to say something more...\n\n~~~\n\n6.\n+ /*\n+ * two_phase can be only changed for disabled\n+ * subscriptions\n+ */\n+ if (form->subenabled)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot set %s for enabled subscription\",\n+ \"two_phase\")));\n\n6a.\nShould this have a more comprehensive comment giving the reason like\nthe 'failover' option has?\n\n~~~\n\n6b.\nMaybe this should include a \"translator\" comment to say don't\ntranslate the option name.\n\n~~~\n\n7.\n+ /* Check whether the number of prepared transactions */\n+ if (!opts.twophase &&\n+ form->subtwophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED &&\n+ LookupGXactBySubid(subid))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot disable two_phase when uncommitted prepared\ntransactions present\")));\n+\n\n7a.\nThe first comment seems to be an incomplete sentence. 
I think it\nshould say something a bit like:\ntwo_phase cannot be disabled if there are any uncommitted prepared\ntransactions present.\n\n~\n\n7b.\nAlso, if ereport occurs what is the user supposed to do about it?\nShouldn't the ereport include some errhint with appropriate advice?\n\n~~~\n\n8.\n+ /*\n+ * The changed failover option of the slot can't be rolled\n+ * back.\n+ */\n+ PreventInTransactionBlock(isTopLevel, \"ALTER SUBSCRIPTION ... SET\n(two_phase)\");\n+\n+ /* Change system catalog acoordingly */\n+ values[Anum_pg_subscription_subtwophasestate - 1] =\n+ CharGetDatum(opts.twophase ?\n+ LOGICALREP_TWOPHASE_STATE_PENDING :\n+ LOGICALREP_TWOPHASE_STATE_DISABLED);\n+ replaces[Anum_pg_subscription_subtwophasestate - 1] = true;\n+ }\n\nTypo I think: /failover option/two_phase option/\n\n======\n.../libpqwalreceiver/libpqwalreceiver.c\n\n9.\n static void\n libpqrcv_alter_slot(WalReceiverConn *conn, const char *slotname,\n- bool failover)\n+ bool two_phase, bool failover)\n\nSame comment as mentioned elsewhere (#15), IMO the new 'two_phase'\nparameter should be last.\n\n======\nsrc/backend/replication/logical/launcher.c\n\n10.\n+/*\n+ * Stop all the subscription workers.\n+ */\n+void\n+logicalrep_workers_stop(Oid subid)\n+{\n+ List *subworkers;\n+ ListCell *lc;\n+\n+ LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n+ subworkers = logicalrep_workers_find(subid, false);\n+ LWLockRelease(LogicalRepWorkerLock);\n+ foreach(lc, subworkers)\n+ {\n+ LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);\n+\n+ logicalrep_worker_stop(w->subid, w->relid);\n+ }\n+ list_free(subworkers);\n+}\n\nI was confused by the logicalrep_workers_find(subid, false). IIUC the\n'false' means everything (instead of 'only_running') but then I don't\nknow why we want to \"stop\" anything that is NOT running. OTOH I see\nthat this code was extracted from where it was previously inlined in\nsubscriptioncmds.c, so maybe the 'false' is necessary for another\nreason? At least maybe some explanatory comment is needed for why you\nare passing this flag as false?\n\n======\nsrc/backend/replication/logical/worker.c\n\n11.\n- /* two-phase should not be altered */\n+ /* two-phase should not be altered while the worker exists */\n Assert(newsub->twophasestate == MySubscription->twophasestate);\n/should not/cannot/\n\n======\nsrc/backend/replication/slot.c\n\n12.\n void\n-ReplicationSlotAlter(const char *name, bool failover)\n+ReplicationSlotAlter(const char *name, bool two_phase, bool failover)\n\nSame comment as mentioned elsewhere (#15), IMO the new 'two_phase'\nparameter should be last.\n\n~~~\n\n13.\n+ if (MyReplicationSlot->data.two_phase != two_phase)\n+ {\n+ SpinLockAcquire(&MyReplicationSlot->mutex);\n+ MyReplicationSlot->data.two_phase = two_phase;\n+ SpinLockRelease(&MyReplicationSlot->mutex);\n+\n+ update_slot = true;\n+ }\n+\n+\n if (MyReplicationSlot->data.failover != failover)\n {\n SpinLockAcquire(&MyReplicationSlot->mutex);\n MyReplicationSlot->data.failover = failover;\n SpinLockRelease(&MyReplicationSlot->mutex);\n\n+ update_slot = true;\n+ }\n\n13a.\nDoesn't it make more sense for the whole check/set to be \"atomic\",\ni.e. 
put the mutex also around the check?\n\nSUGGEST\nSpinLockAcquire(&MyReplicationSlot->mutex);\nif (MyReplicationSlot->data.two_phase != two_phase)\n{\n MyReplicationSlot->data.two_phase = two_phase;\n update_slot = true;\n}\nSpinLockRelease(&MyReplicationSlot->mutex);\n\n~\n\nAlso, (if you agree with the above) why not include both checks\n(two_phase and failover) within the same mutex instead of\nacquiring/releasing it twice:\n\nSUGGEST\nSpinLockAcquire(&MyReplicationSlot->mutex);\nif (MyReplicationSlot->data.two_phase != two_phase)\n{\n MyReplicationSlot->data.two_phase = two_phase;\n update_slot = true;\n}\nif (MyReplicationSlot->data.failover != failover)\n{\n MyReplicationSlot->data.failover = failover;\n update_slot = true;\n}\nSpinLockAcquire(&MyReplicationSlot->mutex);\n\n~~~\n\n13b.\nThere are double blank lines after the first if-block.\n\n======\nsrc/backend/replication/walsender.c\n\n14.\n static void\n-ParseAlterReplSlotOptions(AlterReplicationSlotCmd *cmd, bool *failover)\n+ParseAlterReplSlotOptions(AlterReplicationSlotCmd *cmd,\n+ bool *two_phase, bool *failover)\n\nSame comment as mentioned elsewhere (#15), IMO the new 'two_phase'\nparameter should be last.\n\n======\nsrc/include/replication/walreceiver.h\n\n15.\n typedef void (*walrcv_alter_slot_fn) (WalReceiverConn *conn,\n const char *slotname,\n+ bool two_phase,\n bool failover);\n\nSomehow, I feel it is more normal to add the new code (the 'two_phase'\nparameter) at the END, instead of into the middle of the existing\nparameters. It also keeps it alphabetical which makes it consistent\nwith other places like the tab-completion code.\n\nThis comment about swapping the order (putting new stuff last) will\npropagate changes to lots of other related places. I refer to this\ncomment in a few other places in this post but there are probably more\nthe same that I missed.\n\n======\nsrc/test/regress/sql/subscription.sql\n\n16.\nI know you do this already in the TAP test, but doesn't the test case\nto demonstrate that 'two-phase' option can be altered when the\nsubscription is disabled actually belong here in the regression\ninstead?\n\n======\nsrc/test/subscription/t/021_twophase.pl\n\n17.\n+# Disable the subscription and alter it to two_phase = false,\n+# verify that the altered subscription reflects the two_phase option.\n\n/verify/then verify/\n\n~~~\n\n18.\n+# Now do a prepare on publisher and make sure that it is not replicated.\n+$node_subscriber->safe_psql('postgres', \"DROP SUBSCRIPTION tap_sub\");\n+$node_publisher->safe_psql(\n+ 'postgres', qq{\n+ BEGIN;\n+ INSERT INTO tab_copy VALUES (100);\n+ PREPARE TRANSACTION 'newgid';\n+ });\n+\n\n18a.\n/on publisher/on the publisher/\n\n18b.\nWhat is that \"DROP SUBSCRIPTION tap_sub\" doing here? 
It seems\nmisplaced under this comment.\n\n~~~\n\n19.\n+# Make sure that there is 0 prepared transaction on the subscriber\n+$result = $node_subscriber->safe_psql('postgres',\n+ \"SELECT count(*) FROM pg_prepared_xacts;\");\n+is($result, qq(0), 'transaction is prepared on subscriber');\n\n19a.\nSUGGESTION\nMake sure there are no prepared transactions on the subscriber\n\n~~~\n\n19b.\n/'transaction is prepared on subscriber'/'should be no prepared\ntransactions on subscriber'/\n\n~~~\n\n20.\n+# Made sure that the commited transaction is replicated.\n\n/Made sure/Make sure/\n\n/commited/committed/\n\n~~~\n\n21.\n+# Make sure that the two-phase is enabled on the subscriber\n+$result = $node_subscriber->safe_psql('postgres',\n+ \"SELECT subtwophasestate FROM pg_subscription WHERE subname = 'tap_sub_copy';\"\n+);\n+is($result, qq(e), 'two-phase is disabled');\n\nThe 'two-phase is disabled' is the identical message used in the\nopposite case earlier, so something is amiss. Maybe this one should\nsay 'two-phase should be enabled' and the earlier counterpart should\nsay 'two-phase should be disabled'.\n\n======\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 8 May 2024 10:07:34 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! Here are updated patches.\r\nI updated patches only for HEAD.\r\n\r\n> ======\r\n> Commit message\r\n> \r\n> 1.\r\n> This patch allows user to alter two_phase option\r\n> \r\n> /allows user/allows the user/\r\n> \r\n> /to alter two_phase option/to alter the 'two_phase' option/\r\n\r\nFixed.\r\n\r\n> ======\r\n> doc/src/sgml/ref/alter_subscription.sgml\r\n> \r\n> 2.\r\n> <literal>two_phase</literal> can be altered only for disabled subscription.\r\n> \r\n> SUGGEST\r\n> The <literal>two_phase</literal> parameter can only be altered when\r\n> the subscription is disabled.\r\n\r\nFixed.\r\n\r\n> ======\r\n> src/backend/access/transam/twophase.c\r\n> \r\n> 3. checkGid\r\n> +\r\n> +/*\r\n> + * checkGid\r\n> + */\r\n> +static bool\r\n> +checkGid(char *gid, Oid subid)\r\n> +{\r\n> + int ret;\r\n> + Oid subid_written,\r\n> + xid;\r\n> +\r\n> + ret = sscanf(gid, \"pg_gid_%u_%u\", &subid_written, &xid);\r\n> +\r\n> + if (ret != 2 || subid != subid_written)\r\n> + return false;\r\n> +\r\n> + return true;\r\n> +}\r\n> \r\n> 3a.\r\n> The function comment should give more explanation of what it does. I\r\n> think this function is the counterpart of the TwoPhaseTransactionGid()\r\n> function of worker.c so the comment can say that too.\r\n\r\nComments were updated.\r\n\r\n> 3b.\r\n> Indeed, perhaps the function name should be similar to\r\n> TwoPhaseTransactionGid. e.g. call it like\r\n> IsTwoPhaseTransactionGidForSubid?\r\n\r\nReplaced to IsTwoPhaseTransactionGidForSubid().\r\n\r\n> 3c.\r\n> Probably 'xid' should be TransactionId instead of Oid.\r\n\r\nRight, fixed.\r\n\r\n> 3d.\r\n> Why not have a single return?\r\n> \r\n> SUGGESTION\r\n> return (ret == 2 && subid = subid_written);\r\n\r\nFixed.\r\n\r\n> 3e.\r\n> I am wondering if the existing TwoPhaseTransactionGid function\r\n> currently in worker.c should be moved here because IMO these 2\r\n> functions belong together and twophase.c seems the right place to put\r\n> them.\r\n\r\n+1, moved.\r\n\r\n> ~~~\r\n> \r\n> 4.\r\n> +LookupGXactBySubid(Oid subid)\r\n> +{\r\n> + bool found = false;\r\n> +\r\n> + LWLockAcquire(TwoPhaseStateLock, LW_SHARED);\r\n> + for (int i = 0; i < TwoPhaseState->numPrepXacts; i++)\r\n> + {\r\n> + GlobalTransaction gxact = TwoPhaseState->prepXacts[i];\r\n> +\r\n> + /* Ignore not-yet-valid GIDs. */\r\n> + if (gxact->valid && checkGid(gxact->gid, subid))\r\n> + {\r\n> + found = true;\r\n> + break;\r\n> + }\r\n> + }\r\n> + LWLockRelease(TwoPhaseStateLock);\r\n> + return found;\r\n> +}\r\n> \r\n> AFAIK the gxact also has the 'xid' available, so won't it be better to\r\n> pass BOTH the 'xid' and 'subid' to the checkGid so you can do a full\r\n> comparison instead of comparing only the subid part of the gid?\r\n\r\nIIUC, the xid written in the gxact means the transaction id on the subscriber,\r\nbut formatted GID has xid on the publisher. So the value cannot be used.\r\n\r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 5. 
AlterSubscription\r\n> \r\n> + /* XXX */\r\n> + if (IsSet(opts.specified_opts, SUBOPT_TWOPHASE_COMMIT))\r\n> + {\r\n> \r\n> The \"XXX\" comment looks like it is meant to say something more...\r\n\r\nThis flag was used only for me, removed.\r\n\r\n> ~~~\r\n> \r\n> 6.\r\n> + /*\r\n> + * two_phase can be only changed for disabled\r\n> + * subscriptions\r\n> + */\r\n> + if (form->subenabled)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"cannot set %s for enabled subscription\",\r\n> + \"two_phase\")));\r\n> \r\n> 6a.\r\n> Should this have a more comprehensive comment giving the reason like\r\n> the 'failover' option has?\r\n\r\nModified, but it is almost the same as failover's one.\r\n\r\n> 6b.\r\n> Maybe this should include a \"translator\" comment to say don't\r\n> translate the option name.\r\n\r\nHmm, but other parts in AlterSubscription() does not have.\r\nFor now, I kept current style.\r\n\r\n> ~~~\r\n> \r\n> 7.\r\n> + /* Check whether the number of prepared transactions */\r\n> + if (!opts.twophase &&\r\n> + form->subtwophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED\r\n> &&\r\n> + LookupGXactBySubid(subid))\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"cannot disable two_phase when uncommitted prepared\r\n> transactions present\")));\r\n> +\r\n> \r\n> 7a.\r\n> The first comment seems to be an incomplete sentence. I think it\r\n> should say something a bit like:\r\n> two_phase cannot be disabled if there are any uncommitted prepared\r\n> transactions present.\r\n\r\nModified, but this part would be replaced by upcoming patches.\r\n\r\n> 7b.\r\n> Also, if ereport occurs what is the user supposed to do about it?\r\n> Shouldn't the ereport include some errhint with appropriate advice?\r\n\r\nThe hint was added, but this part would be replaced by upcoming patches.\r\n\r\n> ~~~\r\n> \r\n> 8.\r\n> + /*\r\n> + * The changed failover option of the slot can't be rolled\r\n> + * back.\r\n> + */\r\n> + PreventInTransactionBlock(isTopLevel, \"ALTER SUBSCRIPTION ... SET\r\n> (two_phase)\");\r\n> +\r\n> + /* Change system catalog acoordingly */\r\n> + values[Anum_pg_subscription_subtwophasestate - 1] =\r\n> + CharGetDatum(opts.twophase ?\r\n> + LOGICALREP_TWOPHASE_STATE_PENDING :\r\n> + LOGICALREP_TWOPHASE_STATE_DISABLED);\r\n> + replaces[Anum_pg_subscription_subtwophasestate - 1] = true;\r\n> + }\r\n> \r\n> Typo I think: /failover option/two_phase option/\r\n\r\nRight, fixed.\r\n\r\n> ======\r\n> .../libpqwalreceiver/libpqwalreceiver.c\r\n> \r\n> 9.\r\n> static void\r\n> libpqrcv_alter_slot(WalReceiverConn *conn, const char *slotname,\r\n> - bool failover)\r\n> + bool two_phase, bool failover)\r\n> \r\n> Same comment as mentioned elsewhere (#15), IMO the new 'two_phase'\r\n> parameter should be last.\r\n\r\nFixed. 
Also, some ordering of declarations and if-blocks were also changed.\r\nIn later part, I did not reply similar comments but I addressed all of them.\r\n\r\n> ======\r\n> src/backend/replication/logical/launcher.c\r\n> \r\n> 10.\r\n> +/*\r\n> + * Stop all the subscription workers.\r\n> + */\r\n> +void\r\n> +logicalrep_workers_stop(Oid subid)\r\n> +{\r\n> + List *subworkers;\r\n> + ListCell *lc;\r\n> +\r\n> + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\r\n> + subworkers = logicalrep_workers_find(subid, false);\r\n> + LWLockRelease(LogicalRepWorkerLock);\r\n> + foreach(lc, subworkers)\r\n> + {\r\n> + LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);\r\n> +\r\n> + logicalrep_worker_stop(w->subid, w->relid);\r\n> + }\r\n> + list_free(subworkers);\r\n> +}\r\n> \r\n> I was confused by the logicalrep_workers_find(subid, false). IIUC the\r\n> 'false' means everything (instead of 'only_running') but then I don't\r\n> know why we want to \"stop\" anything that is NOT running. OTOH I see\r\n> that this code was extracted from where it was previously inlined in\r\n> subscriptioncmds.c, so maybe the 'false' is necessary for another\r\n> reason? At least maybe some explanatory comment is needed for why you\r\n> are passing this flag as false?\r\n\r\nSorry, let me give time for more investigation around here. For now,\r\nI added \"XXX\" mark.\r\nI think it is listed just in case, but there may be a timing issue.\r\n\r\n> ======\r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 11.\r\n> - /* two-phase should not be altered */\r\n> + /* two-phase should not be altered while the worker exists */\r\n> Assert(newsub->twophasestate == MySubscription->twophasestate);\r\n> /should not/cannot/\r\n\r\nFixed.\r\n\r\n> ~~~\r\n> \r\n> 13.\r\n> + if (MyReplicationSlot->data.two_phase != two_phase)\r\n> + {\r\n> + SpinLockAcquire(&MyReplicationSlot->mutex);\r\n> + MyReplicationSlot->data.two_phase = two_phase;\r\n> + SpinLockRelease(&MyReplicationSlot->mutex);\r\n> +\r\n> + update_slot = true;\r\n> + }\r\n> +\r\n> +\r\n> if (MyReplicationSlot->data.failover != failover)\r\n> {\r\n> SpinLockAcquire(&MyReplicationSlot->mutex);\r\n> MyReplicationSlot->data.failover = failover;\r\n> SpinLockRelease(&MyReplicationSlot->mutex);\r\n> \r\n> + update_slot = true;\r\n> + }\r\n> \r\n> 13a.\r\n> Doesn't it make more sense for the whole check/set to be \"atomic\",\r\n> i.e. put the mutex also around the check?\r\n> \r\n> SUGGEST\r\n> SpinLockAcquire(&MyReplicationSlot->mutex);\r\n> if (MyReplicationSlot->data.two_phase != two_phase)\r\n> {\r\n> MyReplicationSlot->data.two_phase = two_phase;\r\n> update_slot = true;\r\n> }\r\n> SpinLockRelease(&MyReplicationSlot->mutex);\r\n> \r\n> ~\r\n> \r\n> Also, (if you agree with the above) why not include both checks\r\n> (two_phase and failover) within the same mutex instead of\r\n> acquiring/releasing it twice:\r\n> \r\n> SUGGEST\r\n> SpinLockAcquire(&MyReplicationSlot->mutex);\r\n> if (MyReplicationSlot->data.two_phase != two_phase)\r\n> {\r\n> MyReplicationSlot->data.two_phase = two_phase;\r\n> update_slot = true;\r\n> }\r\n> if (MyReplicationSlot->data.failover != failover)\r\n> {\r\n> MyReplicationSlot->data.failover = failover;\r\n> update_slot = true;\r\n> }\r\n> SpinLockAcquire(&MyReplicationSlot->mutex);\r\n\r\nHmm. According to comments atop ReplicationSlot, backends which own the slot do\r\nnot have to set mutex for reading attributes. Concurrent backends, which do not\r\nacquire the slot, must set the mutex lock before the read. 
Based on the manner,\r\nI want to keep current style.\r\n\r\n```\r\n* - Individual fields are protected by mutex where only the backend owning\r\n * the slot is authorized to update the fields from its own slot. The\r\n * backend owning the slot does not need to take this lock when reading its\r\n * own fields, while concurrent backends not owning this slot should take the\r\n * lock when reading this slot's data.\r\n */\r\ntypedef struct ReplicationSlot\r\n```\r\n\r\n> 13b.\r\n> There are double blank lines after the first if-block.\r\n\r\nRemoved.\r\n\r\n> ======\r\n> src/test/regress/sql/subscription.sql\r\n> \r\n> 16.\r\n> I know you do this already in the TAP test, but doesn't the test case\r\n> to demonstrate that 'two-phase' option can be altered when the\r\n> subscription is disabled actually belong here in the regression\r\n> instead?\r\n\r\nActually it cannot be done at main regression test. Because altering two_phase\r\nrequires the connection between pub/sub, but it is not established in subscription.sql\r\nfile. Succeeded case for altering failover has not been tested neither, and\r\nI think they have same reason.\r\n\r\n> src/test/subscription/t/021_twophase.pl\r\n> \r\n> 17.\r\n> +# Disable the subscription and alter it to two_phase = false,\r\n> +# verify that the altered subscription reflects the two_phase option.\r\n> \r\n> /verify/then verify/\r\n\r\nFixed.\r\n\r\n> 18.\r\n> +# Now do a prepare on publisher and make sure that it is not replicated.\r\n> +$node_subscriber->safe_psql('postgres', \"DROP SUBSCRIPTION tap_sub\");\r\n> +$node_publisher->safe_psql(\r\n> + 'postgres', qq{\r\n> + BEGIN;\r\n> + INSERT INTO tab_copy VALUES (100);\r\n> + PREPARE TRANSACTION 'newgid';\r\n> + });\r\n> +\r\n> \r\n> 18a.\r\n> /on publisher/on the publisher/\r\n\r\nFixed.\r\n\r\n> 18b.\r\n> What is that \"DROP SUBSCRIPTION tap_sub\" doing here? It seems\r\n> misplaced under this comment.\r\n\r\nThe subscription must be dropped because it also prepares a transaction.\r\nMoved before the test case and added comments.\r\n\r\n> 19.\r\n> +# Make sure that there is 0 prepared transaction on the subscriber\r\n> +$result = $node_subscriber->safe_psql('postgres',\r\n> + \"SELECT count(*) FROM pg_prepared_xacts;\");\r\n> +is($result, qq(0), 'transaction is prepared on subscriber');\r\n> \r\n> 19a.\r\n> SUGGESTION\r\n> Make sure there are no prepared transactions on the subscriber\r\n\r\nFixed.\r\n\r\n> 19b.\r\n> /'transaction is prepared on subscriber'/'should be no prepared\r\n> transactions on subscriber'/\r\n\r\nReplaced/\r\n\r\n> 20.\r\n> +# Made sure that the commited transaction is replicated.\r\n> \r\n> /Made sure/Make sure/\r\n> \r\n> /commited/committed/\r\n\r\nFixed.\r\n\r\n> 21.\r\n> +# Make sure that the two-phase is enabled on the subscriber\r\n> +$result = $node_subscriber->safe_psql('postgres',\r\n> + \"SELECT subtwophasestate FROM pg_subscription WHERE subname =\r\n> 'tap_sub_copy';\"\r\n> +);\r\n> +is($result, qq(e), 'two-phase is disabled');\r\n> \r\n> The 'two-phase is disabled' is the identical message used in the\r\n> opposite case earlier, so something is amiss. Maybe this one should\r\n> say 'two-phase should be enabled' and the earlier counterpart should\r\n> say 'two-phase should be disabled'.\r\n\r\nBoth of them were fixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Wed, 8 May 2024 08:26:42 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
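Since several of the review points resolved above concern the GID helpers, here is a small self-contained rendering of the pair as discussed, using the single-return style from the review. The lowercase names and plain typedefs are stand-ins; the real functions are TwoPhaseTransactionGid() and the renamed IsTwoPhaseTransactionGidForSubid(), now living in twophase.c.

```c
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int Oid;			/* stand-ins for the backend typedefs */
typedef unsigned int TransactionId;

/* Build the GID the apply worker uses: "pg_gid_<subid>_<xid>". */
static void
build_two_phase_gid(Oid subid, TransactionId xid, char *gid, int szgid)
{
	snprintf(gid, szgid, "pg_gid_%u_%u", subid, xid);
}

/* Counterpart check, single-return style: was this GID formed for subid? */
static bool
is_two_phase_gid_for_subid(Oid subid, const char *gid)
{
	Oid			subid_written;
	TransactionId xid;

	return (sscanf(gid, "pg_gid_%u_%u", &subid_written, &xid) == 2 &&
			subid == subid_written);
}

int
main(void)
{
	char		gid[64];

	build_two_phase_gid(16390, 741, gid, sizeof(gid));
	printf("%s matches 16390: %d\n", gid, is_two_phase_gid_for_subid(16390, gid));
	printf("%s matches 16391: %d\n", gid, is_two_phase_gid_for_subid(16391, gid));
	return 0;
}
```

Note that the xid parsed out of the GID is the publisher-side xid, which is why (as mentioned above) it cannot be compared against the xid stored in the subscriber's gxact.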
{
"msg_contents": "Hi Kuroda-san,\n\nThanks for addressing most of my v6-0001 review comments.\n\nBelow are some minor follow-up comments for v7-0001.\n\n======\nsrc/backend/access/transam/twophase.c\n\n1. IsTwoPhaseTransactionGidForSubid\n\n+/*\n+ * IsTwoPhaseTransactionGidForSubid\n+ * Check whether the given GID is formed by TwoPhaseTransactionGid.\n+ */\n+static bool\n+IsTwoPhaseTransactionGidForSubid(Oid subid, char *gid)\n\nI think the function comment should mention something about 'subid'.\n\nSUGGESTION\nCheck whether the given GID (as formed by TwoPhaseTransactionGid) is\nfor the specified 'subid'.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n2. AlterSubscription\n\n+ if (!opts.twophase &&\n+ form->subtwophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED &&\n+ LookupGXactBySubid(subid))\n+ /* Add error message */\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot disable two_phase when uncommitted prepared\ntransactions present\"),\n+ errhint(\"Resolve these transactions and try again\")));\n\nThe comment \"/* Add error message */\" seems unnecessary.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 9 May 2024 12:14:59 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi, Here are some review comments for v7-0002\n\n======\nCommit message\n\n1.\nIIUC there is quite a lot of subtlety and details about why the slot\noption needs to be changed only when altering \"true\" to \"false\", but\nnot when altering \"false\" to \"true\".\n\nIt also should explain why PreventInTransactionBlock is only needed\nwhen altering two_phase \"true\" to \"false\".\n\nPlease include a commit message to describe all those tricky details.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n2. AlterSubscription\n\n- PreventInTransactionBlock(isTopLevel, \"ALTER SUBSCRIPTION ... SET\n(two_phase)\");\n+ if (!opts.twophase)\n+ PreventInTransactionBlock(isTopLevel,\n+ \"ALTER SUBSCRIPTION ... SET (two_phase = off)\");\n\nIMO this needs a comment to explain why PreventInTransactionBlock is\nonly needed when changing the 'two_phase' option from on to off.\n\n~~~\n\n3. AlterSubscription\n\n/*\n* Try to acquire the connection necessary for altering slot.\n*\n* This has to be at the end because otherwise if there is an error while\n* doing the database operations we won't be able to rollback altered\n* slot.\n*/\nif (replaces[Anum_pg_subscription_subfailover - 1] ||\nreplaces[Anum_pg_subscription_subtwophasestate - 1])\n{\nbool must_use_password;\nchar *err;\nWalReceiverConn *wrconn;\nbool failover_needs_to_be_updated;\nbool two_phase_needs_to_be_updated;\n\n/* Load the library providing us libpq calls. */\nload_file(\"libpqwalreceiver\", false);\n\n/* Try to connect to the publisher. */\nmust_use_password = sub->passwordrequired && !sub->ownersuperuser;\nwrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,\nsub->name, &err);\nif (!wrconn)\nereport(ERROR,\n(errcode(ERRCODE_CONNECTION_FAILURE),\nerrmsg(\"could not connect to the publisher: %s\", err)));\n\n/*\n* Consider which slot option must be altered.\n*\n* We must alter the failover option whenever subfailover is updated.\n* Two_phase, however, is altered only when changing true to false.\n*/\nfailover_needs_to_be_updated =\nreplaces[Anum_pg_subscription_subfailover - 1];\ntwo_phase_needs_to_be_updated =\n(replaces[Anum_pg_subscription_subtwophasestate - 1] &&\n!opts.twophase);\n\nPG_TRY();\n{\nif (two_phase_needs_to_be_updated || failover_needs_to_be_updated)\nwalrcv_alter_slot(wrconn, sub->slotname,\n failover_needs_to_be_updated ? &opts.failover : NULL,\n two_phase_needs_to_be_updated ? &opts.twophase : NULL);\n}\nPG_FINALLY();\n{\nwalrcv_disconnect(wrconn);\n}\nPG_END_TRY();\n}\n\n3a.\nThe block comment \"Consider which slot option must be altered...\" says\nWHEN those options need to be updated, but it doesn't say WHY. e.g.\nwhy only update the 'two_phase' when it is being disabled but not when\nit is being enabled? In other words, I think there needs to be more\nbackground/reason details given in this comment.\n\n~~~\n\n3b.\nCan't those 2 new variable assignments be done up-front and guard this\nentire \"if-block\" instead of the current replaces[] guarding it? 
Then\nthe code is somewhat simplified.\n\nSUGGESTION:\n/*\n * <improved comment here to explain these variables>\n */\nupdate_failover = replaces[Anum_pg_subscription_subfailover - 1];\nupdate_two_phase = (replaces[Anum_pg_subscription_subtwophasestate -\n1] && !opts.twophase);\n\n/*\n * Try to acquire the connection necessary for altering slot.\n *\n * This has to be at the end because otherwise if there is an error while\n * doing the database operations we won't be able to rollback altered\n * slot.\n */\nif (update_failover || update_two_phase)\n{\n ...\n\n /* Load the library providing us libpq calls. */\n load_file(\"libpqwalreceiver\", false);\n\n /* Try to connect to the publisher. */\n must_use_password = sub->passwordrequired && !sub->ownersuperuser;\n wrconn = walrcv_connect(sub->conninfo, true, true,\nmust_use_password, sub->name, &err);\n if (!wrconn)\n ereport(ERROR, ...);\n\n PG_TRY();\n {\n walrcv_alter_slot(wrconn, sub->slotname,\n update_failover ? &opts.failover : NULL,\n update_two_phase ? &opts.twophase : NULL);\n }\n PG_FINALLY();\n {\n walrcv_disconnect(wrconn);\n }\n PG_END_TRY();\n}\n\n======\n.../libpqwalreceiver/libpqwalreceiver.c\n\n4.\n+ appendStringInfo(&cmd, \"ALTER_REPLICATION_SLOT %s ( \",\n+ quote_identifier(slotname));\n+\n+ if (failover)\n+ appendStringInfo(&cmd, \"FAILOVER %s \",\n+ (*failover) ? \"true\" : \"false\");\n+\n+ if (two_phase)\n+ appendStringInfo(&cmd, \"TWO_PHASE %s%s \",\n+ (*two_phase) ? \"true\" : \"false\",\n+ failover ? \", \" : \"\");\n+\n+ appendStringInfoString(&cmd, \");\");\n\n4a.\nIIUC the comma logic here was broken in v7 when you swapped the order.\nAnyway, IMO it will be better NOT to try combining that comma logic\nwith the existing appendStringInfo. Doing it separately is both easier\nand less error-prone.\n\nFurthermore, the parentheses like \"(*two_phase)\" instead of just\n\"*two_phase\" seemed a bit overkill.\n\nSUGGESTION:\n+ if (failover)\n+ appendStringInfo(&cmd, \"FAILOVER %s\",\n+ *failover ? \"true\" : \"false\");\n+\n+ if (failover && two_phase)\n+ appendStringInfo(&cmd, \", \");\n+\n+ if (two_phase)\n+ appendStringInfo(&cmd, \"TWO_PHASE %s\",\n+ *two_phase ? \"true\" : \"false\");\n+\n+ appendStringInfoString(&cmd, \" );\");\n\n~~\n\n4b.\nLike I said above, IMO the current separator logic in v7 is broken. So\nit is a bit concerning the tests all passed anyway. How did that\nhappen? I think this indicates that there needs to be an additional\ntest scenario where both 'failover' and 'two_phase' get altered at the\nsame time so this code gets exercised properly.\n\n======\nsrc/test/subscription/t/099_twophase_added.pl\n\n5.\n+# Define pre-existing tables on both nodes\n\nWhy say they are \"pre-existing\"? They are not pre-existing because you\nare creating them right here!\n\n~~~\n\n6.\n+######\n+# Check the case that prepared transactions exist on publisher node\n+######\n\nI think this needs a slightly more detailed comment.\n\nSUGGESTION (this is just an example, but you can surely improve it)\n\n# Check the case that prepared transactions exist on the publisher node.\n#\n# Since two_phase is \"off\", then normally this PREPARE will do nothing until\n# the COMMIT PREPARED, but in this test, we toggle the two_phase to \"on\" again\n# before the COMMIT PREPARED happens.\n\n~~~\n\n7.\nMaybe this test case needs a few more one-line comments for each of\nthe sub-steps. 
e.g.:\n\n# prepare a transaction to insert some rows to the table\n\n# verify the prepared tx is not yet replicated to the subscriber\n(because 'two_phase = off')\n\n# toggle the two_phase to 'on' *before* the COMMIT PREPARED\n\n# verify the inserted rows got replicated ok\n\n~~~\n\n8.\nIIUC this test will behave the same even if you DON'T do the toggle\n'two_phase = on'. So I wonder if there is something more you can do to\ntest this scenario more convincingly?\n\n======\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 9 May 2024 12:21:24 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi, Here are some review comments for the patch v7-0003.\n\n======\nCommit Message\n\n1.\nThe patch needs a commit message to describe the purpose and highlight\nany limitations and other details.\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n2.\n+\n+ <para>\n+ The <literal>two_phase</literal> parameter can only be altered when the\n+ subscription is disabled. When altering the parameter from\n<literal>true</literal>\n+ to <literal>false</literal>, the backend process checks prepared\n+ transactions done by the logical replication worker and aborts them.\n+ </para>\n\nHere, the para is referring to \"true\" and \"false\" but earlier on this\npage it talks about \"twophase = off\". IMO it is better to use a\nconsistent terminology like \"on|off\" everywhere instead of randomly\nchanging the way it is described each time.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n3. AlterSubscription\n\n if (IsSet(opts.specified_opts, SUBOPT_TWOPHASE_COMMIT))\n {\n+ List *prepared_xacts = NIL;\n\nThis 'prepared_xacts' can be declared at a lower scrope because it is\nonly used if (!opts.twophase).\n\nFurthermore, IIUC you don't need to assign NIL in the declaration\nbecause there is no chance for it to be unassigned anyway.\n\n~~~\n\n4. AlterSubscription\n\n+ /*\n+ * The changed two_phase option (true->false) of the\n+ * slot can't be rolled back.\n+ */\n PreventInTransactionBlock(isTopLevel,\n \"ALTER SUBSCRIPTION ... SET (two_phase = off)\");\n\nHere is another example of inconsistent mixing of the terminology\nwhere the comment says \"true\"/\"false\" but the message says \"off\".\nLet's keep everything consistent. (I prefer on|off).\n\n~~~\n\n5.\n+ if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED &&\n+ (prepared_xacts = GetGidListBySubid(subid)) != NIL)\n+ {\n+ ListCell *cell;\n+\n+ /* Abort all listed transactions */\n+ foreach(cell, prepared_xacts)\n+ FinishPreparedTransaction((char *) lfirst(cell),\n+ false);\n+\n+ list_free(prepared_xacts);\n+ }\n\n5A.\nIIRC there is a cleaner way to write this loop without needing\nListCell variable -- e.g. foreach_ptr() macro?\n\n~\n\n5B.\nShouldn't this be using list_free_deep() so the pstrdup gid gets freed too?\n\n======\nsrc/test/subscription/t/099_twophase_added.pl\n\n6.\n+######\n+# Check the case that prepared transactions exist on subscriber node\n+######\n+\n\nGive some more detailed comments here similar to the review comment of\npatch v7-0002 for the other part of this TAP test.\n\n~~~\n\n7. TAP test - comments\n\nSame as for my v7-0002 review comments, I think this test case also\nneeds a few more one-line comments to describe the sub-steps. e.g.:\n\n# prepare a transaction to insert some rows to the table\n\n# verify the prepared tx is replicated to the subscriber (because\n'two_phase = on')\n\n# toggle the two_phase to 'off' *before* the COMMIT PREPARED\n\n# verify the prepared tx got aborted\n\n# do the COMMIT PREPARED (note that now two_phase is 'off')\n\n# verify the inserted rows got replicated ok\n\n~~~\n\n8. TAP test - subscription name\n\nIt's better to rename the SUBSCRIPTION in this TAP test so you can\navoid getting log warnings like:\n\npsql:<stdin>:4: WARNING: subscriptions created by regression test\ncases should have names starting with \"regress_\"\npsql:<stdin>:4: NOTICE: created replication slot \"sub\" on publisher\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 9 May 2024 16:11:44 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
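To make review points 5A and 5B above concrete, here is a minimal C sketch of the abort loop being suggested, combining the foreach_ptr() macro with list_free_deep(). GetGidListBySubid() and its contract (a list of palloc'd GID strings belonging to one subscription) come from the patch under review, not from core PostgreSQL, and the wrapper function name is only illustrative.

```
#include "postgres.h"

#include "access/twophase.h"	/* FinishPreparedTransaction() */
#include "nodes/pg_list.h"		/* foreach_ptr(), list_free_deep() */

/* Provided by the patch under review: list of palloc'd GIDs for a subscription */
extern List *GetGidListBySubid(Oid subid);

/*
 * Illustrative sketch only: roll back every prepared transaction that the
 * apply worker created for this subscription before two_phase is turned off.
 */
static void
AbortSubscriptionPreparedXacts(Oid subid)
{
	List	   *prepared_xacts = GetGidListBySubid(subid);

	/* foreach_ptr() declares "char *gid" itself, so no ListCell is needed */
	foreach_ptr(char, gid, prepared_xacts)
	{
		/* second argument false means ROLLBACK PREPARED rather than COMMIT */
		FinishPreparedTransaction(gid, false);
	}

	/* frees both the list cells and the pstrdup'd GID strings */
	list_free_deep(prepared_xacts);
}
```

The design point worth noting is the list_free_deep() at the end: because the GIDs are separately palloc'd strings, a plain list_free() would leak them, which is exactly the concern raised in comment 5B.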
{
"msg_contents": "Hi, Here are some review comments for patch v7-0004\n\n======\nCommit message\n\n1.\nA detailed commit message is needed to describe the purpose and\ndetails of this patch.\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n2. CREATE SUBSCRIPTION\n\nShouldn't there be an entry for \"force_alter\" parameter in the CREATE\nSUBSCRIPTION \"parameters\" section, instead of just vaguely mentioning\nit in passing when describing the \"two_phase\" in ALTER SUBSCRIPTION?\n\n~\n\n3. ALTER SUBSCRIPTION - alterable parameters\n\nAnd shouldn't this new option also be named in the ALTER SUBSCRIPTION\nlist: \"The parameters that can be altered are...\"\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n4.\n XLogRecPtr lsn;\n+ bool twophase_force;\n } SubOpts;\n\nIMO this field ought to be called 'force_alter' to be the same as the\noption name. Sure, now it is only relevant for 'two_phase', but that\nmight not always be the case in the future.\n\n~~~\n\n5. AlterSubscription\n\n+ /*\n+ * Abort prepared transactions if force option is also\n+ * specified. Otherwise raise an ERROR.\n+ */\n+ if (!opts.twophase_force)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot alter %s when there are prepared transactions\",\n+ \"two_phase = false\")));\n+\n\n5a.\n/if force option is also specified/only if the 'force_alter' option is true/\n\n~\n\n5b.\n\"two_phase = false\" -- IMO that should say \"two_phase = off\"\n\n~\n\n5c.\nIMO this ereport should include a errhint to tell the user they can\nuse 'force_alter = true' to avoid getting this error.\n\n~~~\n\n6.\n\n+ /* force_alter cannot be used standalone */\n+ if (IsSet(opts.specified_opts, SUBOPT_FORCE_ALTER) &&\n+ !IsSet(opts.specified_opts, SUBOPT_TWOPHASE_COMMIT))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"%s must be specified with %s\",\n+ \"force_alter\", \"two_phase\")));\n+\n\nIMO this rule is not necessary so the code should be removed. I think\nusing 'force_alter' standalone doesn't do anything at all (certainly,\nit does no harm) so why add more complications (more rules, more code,\nmore tests) just for the sake of it?\n\n======\nsrc/test/subscription/t/099_twophase_added.pl\n\n7.\n+$node_subscriber->safe_psql('postgres',\n+ \"ALTER SUBSCRIPTION sub SET (two_phase = off, force_alter = on);\");\n\n\"force\" is a verb, so it is better to say 'force_alter = true' instead\nof 'force_alter = on'.\n\n~~~\n\n8.\n $result = $node_subscriber->safe_psql('postgres',\n \"SELECT count(*) FROM pg_prepared_xacts;\");\n is($result, q(0), \"prepared transaction done by worker is aborted\");\n\n+$node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION sub ENABLE;\");\n+\n\nI felt the ENABLE statement should be above the SELECT statement so\nthat the code is more like it was before applying the patch.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 9 May 2024 17:28:57 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! The patch will be posted in the upcoming post.\r\n\r\n> ======\r\n> src/backend/access/transam/twophase.c\r\n> \r\n> 1. IsTwoPhaseTransactionGidForSubid\r\n> \r\n> +/*\r\n> + * IsTwoPhaseTransactionGidForSubid\r\n> + * Check whether the given GID is formed by TwoPhaseTransactionGid.\r\n> + */\r\n> +static bool\r\n> +IsTwoPhaseTransactionGidForSubid(Oid subid, char *gid)\r\n> \r\n> I think the function comment should mention something about 'subid'.\r\n> \r\n> SUGGESTION\r\n> Check whether the given GID (as formed by TwoPhaseTransactionGid) is\r\n> for the specified 'subid'.\r\n\r\nFixed.\r\n\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 2. AlterSubscription\r\n> \r\n> + if (!opts.twophase &&\r\n> + form->subtwophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED\r\n> &&\r\n> + LookupGXactBySubid(subid))\r\n> + /* Add error message */\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"cannot disable two_phase when uncommitted prepared\r\n> transactions present\"),\r\n> + errhint(\"Resolve these transactions and try again\")));\r\n> \r\n> The comment \"/* Add error message */\" seems unnecessary.\r\n\r\nYeah, this was an internal flag. Removed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Thu, 9 May 2024 08:26:19 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
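For readers following the IsTwoPhaseTransactionGidForSubid() discussion above, here is a minimal sketch of how such a check can be written. It assumes the "pg_gid_<subid>_<xid>" format produced by TwoPhaseTransactionGid() and assumes that helper is callable from here (the patch arranges this); the committed function may differ in detail.

```
#include "postgres.h"

#include "access/xact.h"		/* GIDSIZE */

/* Assumed to be made available by the patch; formats "pg_gid_<subid>_<xid>" */
extern void TwoPhaseTransactionGid(Oid subid, TransactionId xid,
								   char *gid, int szgid);

/*
 * Sketch: does "gid" look like a GID that TwoPhaseTransactionGid() would
 * have produced for the given subscription OID?
 */
static bool
IsTwoPhaseTransactionGidForSubid(Oid subid, char *gid)
{
	int			ret;
	Oid			subid_from_gid;
	TransactionId xid_from_gid;
	char		gid_tmp[GIDSIZE];

	/* Extract the subid and xid embedded in the GID */
	ret = sscanf(gid, "pg_gid_%u_%u", &subid_from_gid, &xid_from_gid);
	if (ret != 2 || subid_from_gid != subid)
		return false;

	/*
	 * Rebuild the GID from the parsed values and compare it with the input,
	 * so that strings with trailing garbage (e.g. "pg_gid_1_2_x") are
	 * rejected as well.
	 */
	TwoPhaseTransactionGid(subid_from_gid, xid_from_gid, gid_tmp,
						   sizeof(gid_tmp));
	return strcmp(gid, gid_tmp) == 0;
}
```

The round-trip comparison is what makes the check strict, in the spirit of the suggested function comment: a prefix match alone would accept malformed GIDs for the same subid.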
{
"msg_contents": "Dear Peter,\r\n\r\n> ======\r\n> Commit message\r\n> \r\n> 1.\r\n> IIUC there is quite a lot of subtlety and details about why the slot\r\n> option needs to be changed only when altering \"true\" to \"false\", but\r\n> not when altering \"false\" to \"true\".\r\n> \r\n> It also should explain why PreventInTransactionBlock is only needed\r\n> when altering two_phase \"true\" to \"false\".\r\n> \r\n> Please include a commit message to describe all those tricky details.\r\n\r\nAdded.\r\n\r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 2. AlterSubscription\r\n> \r\n> - PreventInTransactionBlock(isTopLevel, \"ALTER SUBSCRIPTION ... SET\r\n> (two_phase)\");\r\n> + if (!opts.twophase)\r\n> + PreventInTransactionBlock(isTopLevel,\r\n> + \"ALTER SUBSCRIPTION ... SET (two_phase = off)\");\r\n> \r\n> IMO this needs a comment to explain why PreventInTransactionBlock is\r\n> only needed when changing the 'two_phase' option from on to off.\r\n\r\nAdded. Thoutht?\r\n\r\n> 3. AlterSubscription\r\n> \r\n> /*\r\n> * Try to acquire the connection necessary for altering slot.\r\n> *\r\n> * This has to be at the end because otherwise if there is an error while\r\n> * doing the database operations we won't be able to rollback altered\r\n> * slot.\r\n> */\r\n> if (replaces[Anum_pg_subscription_subfailover - 1] ||\r\n> replaces[Anum_pg_subscription_subtwophasestate - 1])\r\n> {\r\n> bool must_use_password;\r\n> char *err;\r\n> WalReceiverConn *wrconn;\r\n> bool failover_needs_to_be_updated;\r\n> bool two_phase_needs_to_be_updated;\r\n> \r\n> /* Load the library providing us libpq calls. */\r\n> load_file(\"libpqwalreceiver\", false);\r\n> \r\n> /* Try to connect to the publisher. */\r\n> must_use_password = sub->passwordrequired && !sub->ownersuperuser;\r\n> wrconn = walrcv_connect(sub->conninfo, true, true, must_use_password,\r\n> sub->name, &err);\r\n> if (!wrconn)\r\n> ereport(ERROR,\r\n> (errcode(ERRCODE_CONNECTION_FAILURE),\r\n> errmsg(\"could not connect to the publisher: %s\", err)));\r\n> \r\n> /*\r\n> * Consider which slot option must be altered.\r\n> *\r\n> * We must alter the failover option whenever subfailover is updated.\r\n> * Two_phase, however, is altered only when changing true to false.\r\n> */\r\n> failover_needs_to_be_updated =\r\n> replaces[Anum_pg_subscription_subfailover - 1];\r\n> two_phase_needs_to_be_updated =\r\n> (replaces[Anum_pg_subscription_subtwophasestate - 1] &&\r\n> !opts.twophase);\r\n> \r\n> PG_TRY();\r\n> {\r\n> if (two_phase_needs_to_be_updated || failover_needs_to_be_updated)\r\n> walrcv_alter_slot(wrconn, sub->slotname,\r\n> failover_needs_to_be_updated ? &opts.failover : NULL,\r\n> two_phase_needs_to_be_updated ? &opts.twophase : NULL);\r\n> }\r\n> PG_FINALLY();\r\n> {\r\n> walrcv_disconnect(wrconn);\r\n> }\r\n> PG_END_TRY();\r\n> }\r\n> \r\n> 3a.\r\n> The block comment \"Consider which slot option must be altered...\" says\r\n> WHEN those options need to be updated, but it doesn't say WHY. e.g.\r\n> why only update the 'two_phase' when it is being disabled but not when\r\n> it is being enabled? In other words, I think there needs to be more\r\n> background/reason details given in this comment.\r\n> \r\n> ~~~\r\n> \r\n> 3b.\r\n> Can't those 2 new variable assignments be done up-front and guard this\r\n> entire \"if-block\" instead of the current replaces[] guarding it? 
Then\r\n> the code is somewhat simplified.\r\n> \r\n> SUGGESTION:\r\n> /*\r\n> * <improved comment here to explain these variables>\r\n> */\r\n> update_failover = replaces[Anum_pg_subscription_subfailover - 1];\r\n> update_two_phase = (replaces[Anum_pg_subscription_subtwophasestate -\r\n> 1] && !opts.twophase);\r\n> \r\n> /*\r\n> * Try to acquire the connection necessary for altering slot.\r\n> *\r\n> * This has to be at the end because otherwise if there is an error while\r\n> * doing the database operations we won't be able to rollback altered\r\n> * slot.\r\n> */\r\n> if (update_failover || update_two_phase)\r\n> {\r\n> ...\r\n> \r\n> /* Load the library providing us libpq calls. */\r\n> load_file(\"libpqwalreceiver\", false);\r\n> \r\n> /* Try to connect to the publisher. */\r\n> must_use_password = sub->passwordrequired && !sub->ownersuperuser;\r\n> wrconn = walrcv_connect(sub->conninfo, true, true,\r\n> must_use_password, sub->name, &err);\r\n> if (!wrconn)\r\n> ereport(ERROR, ...);\r\n> \r\n> PG_TRY();\r\n> {\r\n> walrcv_alter_slot(wrconn, sub->slotname,\r\n> update_failover ? &opts.failover : NULL,\r\n> update_two_phase ? &opts.twophase : NULL);\r\n> }\r\n> PG_FINALLY();\r\n> {\r\n> walrcv_disconnect(wrconn);\r\n> }\r\n> PG_END_TRY();\r\n> }\r\n\r\nTwo variables were added and comments were updated.\r\n\r\n> .../libpqwalreceiver/libpqwalreceiver.c\r\n> \r\n> 4.\r\n> + appendStringInfo(&cmd, \"ALTER_REPLICATION_SLOT %s ( \",\r\n> + quote_identifier(slotname));\r\n> +\r\n> + if (failover)\r\n> + appendStringInfo(&cmd, \"FAILOVER %s \",\r\n> + (*failover) ? \"true\" : \"false\");\r\n> +\r\n> + if (two_phase)\r\n> + appendStringInfo(&cmd, \"TWO_PHASE %s%s \",\r\n> + (*two_phase) ? \"true\" : \"false\",\r\n> + failover ? \", \" : \"\");\r\n> +\r\n> + appendStringInfoString(&cmd, \");\");\r\n> \r\n> 4a.\r\n> IIUC the comma logic here was broken in v7 when you swapped the order.\r\n> Anyway, IMO it will be better NOT to try combining that comma logic\r\n> with the existing appendStringInfo. Doing it separately is both easier\r\n> and less error-prone.\r\n> \r\n> Furthermore, the parentheses like \"(*two_phase)\" instead of just\r\n> \"*two_phase\" seemed a bit overkill.\r\n> \r\n> SUGGESTION:\r\n> + if (failover)\r\n> + appendStringInfo(&cmd, \"FAILOVER %s\",\r\n> + *failover ? \"true\" : \"false\");\r\n> +\r\n> + if (failover && two_phase)\r\n> + appendStringInfo(&cmd, \", \");\r\n> +\r\n> + if (two_phase)\r\n> + appendStringInfo(&cmd, \"TWO_PHASE %s\",\r\n> + *two_phase ? \"true\" : \"false\");\r\n> +\r\n> + appendStringInfoString(&cmd, \" );\");\r\n\r\nFixed.\r\n\r\n> 4b.\r\n> Like I said above, IMO the current separator logic in v7 is broken. So\r\n> it is a bit concerning the tests all passed anyway. How did that\r\n> happen? I think this indicates that there needs to be an additional\r\n> test scenario where both 'failover' and 'two_phase' get altered at the\r\n> same time so this code gets exercised properly.\r\n\r\nRight, it was added.\r\n\r\n> ======\r\n> src/test/subscription/t/099_twophase_added.pl\r\n> \r\n> 5.\r\n> +# Define pre-existing tables on both nodes\r\n> \r\n> Why say they are \"pre-existing\"? 
They are not pre-existing because you\r\n> are creating them right here!\r\n\r\nRemoved the word.\r\n\r\n> 6.\r\n> +######\r\n> +# Check the case that prepared transactions exist on publisher node\r\n> +######\r\n> \r\n> I think this needs a slightly more detailed comment.\r\n> \r\n> SUGGESTION (this is just an example, but you can surely improve it)\r\n> \r\n> # Check the case that prepared transactions exist on the publisher node.\r\n> #\r\n> # Since two_phase is \"off\", then normally this PREPARE will do nothing until\r\n> # the COMMIT PREPARED, but in this test, we toggle the two_phase to \"on\" again\r\n> # before the COMMIT PREPARED happens.\r\n\r\nChanged with adjustments.\r\n\r\n> 7.\r\n> Maybe this test case needs a few more one-line comments for each of\r\n> the sub-steps. e.g.:\r\n> \r\n> # prepare a transaction to insert some rows to the table\r\n> \r\n> # verify the prepared tx is not yet replicated to the subscriber\r\n> (because 'two_phase = off')\r\n> \r\n> # toggle the two_phase to 'on' *before* the COMMIT PREPARED\r\n> \r\n> # verify the inserted rows got replicated ok\r\n\r\nModified like yours, but changed based on the suggestion by Grammarly.\r\n\r\n> 8.\r\n> IIUC this test will behave the same even if you DON'T do the toggle\r\n> 'two_phase = on'. So I wonder is there something more you can do to\r\n> test this scenario more convincingly?\r\n\r\nI found an indicator. When the apply starts, it outputs the current status of\r\ntwo_phase option. I added wait_for_log() to ensure below appeared. Thought?\r\n\r\n```\r\n\tereport(DEBUG1,\r\n\t\t\t(errmsg_internal(\"logical replication apply worker for subscription \\\"%s\\\" two_phase is %s\",\r\n\t\t\t\t\t\t\t MySubscription->name,\r\n\t\t\t\t\t\t\t MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_DISABLED ? \"DISABLED\" :\r\n\t\t\t\t\t\t\t MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING ? \"PENDING\" :\r\n\t\t\t\t\t\t\t MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED ? \"ENABLED\" :\r\n\t\t\t\t\t\t\t \"?\")));\r\n```\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Thu, 9 May 2024 08:54:36 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\n> Commit Message\r\n> \r\n> 1.\r\n> The patch needs a commit message to describe the purpose and highlight\r\n> any limitations and other details.\r\n\r\nAdded.\r\n\r\n> ======\r\n> doc/src/sgml/ref/alter_subscription.sgml\r\n> \r\n> 2.\r\n> +\r\n> + <para>\r\n> + The <literal>two_phase</literal> parameter can only be altered when\r\n> the\r\n> + subscription is disabled. When altering the parameter from\r\n> <literal>true</literal>\r\n> + to <literal>false</literal>, the backend process checks prepared\r\n> + transactions done by the logical replication worker and aborts them.\r\n> + </para>\r\n> \r\n> Here, the para is referring to \"true\" and \"false\" but earlier on this\r\n> page it talks about \"twophase = off\". IMO it is better to use a\r\n> consistent terminology like \"on|off\" everywhere instead of randomly\r\n> changing the way it is described each time.\r\n\r\nI checked contents and changed to \"on|off\".\r\n\r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 3. AlterSubscription\r\n> \r\n> if (IsSet(opts.specified_opts, SUBOPT_TWOPHASE_COMMIT))\r\n> {\r\n> + List *prepared_xacts = NIL;\r\n> \r\n> This 'prepared_xacts' can be declared at a lower scrope because it is\r\n> only used if (!opts.twophase).\r\n> \r\n> Furthermore, IIUC you don't need to assign NIL in the declaration\r\n> because there is no chance for it to be unassigned anyway.\r\n\r\nMade the namespace narrower and initialization was removed.\r\n\r\n> ~~~\r\n> \r\n> 4. AlterSubscription\r\n> \r\n> + /*\r\n> + * The changed two_phase option (true->false) of the\r\n> + * slot can't be rolled back.\r\n> + */\r\n> PreventInTransactionBlock(isTopLevel,\r\n> \"ALTER SUBSCRIPTION ... SET (two_phase = off)\");\r\n> \r\n> Here is another example of inconsistent mixing of the terminology\r\n> where the comment says \"true\"/\"false\" but the message says \"off\".\r\n> Let's keep everything consistent. (I prefer on|off).\r\n\r\nModified.\r\n\r\n> ~~~\r\n> \r\n> 5.\r\n> + if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED &&\r\n> + (prepared_xacts = GetGidListBySubid(subid)) != NIL)\r\n> + {\r\n> + ListCell *cell;\r\n> +\r\n> + /* Abort all listed transactions */\r\n> + foreach(cell, prepared_xacts)\r\n> + FinishPreparedTransaction((char *) lfirst(cell),\r\n> + false);\r\n> +\r\n> + list_free(prepared_xacts);\r\n> + }\r\n> \r\n> 5A.\r\n> IIRC there is a cleaner way to write this loop without needing\r\n> ListCell variable -- e.g. foreach_ptr() macro?\r\n\r\nChanged.\r\n\r\n> 5B.\r\n> Shouldn't this be using list_free_deep() so the pstrdup gid gets freed too?\r\n\r\nYeah, fixed.\r\n\r\n> ======\r\n> src/test/subscription/t/099_twophase_added.pl\r\n> \r\n> 6.\r\n> +######\r\n> +# Check the case that prepared transactions exist on subscriber node\r\n> +######\r\n> +\r\n> \r\n> Give some more detailed comments here similar to the review comment of\r\n> patch v7-0002 for the other part of this TAP test.\r\n> \r\n> ~~~\r\n> \r\n> 7. TAP test - comments\r\n> \r\n> Same as for my v7-0002 review comments, I think this test case also\r\n> needs a few more one-line comments to describe the sub-steps. 
e.g.:\r\n> \r\n> # prepare a transaction to insert some rows to the table\r\n> \r\n> # verify the prepared tx is replicated to the subscriber (because\r\n> 'two_phase = on')\r\n> \r\n> # toggle the two_phase to 'off' *before* the COMMIT PREPARED\r\n> \r\n> # verify the prepared tx got aborted\r\n> \r\n> # do the COMMIT PREPARED (note that now two_phase is 'off')\r\n> \r\n> # verify the inserted rows got replicated ok\r\n\r\nThey were fixed based on your previous comments.\r\n\r\n> \r\n> 8. TAP test - subscription name\r\n> \r\n> It's better to rename the SUBSCRIPTION in this TAP test so you can\r\n> avoid getting log warnings like:\r\n> \r\n> psql:<stdin>:4: WARNING: subscriptions created by regression test\r\n> cases should have names starting with \"regress_\"\r\n> psql:<stdin>:4: NOTICE: created replication slot \"sub\" on publisher\r\n\r\nModified, but it was included in 0001.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n\r\n",
"msg_date": "Thu, 9 May 2024 09:10:53 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\n> \r\n> ======\r\n> Commit message\r\n> \r\n> 1.\r\n> A detailed commit message is needed to describe the purpose and\r\n> details of this patch.\r\n\r\nAdded.\r\n\r\n> ======\r\n> doc/src/sgml/ref/alter_subscription.sgml\r\n> \r\n> 2. CREATE SUBSCRIPTION\r\n> \r\n> Shouldn't there be an entry for \"force_alter\" parameter in the CREATE\r\n> SUBSCRIPTION \"parameters\" section, instead of just vaguely mentioning\r\n> it in passing when describing the \"two_phase\" in ALTER SUBSCRIPTION?\r\n>\r\n> 3. ALTER SUBSCRIPTION - alterable parameters\r\n> \r\n> And shouldn't this new option also be named in the ALTER SUBSCRIPTION\r\n> list: \"The parameters that can be altered are...\"\r\n\r\nHmm, but the parameter cannot be used for CREATE SUBSCRIPTION. Should we\r\nmodify to accept and add the description in the doc? This was not accepted.\r\n\r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 4.\r\n> XLogRecPtr lsn;\r\n> + bool twophase_force;\r\n> } SubOpts;\r\n> \r\n> IMO this field ought to be called 'force_alter' to be the same as the\r\n> option name. Sure, now it is only relevant for 'two_phase', but that\r\n> might not always be the case in the future.\r\n\r\nModified.\r\n\r\n> 5. AlterSubscription\r\n> \r\n> + /*\r\n> + * Abort prepared transactions if force option is also\r\n> + * specified. Otherwise raise an ERROR.\r\n> + */\r\n> + if (!opts.twophase_force)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"cannot alter %s when there are prepared transactions\",\r\n> + \"two_phase = false\")));\r\n> +\r\n> \r\n> 5a.\r\n> /if force option is also specified/only if the 'force_alter' option is true/\r\n\r\nModified.\r\n\r\n> \r\n> 5b.\r\n> \"two_phase = false\" -- IMO that should say \"two_phase = off\"\r\n\r\nModified.\r\n\r\n> 5c.\r\n> IMO this ereport should include a errhint to tell the user they can\r\n> use 'force_alter = true' to avoid getting this error.\r\n\r\nHint was added.\r\n\r\n> 6.\r\n> \r\n> + /* force_alter cannot be used standalone */\r\n> + if (IsSet(opts.specified_opts, SUBOPT_FORCE_ALTER) &&\r\n> + !IsSet(opts.specified_opts, SUBOPT_TWOPHASE_COMMIT))\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"%s must be specified with %s\",\r\n> + \"force_alter\", \"two_phase\")));\r\n> +\r\n> \r\n> IMO this rule is not necessary so the code should be removed. I think\r\n> using 'force_alter' standalone doesn't do anything at all (certainly,\r\n> it does no harm) so why add more complications (more rules, more code,\r\n> more tests) just for the sake of it?\r\n\r\nRemoved. So standalone 'force_alter' is now no-op.\r\n\r\n> src/test/subscription/t/099_twophase_added.pl\r\n> \r\n> 7.\r\n> +$node_subscriber->safe_psql('postgres',\r\n> + \"ALTER SUBSCRIPTION sub SET (two_phase = off, force_alter = on);\");\r\n> \r\n> \"force\" is a verb, so it is better to say 'force_alter = true' instead\r\n> of 'force_alter = on'.\r\n\r\nFixed. 
Actually, I am not sure it is better, since I am not a native speaker.\r\n\r\n> 8.\r\n> $result = $node_subscriber->safe_psql('postgres',\r\n> \"SELECT count(*) FROM pg_prepared_xacts;\");\r\n> is($result, q(0), \"prepared transaction done by worker is aborted\");\r\n> \r\n> +$node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION sub\r\n> ENABLE;\");\r\n> +\r\n> \r\n> I felt the ENABLE statement should be above the SELECT statement so\r\n> that the code is more like it was before applying the patch.\r\n\r\nFixed.\r\n\r\nPlease see the attached patch set.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Thu, 9 May 2024 09:15:59 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi, Here are some review comments for v8-0002.\n\n======\nCommit message\n\n1.\nRegarding the off->on case, the logical replication already has a mechanism for\nit, so there is no need to do anything special for the on->off case; however,\nwe must connect to the publisher and expressly change the parameter. The\noperation cannot be rolled back, and altering the parameter from \"on\" to \"off\"\nwithin a transaction is prohibited.\n\n~\n\nI think the difference between \"off\"-to\"on\" and \"on\"-to\"off\" needs to\nbe explained in more detail. Specifically \"already has a mechanism for\nit\" seems very vague.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n2.\n /*\n- * The changed two_phase option of the slot can't be rolled\n- * back.\n+ * Since the altering two_phase option of subscriptions\n+ * also leads to the change of slot option, this command\n+ * cannot be rolled back. So prevent we are in the\n+ * transaction block.\n */\n- PreventInTransactionBlock(isTopLevel, \"ALTER SUBSCRIPTION ... SET\n(two_phase)\");\n+ if (!opts.twophase)\n+ PreventInTransactionBlock(isTopLevel,\n+\n\n2a.\nThere is a typo: \"So prevent we are\".\n\nSUGGESTION (minor adjustment and typo fix)\nSince altering the two_phase option of subscriptions also leads to\nchanging the slot option, this command cannot be rolled back. So\nprevent this if we are in a transaction block.\n\n~\n\n2b.\nBut, in my previous review [v7-0002#3] I asked if the comment could\nexplain why this check is only needed for two_phase \"on\"-to-\"off\" but\nnot for \"off\"-to-\"on\". That explanation/reason is still not yet given\nin the latest comment.\n\n~~~\n\n3.\n /*\n- * Try to acquire the connection necessary for altering slot.\n+ * Check the need to alter the replication slot. Failover and two_phase\n+ * options are controlled by both the publisher (as a slot option) and the\n+ * subscriber (as a subscription option).\n+ */\n+ update_failover = replaces[Anum_pg_subscription_subfailover - 1];\n+ update_two_phase = (replaces[Anum_pg_subscription_subtwophasestate - 1] &&\n+ !opts.twophase);\n\n\n(This is similar to the previous comment). In my previous review\n[v7-0002#3a] I asked why update_two_phase is TRUE only if 'two-phase'\nis being updated \"on\"-to-\"off\", but not when it is being updated\n\"off\"-to-\"on\". That explanation/reason is still not yet given in the\nlatest comment.\n\n======\nsrc/backend/replication/libpqwalreceiver/libpqwalreceiver.c\n\n4.\n- appendStringInfo(&cmd, \"ALTER_REPLICATION_SLOT %s ( FAILOVER %s,\nTWO_PHASE %s )\",\n- quote_identifier(slotname),\n- failover ? \"true\" : \"false\",\n- two_phase ? \"true\" : \"false\");\n+ appendStringInfo(&cmd, \"ALTER_REPLICATION_SLOT %s ( \",\n+ quote_identifier(slotname));\n+\n+ if (failover)\n+ appendStringInfo(&cmd, \"FAILOVER %s\",\n+ *failover ? \"true\" : \"false\");\n+\n+ if (failover && two_phase)\n+ appendStringInfo(&cmd, \", \");\n+\n+ if (two_phase)\n+ appendStringInfo(&cmd, \"TWO_PHASE %s\",\n+ *two_phase ? \"true\" : \"false\");\n+\n+ appendStringInfoString(&cmd, \");\");\n\nIt will be better if that last line includes the extra space like I\nhad suggested in [v7-0002#4a] so the result will have the same spacing\nas in the original code. 
e.g.\n\n+ appendStringInfoString(&cmd, \" );\");\n\n======\nsrc/test/subscription/t/099_twophase_added.pl\n\n5.\n+# Check the case that prepared transactions exist on the publisher node.\n+#\n+# Since the two_phase is \"off\", then normally, this PREPARE will do nothing\n+# until the COMMIT PREPARED, but in this test, we toggle the two_phase to \"on\"\n+# again before the COMMIT PREPARED happens.\n\nThis is a major test part so IMO this comment should have\n##################### like it had before, to distinguish it from all\nthe sub-step comments.\n\n======\n\nMy v7-0002 review -\nhttps://www.postgresql.org/message-id/CAHut%2BPtu_w_UCGR-5DbenA%2By7wRiA8QPi_ZP%3DCCJ3SGdTn-%3D%3Dg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Mon, 13 May 2024 16:34:35 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
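To pull the separator discussion in comment #4 together, here is a compact sketch of the intended construction, written as a helper that would live in libpqwalreceiver.c (so StringInfo and quote_identifier() are already in scope there). It assumes the NULL-means-"leave this option alone" convention for the failover/two_phase pointers that the patch proposes; the helper name is illustrative, not the patch's actual code.

```
/*
 * Sketch only: build "ALTER_REPLICATION_SLOT slot ( FAILOVER b, TWO_PHASE b )"
 * where either clause may be omitted.  A NULL pointer means the corresponding
 * slot option is not being altered.
 */
static void
append_alter_slot_cmd(StringInfo cmd, const char *slotname,
					  const bool *failover, const bool *two_phase)
{
	appendStringInfo(cmd, "ALTER_REPLICATION_SLOT %s ( ",
					 quote_identifier(slotname));

	if (failover)
		appendStringInfo(cmd, "FAILOVER %s",
						 *failover ? "true" : "false");

	/* a comma is needed only when both clauses are emitted */
	if (failover && two_phase)
		appendStringInfoString(cmd, ", ");

	if (two_phase)
		appendStringInfo(cmd, "TWO_PHASE %s",
						 *two_phase ? "true" : "false");

	appendStringInfoString(cmd, " );");
}
```

Keeping the comma in its own branch is the point of the review comment: the separator depends on both options being present, so tying it to either single option (as the v7 code did) breaks as soon as the order of the clauses changes.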
{
"msg_contents": "Hi, Here are some review comments for v8-0003.\n\n======\nsrc/sgml/ref/alter_subscription.sgml\n\n1.\n+ <para>\n+ The <literal>two_phase</literal> parameter can only be altered when the\n+ subscription is disabled. When altering the parameter from\n<literal>on</literal>\n+ to <literal>off</literal>, the backend process checks prepared\n+ transactions done by the logical replication worker and aborts them.\n+ </para>\n\nThe text may be OK as-is, but I was wondering if it might be better to\ngive a more verbose explanation.\n\nBEFORE\n... the backend process checks prepared transactions done by the\nlogical replication worker and aborts them.\n\nSUGGESTION\n... the backend process checks for any incomplete prepared\ntransactions done by the logical replication worker (from when\ntwo_phase parameter was still \"on\") and, if any are found, those are\naborted.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n2. AlterSubscription\n\n- /*\n- * Since the altering two_phase option of subscriptions\n- * also leads to the change of slot option, this command\n- * cannot be rolled back. So prevent we are in the\n- * transaction block.\n+ * If two_phase was enabled, there is a possibility the\n+ * transactions has already been PREPARE'd. They must be\n+ * checked and rolled back.\n */\n\nBEFORE\n... there is a possibility the transactions has already been PREPARE'd.\n\nSUGGESTION\n... there is a possibility that transactions have already been PREPARE'd.\n\n~~~\n\n3. AlterSubscription\n+ /*\n+ * Since the altering two_phase option of subscriptions\n+ * (especially on->off case) also leads to the\n+ * change of slot option, this command cannot be rolled\n+ * back. So prevent we are in the transaction block.\n+ */\n PreventInTransactionBlock(isTopLevel,\n \"ALTER SUBSCRIPTION ... SET (two_phase = off)\");\n\n\nThis comment is a bit vague and includes some typos, but IIUC these\nproblems will already be addressed by the 0002 patch changes.AFAIK\npatch 0003 is only moving the 0002 comment.\n\n======\nsrc/test/subscription/t/099_twophase_added.pl\n\n4.\n+# Check the case that prepared transactions exist on the subscriber node\n+#\n+# If the two_phase is altering from \"on\" to \"off\" and there are prepared\n+# transactions on the subscriber, they must be aborted. This test checks it.\n+\n\nSimilar to the comment that I gave for v8-0002. I think there should\nbe #################### comment for the major test comment to\ndistinguish it from comments for the sub-steps.\n\n~~~\n\n5.\n+# Verify the prepared transaction are aborted because two_phase is changed to\n+# \"off\".\n+$result = $node_subscriber->safe_psql('postgres',\n+ \"SELECT count(*) FROM pg_prepared_xacts;\");\n+is($result, q(0), \"prepared transaction done by worker is aborted\");\n+\n\n/the prepared transaction are aborted/any prepared transactions are aborted/\n\n======\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 13 May 2024 16:37:21 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi, Here are some comments for v8-0004\n\n======\n0.1 General - Patch name\n\n/SUBSCIRPTION/SUBSCRIPTION/\n\n======\n0.2 General - Apply\n\nFYI, there are whitespace warnings:\n\ngit apply ../patches_misc/v8-0004-Add-force_alter-option-for-ALTER-SUBSCIRPTION-.-S.patch\n../patches_misc/v8-0004-Add-force_alter-option-for-ALTER-SUBSCIRPTION-.-S.patch:191:\ntrailing whitespace.\n# command will abort the prepared transaction and succeed.\nwarning: 1 line adds whitespace errors.\n\n======\n0.3 General - Regression test fails\n\nThe subscription regression tests are not working.\n\nok 158 + publication 1187 ms\nnot ok 159 + subscription 123 ms\n\nSee review comments #4 and #5 below for the reason why.\n\n======\nsrc/sgml/ref/alter_subscription.sgml\n\n1.\n <para>\n The <literal>two_phase</literal> parameter can only be altered when the\n- subscription is disabled. When altering the parameter from\n<literal>on</literal>\n- to <literal>off</literal>, the backend process checks prepared\n- transactions done by the logical replication worker and aborts them.\n+ subscription is disabled. Altering the parameter from\n<literal>on</literal>\n+ to <literal>off</literal> will be failed when there are prepared\n+ transactions done by the logical replication worker. If you want to alter\n+ the parameter forcibly in this case, <literal>force_alter</literal>\n+ option must be set to <literal>on</literal> at the same time. If\n+ specified, the backend process aborts prepared transactions.\n </para>\n1a.\nThat \"will be failed when...\" seems strange. Maybe say \"will give an\nerror when...\"\n\n~\n1b.\nBecause \"force\" is a verb, I think true/false is more natural than\non/off for this new boolean option. e.g. it acts more like a \"flag\"\nthan a \"mode\". See all the other boolean options in CREATE\nSUBSCRIPTION -- those are mostly all verbs too and are all true/false\nAFAIK.\n\n======\n\n2. CREATE SUBSCRIPTION\n\nFor my previous review, two comments [v7-0004#2] and [v7-0004#3] were\nnot addressed. Kuroda-san wrote:\nHmm, but the parameter cannot be used for CREATE SUBSCRIPTION. Should\nwe modify to accept and add the description in the doc?\n\n~\n\nYes, that is what I am suggesting. IMO it is odd for the user to be\nable to ALTER a parameter that cannot be included in the CREATE\nSUBSCRIPTION in the first place. AFAIK there are no other parameters\nthat behave that way.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n3. AlterSubscription\n\n+ if (!opts.force_alter)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot alter %s when there are prepared transactions\",\n+ \"two_phase = off\"),\n+ errhint(\"Resolve these transactions or set %s at the same time, and\nthen try again.\",\n+ \"force_alter = true\")));\n\nI think saying \"at the same time\" in the hint is unnecessary. Surely\nthe user is allowed to set this parameter separately if they want to?\n\ne.g.\nALTER SUBSCRIPTION sub SET (force_alter=true);\nALTER SUBSCRIPTION sub SET (two_phase=off);\n\n======\nsrc/test/regress/expected/subscription.out\n\n4.\n+-- fail - force_alter cannot be set alone\n+ALTER SUBSCRIPTION regress_testsub SET (force_alter = true);\n+ERROR: force_alter must be specified with two_phase\n\nThis error cannot happen. You removed that error!\n\n======\nsrc/test/regress/sql/subscription.sql\n\n5.\n+-- fail - force_alter cannot be set alone\n+ALTER SUBSCRIPTION regress_testsub SET (force_alter = true);\n\nWhy is this being tested? 
You removed that error condition.\n\n======\nsrc/test/subscription/t/099_twophase_added.pl\n\n6.\n+# Try altering the two_phase option to \"off.\" The command will fail since there\n+# is a prepared transaction and the force option is not specified.\n+my $stdout;\n+my $stderr;\n+\n+($result, $stdout, $stderr) = $node_subscriber->psql(\n+ 'postgres', \"ALTER SUBSCRIPTION regress_sub SET (two_phase = off);\");\n+ok($stderr =~ /cannot alter two_phase = off when there are prepared\ntransactions/,\n+ 'ALTER SUBSCRIPTION failed');\n\n/force option is not specified./'force_alter' option is not specified as true./\n\n~~~\n\n7.\n+# Verify the prepared transaction still exists\n+$result = $node_subscriber->safe_psql('postgres',\n+ \"SELECT count(*) FROM pg_prepared_xacts;\");\n+is($result, q(1), \"prepared transaction still exits\");\n+\n\nTYPO: /exits/exists/\n\n~~~\n\n8.\n+# Alter the two_phase with the force_alter option. Apart from the above, the\n+# command will abort the prepared transaction and succeed.\n+$node_subscriber->safe_psql('postgres',\n+ \"ALTER SUBSCRIPTION regress_sub SET (two_phase = off, force_alter\n= true);\");\n+$node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION\nregress_sub ENABLE;\");\n+\n\nWhat does \"Apart from the above\" mean? Be more explicit.\n\n~~~\n\n9.\n+# Verify the prepared transaction are aborted\n $result = $node_subscriber->safe_psql('postgres',\n \"SELECT count(*) FROM pg_prepared_xacts;\");\n is($result, q(0), \"prepared transaction done by worker is aborted\");\n\n/transaction are aborted/transaction was aborted/\n\n======\nResponse to my v7-0004 review --\nhttps://www.postgresql.org/message-id/OSBPR01MB2552F738ACF1DA6838025C4FF5E62%40OSBPR01MB2552.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 13 May 2024 16:40:34 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for giving comments! I attached updated version.\r\n\r\n> 1.\r\n> Regarding the off->on case, the logical replication already has a mechanism for\r\n> it, so there is no need to do anything special for the on->off case; however,\r\n> we must connect to the publisher and expressly change the parameter. The\r\n> operation cannot be rolled back, and altering the parameter from \"on\" to \"off\"\r\n> within a transaction is prohibited.\r\n> \r\n> ~\r\n> \r\n> I think the difference between \"off\"-to\"on\" and \"on\"-to\"off\" needs to\r\n> be explained in more detail. Specifically \"already has a mechanism for\r\n> it\" seems very vague.\r\n\r\nNew paragraph was added.\r\n\r\n> \r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 2.\r\n> /*\r\n> - * The changed two_phase option of the slot can't be rolled\r\n> - * back.\r\n> + * Since the altering two_phase option of subscriptions\r\n> + * also leads to the change of slot option, this command\r\n> + * cannot be rolled back. So prevent we are in the\r\n> + * transaction block.\r\n> */\r\n> - PreventInTransactionBlock(isTopLevel, \"ALTER SUBSCRIPTION ... SET\r\n> (two_phase)\");\r\n> + if (!opts.twophase)\r\n> + PreventInTransactionBlock(isTopLevel,\r\n> +\r\n> \r\n> 2a.\r\n> There is a typo: \"So prevent we are\".\r\n> \r\n> SUGGESTION (minor adjustment and typo fix)\r\n> Since altering the two_phase option of subscriptions also leads to\r\n> changing the slot option, this command cannot be rolled back. So\r\n> prevent this if we are in a transaction block.\r\n\r\nFixed.\r\n\r\n> 2b.\r\n> But, in my previous review [v7-0002#3] I asked if the comment could\r\n> explain why this check is only needed for two_phase \"on\"-to-\"off\" but\r\n> not for \"off\"-to-\"on\". That explanation/reason is still not yet given\r\n> in the latest comment.\r\n\r\nAdded.\r\n\r\n> 3.\r\n> /*\r\n> - * Try to acquire the connection necessary for altering slot.\r\n> + * Check the need to alter the replication slot. Failover and two_phase\r\n> + * options are controlled by both the publisher (as a slot option) and the\r\n> + * subscriber (as a subscription option).\r\n> + */\r\n> + update_failover = replaces[Anum_pg_subscription_subfailover - 1];\r\n> + update_two_phase = (replaces[Anum_pg_subscription_subtwophasestate - 1]\r\n> &&\r\n> + !opts.twophase);\r\n> \r\n> \r\n> (This is similar to the previous comment). In my previous review\r\n> [v7-0002#3a] I asked why update_two_phase is TRUE only if 'two-phase'\r\n> is being updated \"on\"-to-\"off\", but not when it is being updated\r\n> \"off\"-to-\"on\". That explanation/reason is still not yet given in the\r\n> latest comment.\r\n\r\nAdded.\r\n\r\n> \r\n> ======\r\n> src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\r\n> \r\n> 4.\r\n> - appendStringInfo(&cmd, \"ALTER_REPLICATION_SLOT %s ( FAILOVER %s,\r\n> TWO_PHASE %s )\",\r\n> - quote_identifier(slotname),\r\n> - failover ? \"true\" : \"false\",\r\n> - two_phase ? \"true\" : \"false\");\r\n> + appendStringInfo(&cmd, \"ALTER_REPLICATION_SLOT %s ( \",\r\n> + quote_identifier(slotname));\r\n> +\r\n> + if (failover)\r\n> + appendStringInfo(&cmd, \"FAILOVER %s\",\r\n> + *failover ? \"true\" : \"false\");\r\n> +\r\n> + if (failover && two_phase)\r\n> + appendStringInfo(&cmd, \", \");\r\n> +\r\n> + if (two_phase)\r\n> + appendStringInfo(&cmd, \"TWO_PHASE %s\",\r\n> + *two_phase ? 
\"true\" : \"false\");\r\n> +\r\n> + appendStringInfoString(&cmd, \");\");\r\n> \r\n> It will be better if that last line includes the extra space like I\r\n> had suggested in [v7-0002#4a] so the result will have the same spacing\r\n> as in the original code. e.g.\r\n> \r\n> + appendStringInfoString(&cmd, \" );\");\r\n\r\nI missed the blank, added.\r\n\r\n> \r\n> ======\r\n> src/test/subscription/t/099_twophase_added.pl\r\n> \r\n> 5.\r\n> +# Check the case that prepared transactions exist on the publisher node.\r\n> +#\r\n> +# Since the two_phase is \"off\", then normally, this PREPARE will do nothing\r\n> +# until the COMMIT PREPARED, but in this test, we toggle the two_phase to \"on\"\r\n> +# again before the COMMIT PREPARED happens.\r\n> \r\n> This is a major test part so IMO this comment should have\r\n> ##################### like it had before, to distinguish it from all\r\n> the sub-step comments.\r\n\r\nAdded.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Mon, 13 May 2024 12:25:26 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! Patch can be available in [1].\r\n\r\n> ======\r\n> src/sgml/ref/alter_subscription.sgml\r\n> \r\n> 1.\r\n> + <para>\r\n> + The <literal>two_phase</literal> parameter can only be altered when\r\n> the\r\n> + subscription is disabled. When altering the parameter from\r\n> <literal>on</literal>\r\n> + to <literal>off</literal>, the backend process checks prepared\r\n> + transactions done by the logical replication worker and aborts them.\r\n> + </para>\r\n> \r\n> The text may be OK as-is, but I was wondering if it might be better to\r\n> give a more verbose explanation.\r\n> \r\n> BEFORE\r\n> ... the backend process checks prepared transactions done by the\r\n> logical replication worker and aborts them.\r\n> \r\n> SUGGESTION\r\n> ... the backend process checks for any incomplete prepared\r\n> transactions done by the logical replication worker (from when\r\n> two_phase parameter was still \"on\") and, if any are found, those are\r\n> aborted.\r\n>\r\n\r\nFixed.\r\n\r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 2. AlterSubscription\r\n> \r\n> - /*\r\n> - * Since the altering two_phase option of subscriptions\r\n> - * also leads to the change of slot option, this command\r\n> - * cannot be rolled back. So prevent we are in the\r\n> - * transaction block.\r\n> + * If two_phase was enabled, there is a possibility the\r\n> + * transactions has already been PREPARE'd. They must be\r\n> + * checked and rolled back.\r\n> */\r\n> \r\n> BEFORE\r\n> ... there is a possibility the transactions has already been PREPARE'd.\r\n> \r\n> SUGGESTION\r\n> ... there is a possibility that transactions have already been PREPARE'd.\r\n\r\nFixed.\r\n\r\n> 3. AlterSubscription\r\n> + /*\r\n> + * Since the altering two_phase option of subscriptions\r\n> + * (especially on->off case) also leads to the\r\n> + * change of slot option, this command cannot be rolled\r\n> + * back. So prevent we are in the transaction block.\r\n> + */\r\n> PreventInTransactionBlock(isTopLevel,\r\n> \"ALTER SUBSCRIPTION ... SET (two_phase = off)\");\r\n> \r\n> \r\n> This comment is a bit vague and includes some typos, but IIUC these\r\n> problems will already be addressed by the 0002 patch changes.AFAIK\r\n> patch 0003 is only moving the 0002 comment.\r\n\r\nYeah, the comment was updated accordingly.\r\n\r\n> \r\n> ======\r\n> src/test/subscription/t/099_twophase_added.pl\r\n> \r\n> 4.\r\n> +# Check the case that prepared transactions exist on the subscriber node\r\n> +#\r\n> +# If the two_phase is altering from \"on\" to \"off\" and there are prepared\r\n> +# transactions on the subscriber, they must be aborted. This test checks it.\r\n> +\r\n> \r\n> Similar to the comment that I gave for v8-0002. 
I think there should\r\n> be #################### comment for the major test comment to\r\n> distinguish it from comments for the sub-steps.\r\n\r\nAdded.\r\n\r\n> 5.\r\n> +# Verify the prepared transaction are aborted because two_phase is changed to\r\n> +# \"off\".\r\n> +$result = $node_subscriber->safe_psql('postgres',\r\n> + \"SELECT count(*) FROM pg_prepared_xacts;\");\r\n> +is($result, q(0), \"prepared transaction done by worker is aborted\");\r\n> +\r\n> \r\n> /the prepared transaction are aborted/any prepared transactions are aborted/\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/OSBPR01MB2552FEA48D265EA278AA9F7AF5E22%40OSBPR01MB2552.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Mon, 13 May 2024 12:27:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for giving comments! New patch was posted in [1].\r\n\r\n> 0.1 General - Patch name\r\n> \r\n> /SUBSCIRPTION/SUBSCRIPTION/\r\n\r\nFixed.\r\n\r\n> ======\r\n> 0.2 General - Apply\r\n> \r\n> FYI, there are whitespace warnings:\r\n> \r\n> git\r\n> apply ../patches_misc/v8-0004-Add-force_alter-option-for-ALTER-SUBSCIRPTI\r\n> ON-.-S.patch\r\n> ../patches_misc/v8-0004-Add-force_alter-option-for-ALTER-SUBSCIRPTION-.-\r\n> S.patch:191:\r\n> trailing whitespace.\r\n> # command will abort the prepared transaction and succeed.\r\n> warning: 1 line adds whitespace errors.\r\n\r\nI didn't recognize, fixed.\r\n\r\n> ======\r\n> 0.3 General - Regression test fails\r\n> \r\n> The subscription regression tests are not working.\r\n> \r\n> ok 158 + publication 1187 ms\r\n> not ok 159 + subscription 123 ms\r\n> \r\n> See review comments #4 and #5 below for the reason why.\r\n\r\nYeah, I missed to update the expected result. Fixed.\r\n\r\n> ======\r\n> src/sgml/ref/alter_subscription.sgml\r\n> \r\n> 1.\r\n> <para>\r\n> The <literal>two_phase</literal> parameter can only be altered when\r\n> the\r\n> - subscription is disabled. When altering the parameter from\r\n> <literal>on</literal>\r\n> - to <literal>off</literal>, the backend process checks prepared\r\n> - transactions done by the logical replication worker and aborts them.\r\n> + subscription is disabled. Altering the parameter from\r\n> <literal>on</literal>\r\n> + to <literal>off</literal> will be failed when there are prepared\r\n> + transactions done by the logical replication worker. If you want to alter\r\n> + the parameter forcibly in this case, <literal>force_alter</literal>\r\n> + option must be set to <literal>on</literal> at the same time. If\r\n> + specified, the backend process aborts prepared transactions.\r\n> </para>\r\n> 1a.\r\n> That \"will be failed when...\" seems strange. Maybe say \"will give an\r\n> error when...\"\r\n> \r\n> ~\r\n> 1b.\r\n> Because \"force\" is a verb, I think true/false is more natural than\r\n> on/off for this new boolean option. e.g. it acts more like a \"flag\"\r\n> than a \"mode\". See all the other boolean options in CREATE\r\n> SUBSCRIPTION -- those are mostly all verbs too and are all true/false\r\n> AFAIK.\r\n\r\nFixed, but note that the part was moved.\r\n\r\n> \r\n> ======\r\n> \r\n> 2. CREATE SUBSCRIPTION\r\n> \r\n> For my previous review, two comments [v7-0004#2] and [v7-0004#3] were\r\n> not addressed. Kuroda-san wrote:\r\n> Hmm, but the parameter cannot be used for CREATE SUBSCRIPTION. Should\r\n> we modify to accept and add the description in the doc?\r\n> \r\n> ~\r\n> \r\n> Yes, that is what I am suggesting. IMO it is odd for the user to be\r\n> able to ALTER a parameter that cannot be included in the CREATE\r\n> SUBSCRIPTION in the first place. AFAIK there are no other parameters\r\n> that behave that way.\r\n\r\nHmm. I felt that this change required the new attribute in pg_subscription system\r\ncatalog. Previously I did not like because it contains huge change, but...I tried to do.\r\nNew attribute 'subforcealter', and some parts were updated accordingly.\r\n\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 3. 
AlterSubscription\r\n> \r\n> + if (!opts.force_alter)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"cannot alter %s when there are prepared transactions\",\r\n> + \"two_phase = off\"),\r\n> + errhint(\"Resolve these transactions or set %s at the same time, and\r\n> then try again.\",\r\n> + \"force_alter = true\")));\r\n> \r\n> I think saying \"at the same time\" in the hint is unnecessary. Surely\r\n> the user is allowed to set this parameter separately if they want to?\r\n> \r\n> e.g.\r\n> ALTER SUBSCRIPTION sub SET (force_alter=true);\r\n> ALTER SUBSCRIPTION sub SET (two_phase=off);\r\n\r\nActually, it was correct. Since force_alter was not recorded in the system catalog, it must\r\nbe specified at the same time.\r\nNow, we allow to be separated, so removed.\r\n\r\n> ======\r\n> src/test/regress/expected/subscription.out\r\n> \r\n> 4.\r\n> +-- fail - force_alter cannot be set alone\r\n> +ALTER SUBSCRIPTION regress_testsub SET (force_alter = true);\r\n> +ERROR: force_alter must be specified with two_phase\r\n> \r\n> This error cannot happen. You removed that error!\r\n\r\nFixed.\r\n\r\n> ======\r\n> src/test/subscription/t/099_twophase_added.pl\r\n> \r\n> 6.\r\n> +# Try altering the two_phase option to \"off.\" The command will fail since there\r\n> +# is a prepared transaction and the force option is not specified.\r\n> +my $stdout;\r\n> +my $stderr;\r\n> +\r\n> +($result, $stdout, $stderr) = $node_subscriber->psql(\r\n> + 'postgres', \"ALTER SUBSCRIPTION regress_sub SET (two_phase = off);\");\r\n> +ok($stderr =~ /cannot alter two_phase = off when there are prepared\r\n> transactions/,\r\n> + 'ALTER SUBSCRIPTION failed');\r\n> \r\n> /force option is not specified./'force_alter' option is not specified as true./\r\n\r\nFixed.\r\n\r\n> \r\n> 7.\r\n> +# Verify the prepared transaction still exists\r\n> +$result = $node_subscriber->safe_psql('postgres',\r\n> + \"SELECT count(*) FROM pg_prepared_xacts;\");\r\n> +is($result, q(1), \"prepared transaction still exits\");\r\n> +\r\n> \r\n> TYPO: /exits/exists/\r\n\r\nFixed.\r\n\r\n> \r\n> ~~~\r\n> \r\n> 8.\r\n> +# Alter the two_phase with the force_alter option. Apart from the above, the\r\n> +# command will abort the prepared transaction and succeed.\r\n> +$node_subscriber->safe_psql('postgres',\r\n> + \"ALTER SUBSCRIPTION regress_sub SET (two_phase = off, force_alter\r\n> = true);\");\r\n> +$node_subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION\r\n> regress_sub ENABLE;\");\r\n> +\r\n> \r\n> What does \"Apart from the above\" mean? Be more explicit.\r\n\r\nClarified like \"Apart from the last ALTER SUBSCRIPTION command...\".\r\n\r\n> 9.\r\n> +# Verify the prepared transaction are aborted\r\n> $result = $node_subscriber->safe_psql('postgres',\r\n> \"SELECT count(*) FROM pg_prepared_xacts;\");\r\n> is($result, q(0), \"prepared transaction done by worker is aborted\");\r\n> \r\n> /transaction are aborted/transaction was aborted/\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/OSBPR01MB2552FEA48D265EA278AA9F7AF5E22%40OSBPR01MB2552.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Mon, 13 May 2024 12:28:11 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
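Since the reply above introduces a new pg_subscription column for this option, the following fragments sketch the two places a boolean subscription option normally has to be wired up. Anum_pg_subscription_subforcealter, opts.force_alter and the subforcealter/forcealter fields exist only in the patch, and CreateSubscription()/GetSubscription() are the core functions these lines would slot into; this is a sketch, not the patch's actual code.

```
/*
 * In CreateSubscription(): store the option in the catalog tuple.
 * values[] is zero-based, so every Anum_* constant needs the "- 1".
 */
values[Anum_pg_subscription_subfailover - 1] = BoolGetDatum(opts.failover);
values[Anum_pg_subscription_subforcealter - 1] = BoolGetDatum(opts.force_alter);

/*
 * In GetSubscription(): copy the stored flag into the in-memory
 * Subscription struct so that a later ALTER SUBSCRIPTION can consult it.
 */
sub->forcealter = subform->subforcealter;
```

The "- 1" is easy to drop by accident when a new column is added, and doing so writes the Datum into the slot that belongs to the next column (or past the end of the array).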
{
"msg_contents": "Hi Kuroda-san,\n\nI'm having second thoughts about how these patches mention the option\nvalues \"on|off\". These are used in the ALTER SUBSCRIPTION document\npage for 'two_phase' and 'failover' parameters, and then those\n\"on|off\" get propagated to the code comments, error messages, and\ntests...\n\nNow I see that on the CREATE SUBSCRIPTION page [1], every boolean\nparameter (even including 'two_phase' and 'failover') is described in\nterms of \"true|false\" (not \"on|off\").\n\nIn hindsight, it is probably better to refer only to true|false\neverywhere for these boolean parameters, instead of sometimes using\ndifferent values like on|off.\n\nWhat do you think?\n\n======\n[1] https://www.postgresql.org/docs/devel/sql-createsubscription.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 14 May 2024 14:06:34 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi Kuroda-san, Here are some review comments for all patches v9*\n\n//////////\nPatch v9-0001\n//////////\n\nThere were no changes since v8-0001, so no comments.\n\n//////////\nPatch v9-0002\n//////////\n\n======\nCommit Message\n\n2.1.\nRegarding the off->on case, the logical replication already has a\nmechanism for it, so there is no need to do anything special for the\non->off case; however, we must connect to the publisher and expressly\nchange the parameter. The operation cannot be rolled back, and\naltering the parameter from \"on\" to \"off\" within a transaction is\nprohibited.\n\nIn the opposite case, there is no need to prevent this because the\nlogical replication worker already had the mechanism to alter the slot\noption at a convenient time.\n\n~\n\nThis explanation seems to be going around in circles, without giving\nany new information:\n\nAFAICT, \"Regarding the off->on case, the logical replication already\nhas a mechanism for it, so there is no need to do anything special for\nthe on->off case;\"\n\nis saying pretty much the same as:\n\n\"In the opposite case, there is no need to prevent this because the\nlogical replication worker already had the mechanism to alter the slot\noption at a convenient time.\"\n\nBut, what I hoped for in previous review comments was an explanation\nsomewhat less vague than \"already has a mechanism\" or \"already had the\nmechanism\". Can't this have just 1 or 2 lines to say WHAT is that\nexisting mechanism for the \"off\" to \"on\" case, and WHY that means\nthere is nothing special to do in that scenario?\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n2.2. AlterSubscription\n\n /*\n- * The changed two_phase option of the slot can't be rolled\n- * back.\n+ * Since altering the two_phase option of subscriptions\n+ * also leads to changing the slot option, this command\n+ * cannot be rolled back. So prevent this if we are in a\n+ * transaction block. In the opposite case, there is no\n+ * need to prevent this because the logical replication\n+ * worker already had the mechanism to alter the slot\n+ * option at a convenient time.\n */\n\n(Same previous review comments, and same as my review comment for the\ncommit message above).\n\nI don't think \"already had the mechanism\" is enough explanation.\n\nAlso, the 2nd sentence doesn't make sense here because the comment\nonly said \"altering the slot option\" -- it didn't say it was altering\nit to \"on\" or altering it to \"off\", so \"the opposite case\" has no\nmeaning.\n\n~~~\n\n2.3. AlterSubscription\n\n /*\n- * Try to acquire the connection necessary for altering slot.\n+ * Check the need to alter the replication slot. Failover and two_phase\n+ * options are controlled by both the publisher (as a slot option) and the\n+ * subscriber (as a subscription option). The slot option must be altered\n+ * only when changing \"on\" to \"off\". Because in opposite case, the logical\n+ * replication worker already has the mechanism to do so at a convenient\n+ * time.\n+ */\n+ update_failover = replaces[Anum_pg_subscription_subfailover - 1];\n+ update_two_phase = (replaces[Anum_pg_subscription_subtwophasestate - 1] &&\n+ !opts.twophase);\n\nThis is again the same as other review comments above. 
Probably, when\nsome better explanation can be found for \"already has the mechanism to\ndo so at a convenient time.\" then all of these places can be changed\nusing similar text.\n\n//////////\nPatch v9-0003\n//////////\n\nThere are some imperfect code comments but AFAIK they are the same\nones from patch 0002. I think patch 0003 is just moving those comments\nto different places, so probably they would already be addressed by\npatch 0002.\n\n//////////\nPatch v9-0004\n//////////\n\n======\ndoc/src/sgml/catalogs.sgml\n\n4.1.\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>subforcealter</structfield> <type>bool</type>\n+ </para>\n+ <para>\n+ If true, the subscription can be altered <literal>two_phase</literal>\n+ option, even if there are prepared transactions\n+ </para></entry>\n+ </row>\n+\n\nBEFORE\nIf true, the subscription can be altered <literal>two_phase</literal>\noption, even if there are prepared transactions\n\nSUGGESTION\nIf true, then the ALTER SUBSCRIPTION command can disable\n<literal>two_phase</literal> option, even if there are uncommitted\nprepared transactions from when <literal>two_phase</literal> was\nenabled\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n4.2.\n-\n- <para>\n- The <literal>two_phase</literal> parameter can only be altered when the\n- subscription is disabled. When altering the parameter from\n<literal>on</literal>\n- to <literal>off</literal>, the backend process checks for any incomplete\n- prepared transactions done by the logical replication worker (from when\n- <literal>two_phase</literal> parameter was still <literal>on</literal>)\n- and, if any are found, those are aborted.\n- </para>\n\nWell, I still think there ought to be some mention of the relationship\nbetween 'force_alter' and 'two_phase' given on this ALTER SUBSCRIPTION\npage. Then the user can cross-reference to read what the 'force_alter'\nactually does.\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n4.3.\n+\n+ <varlistentry id=\"sql-createsubscription-params-with-force-alter\">\n+ <term><literal>force_alter</literal> (<type>boolean</type>)</term>\n+ <listitem>\n+ <para>\n+ Specifies whether the subscription can be altered\n+ <literal>two_phase</literal> option, even if there are prepared\n+ transactions. If specified, the backend process checks for any\n+ incomplete prepared transactions done by the logical replication\n+ worker (from when <literal>two_phase</literal> parameter was still\n+ <literal>on</literal>), if any are found, those are aborted.\n+ Otherwise, Altering the parameter from <literal>on</literal> to\n+ <literal>off</literal> will give an error when there are prepared\n+ transactions done by the logical replication worker.\n+ The default is <literal>false</literal>.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nThis explanation seems a bit repetitive. I think it can be improved as follows:\n\nSUGGESTION\nSpecifies if the ALTER SUBSCRIPTION can be forced to proceed instead\nof giving an error.\n\nThere is currently only one scenario where this parameter has any\neffect: When altering two_phase option from true to false it is\npossible for there to be incomplete prepared transactions done by the\nlogical replication worker (from when two_phase parameter was still\ntrue). 
If force_alter is false, then this will give an error; if\nforce_alter is true, then the incomplete prepared transactions are\naborted and the alter will proceed.\n\nThe default is false.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n4.4. CreateSubscription\n\n values[Anum_pg_subscription_subfailover - 1] = BoolGetDatum(opts.failover);\n+ values[Anum_pg_subscription_subforcealter] = BoolGetDatum(opts.force_alter);\n values[Anum_pg_subscription_subconninfo - 1] =\n\nHmm, looks like a bug. Shouldn't that index say -1?\n\n~~~\n4.5. AlterSubscription\n\n+ /*\n+ * Abort prepared transactions only if\n+ * 'force_alter' option is true. Otherwise raise\n+ * an ERROR.\n+ */\n+ if (IsSet(opts.specified_opts, SUBOPT_FORCE_ALTER))\n+ {\n+ if (!opts.force_alter)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot alter %s when there are prepared transactions\",\n+ \"two_phase = off\"),\n+ errhint(\"Resolve these transactions or set %s, and then try again.\",\n+ \"force_alter = true\")));\n+ }\n+ else\n+ {\n+ if (!sub->forcealter)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot alter %s when there are prepared transactions\",\n+ \"two_phase = off\"),\n+ errhint(\"Resolve these transactions or set %s, and then try again.\",\n+ \"force_alter = true\")));\n+ }\n+\n\nIIUC this code can be simplified to remove the error duplication.\nSomething like below:\n\nSUGGESTION\n\nbool raise_error = IsSet(opts.specified_opts, SUBOPT_FORCE_ALTER) ?\n!opts.force_alter : !sub->forcealter;\n\nif (raise_error)\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"cannot alter %s when there are prepared transactions\",\n \"two_phase = off\"),\n errhint(\"Resolve these transactions or set %s, and then try again.\",\n \"force_alter = true\")));\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n4.6. getSubscriptions\n\n+ if (fout->remoteVersion >= 170000)\n+ appendPQExpBufferStr(query,\n+ \" s.subforcealter\\n\");\n+ else\n+ appendPQExpBuffer(query,\n+ \" false AS subforcealter\\n\");\n+\n+\n\n4.6a.\nShould this just be combined with the existing \"if\n(fout->remoteVersion >= 170000)\" for failover?\n\n~\n\n4.6b.\nDouble blank lines.\n\n======\nsrc/bin/psql/describe.c\n\n4.7.\n+ if (pset.sversion >= 170000)\n+ appendPQExpBuffer(&buf,\n+ \", subforcealter AS \\\"%s\\\"\\n\",\n+ gettext_noop(\"Force_alter\"));\n\nIMO the column title should be \"Force alter\" (i.e. without the underscore)\n\n======\nsrc/include/catalog/pg_subscription.h\n\n4.8. CATALOG\n\n+ bool subforcealter; /* True if we allow to drop prepared transactions\n+ * when altering two_phase \"on\"->\"off\". */\n\nI think this is not actually the description of 'force_alter'. What\nyou wrote just happens to be the only case that this option currently\nworks for. Maybe a more correct description is something like below.\n\nSUGGESTION\nTrue allows the ALTER SUBSCRIPTION command to proceed under conditions\nthat would otherwise result in an error. Currently, 'force_alter' only\nhas an effect when altering the two_phase option from \"true\" to\n\"false\".\n\n~~~\n\n4.9. struct Subscription\n\n+ bool forcealter; /* True if we allow to drop prepared\n+ * transactions when altering two_phase\n+ * \"on\"->\"off\". 
*/\n\nDitto the previous review comment.\n\n======\nsrc/test/regress/expected/subscription.out\n\n4.10.\n-\n List of subscriptions\n- Name | Owner | Enabled | Publication\n| Binary | Streaming | Two-phase commit | Disable on error | Origin |\nPassword required | Run as owner? | Failover | Synchronous commit |\n Conninfo | Skip LSN\n-------------------+---------------------------+---------+-------------+--------+-----------+------------------+------------------+--------+-------------------+---------------+----------+--------------------+-----------------------------+----------\n- regress_testsub4 | regress_subscription_user | f | {testpub}\n| f | off | d | f | none |\nt | f | f | off |\ndbname=regress_doesnotexist | 0/0\n+\n List of\nsubscriptions\n+ Name | Owner | Enabled | Publication\n| Binary | Streaming | Two-phase commit | Disable on error | Origin |\nPassword required | Run as owner? | Failover | Force_alter |\nSynchronous commit | Conninfo | Skip LSN\n+------------------+---------------------------+---------+-------------+--------+-----------+------------------+------------------+--------+-------------------+---------------+----------+-------------+--------------------+-----------------------------+----------\n\nThe column heading should be \"Force alter\", as already mentioned by an\nearlier review comment (#4.7)\n\n======\nsrc/test/subscription/t/099_twophase_added.pl\n\n4.11.\n\n+# Alter the two_phase with the force_alter option. Apart from the the last\n+# ALTER SUBSCRIPTION command, the command will abort the prepared transaction\n+# and succeed.\n\nThere is typo \"the the\" and the wording is a bit strange. Why not just say:\n\nSUGGESTION\nAlter the two_phase true to false with the force_alter option enabled.\nThis command will succeed after aborting the prepared transaction.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 14 May 2024 14:18:45 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! Here is new version patch.\r\n\r\n> //////////\r\n> Patch v9-0002\r\n> //////////\r\n> \r\n> ======\r\n> Commit Message\r\n> \r\n> 2.1.\r\n> Regarding the off->on case, the logical replication already has a\r\n> mechanism for it, so there is no need to do anything special for the\r\n> on->off case; however, we must connect to the publisher and expressly\r\n> change the parameter. The operation cannot be rolled back, and\r\n> altering the parameter from \"on\" to \"off\" within a transaction is\r\n> prohibited.\r\n> \r\n> In the opposite case, there is no need to prevent this because the\r\n> logical replication worker already had the mechanism to alter the slot\r\n> option at a convenient time.\r\n> \r\n> ~\r\n> \r\n> This explanation seems to be going around in circles, without giving\r\n> any new information:\r\n> \r\n> AFAICT, \"Regarding the off->on case, the logical replication already\r\n> has a mechanism for it, so there is no need to do anything special for\r\n> the on->off case;\"\r\n> \r\n> is saying pretty much the same as:\r\n> \r\n> \"In the opposite case, there is no need to prevent this because the\r\n> logical replication worker already had the mechanism to alter the slot\r\n> option at a convenient time.\"\r\n> \r\n> But, what I hoped for in previous review comments was an explanation\r\n> somewhat less vague than \"already has a mechanism\" or \"already had the\r\n> mechanism\". Can't this have just 1 or 2 lines to say WHAT is that\r\n> existing mechanism for the \"off\" to \"on\" case, and WHY that means\r\n> there is nothing special to do in that scenario?\r\n>\r\n\r\nReworded. Thought?\r\n\r\n> 2.2. AlterSubscription\r\n> \r\n> /*\r\n> - * The changed two_phase option of the slot can't be rolled\r\n> - * back.\r\n> + * Since altering the two_phase option of subscriptions\r\n> + * also leads to changing the slot option, this command\r\n> + * cannot be rolled back. So prevent this if we are in a\r\n> + * transaction block. In the opposite case, there is no\r\n> + * need to prevent this because the logical replication\r\n> + * worker already had the mechanism to alter the slot\r\n> + * option at a convenient time.\r\n> */\r\n> \r\n> (Same previous review comments, and same as my review comment for the\r\n> commit message above).\r\n> \r\n> I don't think \"already had the mechanism\" is enough explanation.\r\n> \r\n> Also, the 2nd sentence doesn't make sense here because the comment\r\n> only said \"altering the slot option\" -- it didn't say it was altering\r\n> it to \"on\" or altering it to \"off\", so \"the opposite case\" has no\r\n> meaning.\r\n\r\nFixed.\r\n\r\n> 2.3. AlterSubscription\r\n> \r\n> /*\r\n> - * Try to acquire the connection necessary for altering slot.\r\n> + * Check the need to alter the replication slot. Failover and two_phase\r\n> + * options are controlled by both the publisher (as a slot option) and the\r\n> + * subscriber (as a subscription option). The slot option must be altered\r\n> + * only when changing \"on\" to \"off\". Because in opposite case, the logical\r\n> + * replication worker already has the mechanism to do so at a convenient\r\n> + * time.\r\n> + */\r\n> + update_failover = replaces[Anum_pg_subscription_subfailover - 1];\r\n> + update_two_phase = (replaces[Anum_pg_subscription_subtwophasestate - 1]\r\n> &&\r\n> + !opts.twophase);\r\n> \r\n> This is again the same as other review comments above. 
Probably, when\r\n> some better explanation can be found for \"already has the mechanism to\r\n> do so at a convenient time.\" then all of these places can be changed\r\n> using similar text.\r\n\r\nAdded a reference.\r\n\r\n> \r\n> //////////\r\n> Patch v9-0003\r\n> //////////\r\n> \r\n> There are some imperfect code comments but AFAIK they are the same\r\n> ones from patch 0002. I think patch 0003 is just moving those comments\r\n> to different places, so probably they would already be addressed by\r\n> patch 0002.\r\n>\r\n\r\nThe comment was moved, so no need to modify here.\r\n\r\n> ======\r\n> doc/src/sgml/catalogs.sgml\r\n> \r\n> 4.1.\r\n> + <row>\r\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> + <structfield>subforcealter</structfield> <type>bool</type>\r\n> + </para>\r\n> + <para>\r\n> + If true, the subscription can be altered <literal>two_phase</literal>\r\n> + option, even if there are prepared transactions\r\n> + </para></entry>\r\n> + </row>\r\n> +\r\n> \r\n> BEFORE\r\n> If true, the subscription can be altered <literal>two_phase</literal>\r\n> option, even if there are prepared transactions\r\n> \r\n> SUGGESTION\r\n> If true, then the ALTER SUBSCRIPTION command can disable\r\n> <literal>two_phase</literal> option, even if there are uncommitted\r\n> prepared transactions from when <literal>two_phase</literal> was\r\n> enabled\r\n\r\nFixed, added a link for ALTER SUBSCRIPTION.\r\n\r\n> \r\n> ======\r\n> doc/src/sgml/ref/alter_subscription.sgml\r\n> \r\n> 4.2.\r\n> -\r\n> - <para>\r\n> - The <literal>two_phase</literal> parameter can only be altered when\r\n> the\r\n> - subscription is disabled. When altering the parameter from\r\n> <literal>on</literal>\r\n> - to <literal>off</literal>, the backend process checks for any incomplete\r\n> - prepared transactions done by the logical replication worker (from when\r\n> - <literal>two_phase</literal> parameter was still <literal>on</literal>)\r\n> - and, if any are found, those are aborted.\r\n> - </para>\r\n> \r\n> Well, I still think there ought to be some mention of the relationship\r\n> between 'force_alter' and 'two_phase' given on this ALTER SUBSCRIPTION\r\n> page. Then the user can cross-reference to read what the 'force_alter'\r\n> actually does.\r\n>\r\n\r\nRevived the content, and added an link. Thought?\r\n\r\n> ======\r\n> doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> 4.3.\r\n> +\r\n> + <varlistentry id=\"sql-createsubscription-params-with-force-alter\">\r\n> + <term><literal>force_alter</literal>\r\n> (<type>boolean</type>)</term>\r\n> + <listitem>\r\n> + <para>\r\n> + Specifies whether the subscription can be altered\r\n> + <literal>two_phase</literal> option, even if there are prepared\r\n> + transactions. If specified, the backend process checks for any\r\n> + incomplete prepared transactions done by the logical replication\r\n> + worker (from when <literal>two_phase</literal> parameter was\r\n> still\r\n> + <literal>on</literal>), if any are found, those are aborted.\r\n> + Otherwise, Altering the parameter from <literal>on</literal> to\r\n> + <literal>off</literal> will give an error when there are prepared\r\n> + transactions done by the logical replication worker.\r\n> + The default is <literal>false</literal>.\r\n> + </para>\r\n> + </listitem>\r\n> + </varlistentry>\r\n> \r\n> This explanation seems a bit repetitive. 
I think it can be improved as follows:\r\n> \r\n> SUGGESTION\r\n> Specifies if the ALTER SUBSCRIPTION can be forced to proceed instead\r\n> of giving an error.\r\n> \r\n> There is currently only one scenario where this parameter has any\r\n> effect: When altering two_phase option from true to false it is\r\n> possible for there to be incomplete prepared transactions done by the\r\n> logical replication worker (from when two_phase parameter was still\r\n> true). If force_alter is false, then this will give an error; if\r\n> force_alter is true, then the incomplete prepared transactions are\r\n> aborted and the alter will proceed.\r\n> \r\n> The default is false.\r\n\r\nFixed, but added attributes.\r\n\r\n> \r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 4.4. CreateSubscription\r\n> \r\n> values[Anum_pg_subscription_subfailover - 1] = BoolGetDatum(opts.failover);\r\n> + values[Anum_pg_subscription_subforcealter] =\r\n> BoolGetDatum(opts.force_alter);\r\n> values[Anum_pg_subscription_subconninfo - 1] =\r\n> \r\n> Hmm, looks like a bug. Shouldn't that index say -1?\r\n>\r\n\r\nRight, fixed.\r\n\r\n> ~~~\r\n> 4.5. AlterSubscription\r\n> \r\n> + /*\r\n> + * Abort prepared transactions only if\r\n> + * 'force_alter' option is true. Otherwise raise\r\n> + * an ERROR.\r\n> + */\r\n> + if (IsSet(opts.specified_opts, SUBOPT_FORCE_ALTER))\r\n> + {\r\n> + if (!opts.force_alter)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"cannot alter %s when there are prepared transactions\",\r\n> + \"two_phase = off\"),\r\n> + errhint(\"Resolve these transactions or set %s, and then try again.\",\r\n> + \"force_alter = true\")));\r\n> + }\r\n> + else\r\n> + {\r\n> + if (!sub->forcealter)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"cannot alter %s when there are prepared transactions\",\r\n> + \"two_phase = off\"),\r\n> + errhint(\"Resolve these transactions or set %s, and then try again.\",\r\n> + \"force_alter = true\")));\r\n> + }\r\n> +\r\n> \r\n> IIUC this code can be simplified to remove the error duplication.\r\n> Something like below:\r\n> \r\n> SUGGESTION\r\n> \r\n> bool raise_error = IsSet(opts.specified_opts, SUBOPT_FORCE_ALTER) ?\r\n> !opts.force_alter : !sub->forcealter;\r\n> \r\n> if (raise_error)\r\n> ereport(ERROR,\r\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> errmsg(\"cannot alter %s when there are prepared transactions\",\r\n> \"two_phase = off\"),\r\n> errhint(\"Resolve these transactions or set %s, and then try again.\",\r\n> \"force_alter = true\")));\r\n>\r\n\r\nModified.\r\n\r\n> ======\r\n> src/bin/pg_dump/pg_dump.c\r\n> \r\n> 4.6. getSubscriptions\r\n> \r\n> + if (fout->remoteVersion >= 170000)\r\n> + appendPQExpBufferStr(query,\r\n> + \" s.subforcealter\\n\");\r\n> + else\r\n> + appendPQExpBuffer(query,\r\n> + \" false AS subforcealter\\n\");\r\n> +\r\n> +\r\n> \r\n> 4.6a.\r\n> Should this just be combined with the existing \"if\r\n> (fout->remoteVersion >= 170000)\" for failover?\r\n\r\nThis was intentional. Features for PG17 have already been frozen, so\r\nthe patch will be pushed for PG18. 
After removeVersion is bumped, \r\nI want to replace to \"(fout->remoteVersion >= 180000)\"\r\n\r\n> \r\n> ~\r\n> \r\n> 4.6b.\r\n> Double blank lines.\r\n\r\nFixed.\r\n\r\n> src/bin/psql/describe.c\r\n> \r\n> 4.7.\r\n> + if (pset.sversion >= 170000)\r\n> + appendPQExpBuffer(&buf,\r\n> + \", subforcealter AS \\\"%s\\\"\\n\",\r\n> + gettext_noop(\"Force_alter\"));\r\n> \r\n> IMO the column title should be \"Force alter\" (i.e. without the underscore)\r\n>\r\n\r\nFixed.\r\n\r\n> ======\r\n> src/include/catalog/pg_subscription.h\r\n> \r\n> 4.8. CATALOG\r\n> \r\n> + bool subforcealter; /* True if we allow to drop prepared transactions\r\n> + * when altering two_phase \"on\"->\"off\". */\r\n> \r\n> I think this is not actually the description of 'force_alter'. What\r\n> you wrote just happens to be the only case that this option currently\r\n> works for. Maybe a more correct description is something like below.\r\n> \r\n> SUGGESTION\r\n> True allows the ALTER SUBSCRIPTION command to proceed under conditions\r\n> that would otherwise result in an error. Currently, 'force_alter' only\r\n> has an effect when altering the two_phase option from \"true\" to\r\n> \"false\".\r\n>\r\n\r\nHmm. Seems bit long, but used yours.\r\n\r\n> ~~~\r\n> \r\n> 4.9. struct Subscription\r\n> \r\n> + bool forcealter; /* True if we allow to drop prepared\r\n> + * transactions when altering two_phase\r\n> + * \"on\"->\"off\". */\r\n> \r\n> Ditto the previous review comment.\r\n>\r\n\r\nDitto.\r\n\r\n> ======\r\n> src/test/regress/expected/subscription.out\r\n> \r\n> 4.10.\r\n> -\r\n> List of subscriptions\r\n> - Name | Owner | Enabled | Publication\r\n> | Binary | Streaming | Two-phase commit | Disable on error | Origin |\r\n> Password required | Run as owner? | Failover | Synchronous commit |\r\n> Conninfo | Skip LSN\r\n> -------------------+---------------------------+---------+-------------+--------\r\n> +-----------+------------------+------------------+--------+-------------------\r\n> +---------------+----------+--------------------+-----------------------------+\r\n> ----------\r\n> - regress_testsub4 | regress_subscription_user | f | {testpub}\r\n> | f | off | d | f | none |\r\n> t | f | f | off |\r\n> dbname=regress_doesnotexist | 0/0\r\n> +\r\n> List of\r\n> subscriptions\r\n> + Name | Owner | Enabled | Publication\r\n> | Binary | Streaming | Two-phase commit | Disable on error | Origin |\r\n> Password required | Run as owner? | Failover | Force_alter |\r\n> Synchronous commit | Conninfo | Skip LSN\r\n> +------------------+---------------------------+---------+-------------+-------\r\n> -+-----------+------------------+------------------+--------+------------------\r\n> -+---------------+----------+-------------+--------------------+---------------\r\n> --------------+----------\r\n> \r\n> The column heading should be \"Force alter\", as already mentioned by an\r\n> earlier review comment (#4.7)\r\n\r\nYeah, fixed.\r\n\r\n> src/test/subscription/t/099_twophase_added.pl\r\n> \r\n> 4.11.\r\n> \r\n> +# Alter the two_phase with the force_alter option. Apart from the the last\r\n> +# ALTER SUBSCRIPTION command, the command will abort the prepared\r\n> transaction\r\n> +# and succeed.\r\n> \r\n> There is typo \"the the\" and the wording is a bit strange. 
Why not just say:\r\n> \r\n> SUGGESTION\r\n> Alter the two_phase true to false with the force_alter option enabled.\r\n> This command will succeed after aborting the prepared transaction.\r\n\r\nFixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Tue, 14 May 2024 12:03:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! New patch is available in [1].\r\n\r\n> I'm having second thoughts about how these patches mention the option\r\n> values \"on|off\". These are used in the ALTER SUBSCRIPTION document\r\n> page for 'two_phase' and 'failover' parameters, and then those\r\n> \"on|off\" get propagated to the code comments, error messages, and\r\n> tests...\r\n> \r\n> Now I see that on the CREATE SUBSCRIPTION page [1], every boolean\r\n> parameter (even including 'two_phase' and 'failover') is described in\r\n> terms of \"true|false\" (not \"on|off\").\r\n\r\nHmm. But I could sentences like \"The default value is off,...\". Also, in alter_subscription.sgml,\r\n\"on|off\" notation has already been used. Not sure, but I felt there are no rules around here.\r\n\r\n> In hindsight, it is probably better to refer only to true|false\r\n> everywhere for these boolean parameters, instead of sometimes using\r\n> different values like on|off.\r\n> \r\n> What do you think?\r\n\r\nIt's OK for me to make message/code comments consistent. Not sure the documentation,\r\nbut followed only my part.\r\n\r\n[1]: https://www.postgresql.org/message-id/OSBPR01MB2552F66463EFCFD654E87C09F5E32%40OSBPR01MB2552.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Tue, 14 May 2024 12:04:46 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi Kuroda-san. Here are my review comments for latest v10* patches.\n\n//////////\npatch v10-0001\n//////////\n\nNo changes. No comments.\n\n//////////\npatch v10-0002\n//////////\n\n======\nCommit message\n\n2.1.\nRegarding the false->true case, the backend process alters the subtwophase to\nLOGICALREP_TWOPHASE_STATE_PENDING once. After the subscription is enabled, a new\nlogical replication worker requests to change the two_phase option of its slot\nfrom pending to true after the initial data synchronization is done. The code\npath is the same as the case in which two_phase is initially set to true, so\nthere is no need to do something remarkable. However, for the true->false case,\nthe backend must connect to the publisher and expressly change the parameter\nbecause the apply worker does not alter the option to false. The\noperation cannot\nbe rolled back, and altering the parameter from \"true\" to \"false\" within a\ntransaction is prohibited.\n\n~\n\nBEFORE\nThe operation cannot be rolled back, and altering the parameter from\n\"true\" to \"false\" within a transaction is prohibited.\n\nSUGGESTION\nBecause this operation cannot be rolled back, altering the two_phase\nparameter from \"true\" to \"false\" within a transaction is prohibited.\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n2.2.\n <command>ALTER SUBSCRIPTION ... SET (failover = on|off)</command> and\n- <command>ALTER SUBSCRIPTION ... SET (two_phase = on|off)</command>\n+ <command>ALTER SUBSCRIPTION ... SET (two_phase = off)</command>\n\nI wasn't sure why you chose to keep on|off here instead of true|false,\nsince in subsequence patch 0003 you changed it true/false everywhere\nas discussed in previous reviews.\n\nOTOH if you only did this to be consistent with the \"failover=on|off\"\nthen that is OK; but in that case I might raise a separate hackers\nthread to propose those should also be changed to true|false for\nconsistency with the parameer listed on the CREATE SUBSCRIPTION page.\nWhat do you think?\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n2.3.\n /*\n- * The changed two_phase option of the slot can't be rolled\n- * back.\n+ * Altering the parameter from \"true\" to \"false\" within a\n+ * transaction is prohibited. Since the apply worker does\n+ * not alter the slot option to false, the backend must\n+ * connect to the publisher and expressly change the\n+ * parameter.\n+ *\n+ * There is no need to do something remarkable regarding\n+ * the \"false\" to \"true\" case; the backend process alters\n+ * subtwophase to LOGICALREP_TWOPHASE_STATE_PENDING once.\n+ * After the subscription is enabled, a new logical\n+ * replication worker requests to change the two_phase\n+ * option of its slot when the initial data synchronization\n+ * is done. 
The code path is the same as the case in which\n+ * two_phase is initially set to true.\n */\n\nBEFORE\n...worker requests to change the two_phase option of its slot when...\n\nSUGGESTION\n...worker requests to change the two_phase option of its slot from\npending to true when...\n\n======\nsrc/test/subscription/t/099_twophase_added.pl\n\n2.4.\n+#####################\n+# Check the case that prepared transactions exist on the publisher node.\n+#\n+# Since the two_phase is \"off\", then normally, this PREPARE will do nothing\n+# until the COMMIT PREPARED, but in this test, we toggle the two_phase to\n+# \"true\" again before the COMMIT PREPARED happens.\n\n/Since the two_phase is \"off\"/Since the two_phase is \"false\"/\n\n//////////\npatch v10-0003\n//////////\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n3.1. AlterSubscription\n\n+ * If two_phase was enabled, there is a possibility that\n+ * transactions have already been PREPARE'd. They must be\n+ * checked and rolled back.\n */\n if (!opts.twophase)\n\nI think it will less ambiguous if you modify this to say \"If two_phase\nwas previously enabled\"\n\n~~~\n\n3.2.\nif (!opts.twophase)\n{\nList *prepared_xacts;\n\n/*\n* Altering the parameter from \"true\" to \"false\" within\n* a transaction is prohibited. Since the apply worker\n* does not alter the slot option to false, the backend\n* must connect to the publisher and expressly change\n* the parameter.\n*\n* There is no need to do something remarkable\n* regarding the \"false\" to \"true\" case; the backend\n* process alters subtwophase to\n* LOGICALREP_TWOPHASE_STATE_PENDING once. After the\n* subscription is enabled, a new logical replication\n* worker requests to change the two_phase option of\n* its slot when the initial data synchronization is\n* done. The code path is the same as the case in which\n* two_phase is initially set to true.\n*/\nif (!opts.twophase)\nPreventInTransactionBlock(isTopLevel,\n\"ALTER SUBSCRIPTION ... SET (two_phase = false)\");\n\n/*\n* To prevent prepared transactions from being\n* isolated, they must manually be aborted.\n*/\nif (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED &&\n(prepared_xacts = GetGidListBySubid(subid)) != NIL)\n{\n/* Abort all listed transactions */\nforeach_ptr(char, gid, prepared_xacts)\nFinishPreparedTransaction(gid, false);\n\nlist_free_deep(prepared_xacts);\n}\n}\n\n/* Change system catalog acoordingly */\nvalues[Anum_pg_subscription_subtwophasestate - 1] =\nCharGetDatum(opts.twophase ?\nLOGICALREP_TWOPHASE_STATE_PENDING :\nLOGICALREP_TWOPHASE_STATE_DISABLED);\nreplaces[Anum_pg_subscription_subtwophasestate - 1] = true;\n}\n\n~\n\nWhy is \"if (!opts.twophase)\" being checked at the top and then\nimmediately being checed again here:\n+ if (!opts.twophase)\n+ PreventInTransactionBlock(isTopLevel,\n+ \"ALTER SUBSCRIPTION ... SET (two_phase = false)\");\n\nAnd then again here:\nCharGetDatum(opts.twophase ?\nLOGICALREP_TWOPHASE_STATE_PENDING :\nLOGICALREP_TWOPHASE_STATE_DISABLED);\n\nThere is no need to re-check a flag that was already checked, so\nclearly some of this logic/code is either wrong or redundant.\n\n======\nsrc/test/subscription/t/099_twophase_added.pl\n\n(Let's change these on|off to true|false to match what you did already\nin patch 0002).\n\n3.3.\n+#####################\n+# Check the case that prepared transactions exist on the subscriber node\n+#\n+# If the two_phase is altering from \"on\" to \"off\" and there are prepared\n+# transactions on the subscriber, they must be aborted. 
This test checks it.\n\n\n/off/false/\n\n/on/true/\n\n~~~\n\n3.4.\n+# Verify the prepared transaction has been replicated to the subscriber because\n+# two_phase is set to \"on\".\n\n/on/true/\n\n~~~\n\n3.5.\n+# Toggle the two_phase to \"off\" before the COMMIT PREPARED\n+$node_subscriber->safe_psql(\n+ 'postgres', \"\n+ ALTER SUBSCRIPTION regress_sub DISABLE;\n+ ALTER SUBSCRIPTION regress_sub SET (two_phase = off);\n+ ALTER SUBSCRIPTION regress_sub ENABLE;\");\n\n/off/false/\n\n/two_phase = off/two_phase = false/\n\n~~~\n\n3.6.\n+# Verify any prepared transactions are aborted because two_phase is changed to\n+# \"off\".\n\n/off/false/\n\n//////////\npatch v10-0004\n//////////\n\n======\n4.g1. GENERAL - document rendering fails\n\nFYI - The document failed to build after I apply patch 0003. Did you try it?\n\nIn my environment it reported some unbalanced tags:\n\nref/create_subscription.sgml:448: parser error : Opening and ending\ntag mismatch: link line 436 and para\n </para>\n ^\nref/create_subscription.sgml:449: parser error : Opening and ending\ntag mismatch: para line 435 and listitem\n </listitem>\n\netc.\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n4.1.\n <para>\n The <literal>two_phase</literal> parameter can only be altered when the\n- subscription is disabled. When altering the parameter from\n<literal>true</literal>\n- to <literal>false</literal>, the backend process checks for any\nincomplete\n- prepared transactions done by the logical replication worker (from when\n- <literal>two_phase</literal> parameter was still <literal>true</literal>)\n- and, if any are found, those are aborted.\n+ subscription is disabled. Altering the parameter from\n<literal>true</literal>\n+ to <literal>false</literal> will give an error when when there are\n+ prepared transactions done by the logical replication worker. If you want\n+ to alter the parameter forcibly in this case,\n+ <link linkend=\"sql-createsubscription-params-with-force-alter\"><literal>force_alter</literal></link>\n+ option must be set to <literal>true</literal> at the same time.\n </para>\n\nTYPO: \"when when\"\n\nWhy is necessary to say \"at the same time\"?\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n4.2.\n+ <varlistentry id=\"sql-createsubscription-params-with-force-alter\">\n+ <term><literal>force_alter</literal> (<type>boolean</type>)</term>\n+ <listitem>\n+ <para>\n+ Specifies if the <link\nlinkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION</command>\n+ can be forced to proceed instead of giving an error. There is\n+ currently only one scenario where this parameter has any effect: When\n+ altering <literal>two_phase</literal> option from\n<literal>true</literal>\n+ to <literal>false</literal> it is possible for there to be incomplete\n+ prepared transactions done by the logical replication worker (from\n+ when <literal>two_phase</literal> parameter was still\n<literal>true</literal>).\n+ If <literal>force_alter</literal> is <literal>false</literal>, then\n+ this will give an error; if <literal>force_alter</literal> is\n+ <literal>true</literal>, then the incomplete prepared transactions\n+ are aborted and the alter will proceed.\n+ The default is <literal>false</literal>.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nIMO this will be better broken into multiple paragraphs.\n\n1. Specifies...\n2. There is...\n3. 
The default is...\n\n======\nsrc/test/subscription/t/099_twophase_added.pl\n\n(Let's change all the on|off to true|false like you already did in patch 0002.\n\n4.3.\n+# Try altering the two_phase option to \"off.\" The command will fail since there\n+# is a prepared transaction and the 'force_alter' option is not specified as\n+# true.\n+my $stdout;\n+my $stderr;\n\n/off./false/\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 15 May 2024 15:02:36 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Tue, May 14, 2024 at 10:03 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Peter,\n>\n...\n> > 4.11.\n> >\n> > +# Alter the two_phase with the force_alter option. Apart from the the last\n> > +# ALTER SUBSCRIPTION command, the command will abort the prepared\n> > transaction\n> > +# and succeed.\n> >\n> > There is typo \"the the\" and the wording is a bit strange. Why not just say:\n> >\n> > SUGGESTION\n> > Alter the two_phase true to false with the force_alter option enabled.\n> > This command will succeed after aborting the prepared transaction.\n>\n> Fixed.\n>\n\nYou wrote \"Fixed\" for that patch v9-0004 suggestion but I don't think\nanything was changed at all. Accidentally missed?\n\n======\nKind Regards,\nPeter Smith.\nFutjisu Australia\n\n\n",
"msg_date": "Wed, 15 May 2024 15:12:50 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! Here are new patches.\r\n\r\n > \r\n> //////////\r\n> patch v10-0002\r\n> //////////\r\n> \r\n> ======\r\n> Commit message\r\n> \r\n> 2.1.\r\n> Regarding the false->true case, the backend process alters the subtwophase to\r\n> LOGICALREP_TWOPHASE_STATE_PENDING once. After the subscription is\r\n> enabled, a new\r\n> logical replication worker requests to change the two_phase option of its slot\r\n> from pending to true after the initial data synchronization is done. The code\r\n> path is the same as the case in which two_phase is initially set to true, so\r\n> there is no need to do something remarkable. However, for the true->false case,\r\n> the backend must connect to the publisher and expressly change the parameter\r\n> because the apply worker does not alter the option to false. The\r\n> operation cannot\r\n> be rolled back, and altering the parameter from \"true\" to \"false\" within a\r\n> transaction is prohibited.\r\n> \r\n> ~\r\n> \r\n> BEFORE\r\n> The operation cannot be rolled back, and altering the parameter from\r\n> \"true\" to \"false\" within a transaction is prohibited.\r\n> \r\n> SUGGESTION\r\n> Because this operation cannot be rolled back, altering the two_phase\r\n> parameter from \"true\" to \"false\" within a transaction is prohibited.\r\n\r\nFixed.\r\n\r\n> \r\n> ======\r\n> doc/src/sgml/ref/alter_subscription.sgml\r\n> \r\n> 2.2.\r\n> <command>ALTER SUBSCRIPTION ... SET (failover = on|off)</command>\r\n> and\r\n> - <command>ALTER SUBSCRIPTION ... SET (two_phase =\r\n> on|off)</command>\r\n> + <command>ALTER SUBSCRIPTION ... SET (two_phase = off)</command>\r\n> \r\n> I wasn't sure why you chose to keep on|off here instead of true|false,\r\n> since in subsequence patch 0003 you changed it true/false everywhere\r\n> as discussed in previous reviews.\r\n> \r\n> OTOH if you only did this to be consistent with the \"failover=on|off\"\r\n> then that is OK; but in that case I might raise a separate hackers\r\n> thread to propose those should also be changed to true|false for\r\n> consistency with the parameer listed on the CREATE SUBSCRIPTION page.\r\n> What do you think?\r\n\r\nYeah, I did not change here, because other parameters were notated as\r\non/off. I found you started the forked thread [1] so I will revise the patch\r\nafter it was accepted.\r\n\r\n> \r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 2.3.\r\n> /*\r\n> - * The changed two_phase option of the slot can't be rolled\r\n> - * back.\r\n> + * Altering the parameter from \"true\" to \"false\" within a\r\n> + * transaction is prohibited. Since the apply worker does\r\n> + * not alter the slot option to false, the backend must\r\n> + * connect to the publisher and expressly change the\r\n> + * parameter.\r\n> + *\r\n> + * There is no need to do something remarkable regarding\r\n> + * the \"false\" to \"true\" case; the backend process alters\r\n> + * subtwophase to LOGICALREP_TWOPHASE_STATE_PENDING once.\r\n> + * After the subscription is enabled, a new logical\r\n> + * replication worker requests to change the two_phase\r\n> + * option of its slot when the initial data synchronization\r\n> + * is done. 
The code path is the same as the case in which\r\n> + * two_phase is initially set to true.\r\n> */\r\n> \r\n> BEFORE\r\n> ...worker requests to change the two_phase option of its slot when...\r\n> \r\n> SUGGESTION\r\n> ...worker requests to change the two_phase option of its slot from\r\n> pending to true when...\r\n\r\nFixed.\r\n\r\n> \r\n> ======\r\n> src/test/subscription/t/099_twophase_added.pl\r\n> \r\n> 2.4.\r\n> +#####################\r\n> +# Check the case that prepared transactions exist on the publisher node.\r\n> +#\r\n> +# Since the two_phase is \"off\", then normally, this PREPARE will do nothing\r\n> +# until the COMMIT PREPARED, but in this test, we toggle the two_phase to\r\n> +# \"true\" again before the COMMIT PREPARED happens.\r\n> \r\n> /Since the two_phase is \"off\"/Since the two_phase is \"false\"/\r\n\r\nFixed.\r\n\r\n> \r\n> //////////\r\n> patch v10-0003\r\n> //////////\r\n> \r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 3.1. AlterSubscription\r\n> \r\n> + * If two_phase was enabled, there is a possibility that\r\n> + * transactions have already been PREPARE'd. They must be\r\n> + * checked and rolled back.\r\n> */\r\n> if (!opts.twophase)\r\n> \r\n> I think it will less ambiguous if you modify this to say \"If two_phase\r\n> was previously enabled\"\r\n\r\nFixed.\r\n\r\n> \r\n> ~~~\r\n> \r\n> 3.2.\r\n> if (!opts.twophase)\r\n> {\r\n> List *prepared_xacts;\r\n> \r\n> /*\r\n> * Altering the parameter from \"true\" to \"false\" within\r\n> * a transaction is prohibited. Since the apply worker\r\n> * does not alter the slot option to false, the backend\r\n> * must connect to the publisher and expressly change\r\n> * the parameter.\r\n> *\r\n> * There is no need to do something remarkable\r\n> * regarding the \"false\" to \"true\" case; the backend\r\n> * process alters subtwophase to\r\n> * LOGICALREP_TWOPHASE_STATE_PENDING once. After the\r\n> * subscription is enabled, a new logical replication\r\n> * worker requests to change the two_phase option of\r\n> * its slot when the initial data synchronization is\r\n> * done. The code path is the same as the case in which\r\n> * two_phase is initially set to true.\r\n> */\r\n> if (!opts.twophase)\r\n> PreventInTransactionBlock(isTopLevel,\r\n> \"ALTER SUBSCRIPTION ... SET (two_phase = false)\");\r\n> \r\n> /*\r\n> * To prevent prepared transactions from being\r\n> * isolated, they must manually be aborted.\r\n> */\r\n> if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED &&\r\n> (prepared_xacts = GetGidListBySubid(subid)) != NIL)\r\n> {\r\n> /* Abort all listed transactions */\r\n> foreach_ptr(char, gid, prepared_xacts)\r\n> FinishPreparedTransaction(gid, false);\r\n> \r\n> list_free_deep(prepared_xacts);\r\n> }\r\n> }\r\n> \r\n> /* Change system catalog acoordingly */\r\n> values[Anum_pg_subscription_subtwophasestate - 1] =\r\n> CharGetDatum(opts.twophase ?\r\n> LOGICALREP_TWOPHASE_STATE_PENDING :\r\n> LOGICALREP_TWOPHASE_STATE_DISABLED);\r\n> replaces[Anum_pg_subscription_subtwophasestate - 1] = true;\r\n> }\r\n> \r\n> ~\r\n> \r\n> Why is \"if (!opts.twophase)\" being checked at the top and then\r\n> immediately being checed again here:\r\n> + if (!opts.twophase)\r\n> + PreventInTransactionBlock(isTopLevel,\r\n> + \"ALTER SUBSCRIPTION ... 
SET (two_phase = false)\");\r\n\r\nOh, this was caused by wrong git operations.\r\n\r\n> And then again here:\r\n> CharGetDatum(opts.twophase ?\r\n> LOGICALREP_TWOPHASE_STATE_PENDING :\r\n> LOGICALREP_TWOPHASE_STATE_DISABLED);\r\n> \r\n> There is no need to re-check a flag that was already checked, so\r\n> clearly some of this logic/code is either wrong or redundant.\r\n\r\nRight. I added a new variable to store the value to be changed. Thouth?\r\n\r\n > \r\n> ======\r\n> src/test/subscription/t/099_twophase_added.pl\r\n> \r\n> (Let's change these on|off to true|false to match what you did already\r\n> in patch 0002).\r\n> \r\n> 3.3.\r\n> +#####################\r\n> +# Check the case that prepared transactions exist on the subscriber node\r\n> +#\r\n> +# If the two_phase is altering from \"on\" to \"off\" and there are prepared\r\n> +# transactions on the subscriber, they must be aborted. This test checks it.\r\n> \r\n> \r\n> /off/false/\r\n> \r\n> /on/true/\r\n\r\nFixed.\r\n\r\n> \r\n> ~~~\r\n> \r\n> 3.4.\r\n> +# Verify the prepared transaction has been replicated to the subscriber because\r\n> +# two_phase is set to \"on\".\r\n> \r\n> /on/true/\r\n\r\nFixed.\r\n\r\n> \r\n> ~~~\r\n> \r\n> 3.5.\r\n> +# Toggle the two_phase to \"off\" before the COMMIT PREPARED\r\n> +$node_subscriber->safe_psql(\r\n> + 'postgres', \"\r\n> + ALTER SUBSCRIPTION regress_sub DISABLE;\r\n> + ALTER SUBSCRIPTION regress_sub SET (two_phase = off);\r\n> + ALTER SUBSCRIPTION regress_sub ENABLE;\");\r\n> \r\n> /off/false/\r\n> \r\n> /two_phase = off/two_phase = false/\r\n\r\nFixed.\r\n\r\n> \r\n> ~~~\r\n> \r\n> 3.6.\r\n> +# Verify any prepared transactions are aborted because two_phase is changed\r\n> to\r\n> +# \"off\".\r\n> \r\n> /off/false/\r\n\r\nFixed.\r\n\r\n> \r\n> //////////\r\n> patch v10-0004\r\n> //////////\r\n> \r\n> ======\r\n> 4.g1. GENERAL - document rendering fails\r\n> \r\n> FYI - The document failed to build after I apply patch 0003. Did you try it?\r\n> \r\n> In my environment it reported some unbalanced tags:\r\n> \r\n> ref/create_subscription.sgml:448: parser error : Opening and ending\r\n> tag mismatch: link line 436 and para\r\n> </para>\r\n> ^\r\n> ref/create_subscription.sgml:449: parser error : Opening and ending\r\n> tag mismatch: para line 435 and listitem\r\n> </listitem>\r\n> \r\n> etc.\r\n\r\nOh, I forgot to run `make check`. Sorry. It seemed that I missed to close <link> tag.\r\n\r\n> \r\n> ======\r\n> doc/src/sgml/ref/alter_subscription.sgml\r\n> \r\n> 4.1.\r\n> <para>\r\n> The <literal>two_phase</literal> parameter can only be altered when\r\n> the\r\n> - subscription is disabled. When altering the parameter from\r\n> <literal>true</literal>\r\n> - to <literal>false</literal>, the backend process checks for any\r\n> incomplete\r\n> - prepared transactions done by the logical replication worker (from when\r\n> - <literal>two_phase</literal> parameter was still\r\n> <literal>true</literal>)\r\n> - and, if any are found, those are aborted.\r\n> + subscription is disabled. Altering the parameter from\r\n> <literal>true</literal>\r\n> + to <literal>false</literal> will give an error when when there are\r\n> + prepared transactions done by the logical replication worker. 
If you want\r\n> + to alter the parameter forcibly in this case,\r\n> + <link\r\n> linkend=\"sql-createsubscription-params-with-force-alter\"><literal>force_alter\r\n> </literal></link>\r\n> + option must be set to <literal>true</literal> at the same time.\r\n> </para>\r\n> \r\n> TYPO: \"when when\"\r\n\r\nRemoved.\r\n\r\n> Why is necessary to say \"at the same time\"?\r\n\r\nNot needed. Fixed.\r\n\r\n> \r\n> ======\r\n> doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> 4.2.\r\n> + <varlistentry id=\"sql-createsubscription-params-with-force-alter\">\r\n> + <term><literal>force_alter</literal>\r\n> (<type>boolean</type>)</term>\r\n> + <listitem>\r\n> + <para>\r\n> + Specifies if the <link\r\n> linkend=\"sql-altersubscription\"><command>ALTER\r\n> SUBSCRIPTION</command>\r\n> + can be forced to proceed instead of giving an error. There is\r\n> + currently only one scenario where this parameter has any effect:\r\n> When\r\n> + altering <literal>two_phase</literal> option from\r\n> <literal>true</literal>\r\n> + to <literal>false</literal> it is possible for there to be incomplete\r\n> + prepared transactions done by the logical replication worker (from\r\n> + when <literal>two_phase</literal> parameter was still\r\n> <literal>true</literal>).\r\n> + If <literal>force_alter</literal> is <literal>false</literal>, then\r\n> + this will give an error; if <literal>force_alter</literal> is\r\n> + <literal>true</literal>, then the incomplete prepared transactions\r\n> + are aborted and the alter will proceed.\r\n> + The default is <literal>false</literal>.\r\n> + </para>\r\n> + </listitem>\r\n> + </varlistentry>\r\n> \r\n> IMO this will be better broken into multiple paragraphs.\r\n> \r\n> 1. Specifies...\r\n> 2. There is...\r\n> 3. The default is...\r\n\r\nSeparated.\r\n\r\n> \r\n> ======\r\n> src/test/subscription/t/099_twophase_added.pl\r\n> \r\n> (Let's change all the on|off to true|false like you already did in patch 0002.\r\n> \r\n> 4.3.\r\n> +# Try altering the two_phase option to \"off.\" The command will fail since there\r\n> +# is a prepared transaction and the 'force_alter' option is not specified as\r\n> +# true.\r\n> +my $stdout;\r\n> +my $stderr;\r\n> \r\n> /off./false/\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAHut%2BPs-RqrggaJU5w85BbeQzw9CLmmLgADVJoJ%3Dxx_4D5CWvw%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Thu, 16 May 2024 05:02:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\n> You wrote \"Fixed\" for that patch v9-0004 suggestion but I don't think\r\n> anything was changed at all. Accidentally missed?\r\n\r\nSorry, I missed to do `git add` after the revision.\r\nThe change was also included in new patch [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/OSBPR01MB25522052F9F3E3AAD3BA2A8CF5ED2%40OSBPR01MB2552.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Thu, 16 May 2024 05:04:51 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi, Here are my remaining review comments for the latest v11* patches.\n\n//////////\nv11-0001\n//////////\n\nNo changes. No comments.\n\n//////////\nv11-0002\n//////////\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n2.1.\n <command>ALTER SUBSCRIPTION ... SET (failover = on|off)</command> and\n- <command>ALTER SUBSCRIPTION ... SET (two_phase = on|off)</command>\n+ <command>ALTER SUBSCRIPTION ... SET (two_phase = off)</command>\n\nMy other thread patch has already been pushed [1], so now you can\nmodify this to say \"true|false\" as previously suggested.\n\n\n//////////\nv11-0003\n//////////\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n3.1. AlterSubscription\n\n+ subtwophase = LOGICALREP_TWOPHASE_STATE_DISABLED;\n+ }\n+ else\n+ subtwophase = LOGICALREP_TWOPHASE_STATE_PENDING;\n+\n+\n /* Change system catalog acoordingly */\n values[Anum_pg_subscription_subtwophasestate - 1] =\n- CharGetDatum(opts.twophase ?\n- LOGICALREP_TWOPHASE_STATE_PENDING :\n- LOGICALREP_TWOPHASE_STATE_DISABLED);\n+ CharGetDatum(subtwophase);\n replaces[Anum_pg_subscription_subtwophasestate - 1] = true;\n\nSorry, I don't think that 'subtwophase' change is an improvement --\nIMO the existing ternary code was fine as-is.\n\nI only reported the excessive flag checking in the previous v10-0003\nreview because of some extra \"if (!opts.twophase)\" code but that was\ncaused by what you called \"wrong git operations.\" and is already fixed\nnow.\n\n//////////\nv11-0004\n//////////\n\n======\nsrc/sgml/catalogs.sgml\n\n4.1.\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>subforcealter</structfield> <type>bool</type>\n+ </para>\n+ <para>\n+ If true, then the <link\nlinkend=\"sql-altersubscription\"><command>ALTER\nSUBSCRIPTION</command></link>\n+ can disable <literal>two_phase</literal> option, even if there are\n+ uncommitted prepared transactions from when <literal>two_phase</literal>\n+ was enabled\n+ </para></entry>\n+ </row>\n+\n\nI think this description should be changed to say what it *really*\ndoes. IMO, the stuff about 'two_phase' is more like a side-effect.\n\nSUGGESTION (this is just pseudo-code. You can add the CREATE\nSUBSCRIPTION 'force_alter' link appropriately)\n\nIf true, then the <command>ALTER SUBSCRIPTION</command> command can\nsometimes be forced to proceed instead of giving an error. See\n<link>force_alter</link> parameter for details about when this might\nbe useful.\n\n======\n[1] https://github.com/postgres/postgres/commit/fa65a022db26bc446fb67ce1d7ac543fa4bb72e4\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 17 May 2024 12:20:55 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! Here are new patches.\r\n\r\n> ======\r\n> doc/src/sgml/ref/alter_subscription.sgml\r\n> \r\n> 2.1.\r\n> <command>ALTER SUBSCRIPTION ... SET (failover = on|off)</command>\r\n> and\r\n> - <command>ALTER SUBSCRIPTION ... SET (two_phase =\r\n> on|off)</command>\r\n> + <command>ALTER SUBSCRIPTION ... SET (two_phase = off)</command>\r\n> \r\n> My other thread patch has already been pushed [1], so now you can\r\n> modify this to say \"true|false\" as previously suggested.\r\n\r\nFixed accordingly.\r\n\r\n> //////////\r\n> v11-0003\r\n> //////////\r\n> \r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 3.1. AlterSubscription\r\n> \r\n> + subtwophase = LOGICALREP_TWOPHASE_STATE_DISABLED;\r\n> + }\r\n> + else\r\n> + subtwophase = LOGICALREP_TWOPHASE_STATE_PENDING;\r\n> +\r\n> +\r\n> /* Change system catalog acoordingly */\r\n> values[Anum_pg_subscription_subtwophasestate - 1] =\r\n> - CharGetDatum(opts.twophase ?\r\n> - LOGICALREP_TWOPHASE_STATE_PENDING :\r\n> - LOGICALREP_TWOPHASE_STATE_DISABLED);\r\n> + CharGetDatum(subtwophase);\r\n> replaces[Anum_pg_subscription_subtwophasestate - 1] = true;\r\n> \r\n> Sorry, I don't think that 'subtwophase' change is an improvement --\r\n> IMO the existing ternary code was fine as-is.\r\n> \r\n> I only reported the excessive flag checking in the previous v10-0003\r\n> review because of some extra \"if (!opts.twophase)\" code but that was\r\n> caused by what you called \"wrong git operations.\" and is already fixed\r\n> now.\r\n\r\nOk, the part was reverted.\r\n\r\n> //////////\r\n> v11-0004\r\n> //////////\r\n> \r\n> ======\r\n> src/sgml/catalogs.sgml\r\n> \r\n> 4.1.\r\n> + <row>\r\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> + <structfield>subforcealter</structfield> <type>bool</type>\r\n> + </para>\r\n> + <para>\r\n> + If true, then the <link\r\n> linkend=\"sql-altersubscription\"><command>ALTER\r\n> SUBSCRIPTION</command></link>\r\n> + can disable <literal>two_phase</literal> option, even if there are\r\n> + uncommitted prepared transactions from when\r\n> <literal>two_phase</literal>\r\n> + was enabled\r\n> + </para></entry>\r\n> + </row>\r\n> +\r\n> \r\n> I think this description should be changed to say what it *really*\r\n> does. IMO, the stuff about 'two_phase' is more like a side-effect.\r\n> \r\n> SUGGESTION (this is just pseudo-code. You can add the CREATE\r\n> SUBSCRIPTION 'force_alter' link appropriately)\r\n> \r\n> If true, then the <command>ALTER SUBSCRIPTION</command> command can\r\n> sometimes be forced to proceed instead of giving an error. See\r\n> <link>force_alter</link> parameter for details about when this might\r\n> be useful.\r\n>\r\n\r\nFixed. But One point, the word \"command\" was removed. I checked other parts and\r\nit seemed not to be needed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Fri, 17 May 2024 03:06:34 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi Kuroda-san,\n\nI did not apply these v12* patches, but I have diff'ed all of them\nwith the previous v11* patches and confirmed that all of my previous\nreview comments now seem to be addressed.\n\nI don't have any more comments to make at this time. Thanks!\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 17 May 2024 14:08:15 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear hackers,\r\n\r\nI found that v12 patch set could not be accepted by the cfbot. PSA new version.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Tue, 25 Jun 2024 08:06:10 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "> Dear hackers,\r\n> \r\n> I found that v12 patch set could not be accepted by the cfbot. PSA new version.\r\n\r\nTo make others more trackable, I shared changes just in case. All failures were occurred\r\non the pg_dump code. I added an attribute in pg_subscription and modified pg_dump code,\r\nbut it was wrong. A constructed SQL became incomplete. I.e., in [1]: \r\n\r\n```\r\npg_dump: error: query failed: ERROR: syntax error at or near \".\"\r\nLINE 15: s.subforcealter\r\n ^\r\npg_dump: detail: Query was: SELECT s.tableoid, s.oid, s.subname,\r\n s.subowner,\r\n s.subconninfo, s.subslotname, s.subsynccommit,\r\n s.subpublications,\r\n s.subbinary,\r\n s.substream,\r\n s.subtwophasestate,\r\n s.subdisableonerr,\r\n s.subpasswordrequired,\r\n s.subrunasowner,\r\n s.suborigin,\r\n NULL AS suboriginremotelsn,\r\n false AS subenabled,\r\n s.subfailover\r\n s.subforcealter\r\nFROM pg_subscription s\r\nWHERE s.subdbid = (SELECT oid FROM pg_database\r\n WHERE datname = current_database())\r\n```\r\n\r\nBased on that I just added a comma in 0004 patch.\r\n\r\n[1]: https://cirrus-ci.com/task/6710166165389312\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Tue, 25 Jun 2024 09:11:16 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Mon, Apr 22, 2024 at 2:26 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n ```\n>\n> It succeeds if force_alter is also expressly set. Prepared transactions will be\n> aborted at that time.\n>\n> ```\n> subscriber=# ALTER SUBSCRIPTION sub SET (two_phase = off, force_alter = on);\n> ALTER SUBSCRIPTION\n>\n\nIsn't it better to give a Notice when force_alter option leads to the\nrollback of already prepared transactions?\n\nI have another question on the latest 0001 patch:\n+ /*\n+ * Stop all the subscription workers, just in case.\n+ * Workers may still survive even if the subscription is\n+ * disabled.\n+ */\n+ logicalrep_workers_stop(subid);\n\nIn which case the workers will survive when the subscription is disabled?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 4 Jul 2024 12:25:18 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> >\r\n> > It succeeds if force_alter is also expressly set. Prepared transactions will be\r\n> > aborted at that time.\r\n> >\r\n> > ```\r\n> > subscriber=# ALTER SUBSCRIPTION sub SET (two_phase = off, force_alter =\r\n> on);\r\n> > ALTER SUBSCRIPTION\r\n> >\r\n> \r\n> Isn't it better to give a Notice when force_alter option leads to the\r\n> rollback of already prepared transactions?\r\n\r\nIndeed. I think this can be added for 0003. For now, it says like:\r\n\r\n```\r\npostgres=# ALTER SUBSCRIPTION sub SET (TWO_PHASE = off, FORCE_ALTER = on);\r\nWARNING: requested altering to two_phase = false but there are prepared transactions done by the subscription\r\nDETAIL: Such transactions are being rollbacked.\r\nALTER SUBSCRIPTION\r\n```\r\n\r\n> I have another question on the latest 0001 patch:\r\n> + /*\r\n> + * Stop all the subscription workers, just in case.\r\n> + * Workers may still survive even if the subscription is\r\n> + * disabled.\r\n> + */\r\n> + logicalrep_workers_stop(subid);\r\n> \r\n> In which case the workers will survive when the subscription is disabled?\r\n\r\nI think both normal and tablesync worker can survive, because ALTER SUBSCRIPTION\r\nDISABLE command does not send signal to workers. It just change the system catalog.\r\nlogicalrep_workers_stop() is added to ensure all workers are stopped.\r\n\r\nActually, earlier version (-v3) did not have a mechanism but they sometimes got\r\nassertion failures in maybe_reread_subscription(). This was because the survived\r\nworkers read pg_subscription catalog and failed below assertion:\r\n\r\n```\r\n\t/* two-phase cannot be altered while the worker exists */\r\n\tAssert(newsub->twophasestate == MySubscription->twophasestate);\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Thu, 4 Jul 2024 08:04:47 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Thu, Jul 4, 2024 at 1:34 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > >\n> > > It succeeds if force_alter is also expressly set. Prepared transactions will be\n> > > aborted at that time.\n> > >\n> > > ```\n> > > subscriber=# ALTER SUBSCRIPTION sub SET (two_phase = off, force_alter =\n> > on);\n> > > ALTER SUBSCRIPTION\n> > >\n> >\n> > Isn't it better to give a Notice when force_alter option leads to the\n> > rollback of already prepared transactions?\n>\n> Indeed. I think this can be added for 0003. For now, it says like:\n>\n> ```\n> postgres=# ALTER SUBSCRIPTION sub SET (TWO_PHASE = off, FORCE_ALTER = on);\n> WARNING: requested altering to two_phase = false but there are prepared transactions done by the subscription\n> DETAIL: Such transactions are being rollbacked.\n> ALTER SUBSCRIPTION\n>\n\nIs it possible to get a NOTICE instead of a WARNING?\n\n>\n> > I have another question on the latest 0001 patch:\n> > + /*\n> > + * Stop all the subscription workers, just in case.\n> > + * Workers may still survive even if the subscription is\n> > + * disabled.\n> > + */\n> > + logicalrep_workers_stop(subid);\n> >\n> > In which case the workers will survive when the subscription is disabled?\n>\n> I think both normal and tablesync worker can survive, because ALTER SUBSCRIPTION\n> DISABLE command does not send signal to workers. It just change the system catalog.\n> logicalrep_workers_stop() is added to ensure all workers are stopped.\n>\n> Actually, earlier version (-v3) did not have a mechanism but they sometimes got\n> assertion failures in maybe_reread_subscription(). This was because the survived\n> workers read pg_subscription catalog and failed below assertion:\n>\n> ```\n> /* two-phase cannot be altered while the worker exists */\n> Assert(newsub->twophasestate == MySubscription->twophasestate);\n> ```\n>\n\nBut that is not a good reason for this operation to stop workers\nfirst. Instead, we should prohibit this operation if any worker is\npresent. The reason is that there is always a chance that if any\nworker is alive, it can prepare a new transaction after we have\nchecked for the presence of any prepared transactions.\n\nComments:\n=========\n1.\nThere is no need to do something remarkable regarding\n+ * the \"false\" to \"true\" case; the backend process alters\n+ * subtwophase <funny_char> to LOGICALREP_TWOPHASE_STATE_PENDING once.\n+ * After the subscription is enabled, a new logical\n+ * replication worker requests to change the two_phase\n+ * option of its slot from pending to true when the\n+ * initial data synchronization is done. The code path is\n+ * the same as the case in which two_phase <funny_char> is initially\n+ * set <funny_char> to true.\n\nThe patch has some funny characters in the above comment at the places\nhighlighted by me. It seems you have copied from some editor that has\ninserted such characters.\n\n2.\n/*\n* Do not allow toggling of two_phase option. Doing so could cause\n* missing of transactions and lead to an inconsistent replica.\n* See comments atop worker.c\n*\n* Note: Unsupported twophase indicates that this call originated\n* from AlterSubscription.\n*/\nif (!IsSet(supported_opts, SUBOPT_TWOPHASE_COMMIT))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\nerrmsg(\"unrecognized subscription parameter: \\\"%s\\\"\", defel->defname)));\n\nThis part of the code must either be removed or converted to an assert.\n\n3. 
The tests added in 099_twophase_added.pl should be part of 021_twophase.pl\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 4 Jul 2024 17:01:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThanks for giving comments. I hope all comments have been addressed.\r\nPSA new version.\r\n\r\n> > Actually, earlier version (-v3) did not have a mechanism but they sometimes got\r\n> > assertion failures in maybe_reread_subscription(). This was because the\r\n> survived\r\n> > workers read pg_subscription catalog and failed below assertion:\r\n> >\r\n> > ```\r\n> > /* two-phase cannot be altered while the worker exists */\r\n> > Assert(newsub->twophasestate ==\r\n> MySubscription->twophasestate);\r\n> > ```\r\n> >\r\n> \r\n> But that is not a good reason for this operation to stop workers\r\n> first. Instead, we should prohibit this operation if any worker is\r\n> present. The reason is that there is always a chance that if any\r\n> worker is alive, it can prepare a new transaction after we have\r\n> checked for the presence of any prepared transactions.\r\n\r\nI used the function because it internally waits until all workers are exited.\r\nBut OK, I modified like you suggested (logicalrep_workers_find() is used).\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Fri, 5 Jul 2024 05:39:05 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Amit,\r\n\r\nSorry, I forgot to say one content.\r\n\r\n> > But that is not a good reason for this operation to stop workers\r\n> > first. Instead, we should prohibit this operation if any worker is\r\n> > present. The reason is that there is always a chance that if any\r\n> > worker is alive, it can prepare a new transaction after we have\r\n> > checked for the presence of any prepared transactions.\r\n> \r\n> I used the function because it internally waits until all workers are exited.\r\n> But OK, I modified like you suggested (logicalrep_workers_find() is used).\r\n\r\nBased on the reason, after the above modification, test codes prior to v14\r\nsometimes failed because backend could execute ALTER SUBSCRIPTION ... SET (two_phase).\r\nSo I added lines in test codes to poll until workers are exited, e.g.,\r\n\r\n```\r\n+# Alter subscription two_phase to false\r\n+$node_subscriber->safe_psql('postgres',\r\n+ \"ALTER SUBSCRIPTION tap_sub_copy DISABLE;\");\r\n+$node_subscriber->poll_query_until('postgres',\r\n+ \"SELECT count(*) = 0 FROM pg_stat_activity WHERE backend_type = 'logical replication worker'\"\r\n+);\r\n+$node_subscriber->safe_psql(\r\n+ 'postgres', \"\r\n+ ALTER SUBSCRIPTION tap_sub_copy SET (two_phase = false);\r\n+ ALTER SUBSCRIPTION tap_sub_copy ENABLE;\");\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Fri, 5 Jul 2024 09:08:17 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
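The disable / wait / alter / enable sequence described in the message above is easy to get wrong outside the TAP framework, so here is a minimal libpq sketch of the same steps. It is only an illustration: the connection string, the subscription name tap_sub_copy, and the prefix match used to spot logical replication workers in pg_stat_activity are assumptions made for the sketch, not code taken from the patch.

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
#include <unistd.h>
#include <libpq-fe.h>

static void
run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        exit(1);
    }
    PQclear(res);
}

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* 1. Disable the subscription so that its workers start to exit. */
    run(conn, "ALTER SUBSCRIPTION tap_sub_copy DISABLE");

    /*
     * 2. Poll until no apply/tablesync worker is left.  The exact
     * backend_type label differs per worker kind, so match the common
     * prefix; the trailing "worker" keeps the always-running launcher
     * out of the count.
     */
    for (;;)
    {
        PGresult *res = PQexec(conn,
                               "SELECT count(*) FROM pg_stat_activity "
                               "WHERE backend_type LIKE 'logical replication%worker'");
        bool done = (PQresultStatus(res) == PGRES_TUPLES_OK &&
                     strcmp(PQgetvalue(res, 0, 0), "0") == 0);

        PQclear(res);
        if (done)
            break;
        sleep(1);
    }

    /* 3. Only now toggle two_phase, then re-enable the subscription. */
    run(conn, "ALTER SUBSCRIPTION tap_sub_copy SET (two_phase = false)");
    run(conn, "ALTER SUBSCRIPTION tap_sub_copy ENABLE");

    PQfinish(conn);
    return 0;
}
```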
{
"msg_contents": "Hi Kuroda-san,\n\nThank you very much for the patch. In general, it seem to work well for me, but there seems to be a memory access problem in libpqrcv_alter_slot -> quote_identifier in case of NULL slot_name. It happens, if the two_phase option is altered on a subscription without slot. I think, a simple check for NULL may fix the problem. I guess, the same problem may be for failover option.\n\nAnother possible problem is related to my use case. I haven't reproduced this case, just some thoughts. I guess, when two_phase is ON, the PREPARE statement may be truncated from the WAL at checkpoint, but COMMIT PREPARED is still kept in the WAL. On catchup, I would ask the master to send transactions from some restart LSN. I would like to get all such transactions competely, with theirs bodies, not only COMMIT PREPARED messages. One of the solutions is to have an option for the slot to keep the WAL like with two_phase = OFF independently on its two_phase option. It is just an idea.\n\nWith best regards,\nVitaly\n\nHi Kuroda-san,Thank you very much for the patch. In general, it seem to work well for me, but there seems to be a memory access problem in libpqrcv_alter_slot -> quote_identifier in case of NULL slot_name. It happens, if the two_phase option is altered on a subscription without slot. I think, a simple check for NULL may fix the problem. I guess, the same problem may be for failover option.Another possible problem is related to my use case. I haven't reproduced this case, just some thoughts. I guess, when two_phase is ON, the PREPARE statement may be truncated from the WAL at checkpoint, but COMMIT PREPARED is still kept in the WAL. On catchup, I would ask the master to send transactions from some restart LSN. I would like to get all such transactions competely, with theirs bodies, not only COMMIT PREPARED messages. One of the solutions is to have an option for the slot to keep the WAL like with two_phase = OFF independently on its two_phase option. It is just an idea.With best regards,Vitaly",
"msg_date": "Fri, 05 Jul 2024 17:06:19 +0300",
"msg_from": "\"Vitaly Davydov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?utf-8?q?RE=3A?= Slow catchup of 2PC (twophase) transactions on\n replica in\n LR"
},
{
"msg_contents": "Dear Vitaly,\r\n\r\nThanks for giving comments! PSA new version patch.\r\n\r\n> Thank you very much for the patch. In general, it seem to work well for me, but\r\n> there seems to be a memory access problem in libpqrcv_alter_slot ->\r\n> quote_identifier in case of NULL slot_name. It happens, if the two_phase option\r\n> is altered on a subscription without slot. I think, a simple check for NULL may\r\n> fix the problem. I guess, the same problem may be for failover option.\r\n\r\nYou are right. Regarding the failover option, it requires that slot_name is valid.\r\nIn case of two_phase, we must connect to the publisher only when altering \"true\"\r\nto \"false\", slot_name must be there only at that time. Updated.\r\n\r\n> Another possible problem is related to my use case. I haven't reproduced this\r\n> case, just some thoughts. I guess, when two_phase is ON, the PREPARE statement\r\n> may be truncated from the WAL at checkpoint, but COMMIT PREPARED is still kept\r\n> in the WAL. On catchup, I would ask the master to send transactions from some\r\n> restart LSN. I would like to get all such transactions competely, with theirs\r\n> bodies, not only COMMIT PREPARED messages.\r\n\r\nI don't think it is a real issue. WALs for prepared transactions will retain\r\nuntil they are committed/aborted.\r\nWhen the two_phase is on and transactions are PREPAREd, they will not be\r\ncleaned up from the memory (See ReorderBufferProcessTXN()). Then, RUNNING_XACT\r\nrecord leads to update the restart_lsn of the slot but it cannot be move forward\r\nbecause ReorderBufferGetOldestTXN() returns the prepared transaction (See\r\nSnapBuildProcessRunningXacts()). restart_decoding_lsn of each transaction, which\r\nis a candidate of restart_lsn of the slot. is always behind the startpoint of\r\nits txn.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/global/",
"msg_date": "Mon, 8 Jul 2024 07:04:40 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Mon, Jul 8, 2024 at 12:34 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > Another possible problem is related to my use case. I haven't reproduced this\n> > case, just some thoughts. I guess, when two_phase is ON, the PREPARE statement\n> > may be truncated from the WAL at checkpoint, but COMMIT PREPARED is still kept\n> > in the WAL. On catchup, I would ask the master to send transactions from some\n> > restart LSN. I would like to get all such transactions competely, with theirs\n> > bodies, not only COMMIT PREPARED messages.\n>\n> I don't think it is a real issue. WALs for prepared transactions will retain\n> until they are committed/aborted.\n> When the two_phase is on and transactions are PREPAREd, they will not be\n> cleaned up from the memory (See ReorderBufferProcessTXN()). Then, RUNNING_XACT\n> record leads to update the restart_lsn of the slot but it cannot be move forward\n> because ReorderBufferGetOldestTXN() returns the prepared transaction (See\n> SnapBuildProcessRunningXacts()). restart_decoding_lsn of each transaction, which\n> is a candidate of restart_lsn of the slot. is always behind the startpoint of\n> its txn.\n>\n\nI see that in 0003/0004, the patch first aborts pending prepared\ntransactions, update's catalog, and then change slot's property via\nwalrcv_alter_slot. What if there is any ERROR (say the remote node is\nnot reachable or there is an error while updating the catalog) after\nwe abort the pending prepared transaction? Won't we end up with lost\nprepared transactions in such a case?\n\nFew other comments:\n=================\nThe code to handle SUBOPT_TWOPHASE_COMMIT should be after failover\noption handling for the sake of code symmetry. Also, the checks should\nbe in same order like first for slot_name, then enabled, then for\nPreventInTransactionBlock(), after those, we can have other checks for\ntwo_phase. If possible, we can move common checks in both failover and\ntwo_phase options into a common function.\n\nWhat should be the behavior if one tries to set slot_name to NONE and\nalso tries to toggle two_pahse option? I feel both options together\ndon't makes sense because there is no use in changing two_phase for\nsome slot which we are disassociating the subscription from. The same\ncould be said for the failover option as well, so if we agree with\nsome different behavior here, we can follow the same for failover\noption as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 8 Jul 2024 17:25:01 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Mon, Jul 8, 2024 at 5:25 PM Amit Kapila <[email protected]> wrote:\n>\n>\n> I see that in 0003/0004, the patch first aborts pending prepared\n> transactions, update's catalog, and then change slot's property via\n> walrcv_alter_slot. What if there is any ERROR (say the remote node is\n> not reachable or there is an error while updating the catalog) after\n> we abort the pending prepared transaction? Won't we end up with lost\n> prepared transactions in such a case?\n>\n\nConsidering the above is a problem the other possibility I thought of\nis to change the order like abort prepared xacts after slot update.\nThat is also dangerous because any failure while aborting could make a\nslot change permanent whereas the subscription option will still be\nold value. Now, because the slot's two_phase property is off, at\ncommit, it can resend the entire transaction which can create a\nproblem because the corresponding prepared transaction will already be\npresent.\n\nOne more thing to think about in this regard is what if we fail after\naborting a few prepared transactions and not all?\n\nAt this stage, I am not able to think of a good solution for these\nproblems. So, if we don't get a solution for these, we can document\nthat users can first manually abort prepared transactions and then\nswitch off the two_phase option using Alter Subscription command.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Jul 2024 10:35:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThanks for giving comments! Here I wanted to reply one of comments.\r\n\r\n> What should be the behavior if one tries to set slot_name to NONE and\r\n> also tries to toggle two_pahse option?\r\n\r\nYou mentioned like below case, right?\r\n\r\n```\r\nALTER SUBSCRIPTION sub SET (two_phase = false, slot_name = NONE);\r\n```\r\n\r\nFor now, we accept such a command. The replication slot which previously specified\r\nis altered. As you know, this behavior is same as failover's one.\r\n\r\n> I feel both options together\r\n> don't makes sense because there is no use in changing two_phase for\r\n> some slot which we are disassociating the subscription from. The same\r\n> could be said for the failover option as well, so if we agree with\r\n> some different behavior here, we can follow the same for failover\r\n> option as well.\r\n\r\nWhile considering more, I started to think the combination of slot_name and\r\ntwo_phase should not be allowed. Even if both of them are altered at the same time,\r\nthe *old* slot will be modified by the backend process. I feel this inconsistency\r\nshould not be happened. In next patch, this check will be added. I also think\r\nfailover option should be also fixed, but not touched here. Let's make the scope\r\nnarrower.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Tue, 9 Jul 2024 11:42:01 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> > I see that in 0003/0004, the patch first aborts pending prepared\r\n> > transactions, update's catalog, and then change slot's property via\r\n> > walrcv_alter_slot. What if there is any ERROR (say the remote node is\r\n> > not reachable or there is an error while updating the catalog) after\r\n> > we abort the pending prepared transaction? Won't we end up with lost\r\n> > prepared transactions in such a case?\r\n\r\nYes, v16 could happen the case, becasue FinishPreparedTransaction() itself is not\r\nthe transactional operation. In below example, the subscription was altered after\r\nstopping the publisher. You could see that prepared transaction were rollbacked.\r\n\r\n```\r\nsubscriber=# SELECT gid FROM pg_prepared_xacts ;\r\n gid \r\n------------------\r\n pg_gid_16390_741\r\n pg_gid_16390_742\r\n(2 rows)\r\nsubscriber=# ALTER SUBSCRIPTION sub SET (TWO_PHASE = off, FORCE_ALTER = on);\r\nNOTICE: requested altering to two_phase = false but there are prepared transactions done by the subscription\r\nDETAIL: Such transactions are being rollbacked.\r\nERROR: could not connect to the publisher: connection to server on socket \"/tmp/.s.PGSQL.5431\" failed: No such file or directory\r\n Is the server running locally and accepting connections on that socket?\r\nsubscriber=# SELECT gid FROM pg_prepared_xacts ;\r\n gid \r\n-----\r\n(0 rows)\r\n```\r\n\r\n> Considering the above is a problem the other possibility I thought of\r\n> is to change the order like abort prepared xacts after slot update.\r\n> That is also dangerous because any failure while aborting could make a\r\n> slot change permanent whereas the subscription option will still be\r\n> old value. Now, because the slot's two_phase property is off, at\r\n> commit, it can resend the entire transaction which can create a\r\n> problem because the corresponding prepared transaction will already be\r\n> present.\r\n\r\nI feel it is rare case but still possible. E.g., race condition by TwoPhaseStateLock\r\nlocking, oom, disk failures and so on.\r\nAnd since prepared transactions hold locks, duplicated arrival of transactions\r\nmay cause table-lock failures. \r\n\r\n> One more thing to think about in this regard is what if we fail after\r\n> aborting a few prepared transactions and not all?\r\n\r\nIt's bit hard to emulate, but I imagine part of prepared transactions remains.\r\n\r\n> At this stage, I am not able to think of a good solution for these\r\n> problems. So, if we don't get a solution for these, we can document\r\n> that users can first manually abort prepared transactions and then\r\n> switch off the two_phase option using Alter Subscription command.\r\n\r\nI'm also not sure what should we do. Ideally, it may be happy to make\r\nFinishPreparedTransaction() transactional, but not sure it is realistic. So\r\nchanges for aborting prepared txns are removed, documentation patch was added\r\ninstead.\r\n\r\nHere is a summary of updates for patches. Dropping-prepared-transaction patch\r\nwas removed for now.\r\n\r\n0001 - Codes for SUBOPT_TWOPHASE_COMMIT are moved per requirement [1].\r\n Also, checks for failover and two_phase are unified into one function.\r\n0002 - updated accordingly. 
An argument for the check function is added.\r\n0003 - this contains documentation changes required in [2].\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1%2BFRrL_fLWLsWQGHZRESg39ixzDX_S9hU8D7aFtU%2Ba8uQ%40mail.gmail.com\r\n[2]: https://www.postgresql.org/message-id/CAA4eK1Khy_YWFoQ1HOF_tGtiixD8YoTg86coX1-ckxt8vK3U%3DQ%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Tue, 9 Jul 2024 11:49:38 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "> 0001 - Codes for SUBOPT_TWOPHASE_COMMIT are moved per requirement [1].\r\n> Also, checks for failover and two_phase are unified into one function.\r\n> 0002 - updated accordingly. An argument for the check function is added.\r\n> 0003 - this contains documentation changes required in [2].\r\n\r\nPrevious patch set could not be accepted due to the initialization miss.\r\nPSA new version. \r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Tue, 9 Jul 2024 12:52:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Tuesday, July 9, 2024 8:53 PM Hayato Kuroda (Fujitsu) <[email protected]> wrote:\r\n> \r\n> > 0001 - Codes for SUBOPT_TWOPHASE_COMMIT are moved per requirement\r\n> [1].\r\n> > Also, checks for failover and two_phase are unified into one function.\r\n> > 0002 - updated accordingly. An argument for the check function is added.\r\n> > 0003 - this contains documentation changes required in [2].\r\n> \r\n> Previous patch set could not be accepted due to the initialization miss.\r\n> PSA new version.\r\n\r\nThanks for the patches ! I initially reviewed the 0001 and found that\r\nthe implementation of ALTER_REPLICATION_SLOT has a issue, e.g.\r\nit doesn't handle the case when there is only one specified option\r\nin the replication command:\r\n\r\nALTER_REPLICATION_SLOT slot (two_phase)\r\n\r\nIn this case, it always overwrites the un-specified option(failover) to false even\r\nwhen the failover was set to true. I tried to make a small fix which is on\r\ntop of 0001 (please see the attachment).\r\n\r\nI also added the doc of the new two_phase option of the replication command\r\nand a missing period of errhint in the topup patch.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Sat, 13 Jul 2024 10:48:55 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
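A stand-alone way to picture the fix described above is to track, per option, whether it appeared in the command at all, and to leave the slot's current value alone otherwise, so that ALTER_REPLICATION_SLOT slot (two_phase) does not silently reset failover. The struct and function names below are invented for the sketch; this is not the walsender code.

```
#include <stdio.h>
#include <stdbool.h>

typedef struct SlotState
{
    bool failover;
    bool two_phase;
} SlotState;

typedef struct AlterOptions
{
    bool failover;
    bool failover_given;   /* was failover listed in the command? */
    bool two_phase;
    bool two_phase_given;  /* was two_phase listed in the command? */
} AlterOptions;

/* Only overwrite the options that were explicitly specified. */
static void
alter_slot(SlotState *slot, const AlterOptions *opts)
{
    if (opts->failover_given)
        slot->failover = opts->failover;
    if (opts->two_phase_given)
        slot->two_phase = opts->two_phase;
}

int
main(void)
{
    SlotState    slot = {.failover = true, .two_phase = false};
    AlterOptions cmd = {.two_phase = true, .two_phase_given = true};

    alter_slot(&slot, &cmd);
    /* failover stays true because it was not mentioned in the command */
    printf("failover=%d two_phase=%d\n", slot.failover, slot.two_phase);
    return 0;
}
```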
{
"msg_contents": "On Tue, Jul 9, 2024 at 6:23 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Previous patch set could not be accepted due to the initialization miss.\n> PSA new version.\n>\n\nFew minor comments:\n=================\n0001-patch\n1.\n.git/rebase-apply/patch:253: space before tab in indent.\n\nerrmsg(\"slot_name and two_phase cannot be altered at the same\ntime\")));\nwarning: 1 line adds whitespace errors.\n\nWhite space issue as shown by git am command.\n\n2.\n+/*\n+ * Common checks for altering failover and two_phase option\n+ */\n+static void\n+CommonChecksForFailoverAndTwophase(Subscription *sub, const char *option,\n+ bool isTopLevel)\n\nThe function name looks odd to me. How about something along the lines\nof CheckAlterSubOption()?\n\n3.\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot disable two_phase when uncommitted prepared\ntransactions present\"),\n\nWe can slightly change the above error message to: \"cannot disable\ntwo_phase when prepared transactions are present\".\n\n0003-patch\nAlter the altering from\n+ <literal>true</literal> to <literal>false</literal>, the publisher will\n+ replicate transactions again when they are committed.\n\nThe beginning of the sentence sounds awkward.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 15 Jul 2024 17:09:10 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Amit, Hou,\r\n\r\nThanks for giving comments! PSA new versions.\r\nWhat's new:\r\n\r\n0001: included Hou's patch [1] not to overwrite slot options.\r\n Some other comments were also addressed.\r\n0002: not so changed, just rebased.\r\n0003: Typo was fixed, s/Alter/After/.\r\n\r\n[1]: https://www.postgresql.org/message-id/OS3PR01MB57184E0995521300AC06CB4B94A72%40OS3PR01MB5718.jpnprd01.prod.outlook.com \r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 16 Jul 2024 05:17:06 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi, here are some review comments for patch v18-0001.\n\n======\ndoc/src/sgml/protocol.sgml\n\nnitpick - Although it is no fault of your patch, IMO it would be nicer for\nthe TWO_PHASE description (of CREATE REPLICATION SLOT) to also be in the\nsame consistent order as what you have (e.g. below FAILOVER). So I moved it.\n\n======\nsrc/backend/access/transam/twophase.c\n\nLookupGXactBySubid:\nnitpick - add a blank line before return\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\nCommonChecksForFailoverAndTwophase:\nnitpick - added Assert for the generic-looking \"option\" parameter name\nnitpick - modified comment about transaction block\n\n~~~\n\n1. AlterSubscription\n+ * Workers may still survive even if the subscription has\n+ * been disabled. They may read the pg_subscription\n+ * catalog and detect that the twophase parameter is\n+ * updated, which causes the assertion failure. Ensure\n+ * workers have already been exited to avoid it.\n\n\"which causes the assertion failure\" -- what assertion failure is that? The\ncomment is not very clear.\n\n~\n\nnitpick - in comment /twophase/two_phase/\nnitpick - typo /acoordingly/accordingly/\n\n======\nsrc/backend/replication/logical/launcher.c\n\nlogicalrep_workers_find:\nnitpick - /require_lock/acquire_lock/\nnitpick - take the Assert out of the else.\n\n======\nsrc/backend/replication/slot.c\n\nnitpick - refactor the code to check (failover) only one time. See the\nnitpicks attachment.\n\n~\n\n2. ParseAlterReplSlotOptions\n\nnitpick -- IMO the ParseAlterReplSlotOptions(). function does more harm\nthan good here by adding the unnecessary complexity of messing around with\nmultiple parameters that are passed-by-reference. All this would be simpler\nif it was just coded inline in the AlterReplicationSlot() function, which\nis the only caller. I've refactored all this to demonstrate (see nitpicks\nattachment)\n\n======\nsrc/include/replication/worker_internal.h\n\nnitpick - /require_lock/acquire_lock/\n\n======\nsrc/test/regress/sql/subscription.sql\n\nnitpick - tweak comments\n\n======\nsrc/test/subscription/t/021_twophase.pl\n\nnitpick - change comment style to indicate each test part better.\n\n======\n99.\nPlease also see the attached diffs patch which implements any nitpicks\nmentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 16 Jul 2024 20:02:21 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Tuesday, July 16, 2024 1:17 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote\r\n> \r\n> Dear Amit, Hou,\r\n> \r\n> Thanks for giving comments! PSA new versions.\r\n> What's new:\r\n> \r\n> 0001: included Hou's patch [1] not to overwrite slot options.\r\n> Some other comments were also addressed.\r\n\r\nThanks for the patch!\r\n\r\nOne more issue I found is that:\r\n\r\n+IsTwoPhaseTransactionGidForSubid(Oid subid, char *gid)\r\n+{\r\n+\tint\t\t\tret;\r\n+\tOid\t\t\tsubid_written;\r\n+\tTransactionId xid;\r\n+\r\n+\tret = sscanf(gid, \"pg_gid_%u_%u\", &subid_written, &xid);\r\n+\r\n+\treturn (ret == 2 && subid == subid_written);\r\n\r\nI think it's not correct to use sscanf here, because it will return the same value\r\neven if the gid is \"pg_gid_123_123_123_123...\" which isn't a\r\ngid created by the apply worker. I think we should use TwoPhaseTransactionGid\r\nto build the gid string and compare it with each existing gid(strcmp).\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Wed, 17 Jul 2024 00:58:51 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
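To see why the plain sscanf check is too permissive, here is a small stand-alone comparison of the two approaches on the gid format discussed in this thread (pg_gid_<subid>_<xid>). The helper names, the fixed-size buffer, and the example oid/xid values are assumptions of the sketch, not the patch's code; the point is only that rebuild-and-compare rejects gids with trailing junk while sscanf alone does not.

```
#include <stdio.h>
#include <string.h>
#include <stdint.h>

typedef uint32_t Oid;
typedef uint32_t TransactionId;

/* Build a gid in the format discussed above: pg_gid_<subid>_<xid>. */
static void
build_gid(Oid subid, TransactionId xid, char *gid, size_t szgid)
{
    snprintf(gid, szgid, "pg_gid_%u_%u", subid, xid);
}

/* sscanf-only check: also accepts gids with trailing junk. */
static int
check_with_sscanf(Oid subid, const char *gid)
{
    Oid           subid_written;
    TransactionId xid;

    return (sscanf(gid, "pg_gid_%u_%u", &subid_written, &xid) == 2 &&
            subid == subid_written);
}

/* Rebuild-and-compare check: only accepts exactly formatted gids. */
static int
check_with_strcmp(Oid subid, const char *gid)
{
    Oid           subid_written;
    TransactionId xid;
    char          rebuilt[64];

    if (sscanf(gid, "pg_gid_%u_%u", &subid_written, &xid) != 2)
        return 0;
    build_gid(subid_written, xid, rebuilt, sizeof(rebuilt));
    return (subid == subid_written && strcmp(gid, rebuilt) == 0);
}

int
main(void)
{
    printf("%d\n", check_with_sscanf(16390, "pg_gid_16390_741"));     /* 1 */
    printf("%d\n", check_with_sscanf(16390, "pg_gid_16390_741_999")); /* 1: false positive */
    printf("%d\n", check_with_strcmp(16390, "pg_gid_16390_741"));     /* 1 */
    printf("%d\n", check_with_strcmp(16390, "pg_gid_16390_741_999")); /* 0 */
    return 0;
}
```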
{
"msg_contents": "Here are some review comments for patch v18-0002.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n1. CheckAlterSubOption\n\n1a.\nIt's not obvious why we are only checking the 'slot name' when\nneeds_update==true, but OTOH is always checking the 'enabled' state.\n\n~\n\n1b.\nParam 'needs_update' is a vague name. It needs more explanatory comments or\na better name. e.g. First impression was \"Why are we calling 'Alter'\nfunction if needs_update is false?\". I know it encapsulates some common\ncode, but if special cases cause the logic to be more confusing then that\ncost may outweigh the benefit of this function.\n\n~\n\n1c.\nIf the error checks can be moved to be done up-front, then all the\n'needs_update' can be combined. Avoiding multiple checks to 'needs_update'\nwill make this function simpler.\n\n~~~\n\nAlterSubscription:\nnitpick - typo /needs/need/\nnitpick - typo /wo_phase/two_phase/\nnitpick - The comment wording \"the later part...\", was confusing. I've\nreworded the whole comment. But this belongs in patch 0001.\n\n======\nsrc/test/subscription/t/021_twophase.pl\n\nnitpick - Use the same \"###############################\" comment style as\nin patch 0001 to indicate each main TEST scenario.\n\n======\n99.\nPlease refer to the diffs attachment patch, which implements any nitpicks\nmentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 17 Jul 2024 13:31:14 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi, here is my review of the v18-0003 patch.\n\n======\nsgml/ref/alter_subscription.sgml\n\nnitpick - some minor tweaks to the documentation text. I also added a link\nback to the two_phase parameter. Please see the attached diffs file.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 17 Jul 2024 14:28:20 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Hou, Peter,\r\n\r\nThanks for giving comments! PSA new version.\r\nAlmost comments were addressed.\r\nWhat's new:\r\n0001 - IsTwoPhaseTransactionGidForSubid() was updated per comment from Hou-san [1].\r\n Some nitpicks were accepted.\r\n0002 - An argument in CheckAlterSubOption() was renamed to \" slot_needs_update \"\r\n Some nitpicks were accepted.\r\n0003 - Some nitpicks were accepted.\r\n\r\nBelow part contains the reason why I rejected some comments.\r\n\r\n> CommonChecksForFailoverAndTwophase:\r\n> nitpick - added Assert for the generic-looking \"option\" parameter name\r\n\r\nThe style looks strange for me, using multiple strcmp() is more straightforward.\r\nAdded like that.\r\n\r\n> 1c.\r\n> If the error checks can be moved to be done up-front, then all the 'needs_update'\r\n> can be combined. Avoiding multiple checks to 'needs_update' will make this function simpler.\r\n\r\nThis style was needed to preserve error condition for failover option. Not changed.\r\n\r\n[1]: https://www.postgresql.org/message-id/OS3PR01MB571834FBD3E6D3804484038F94A32%40OS3PR01MB5718.jpnprd01.prod.outlook.com\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 17 Jul 2024 05:13:35 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi, here are my review comments for v19-0001.\n\n======\ndoc/src/sgml/protocol.sgml\n\nnitpick - Now there is >1 option. /The following option is supported:/The\nfollowing options are supported:/\n\n======\nsrc/backend/access/transam/twophase.c\n\nTwoPhaseTransactionGid:\nnitpick - renamed parameter /gid/gid_res/ to emphasize that this is\nreturned by reference\n\n~~~\n\n1.\nIsTwoPhaseTransactionGidForSubid\n+ /* Construct the format GID based on the got xid */\n+ TwoPhaseTransactionGid(subid, xid, gid_generated, sizeof(gid));\n\nI think the wrong size is being passed here. It should be the buffer size\n-- e.g. sizeof(gid_generated).\n\n~\n\nnitpick - renamed a couple of vars for readability\nnitpick - expanded some comments.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n2. AlterSubscription\n+ /*\n+ * slot_name and two_phase cannot be altered\n+ * simultaneously. The latter part refers to the pre-set\n+ * slot name and tries to modify the slot option, so\n+ * changing both does not make sense.\n+ */\n\nI had given a v18-0002 nitpick suggestion to re-word all of this comment.\nBut, as I wrote before [1], that fix belongs here in patch 0001 where the\ncomment was first added.\n\n======\n[1]\nhttps://www.postgresql.org/message-id/CAHut%2BPsqMRS3dcijo5jsTSbgV1-9So-YBC7PH7xg0%2BZ8hA7fDQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 17 Jul 2024 17:53:28 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi, here are my review comments for patch v19-0002.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\nCheckAlterSubOption:\nnitpick - tweak some comment wording\n\n~\n\nOn Wed, Jul 17, 2024 at 3:13 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > 1c.\n> > If the error checks can be moved to be done up-front, then all the 'needs_update'\n> > can be combined. Avoiding multiple checks to 'needs_update' will make this function simpler.\n>\n> This style was needed to preserve error condition for failover option. Not changed.\n>\n\nnitpick - Hmm. I think you might be trying to preserve the ordering of\nerrors when that order is of no consequence. AFAICT which error comes\nfirst here is neither documented nor regression tested. e.g.\nreordering them gives no problem for testing, but OTOH reordering them\ndoes simplify the code. Anyway, I have modified the code in my\nattached nitpicks diffs to demonstrate this suggestion in case you\nchange your mind.\n\n~~~\n\nAlterSubscription:\nnitpick - let's keep all the variables called 'update_xxx' together.\nnitpick - comment typo /needs/need/\nnitpick - tweak some comment wording\n\n======\nsrc/test/subscription/t/021_twophase.pl\n\nnitpick - didn't quite understand the \"Since we are...\" comment. I\nreworded it according to what I thought was the intention.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 18 Jul 2024 09:24:29 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi Kuroda-San, here are some review comment for patch v19-00001\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\nThe previous patches have common failover/two_phase code checking for\n\"Do not allow changing the option if the subscription is enabled\", but\nit seems the docs were mentioning that only for \"two_phase\" and not\nfor \"failover\".\n\nI'm not 100% sure if mentioning about disabled was necessary, but\ncertainly it should be all-or-nothing, not just saying it for one of\nthe parameters. Anyway, I chose to add the missing info. Please see\nthe attached nitpicks diff.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Thu, 18 Jul 2024 10:09:05 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for giving comments! PSA new version.\r\nI think most of comments were addressed, and I ran pgindent/pgperltidy again.\r\n\r\nRegarding the CheckAlterSubOption(), the ordering is still preserved\r\nbecause I preferred to keep some specs. But I can agree that yours\r\nmake codes simpler.\r\n\r\nBTW, I started to think patches can be merged in future versions because\r\nthey must be included at once and codes are not so long. Thought?\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 18 Jul 2024 02:10:47 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Thursday, July 18, 2024 10:11 AM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\n> \n> Dear Peter,\n> \n> Thanks for giving comments! PSA new version.\n\nI did a few more tests and analysis and didn't find issues. Just share the\ncases I tested:\n\n1. After manually rolling back xacts for two_pc and switch two_pc option from\n true to false, does the prepared transaction again get replicated again when\n COMMIT PREPARED happens.\n\nIt work as expected in this case. E.g. the transaction will be sent to\nsubscriber after disabling two_pc.\n\nAnd I think there wouldn't be race conditions between the ALTER command\nand apply worker because user needs to disable the subscription(the apply\nworker will stop) before altering the two_phase the option.\n \nAnd the WALs for the prepared transaction is retained until the COMMIT\nPREPARED, because we don't advance the slot's restart_lsn over the ongoing\ntransactions(e.g. the prepared transaction in this case):\n \nSnapBuildProcessRunningXacts\n...\n txn = ReorderBufferGetOldestTXN(builder->reorder);\n ...\n /*\n * oldest ongoing txn might have started when we didn't yet serialize\n * anything because we hadn't reached a consistent state yet.\n */\n if (txn != NULL && txn->restart_decoding_lsn != InvalidXLogRecPtr)\n LogicalIncreaseRestartDecodingForSlot(lsn, txn->restart_decoding_lsn);\n\nSo, the data of the prepared transaction is safe.\n\n2. Test when prepare is already processed but we alter the option false to\n true.\n\nThis case works as expected as well e.g. the whole transaction will be sent to the\nsubscriber on COMMIT PREPARE using two_pc flow:\n\n\"begin prepare\" -> \"txn data\" -> \"prepare\" -> \"commit prepare\"\n\nDue to the same reason in case 1, there is no concurrency issue and the\ndata of the transaction will be retained until COMMIT PREPARED.\n\nBest Regards,\nHou zj\n\n\n\n",
"msg_date": "Thu, 18 Jul 2024 03:22:18 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
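The WAL-retention argument in the message above can be pictured with a toy model: the slot's restart position can only advance to the start of the oldest transaction that decoding still has in progress, so an open prepared transaction pins it until its COMMIT PREPARED is processed. The types and function below are purely illustrative and are not the reorderbuffer/snapbuild code.

```
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

typedef uint64_t XLogRecPtr;

typedef struct Txn
{
    XLogRecPtr restart_decoding_lsn; /* where decoding of this txn starts */
    bool       finished;             /* committed/aborted and cleaned up? */
} Txn;

/* The restart candidate is capped by the oldest unfinished transaction. */
static XLogRecPtr
restart_lsn_candidate(const Txn *txns, size_t ntxns, XLogRecPtr current_lsn)
{
    XLogRecPtr candidate = current_lsn;

    for (size_t i = 0; i < ntxns; i++)
    {
        if (!txns[i].finished && txns[i].restart_decoding_lsn < candidate)
            candidate = txns[i].restart_decoding_lsn;
    }
    return candidate;
}

int
main(void)
{
    /* one open prepared txn starting at LSN 100, decoding has reached 500 */
    Txn txns[] = {{100, false}, {200, true}};

    printf("%llu\n",
           (unsigned long long) restart_lsn_candidate(txns, 2, 500)); /* 100 */
    return 0;
}
```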
{
"msg_contents": "On Thu, Jul 18, 2024 at 7:40 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Regarding the CheckAlterSubOption(), the ordering is still preserved\n> because I preferred to keep some specs. But I can agree that yours\n> make codes simpler.\n>\n\nIt is better to simplify the code in this case. I have taken care of\nthis in the attached.\n\n> BTW, I started to think patches can be merged in future versions because\n> they must be included at once and codes are not so long. Thought?\n>\n\nI agree and have done that in the attached. I have made some\nadditional changes: (a) removed the unrelated change of two_phase in\nprotocol.sgml, (b) tried to make the two_phase change before failover\noption wherever it makes sense to keep the code consistent, (c)\nchanged/added comments and doc changes at various places.\n\nI'll continue my review and testing of the patch but I thought of\nsharing what I have done till now.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 18 Jul 2024 17:12:46 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 9:42 PM Amit Kapila <[email protected]> wrote:\n>\n...\n> I agree and have done that in the attached. I have made some\n> additional changes: (a) removed the unrelated change of two_phase in\n> protocol.sgml, (b) tried to make the two_phase change before failover\n> option wherever it makes sense to keep the code consistent, (c)\n> changed/added comments and doc changes at various places.\n>\n> I'll continue my review and testing of the patch but I thought of\n> sharing what I have done till now.\n>\n\nHere some minor comments for patch v21\n\n======\nYou wrote \"tried to make the two_phase change before failover option\nwherever it makes sense to keep the code consistent\". But, still\nfailover is coded first in lots of places:\n- libpqrcv_alter_slot\n- ReplicationSlotAlter\n- AlterReplicationSlot\netc.\n\nQ. Why not change those ones?\n\n======\nsrc/backend/access/transam/twophase.c\n\nIsTwoPhaseTransactionGidForSubid:\nnitpick - nicer to rename the temporary gid variable: /gid_generated/gid_tmp/\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\nCheckAlterSubOption:\nnitpick = function comment period/plural.\nnitpick - typo /Samilar/Similar/\n\n======\nsrc/include/replication/slot.h\n\n1.\n-extern void ReplicationSlotAlter(const char *name, bool failover);\n+extern void ReplicationSlotAlter(const char *name, bool *failover,\n+ bool *two_phase);\n\nUse const?\n\n======\n99.\nPlease see attached diffs implementing the nitpicks mentioned above\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 19 Jul 2024 12:36:04 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 8:06 AM Peter Smith <[email protected]> wrote:\n>\n> ======\n> You wrote \"tried to make the two_phase change before failover option\n> wherever it makes sense to keep the code consistent\". But, still\n> failover is coded first in lots of places:\n> - libpqrcv_alter_slot\n> - ReplicationSlotAlter\n> - AlterReplicationSlot\n> etc.\n>\n\nIn ReplicationSlotAlter(), there are error conditions related to\nstandby and failover slots which are better checked before setting\ntwo_phase property. The main reason for keeping two_phase before the\nfailover option in subscriptioncmds.c is that SUBOPT_TWOPHASE_COMMIT\nwas introduced before the equivalent failover option. We can do at\nother places as you pointed but I didn't see any compelling reason to\nnot do what we normally do which is to add the new options at the end.\n\n> ======\n> src/include/replication/slot.h\n>\n> 1.\n> -extern void ReplicationSlotAlter(const char *name, bool failover);\n> +extern void ReplicationSlotAlter(const char *name, bool *failover,\n> + bool *two_phase);\n>\n> Use const?\n>\n\nIf so, we need to use const both for failover and two_phase but not\nsure if that is required here. We can evaluate that separately if\nrequired by comparing it with similar instances.\n\n> ======\n> 99.\n> Please see attached diffs implementing the nitpicks mentioned above\n>\n\nThese look good to me, so will incorporate them in the next patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 19 Jul 2024 10:45:16 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 10:45 AM Amit Kapila <[email protected]> wrote:\n>\n> > ======\n> > src/include/replication/slot.h\n> >\n> > 1.\n> > -extern void ReplicationSlotAlter(const char *name, bool failover);\n> > +extern void ReplicationSlotAlter(const char *name, bool *failover,\n> > + bool *two_phase);\n> >\n> > Use const?\n> >\n>\n> If so, we need to use const both for failover and two_phase but not\n> sure if that is required here. We can evaluate that separately if\n> required by comparing it with similar instances.\n>\n\nI checked and found that the patch uses const in walrcv_alter_slot_fn,\nso agree that we can change to const here as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 19 Jul 2024 11:54:56 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 5:12 PM Amit Kapila <[email protected]> wrote:\n>\n> I'll continue my review and testing of the patch but I thought of\n> sharing what I have done till now.\n>\n\n+ /*\n+ * Do not allow changing the option if the subscription is enabled. This\n+ * is because both failover and two_phase options of the slot on the\n+ * publisher cannot be modified if the slot is currently acquired by the\n+ * existing walsender.\n+ */\n+ if (sub->enabled)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot set %s for enabled subscription\",\n+ option)));\n\nAs per my understanding, the above comment is not true when we are\nchanging 'two_phase' option from 'false' to 'true' because in that\ncase, the existing walsender will only change it. So, ideally, we can\nallow toggling two_phase from 'false' to 'true' without the above\nrestriction.\n\nIf this is correct then we don't even need to error for the case\n\"cannot alter two_phase when logical replication worker is still\nrunning\" when 'two_phase' option is changed from 'false' to 'true'.\n\nNow, assuming the above observations are correct, we may still want to\nhave the same behavior when toggling two_phase option but we can at\nleast note down that in the comments so that if required the same can\nbe changed when toggling 'two_phase' option from 'false' to 'true' in\nfuture.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 19 Jul 2024 18:14:55 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Thu, 18 Jul 2024 at 07:41, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Peter,\n>\n> Thanks for giving comments! PSA new version.\n\nCouple of suggestions:\n1) How will user know which all transactions should be rolled back\nsince the prepared transaction name will be different in subscriber\nlike pg_gid_16398_750, can we mention some info on how user can\nidentify these prepared transactions that should be rolled back in the\nsubscriber or if this information is already available can we point it\nfrom here:\n+ When altering <link\nlinkend=\"sql-createsubscription-params-with-two-phase\"><literal>two_phase</literal></link>\n+ from <literal>true</literal> to <literal>false</literal>, the backend\n+ process reports and an error if any prepared transactions done by the\n+ logical replication worker (from when <literal>two_phase</literal>\n+ parameter was still <literal>true</literal>) are found. You can resolve\n+ prepared transactions on the publisher node, or manually roll back them\n+ on the subscriber, and then try again.\n\n2) I'm not sure if InvalidRepOriginId is correct here, how about\nusing OidIsValid in the below:\n+void\n+TwoPhaseTransactionGid(Oid subid, TransactionId xid, char *gid, int szgid)\n+{\n+ Assert(subid != InvalidRepOriginId);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 20 Jul 2024 21:31:06 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> + /*\r\n> + * Do not allow changing the option if the subscription is enabled. This\r\n> + * is because both failover and two_phase options of the slot on the\r\n> + * publisher cannot be modified if the slot is currently acquired by the\r\n> + * existing walsender.\r\n> + */\r\n> + if (sub->enabled)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"cannot set %s for enabled subscription\",\r\n> + option)));\r\n> \r\n> As per my understanding, the above comment is not true when we are\r\n> changing 'two_phase' option from 'false' to 'true' because in that\r\n> case, the existing walsender will only change it. So, ideally, we can\r\n> allow toggling two_phase from 'false' to 'true' without the above\r\n> restriction.\r\n\r\nHmm, yes. In \"false\" -> \"true\" case, the parameter of the slot is not changed by\r\nthe backend process. In this case, the subtwophasestate is changed to PENDING\r\nonce, then the walsender will change to ENABLED based on the worker requests.\r\n\r\n> If this is correct then we don't even need to error for the case\r\n> \"cannot alter two_phase when logical replication worker is still\r\n> running\" when 'two_phase' option is changed from 'false' to 'true'.\r\n\r\nBasically right, one note is that there is an Assert in maybe_reread_subscription(),\r\nit should be also modified.\r\n\r\n> Now, assuming the above observations are correct, we may still want to\r\n> have the same behavior when toggling two_phase option but we can at\r\n> least note down that in the comments so that if required the same can\r\n> be changed when toggling 'two_phase' option from 'false' to 'true' in\r\n> future.\r\n> \r\n> Thoughts?\r\n\r\n+1 to add comments in CheckAlterSubOption(). How about the below draft?\r\n\r\n```\r\n@@ -1089,6 +1089,12 @@ CheckAlterSubOption(Subscription *sub, const char *option,\r\n * is because both failover and two_phase options of the slot on the\r\n * publisher cannot be modified if the slot is currently acquired by the\r\n * existing walsender.\r\n+ *\r\n+ * XXX: when toggling two_phase from \"false\" to \"true\", the slot parameter\r\n+ * is not modified by the backend process so that the lock conflict won't\r\n+ * occur. The restarted walsender will do the alternation. Therefore, we\r\n+ * can allow to switch without the restriction. This can be changed in\r\n+ * the future based on the requirement.\r\n```\r\n\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 22 Jul 2024 02:56:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
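The false-to-true path sketched in the message above amounts to a small state machine: the backend only records PENDING in the catalog, and the apply worker later moves PENDING to ENABLED once initial synchronization is done. The following stand-alone sketch mirrors that progression; the enum and function names echo the LOGICALREP_TWOPHASE_STATE_* constants but are invented for illustration and are not the backend implementation.

```
#include <stdio.h>
#include <stdbool.h>

typedef enum TwoPhaseState
{
    TPS_DISABLED,   /* two_phase = false */
    TPS_PENDING,    /* requested, slot not yet altered */
    TPS_ENABLED     /* apply worker has switched the slot */
} TwoPhaseState;

/* What ALTER SUBSCRIPTION ... SET (two_phase = ...) records in the catalog. */
static TwoPhaseState
alter_subscription_two_phase(TwoPhaseState cur, bool new_value)
{
    if (new_value)
        return (cur == TPS_DISABLED) ? TPS_PENDING : cur;
    return TPS_DISABLED;    /* false: the backend also alters the remote slot */
}

/* What the restarted apply worker does once initial sync has finished. */
static TwoPhaseState
apply_worker_finish_sync(TwoPhaseState cur)
{
    return (cur == TPS_PENDING) ? TPS_ENABLED : cur;
}

int
main(void)
{
    TwoPhaseState s = TPS_DISABLED;

    s = alter_subscription_two_phase(s, true); /* -> PENDING */
    s = apply_worker_finish_sync(s);           /* -> ENABLED */
    printf("%d\n", s == TPS_ENABLED);          /* 1 */
    return 0;
}
```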
{
"msg_contents": "On Sat, Jul 20, 2024 at 9:31 PM vignesh C <[email protected]> wrote:\n>\n> On Thu, 18 Jul 2024 at 07:41, Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Dear Peter,\n> >\n> > Thanks for giving comments! PSA new version.\n>\n> Couple of suggestions:\n> 1) How will user know which all transactions should be rolled back\n> since the prepared transaction name will be different in subscriber\n> like pg_gid_16398_750, can we mention some info on how user can\n> identify these prepared transactions that should be rolled back in the\n> subscriber or if this information is already available can we point it\n> from here:\n> + When altering <link\n> linkend=\"sql-createsubscription-params-with-two-phase\"><literal>two_phase</literal></link>\n> + from <literal>true</literal> to <literal>false</literal>, the backend\n> + process reports and an error if any prepared transactions done by the\n> + logical replication worker (from when <literal>two_phase</literal>\n> + parameter was still <literal>true</literal>) are found. You can resolve\n> + prepared transactions on the publisher node, or manually roll back them\n> + on the subscriber, and then try again.\n>\n\nI agree it is better to add information about this.\n\n> 2) I'm not sure if InvalidRepOriginId is correct here, how about\n> using OidIsValid in the below:\n> +void\n> +TwoPhaseTransactionGid(Oid subid, TransactionId xid, char *gid, int szgid)\n> +{\n> + Assert(subid != InvalidRepOriginId);\n>\n\nI agree with this point but please note that this patch moves this\nfunction so that it can be used from other places. Also, I think it is\nwrong to use InvalidRepOriginId as we are passing here\nsubscription_oid, so, ideally, we should use InvalidOid but I would\nrather prefer OidIsValid() as you suggested.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 22 Jul 2024 08:45:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 8:26 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> ```\n> @@ -1089,6 +1089,12 @@ CheckAlterSubOption(Subscription *sub, const char *option,\n> * is because both failover and two_phase options of the slot on the\n> * publisher cannot be modified if the slot is currently acquired by the\n> * existing walsender.\n> + *\n> + * XXX: when toggling two_phase from \"false\" to \"true\", the slot parameter\n> + * is not modified by the backend process so that the lock conflict won't\n> + * occur. The restarted walsender will do the alternation. Therefore, we\n> + * can allow to switch without the restriction. This can be changed in\n> + * the future based on the requirement.\n> ```\n>\n>\n\nI used a slightly different comment in the attached. Apart from this,\nI also addressed comments by Vignesh and Peter. Let me know if I\nmissed anything.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 22 Jul 2024 12:27:26 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Hi, Patch v22-0001 LGTM apart from the following nitpicks\n\n======\nsrc/sgml/ref/alter_subscription.sgml\n\nnitpick - /one needs to/you need to/\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\nCheckAlterSubOption:\nnitpick = \"ideally we could have...\" doesn't make sense because the\ncode uses a more consistent/simpler way. So other option was not ideal\nafter all.\n\nAlterSubscription\nnitpick - typo /syncronization/synchronization/\nnipick - plural fix\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Mon, 22 Jul 2024 19:17:44 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 2:48 PM Peter Smith <[email protected]> wrote:\n>\n> Hi, Patch v22-0001 LGTM apart from the following nitpicks\n>\n\nI have included these in the attached. The patch looks good to me. I\nam planning to push this tomorrow unless there are more comments.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 23 Jul 2024 16:55:07 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 4:55 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 2:48 PM Peter Smith <[email protected]> wrote:\n> >\n> > Hi, Patch v22-0001 LGTM apart from the following nitpicks\n> >\n>\n> I have included these in the attached. The patch looks good to me. I\n> am planning to push this tomorrow unless there are more comments.\n>\n\nI merged these changes, made a few other cosmetic changes, and pushed the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 24 Jul 2024 14:56:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "Amit Kapila <[email protected]> writes:\n> I merged these changes, made a few other cosmetic changes, and pushed the patch.\n\nThere is a CF entry pointing at this thread [1]. Should it be closed?\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/48/4867/\n\n\n",
"msg_date": "Wed, 24 Jul 2024 11:43:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 9:13 PM Tom Lane <[email protected]> wrote:\n>\n> Amit Kapila <[email protected]> writes:\n> > I merged these changes, made a few other cosmetic changes, and pushed the patch.\n>\n> There is a CF entry pointing at this thread [1]. Should it be closed?\n>\n\nYes, closed now. Thanks for the reminder.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 25 Jul 2024 08:38:38 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Thu, 25 Jul 2024 at 08:39, Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 24, 2024 at 9:13 PM Tom Lane <[email protected]> wrote:\n> >\n> > Amit Kapila <[email protected]> writes:\n> > > I merged these changes, made a few other cosmetic changes, and pushed the patch.\n> >\n> > There is a CF entry pointing at this thread [1]. Should it be closed?\n> >\n>\n> Yes, closed now. Thanks for the reminder.\n\nI noticed one random test failure in my environment with 021_twophase test.\n[10:37:01.131](0.053s) ok 24 - should be no prepared transactions on subscriber\nerror running SQL: 'psql:<stdin>:2: ERROR: cannot alter two_phase\nwhen logical replication worker is still running\nHINT: Try again after some time.'\n\nWe can reproduce the issue by adding a delay at apply_worker_exit like\nin the attached Reproduce_random_021_twophase_test_failure.patch\npatch.\n\nThis is happening because the check here is wrong:\n+$node_subscriber->poll_query_until('postgres',\n+ \"SELECT count(*) = 0 FROM pg_stat_activity WHERE backend_type =\n'logical replication worker'\"\n\nHere \"logical replication worker\" should be \"logical replication apply worker\".\n\nAttached patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Tue, 30 Jul 2024 16:02:06 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 4:02 PM vignesh C <[email protected]> wrote:\n>\n> On Thu, 25 Jul 2024 at 08:39, Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jul 24, 2024 at 9:13 PM Tom Lane <[email protected]> wrote:\n> > >\n> > > Amit Kapila <[email protected]> writes:\n> > > > I merged these changes, made a few other cosmetic changes, and pushed the patch.\n> > >\n> > > There is a CF entry pointing at this thread [1]. Should it be closed?\n> > >\n> >\n> > Yes, closed now. Thanks for the reminder.\n>\n> I noticed one random test failure in my environment with 021_twophase test.\n> [10:37:01.131](0.053s) ok 24 - should be no prepared transactions on subscriber\n> error running SQL: 'psql:<stdin>:2: ERROR: cannot alter two_phase\n> when logical replication worker is still running\n> HINT: Try again after some time.'\n>\n> We can reproduce the issue by adding a delay at apply_worker_exit like\n> in the attached Reproduce_random_021_twophase_test_failure.patch\n> patch.\n>\n> This is happening because the check here is wrong:\n> +$node_subscriber->poll_query_until('postgres',\n> + \"SELECT count(*) = 0 FROM pg_stat_activity WHERE backend_type =\n> 'logical replication worker'\"\n>\n> Here \"logical replication worker\" should be \"logical replication apply worker\".\n>\n> Attached patch has the changes for the same.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 30 Jul 2024 16:28:17 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow catchup of 2PC (twophase) transactions on replica in LR"
}
] |
[
{
"msg_contents": "Hi, everyone!\n\nI found a potential bug in dectoint() and dectolong() functions from\ninformix.c. \"Informix Compatibility Mode\" doc chapter says that\nECPG_INFORMIX_NUM_OVERFLOW is returned if an overflow occurred. But\ncheck this line in dectoint() or dectolong() (it is present in both):\nif (ret == PGTYPES_NUM_OVERFLOW) - condition is always\nfalse because PGTYPESnumeric_to_int() and PGTYPESnumeric_to_long()\nfunctions return only 0 or -1. So ECPG_INFORMIX_NUM_OVERFLOW can never\nbe returned.\n\nI think dectoint(), dectolong() and PGTYPESnumeric_to_int() functions\nshould be a little bit different like in proposing patch.\nWhat do you think?\n\nThe flaw was catched with the help of Svace static analyzer.\nhttps://svace.pages.ispras.ru/svace-website/en/\n\nThank you!",
"msg_date": "Thu, 22 Feb 2024 19:54:37 +0300",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Potential issue in ecpg-informix decimal converting functions"
},
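To make the report concrete, here is a sketch of dectoint() with the check the thread converges on: PGTYPESnumeric_to_int() reports overflow by returning -1 and setting errno to PGTYPES_NUM_OVERFLOW, so that is what has to be tested. The surrounding informix.c boilerplate (headers assumed included, as in that file) is reproduced from memory and the committed fix may differ in detail:

```
int
dectoint(decimal *np, int *ip)
{
    int         ret;
    numeric    *nres = PGTYPESnumeric_new();

    if (nres == NULL)
        return ECPG_INFORMIX_OUT_OF_MEMORY;

    if (PGTYPESnumeric_from_decimal(np, nres) != 0)
    {
        PGTYPESnumeric_free(nres);
        return ECPG_INFORMIX_OUT_OF_MEMORY;
    }

    ret = PGTYPESnumeric_to_int(nres, ip);
    PGTYPESnumeric_free(nres);

    /* the old "if (ret == PGTYPES_NUM_OVERFLOW)" test could never be true */
    if (ret == -1 && errno == PGTYPES_NUM_OVERFLOW)
        return ECPG_INFORMIX_NUM_OVERFLOW;

    return ret;
}
```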
{
"msg_contents": "> On 22 Feb 2024, at 17:54, [email protected] wrote:\n\n> PGTYPESnumeric_to_int() and PGTYPESnumeric_to_long()\n> functions return only 0 or -1. So ECPG_INFORMIX_NUM_OVERFLOW can never\n> be returned.\n\nIndeed, this looks like an oversight.\n\n> I think dectoint(), dectolong() and PGTYPESnumeric_to_int() functions\n> should be a little bit different like in proposing patch.\n> What do you think?\n\n- Convert a variable to type decimal to an integer.\n+ Convert a variable of type decimal to an integer.\nWhile related, this should be committed and backpatched regardless.\n\n+ int errnum = 0;\nStylistic nit, we typically don't initialize a variable which cannot be\naccessed before being set.\n\nOverall the patch looks sane, please register it for the next commitfest to\nmake it's not missed.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 23 Feb 2024 11:44:24 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential issue in ecpg-informix decimal converting functions"
},
{
"msg_contents": "Daniel Gustafsson писал(а) 2024-02-23 13:44:\n>> On 22 Feb 2024, at 17:54, [email protected] wrote:\n> \n>> PGTYPESnumeric_to_int() and PGTYPESnumeric_to_long()\n>> functions return only 0 or -1. So ECPG_INFORMIX_NUM_OVERFLOW can never\n>> be returned.\n> \n> Indeed, this looks like an oversight.\n> \n>> I think dectoint(), dectolong() and PGTYPESnumeric_to_int() functions\n>> should be a little bit different like in proposing patch.\n>> What do you think?\n> \n> - Convert a variable to type decimal to an integer.\n> + Convert a variable of type decimal to an integer.\n> While related, this should be committed and backpatched regardless.\n> \n> + int errnum = 0;\n> Stylistic nit, we typically don't initialize a variable which cannot be\n> accessed before being set.\n> \n> Overall the patch looks sane, please register it for the next \n> commitfest to\n> make it's not missed.\n> \n> --\n> Daniel Gustafsson\n\nThank you for feedback,\n\n- Convert a variable to type decimal to an integer.\n+ Convert a variable of type decimal to an integer.\nI removed this from the patch and proposed to \[email protected]\n\n+ int errnum = 0;\nfixed\n\nThank's for advice, the patch will be registered for the next \ncommitfest.\n\n--\nAidar Imamov",
"msg_date": "Fri, 23 Feb 2024 18:03:41 +0300",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Potential issue in ecpg-informix decimal converting functions"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 06:03:41PM +0300, [email protected] wrote:\n> Thank's for advice, the patch will be registered for the next commitfest.\n\nThe risk looks really minimal to me, but playing with error codes\nwhile the logic of the function is unchanged does not strike me as\nsomething to backpatch as it could slightly break applications. On\nHEAD, I'm OK with that.\n--\nMichael",
"msg_date": "Sat, 24 Feb 2024 10:15:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential issue in ecpg-informix decimal converting functions"
},
{
"msg_contents": "> On 24 Feb 2024, at 02:15, Michael Paquier <[email protected]> wrote:\n> \n> On Fri, Feb 23, 2024 at 06:03:41PM +0300, [email protected] wrote:\n>> Thank's for advice, the patch will be registered for the next commitfest.\n> \n> The risk looks really minimal to me, but playing with error codes\n> while the logic of the function is unchanged does not strike me as\n> something to backpatch as it could slightly break applications. On\n> HEAD, I'm OK with that.\n\nYeah, I think this is for HEAD only, especially given the lack of complaints\nagainst backbranches.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 26 Feb 2024 00:28:51 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential issue in ecpg-informix decimal converting functions"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 12:28:51AM +0100, Daniel Gustafsson wrote:\n> Yeah, I think this is for HEAD only, especially given the lack of complaints\n> against backbranches.\n\nDaniel, are you planning to look at that? I haven't done any detailed\nlookup, but would be happy to do so it that helps.\n--\nMichael",
"msg_date": "Tue, 27 Feb 2024 14:08:38 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential issue in ecpg-informix decimal converting functions"
},
{
"msg_contents": "> On 27 Feb 2024, at 06:08, Michael Paquier <[email protected]> wrote:\n> \n> On Mon, Feb 26, 2024 at 12:28:51AM +0100, Daniel Gustafsson wrote:\n>> Yeah, I think this is for HEAD only, especially given the lack of complaints\n>> against backbranches.\n> \n> Daniel, are you planning to look at that? I haven't done any detailed\n> lookup, but would be happy to do so it that helps.\n\nI have it on my TODO for the upcoming CF.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 27 Feb 2024 09:24:25 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential issue in ecpg-informix decimal converting functions"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 09:24:25AM +0100, Daniel Gustafsson wrote:\n> I have it on my TODO for the upcoming CF.\n\nOkay, thanks.\n--\nMichael",
"msg_date": "Wed, 28 Feb 2024 08:14:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential issue in ecpg-informix decimal converting functions"
},
{
"msg_contents": "Michael Paquier писал(а) 2024-02-28 02:14:\n> On Tue, Feb 27, 2024 at 09:24:25AM +0100, Daniel Gustafsson wrote:\n>> I have it on my TODO for the upcoming CF.\n> \n> Okay, thanks.\n> --\n> Michael\n\nGreetings!\n\nSorry, I had been waiting for a few days for my cool-off period to end.\nThe patch now is registered to CF in the 'Refactoring' topic.\n\n--\nAidar\n\n\n",
"msg_date": "Thu, 29 Feb 2024 12:43:25 +0300",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Potential issue in ecpg-informix decimal converting functions"
},
{
"msg_contents": "> On 27 Feb 2024, at 06:08, Michael Paquier <[email protected]> wrote:\n> \n> On Mon, Feb 26, 2024 at 12:28:51AM +0100, Daniel Gustafsson wrote:\n>> Yeah, I think this is for HEAD only, especially given the lack of complaints\n>> against backbranches.\n> \n> Daniel, are you planning to look at that? I haven't done any detailed\n> lookup, but would be happy to do so it that helps.\n\nI had a look at this today and opted for trimming back the patch a bit.\nReading the informix docs the functions we are mimicking for compatibility here\ndoes not have an underflow returnvalue, so adding one doesn't seem right (or\nhelpful). The attached fixes the return of overflow and leaves it at that,\nwhich makes it possible to backpatch since it's fixing the code to match the\ndocumented behavior.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 6 Mar 2024 16:03:59 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential issue in ecpg-informix decimal converting functions"
},
{
"msg_contents": "Daniel Gustafsson писал(а) 2024-03-06 18:03:\n>> On 27 Feb 2024, at 06:08, Michael Paquier <[email protected]> wrote:\n>> \n>> On Mon, Feb 26, 2024 at 12:28:51AM +0100, Daniel Gustafsson wrote:\n>>> Yeah, I think this is for HEAD only, especially given the lack of \n>>> complaints\n>>> against backbranches.\n>> \n>> Daniel, are you planning to look at that? I haven't done any detailed\n>> lookup, but would be happy to do so it that helps.\n> \n> I had a look at this today and opted for trimming back the patch a bit.\n> Reading the informix docs the functions we are mimicking for \n> compatibility here\n> does not have an underflow returnvalue, so adding one doesn't seem \n> right (or\n> helpful). The attached fixes the return of overflow and leaves it at \n> that,\n> which makes it possible to backpatch since it's fixing the code to \n> match the\n> documented behavior.\n> \n> --\n> Daniel Gustafsson\n\nI agree with the proposed changes in favor of backward compatibility.\nAlso, is it a big deal that the PGTYPESnumeric_to_long() function \ndoesn't\nexactly match the documentation, compared to PGTYPESnumeric_to_int()? It\nhandles underflow case separately and sets errno to \nPGTYPES_NUM_UNDERFLOW\nadditionally.\n\n\n",
"msg_date": "Wed, 06 Mar 2024 22:12:46 +0300",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Potential issue in ecpg-informix decimal converting functions"
},
{
"msg_contents": "> On 6 Mar 2024, at 20:12, [email protected] wrote:\n\n> I agree with the proposed changes in favor of backward compatibility.\n\nI went ahead to pushed this after another look. I'm a bit hesitant to\nbackpatch this since there are no reports against it, and I don't have good\nsense for how ECPG code is tested and maintained across minor version upgrades.\nIf we want to I will of course do so, so please chime in in case there are\ndifferent and more informed opinions.\n\n> Also, is it a big deal that the PGTYPESnumeric_to_long() function doesn't\n> exactly match the documentation, compared to PGTYPESnumeric_to_int()? It\n> handles underflow case separately and sets errno to PGTYPES_NUM_UNDERFLOW\n> additionally.\n\nFixed as well.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 25 Mar 2024 14:43:59 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential issue in ecpg-informix decimal converting functions"
}
] |
[
{
"msg_contents": "Dear pgsql hackers,\n\nI am developing custom storage for pgsql tables. I am using md* functions\nand smgrsw[] structure to switch between different magnetic disk\naccess methods.\n\nI want to add some custom options while table created\npsql# create table t(...) with (my_option='value');\n\nAnd thus I want to set \"reln->smgr_which\" conditionally during smgropen().\nIf myoption='value' i would use another smgr_which\n\nI am really stuck at this point.\n\nsmgr.c:\nSMgrRelation\nsmgropen(RelFileNode rnode, BackendId backend){\n...\n if ( HasOption(rnode, \"my_option\",\"value\")){ //<< how to implement this\ncheck ?\n reln->smgr_which = 1; //new access method\n }else{\n reln->smgr_which = 0; //old access method\n }\n...\n}\n\n\nThe question is --- can I read table options while the table is\nidentified by \"RelFileNode rnode\" ??\n\nThe only available information is\ntypedef struct RelFileNode\n{\n Oid spcNode; /* tablespace */\n Oid dbNode; /* database */\n Oid relNode; /* relation */\n} RelFileNode;\n\nBut there are no table options available directly from this structure.\nWhat is the best way to implement HasOption(rnode, \"my_option\",\"value\")\n\nThank you in advance for any ideas.\nSincerely,\nDmitry R\n\nDear pgsql hackers,I am developing custom storage for pgsql tables. I am using md* functions and smgrsw[] structure to switch between different magnetic disk access methods. I want to add some custom options while table created psql# create table t(...) with (my_option='value');And thus I want to set \"reln->smgr_which\" conditionally during smgropen(). If myoption='value' i would use another smgr_which I am really stuck at this point.smgr.c:SMgrRelationsmgropen(RelFileNode rnode, BackendId backend){... if ( HasOption(rnode, \"my_option\",\"value\")){ //<< how to implement this check ? reln->smgr_which = 1; //new access method }else{ reln->smgr_which = 0; //old access method }...}The question is --- can I read table options while the table is identified by \"RelFileNode rnode\" ??The only available information is typedef struct RelFileNode{ Oid\t\t\tspcNode;\t\t/* tablespace */ Oid\t\t\tdbNode;\t\t\t/* database */ Oid\t\t\trelNode;\t\t/* relation */} RelFileNode;But there are no table options available directly from this structure.What is the best way to implement HasOption(rnode, \"my_option\",\"value\")Thank you in advance for any ideas.Sincerely,Dmitry R",
"msg_date": "Thu, 22 Feb 2024 22:22:03 +0400",
"msg_from": "\"Dima Rybakov (Tlt)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to read table options during smgropen()"
},
{
"msg_contents": "On 22/02/2024 20:22, Dima Rybakov (Tlt) wrote:\n> Dear pgsql hackers,\n> \n> I am developing custom storage for pgsql tables. I am using md* \n> functions and smgrsw[] structure to switch between different magnetic \n> disk access methods.\n> \n> I want to add some custom options while table created\n> psql# create table t(...) with (my_option='value');\n> \n> And thus I want to set \"reln->smgr_which\" conditionally during \n> smgropen(). If myoption='value' i would use another smgr_which\n> \n> I am really stuck at this point.\n> \n> smgr.c:\n> SMgrRelation\n> smgropen(RelFileNode rnode, BackendId backend){\n> ...\n> if ( HasOption(rnode, \"my_option\",\"value\")){ //<< how to implement \n> this check ?\n> reln->smgr_which = 1; //new access method\n> }else{\n> reln->smgr_which = 0; //old access method\n> }\n> ...\n> }\n> \n> \n> The question is --- can I read table options while the table is \n> identified by \"RelFileNode rnode\" ??\n\nThe short answer is that you can not. smgropen() operates at a lower \nlevel, and doesn't have access to the catalogs. smgropen() can be called \nby different backends connected to different databases, and even WAL \nrecovery when the system is not in a consistent state yet.\n\nTake a look at the table AM interface. It sounds like it might be a \nbetter fit for what you're doing.\n\nThere have been a few threads here on pgsql-hackers on making the smgr \ninterface extensible, see \nhttps://www.postgresql.org/message-id/CAEze2WgMySu2suO_TLvFyGY3URa4mAx22WeoEicnK%3DPCNWEMrA%40mail.gmail.com \none recent patch. That thread concluded that it's difficult to make it a \nper-tablespace option, let alone per-table.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 23 Feb 2024 11:40:35 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to read table options during smgropen()"
}
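A small illustration of the point above: reloptions only become reachable once you hold a Relation (through its rd_options field), which is exactly what smgropen() never has, so any option-driven choice has to be made at a layer above smgr and passed down. choose_smgr_for() is a hypothetical helper, not an existing API:

```
#include "postgres.h"
#include "utils/rel.h"

/*
 * Hypothetical helper: pick an smgr implementation for a relation.
 * This only works where a Relation is available; a bare RelFileNode
 * carries no catalog information at all.
 */
static int
choose_smgr_for(Relation rel)
{
    /* rd_options is the parsed reloptions blob; NULL means all defaults */
    if (rel->rd_options == NULL)
        return 0;               /* default md.c routines */

    /* decoding a custom option out of rd_options would go here */
    return 1;                   /* hypothetical alternative smgr slot */
}
```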
] |
[
{
"msg_contents": "Hi,\n\nOn 2024-02-17 17:48:23 +0100, Laurenz Albe wrote:\n> As a test case, I created a table with 10000 rows, each of which\n> had an array of 10000 uuids. The table resided in shared buffers.\nCan you share exactly script used to create a table?\n\nbest regards,\n\nRanier Vilela\n\nHi,\nOn 2024-02-17 17:48:23 +0100, Laurenz Albe wrote: > As a test case, I created a table with 10000 rows, each of which> had an array of 10000 uuids. The table resided in shared buffers. Can you share exactly script used to create a table?best regards,Ranier Vilela",
"msg_date": "Thu, 22 Feb 2024 16:42:37 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "re: Speeding up COPY TO for uuids and arrays"
},
{
"msg_contents": "On Thu, Feb 22, 2024 at 04:42:37PM -0300, Ranier Vilela wrote:\n> Can you share exactly script used to create a table?\n\nStressing the internals of array_out() for the area of the patch is\nnot that difficult, as we want to quote each element that's returned\nin output.\n\nThe trick is to have the following to stress the second quoting loop a\nmaximum:\n- a high number of rows.\n- a high number of items in the arrays.\n- a *minimum* number of characters in each element of the array, with\ncharacters that require quoting.\n\nThe best test case I can think of to demonstrate the patch would be\nsomething like that (adjust rows and elts as you see fit):\n-- Number of rows\n\\set rows 6\n-- Number of elements\n\\set elts 4\ncreate table tab as\n with data as (\n select array_agg(a) as array\n from (\n select '{'::text\n from generate_series(1, :elts) as int(a)) as index(a))\n select data.array from data, generate_series(1,:rows);\n\nThen I get:\n array\n-------------------\n {\"{\",\"{\",\"{\",\"{\"}\n {\"{\",\"{\",\"{\",\"{\"}\n {\"{\",\"{\",\"{\",\"{\"}\n {\"{\",\"{\",\"{\",\"{\"}\n {\"{\",\"{\",\"{\",\"{\"}\n {\"{\",\"{\",\"{\",\"{\"}\n(6 rows)\n\nWith \"\\set rows 100000\" and \"\\set elts 10000\", giving 100MB of data\nwith 100k rows with 10k elements each, I get for HEAD when data is in\nshared buffers:\n=# copy tab to '/dev/null';\nCOPY 100000\nTime: 48620.927 ms (00:48.621)\nAnd with v3:\n=# copy tab to '/dev/null';\nCOPY 100000\nTime: 47993.183 ms (00:47.993)\n\nProfiles don't fundamentally change much, array_out() gets a 30.76% ->\n29.72% in self runtime, with what looks like a limited impact to me.\n\nWith 1k rows and 1M elements, COPY TO gets reduced from 54338.436 ms\nto 54129.978 ms, and a 29.51% -> 29.12% increase (looks like noise).\n\nPerhaps I've hit some noise while running this set of tests, but the\nimpact of the proposed patch looks very limited to me. If you have a\nbetter set of tests and/or ideas, feel free of course.\n--\nMichael",
"msg_date": "Mon, 26 Feb 2024 14:28:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up COPY TO for uuids and arrays"
},
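For context on what the script above stresses: the per-element quoting in array_out() has roughly the shape below (paraphrased from memory, not a verbatim copy of the tree), and every '{' element forces the quoted path, so the inner character loop runs for essentially the whole output:

```
if (needquote)
{
    *p++ = '"';
    for (tmp = values[i]; *tmp; tmp++)
    {
        char        ch = *tmp;

        /* backslash-escape embedded quotes and backslashes */
        if (ch == '"' || ch == '\\')
            *p++ = '\\';
        *p++ = ch;
    }
    *p++ = '"';
}
else
{
    strcpy(p, values[i]);
    p += strlen(values[i]);
}
```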
{
"msg_contents": "Em seg., 26 de fev. de 2024 às 02:28, Michael Paquier <[email protected]>\nescreveu:\n\n> On Thu, Feb 22, 2024 at 04:42:37PM -0300, Ranier Vilela wrote:\n> > Can you share exactly script used to create a table?\n>\n> Stressing the internals of array_out() for the area of the patch is\n> not that difficult, as we want to quote each element that's returned\n> in output.\n>\n> The trick is to have the following to stress the second quoting loop a\n> maximum:\n> - a high number of rows.\n> - a high number of items in the arrays.\n> - a *minimum* number of characters in each element of the array, with\n> characters that require quoting.\n>\n> The best test case I can think of to demonstrate the patch would be\n> something like that (adjust rows and elts as you see fit):\n> -- Number of rows\n> \\set rows 6\n> -- Number of elements\n> \\set elts 4\n> create table tab as\n> with data as (\n> select array_agg(a) as array\n> from (\n> select '{'::text\n> from generate_series(1, :elts) as int(a)) as index(a))\n> select data.array from data, generate_series(1,:rows);\n>\n> Then I get:\n> array\n> -------------------\n> {\"{\",\"{\",\"{\",\"{\"}\n> {\"{\",\"{\",\"{\",\"{\"}\n> {\"{\",\"{\",\"{\",\"{\"}\n> {\"{\",\"{\",\"{\",\"{\"}\n> {\"{\",\"{\",\"{\",\"{\"}\n> {\"{\",\"{\",\"{\",\"{\"}\n> (6 rows)\n>\n> With \"\\set rows 100000\" and \"\\set elts 10000\", giving 100MB of data\n> with 100k rows with 10k elements each, I get for HEAD when data is in\n> shared buffers:\n> =# copy tab to '/dev/null';\n> COPY 100000\n> Time: 48620.927 ms (00:48.621)\n> And with v3:\n> =# copy tab to '/dev/null';\n> COPY 100000\n> Time: 47993.183 ms (00:47.993)\n>\nThanks Michael, for the script.\n\nIt is easier to make comparisons, using the exact same script.\n\nbest regards,\nRanier Vilela\n\nEm seg., 26 de fev. de 2024 às 02:28, Michael Paquier <[email protected]> escreveu:On Thu, Feb 22, 2024 at 04:42:37PM -0300, Ranier Vilela wrote:\n> Can you share exactly script used to create a table?\n\nStressing the internals of array_out() for the area of the patch is\nnot that difficult, as we want to quote each element that's returned\nin output.\n\nThe trick is to have the following to stress the second quoting loop a\nmaximum:\n- a high number of rows.\n- a high number of items in the arrays.\n- a *minimum* number of characters in each element of the array, with\ncharacters that require quoting.\n\nThe best test case I can think of to demonstrate the patch would be\nsomething like that (adjust rows and elts as you see fit):\n-- Number of rows\n\\set rows 6\n-- Number of elements\n\\set elts 4\ncreate table tab as\n with data as (\n select array_agg(a) as array\n from (\n select '{'::text\n from generate_series(1, :elts) as int(a)) as index(a))\n select data.array from data, generate_series(1,:rows);\n\nThen I get:\n array\n-------------------\n {\"{\",\"{\",\"{\",\"{\"}\n {\"{\",\"{\",\"{\",\"{\"}\n {\"{\",\"{\",\"{\",\"{\"}\n {\"{\",\"{\",\"{\",\"{\"}\n {\"{\",\"{\",\"{\",\"{\"}\n {\"{\",\"{\",\"{\",\"{\"}\n(6 rows)\n\nWith \"\\set rows 100000\" and \"\\set elts 10000\", giving 100MB of data\nwith 100k rows with 10k elements each, I get for HEAD when data is in\nshared buffers:\n=# copy tab to '/dev/null';\nCOPY 100000\nTime: 48620.927 ms (00:48.621)\nAnd with v3:\n=# copy tab to '/dev/null';\nCOPY 100000\nTime: 47993.183 ms (00:47.993)Thanks Michael, for the script. It is easier to make comparisons, using the exact same script.best regards,Ranier Vilela",
"msg_date": "Mon, 26 Feb 2024 11:26:27 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up COPY TO for uuids and arrays"
}
] |
[
{
"msg_contents": "Hi.\n\nRecent commit 555276f8594087ba15e0d58e38cd2186b9f39f6d introduced final \ncleanup of node->as_eventset in ExecAppendAsyncEventWait().\nUnfortunately, now this function can return in the middle of TRY/FINALLY \nblock, without restoring PG_exception_stack.\n\nWe found this while working on our FDW. Unfortunately, I couldn't \nreproduce the issue with postgres_fdw, but it seems it is also affected.\n\nThe following patch heals the issue.\n\n-- l\nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Fri, 23 Feb 2024 13:21:14 +0300",
"msg_from": "Alexander Pyhalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
},
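To spell out the failure mode: PG_TRY() saves the previous PG_exception_stack and PG_END_TRY() restores it, so returning between the two leaves the global pointing at a sigsetjmp buffer in a stack frame that no longer exists, and the next ereport(ERROR) jumps into garbage. A condensed sketch of the anti-pattern (variable names are illustrative; this is not the actual ExecAppendAsyncEventWait() code):

```
PG_TRY();
{
    if (noccurred == 0)
        return;                 /* WRONG: skips PG_END_TRY(), so
                                 * PG_exception_stack is never restored */

    /* ... process the events that did occur ... */
}
PG_FINALLY();
{
    FreeWaitEventSet(eventset);
}
PG_END_TRY();
```

One way out is to remember the early-exit decision in a local variable and fall through, returning only after PG_END_TRY() has run.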
{
"msg_contents": "On Fri, Feb 23, 2024 at 01:21:14PM +0300, Alexander Pyhalov wrote:\n> Recent commit 555276f8594087ba15e0d58e38cd2186b9f39f6d introduced final\n> cleanup of node->as_eventset in ExecAppendAsyncEventWait().\n> Unfortunately, now this function can return in the middle of TRY/FINALLY\n> block, without restoring PG_exception_stack.\n> \n> We found this while working on our FDW. Unfortunately, I couldn't reproduce\n> the issue with postgres_fdw, but it seems it is also affected.\n\nUgh, yes, you are obviously right that the early return is wrong.\nI'll look into fixing that where appropriate. Thanks for the report,\nAlexander!\n--\nMichael",
"msg_date": "Sat, 24 Feb 2024 10:06:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
},
{
"msg_contents": "Hi,\n\nOn Sat, Feb 24, 2024 at 10:06 AM Michael Paquier <[email protected]> wrote:\n> On Fri, Feb 23, 2024 at 01:21:14PM +0300, Alexander Pyhalov wrote:\n> > Recent commit 555276f8594087ba15e0d58e38cd2186b9f39f6d introduced final\n> > cleanup of node->as_eventset in ExecAppendAsyncEventWait().\n> > Unfortunately, now this function can return in the middle of TRY/FINALLY\n> > block, without restoring PG_exception_stack.\n> >\n> > We found this while working on our FDW. Unfortunately, I couldn't reproduce\n> > the issue with postgres_fdw, but it seems it is also affected.\n\nI think this would happen when FDWs configure no events; IIRC I think\nwhile the core allows them to do so, postgres_fdw does not do so, so\nthis would never happen with it. Anyway, thanks for the report and\npatch, Alexander!\n\n> Ugh, yes, you are obviously right that the early return is wrong.\n> I'll look into fixing that where appropriate.\n\nThanks for taking care of this, Michael-san! This would result\noriginally from my fault, so If you don't mind, could you let me do\nthat?\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sun, 25 Feb 2024 18:34:30 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 06:34:30PM +0900, Etsuro Fujita wrote:\n> I think this would happen when FDWs configure no events; IIRC I think\n> while the core allows them to do so, postgres_fdw does not do so, so\n> this would never happen with it. Anyway, thanks for the report and\n> patch, Alexander!\n\nI don't see how that's directly your fault as this is a thinko in the\nset of commits 481d7d1c01, 555276f859 and 501cfd07da that have hit\n14~16, ignoring entirely the TRY/CATCH block.\n\nAnyway, if you want to address it yourself, feel free to go ahead,\nthanks! I would have done it but I've been busy with life stuff for\nthe last couple of days.\n--\nMichael",
"msg_date": "Mon, 26 Feb 2024 08:37:20 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 8:37 AM Michael Paquier <[email protected]> wrote:\n> I don't see how that's directly your fault as this is a thinko in the\n> set of commits 481d7d1c01, 555276f859 and 501cfd07da that have hit\n> 14~16, ignoring entirely the TRY/CATCH block.\n\nThe set of commits is actually a fix for resource leaks in my commit 27e1f1456.\n\n> Anyway, if you want to address it yourself, feel free to go ahead,\n> thanks! I would have done it but I've been busy with life stuff for\n> the last couple of days.\n\nWill do. (I was thinking you would get busy from now on.)\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 26 Feb 2024 16:29:44 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 04:29:44PM +0900, Etsuro Fujita wrote:\n> Will do. (I was thinking you would get busy from now on.)\n\nFujita-san, have you been able to look at this thread?\n--\nMichael",
"msg_date": "Mon, 11 Mar 2024 08:12:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
},
{
"msg_contents": "Hi Michael-san,\n\nOn Mon, Mar 11, 2024 at 8:12 AM Michael Paquier <[email protected]> wrote:\n> On Mon, Feb 26, 2024 at 04:29:44PM +0900, Etsuro Fujita wrote:\n> > Will do. (I was thinking you would get busy from now on.)\n>\n> Fujita-san, have you been able to look at this thread?\n\nYeah, I took a look; the patch looks good to me, but I am thiking to\nupdate some comments in a related function in postgres_fdw.c. I will\nhave time to work on this later this week, so I would like to propose\nan updated patch then.\n\nThanks for taking care of this!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 11 Mar 2024 16:56:58 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 6:34 PM Etsuro Fujita <[email protected]> wrote:\n\n> > On Fri, Feb 23, 2024 at 01:21:14PM +0300, Alexander Pyhalov wrote:\n> > > Recent commit 555276f8594087ba15e0d58e38cd2186b9f39f6d introduced final\n> > > cleanup of node->as_eventset in ExecAppendAsyncEventWait().\n> > > Unfortunately, now this function can return in the middle of TRY/FINALLY\n> > > block, without restoring PG_exception_stack.\n> > >\n> > > We found this while working on our FDW. Unfortunately, I couldn't reproduce\n> > > the issue with postgres_fdw, but it seems it is also affected.\n>\n> I think this would happen when FDWs configure no events; IIRC I think\n> while the core allows them to do so, postgres_fdw does not do so, so\n> this would never happen with it.\n\nI was wrong; as you pointed out, this would affect postgres_fdw as\nwell. See commit 1ec7fca85, which is my commit, but I forgot it\ncompletely. :-(\n\nAs I said before, the patch looks good to me. I tweaked comments in\nExecAppendAsyncEventWait() a bit. Attached is an updated patch. In\nthe patch I also fixed a confusing comment in a related function in\npostgres_fdw.c about handling of the in-process request that might be\nuseless to process.\n\nSorry, it took more time than expected to get back to this thread.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Thu, 21 Mar 2024 19:59:50 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
},
{
"msg_contents": "Etsuro Fujita писал(а) 2024-03-21 13:59:\n> On Sun, Feb 25, 2024 at 6:34 PM Etsuro Fujita <[email protected]> \n> wrote:\n> \n>> > On Fri, Feb 23, 2024 at 01:21:14PM +0300, Alexander Pyhalov wrote:\n>> > > Recent commit 555276f8594087ba15e0d58e38cd2186b9f39f6d introduced final\n>> > > cleanup of node->as_eventset in ExecAppendAsyncEventWait().\n>> > > Unfortunately, now this function can return in the middle of TRY/FINALLY\n>> > > block, without restoring PG_exception_stack.\n>> > >\n>> > > We found this while working on our FDW. Unfortunately, I couldn't reproduce\n>> > > the issue with postgres_fdw, but it seems it is also affected.\n>> \n>> I think this would happen when FDWs configure no events; IIRC I think\n>> while the core allows them to do so, postgres_fdw does not do so, so\n>> this would never happen with it.\n> \n> I was wrong; as you pointed out, this would affect postgres_fdw as\n> well. See commit 1ec7fca85, which is my commit, but I forgot it\n> completely. :-(\n> \n> As I said before, the patch looks good to me. I tweaked comments in\n> ExecAppendAsyncEventWait() a bit. Attached is an updated patch. In\n> the patch I also fixed a confusing comment in a related function in\n> postgres_fdw.c about handling of the in-process request that might be\n> useless to process.\n> \n> Sorry, it took more time than expected to get back to this thread.\n> \n\nHi. The updated patch still looks good to me.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Fri, 22 Mar 2024 13:23:45 +0300",
"msg_from": "Alexander Pyhalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
},
{
"msg_contents": "Hi Alexander,\n\nOn Fri, Mar 22, 2024 at 7:23 PM Alexander Pyhalov\n<[email protected]> wrote:\n> The updated patch still looks good to me.\n\nGreat! I am planning to apply the patch to the back branches next week.\n\nThanks for the review!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 22 Mar 2024 21:09:08 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 9:09 PM Etsuro Fujita <[email protected]> wrote:\n> On Fri, Mar 22, 2024 at 7:23 PM Alexander Pyhalov\n> <[email protected]> wrote:\n> > The updated patch still looks good to me.\n>\n> I am planning to apply the patch to the back branches next week.\n\nPushed. Sorry for the delay.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 4 Apr 2024 18:08:40 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
},
{
"msg_contents": "On Thu, Apr 04, 2024 at 06:08:40PM +0900, Etsuro Fujita wrote:\n> Pushed. Sorry for the delay.\n\nThanks!\n--\nMichael",
"msg_date": "Mon, 8 Apr 2024 14:12:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ExecAppendAsyncEventWait() in REL_14_STABLE can corrupt\n PG_exception_stack"
}
] |
[
{
"msg_contents": "The attached two patches are smaller refactorings to the SASL exchange and init\ncodepaths which are required for the OAuthbearer work [0]. Regardless of the\nfuture of that patchset, these refactorings are nice cleanups and can be\nconsidered in isolation. Another goal is of course to reduce scope of the\nOAuth patchset to make it easier to review.\n\nThe first patch change state return from the exchange call to use a tri-state\nreturn value instead of the current output parameters. This makes it possible\nto introduce async flows, but it also makes the code a lot more readable due to\nusing descriptve names IMHO.\n\nThe second patch sets password_needed during SASL init on the SCRAM exchanges.\nThis was implicit in the code but since not all future exchanges may require\npassword, do it explicitly per mechanism instead.\n\n--\nDaniel Gustafsson\n\n[0] [email protected]",
"msg_date": "Fri, 23 Feb 2024 11:30:19 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Refactor SASL exchange in preparation for OAuth Bearer"
},
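A sketch of the tri-state shape being described, using the SASL_COMPLETE/SASL_CONTINUE/SASL_FAILED names that appear later in the thread; the enum values and the small caller below are illustrative rather than copied from the patch:

```
typedef enum
{
    SASL_COMPLETE = 0,          /* exchange finished successfully */
    SASL_CONTINUE,              /* more exchange steps are expected */
    SASL_FAILED                 /* exchange failed; abort authentication */
} SASLStatus;

/*
 * Illustrative caller: the old "bool *done, bool *success" output
 * parameters collapse into one descriptive return value.
 */
static bool
handle_exchange_result(SASLStatus status)
{
    switch (status)
    {
        case SASL_COMPLETE:
            return true;        /* authentication is done */
        case SASL_CONTINUE:
            return true;        /* keep the conversation going */
        case SASL_FAILED:
            return false;       /* give up on the connection */
    }
    return false;               /* keep the compiler quiet */
}
```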
{
"msg_contents": "On Fri, Feb 23, 2024 at 2:30 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> The attached two patches are smaller refactorings to the SASL exchange and init\n> codepaths which are required for the OAuthbearer work [0]. Regardless of the\n> future of that patchset, these refactorings are nice cleanups and can be\n> considered in isolation. Another goal is of course to reduce scope of the\n> OAuth patchset to make it easier to review.\n\nThanks for pulling these out! They look good overall, just a few notes below.\n\nIn 0001:\n\n> + * SASL_FAILED: The exchance has failed and the connection should be\n\ns/exchance/exchange/\n\n> - if (final && !done)\n> + if (final && !(status == SASL_FAILED || status == SASL_COMPLETE))\n\nSince there's not yet a SASL_ASYNC, I wonder if this would be more\nreadable if it were changed to\n if (final && status == SASL_CONTINUE)\nto match the if condition shortly after it.\n\nIn 0002, at the beginning of pg_SASL_init, the `password` variable now\nhas an uninitialized code path. The OAuth patchset initializes it to\nNULL:\n\n> +++ b/src/interfaces/libpq/fe-auth.c\n> @@ -425,7 +425,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)\n> int initialresponselen;\n> const char *selected_mechanism;\n> PQExpBufferData mechanism_buf;\n> - char *password;\n> + char *password = NULL;\n> SASLStatus status;\n>\n> initPQExpBuffer(&mechanism_buf);\n\nI'll base the next version of the OAuth patchset on top of these.\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Mon, 26 Feb 2024 10:56:39 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactor SASL exchange in preparation for OAuth Bearer"
},
{
"msg_contents": "> On 26 Feb 2024, at 19:56, Jacob Champion <[email protected]> wrote:\n\n>> + * SASL_FAILED: The exchance has failed and the connection should be\n> \n> s/exchance/exchange/\n\nI rank that as one of my better typos actually. Fixed though.\n\n>> - if (final && !done)\n>> + if (final && !(status == SASL_FAILED || status == SASL_COMPLETE))\n> \n> Since there's not yet a SASL_ASYNC, I wonder if this would be more\n> readable if it were changed to\n> if (final && status == SASL_CONTINUE)\n> to match the if condition shortly after it.\n\nFair point, that's more readable in this commit.\n\n> In 0002, at the beginning of pg_SASL_init, the `password` variable now\n> has an uninitialized code path. The OAuth patchset initializes it to\n> NULL:\n\nNice catch, fixed.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 28 Feb 2024 23:54:03 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactor SASL exchange in preparation for OAuth Bearer"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 2:54 PM Daniel Gustafsson <[email protected]> wrote:\n> I rank that as one of my better typos actually. Fixed though.\n\nLGTM!\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Thu, 29 Feb 2024 11:58:46 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactor SASL exchange in preparation for OAuth Bearer"
},
{
"msg_contents": "> On 29 Feb 2024, at 20:58, Jacob Champion <[email protected]> wrote:\n> \n> On Wed, Feb 28, 2024 at 2:54 PM Daniel Gustafsson <[email protected]> wrote:\n>> I rank that as one of my better typos actually. Fixed though.\n> \n> LGTM!\n\nThanks for review, and since Heikki marked it ready for committer I assume that\ncounting as a +1 as well. Attached is a rebase on top of HEAD to get a fresh\nrun from the CFBot before applying this.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 20 Mar 2024 15:28:10 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactor SASL exchange in preparation for OAuth Bearer"
},
{
"msg_contents": "> On 20 Mar 2024, at 15:28, Daniel Gustafsson <[email protected]> wrote:\n> \n>> On 29 Feb 2024, at 20:58, Jacob Champion <[email protected]> wrote:\n>> \n>> On Wed, Feb 28, 2024 at 2:54 PM Daniel Gustafsson <[email protected]> wrote:\n>>> I rank that as one of my better typos actually. Fixed though.\n>> \n>> LGTM!\n> \n> Thanks for review, and since Heikki marked it ready for committer I assume that\n> counting as a +1 as well. Attached is a rebase on top of HEAD to get a fresh\n> run from the CFBot before applying this.\n\nAnd after another pass over it I ended up pushing it today.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 21 Mar 2024 19:57:46 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactor SASL exchange in preparation for OAuth Bearer"
}
] |
[
{
"msg_contents": "Various code comments say that the RangeTblEntry field inh may only be \nset for entries of kind RTE_RELATION.\n\nFor example\n\n * inh is true for relation references that should be expanded to \ninclude\n * inheritance children, if the rel has any. This *must* be false for\n * RTEs other than RTE_RELATION entries.\n\nand various comments in other files.\n\n(Confusingly, it is also listed under \"Fields valid in all RTEs:\", but \nthat definitely seems wrong.)\n\nI have been deploying some assertions to see if the claims in the \nRangeTblEntry comments are all correct, and I tripped over something.\n\nThe function pull_up_simple_union_all() in prepjointree.c sets ->inh to \ntrue for RTE_SUBQUERY entries:\n\n /*\n * Mark the parent as an append relation.\n */\n rte->inh = true;\n\nWhatever this is doing appears to be some undocumented magic. If I \nremove the line, then regression tests fail with plan differences, so it \ndefinitely seems to do something.\n\nIs this something we should explain the RangeTblEntry comments?\n\n\n",
"msg_date": "Fri, 23 Feb 2024 15:34:56 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "RangeTblEntry.inh vs. RTE_SUBQUERY"
},
{
"msg_contents": "On Fri, 23 Feb 2024 at 14:35, Peter Eisentraut <[email protected]> wrote:\n>\n> Various code comments say that the RangeTblEntry field inh may only be\n> set for entries of kind RTE_RELATION.\n>\n> The function pull_up_simple_union_all() in prepjointree.c sets ->inh to\n> true for RTE_SUBQUERY entries:\n>\n> /*\n> * Mark the parent as an append relation.\n> */\n> rte->inh = true;\n>\n> Whatever this is doing appears to be some undocumented magic.\n\nYes, it's explained a bit more clearly/accurately in expand_inherited_rtentry():\n\n/*\n * expand_inherited_rtentry\n * Expand a rangetable entry that has the \"inh\" bit set.\n *\n * \"inh\" is only allowed in two cases: RELATION and SUBQUERY RTEs.\n *\n * \"inh\" on a plain RELATION RTE means that it is a partitioned table or the\n * parent of a traditional-inheritance set. In this case we must add entries\n * for all the interesting child tables to the query's rangetable, and build\n * additional planner data structures for them, including RelOptInfos,\n * AppendRelInfos, and possibly PlanRowMarks.\n *\n * Note that the original RTE is considered to represent the whole inheritance\n * set. In the case of traditional inheritance, the first of the generated\n * RTEs is an RTE for the same table, but with inh = false, to represent the\n * parent table in its role as a simple member of the inheritance set. For\n * partitioning, we don't need a second RTE because the partitioned table\n * itself has no data and need not be scanned.\n *\n * \"inh\" on a SUBQUERY RTE means that it's the parent of a UNION ALL group,\n * which is treated as an appendrel similarly to inheritance cases; however,\n * we already made RTEs and AppendRelInfos for the subqueries. We only need\n * to build RelOptInfos for them, which is done by expand_appendrel_subquery.\n */\n\n> Is this something we should explain the RangeTblEntry comments?\n>\n\n+1\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 23 Feb 2024 14:52:20 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RangeTblEntry.inh vs. RTE_SUBQUERY"
},
{
"msg_contents": "Dean Rasheed <[email protected]> writes:\n> On Fri, 23 Feb 2024 at 14:35, Peter Eisentraut <[email protected]> wrote:\n>> Various code comments say that the RangeTblEntry field inh may only be\n>> set for entries of kind RTE_RELATION.\n\n> Yes, it's explained a bit more clearly/accurately in expand_inherited_rtentry():\n\n> * \"inh\" is only allowed in two cases: RELATION and SUBQUERY RTEs.\n\nYes. The latter has been accurate for a very long time, so I'm\nsurprised that there are any places that think otherwise. We need\nto fix them --- where did you see this exactly?\n\n(Note that RELATION-only is accurate within the parser and rewriter,\nso maybe clarifications about context are in order.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Feb 2024 10:19:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RangeTblEntry.inh vs. RTE_SUBQUERY"
},
{
"msg_contents": "On 23.02.24 16:19, Tom Lane wrote:\n> Dean Rasheed <[email protected]> writes:\n>> On Fri, 23 Feb 2024 at 14:35, Peter Eisentraut <[email protected]> wrote:\n>>> Various code comments say that the RangeTblEntry field inh may only be\n>>> set for entries of kind RTE_RELATION.\n> \n>> Yes, it's explained a bit more clearly/accurately in expand_inherited_rtentry():\n> \n>> * \"inh\" is only allowed in two cases: RELATION and SUBQUERY RTEs.\n> \n> Yes. The latter has been accurate for a very long time, so I'm\n> surprised that there are any places that think otherwise. We need\n> to fix them --- where did you see this exactly?\n\nIn nodes/parsenodes.h, it says both\n\n This *must* be false for RTEs other than RTE_RELATION entries.\n\nand also puts it under\n\n Fields valid in all RTEs:\n\nwhich are both wrong on opposite ends of the spectrum.\n\nI think it would make more sense to group inh under \"Fields valid for a \nplain relation RTE\" and then explain the exception for subqueries, like \nit is done for several other fields.\n\nSee attached patch for a proposal. (I also shuffled a few fields around \nto make the order a bit more logical.)",
"msg_date": "Thu, 29 Feb 2024 13:58:21 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RangeTblEntry.inh vs. RTE_SUBQUERY"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> In nodes/parsenodes.h, it says both\n> This *must* be false for RTEs other than RTE_RELATION entries.\n\nWell, that's true in the parser ...\n\n> and also puts it under\n> Fields valid in all RTEs:\n> which are both wrong on opposite ends of the spectrum.\n> I think it would make more sense to group inh under \"Fields valid for a \n> plain relation RTE\" and then explain the exception for subqueries, like \n> it is done for several other fields.\n\nDunno. The adjacent \"lateral\" field is also used for only selected\nRTE kinds.\n\nI'd be inclined to leave it where it is and just improve the\ncommentary. That could read like\n\n * inh is true for relation references that should be expanded to include\n * inheritance children, if the rel has any. In the parser this\n * will only be true for RTE_RELATION entries. The planner also uses\n * this field to mark RTE_SUBQUERY entries that contain UNION ALL\n * queries that it has flattened into pulled-up subqueries\n * (creating a structure much like the effects of inheritance).\n\nIf you do insist on moving it, please put it next to relkind so it\npacks better.\n\nI agree that perminfoindex seems to have suffered from add-at-the-end\nsyndrome, and if we do touch the field order you made an improvement\nthere. (BTW, who thought they needn't bother with a comment for\nperminfoindex?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Feb 2024 13:14:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RangeTblEntry.inh vs. RTE_SUBQUERY"
},
{
"msg_contents": "On 2024-Feb-29, Tom Lane wrote:\n\n> I agree that perminfoindex seems to have suffered from add-at-the-end\n> syndrome, and if we do touch the field order you made an improvement\n> there. (BTW, who thought they needn't bother with a comment for\n> perminfoindex?)\n\nThere is a comment for it, or at least a61b1f74823c added one, though\nnot immediately adjacent. I do see that it's now further away than it\nwas. Perhaps we could add /* index of RTEPermissionInfo entry, or 0 */\nto the line.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The ability of users to misuse tools is, of course, legendary\" (David Steele)\nhttps://postgr.es/m/[email protected]\n\n\n",
"msg_date": "Thu, 29 Feb 2024 20:07:20 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RangeTblEntry.inh vs. RTE_SUBQUERY"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2024-Feb-29, Tom Lane wrote:\n>> I agree that perminfoindex seems to have suffered from add-at-the-end\n>> syndrome, and if we do touch the field order you made an improvement\n>> there. (BTW, who thought they needn't bother with a comment for\n>> perminfoindex?)\n\n> There is a comment for it, or at least a61b1f74823c added one, though\n> not immediately adjacent. I do see that it's now further away than it\n> was. Perhaps we could add /* index of RTEPermissionInfo entry, or 0 */\n> to the line.\n\nThat'd be enough for me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Feb 2024 14:47:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RangeTblEntry.inh vs. RTE_SUBQUERY"
},
{
"msg_contents": "On 29.02.24 19:14, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> In nodes/parsenodes.h, it says both\n>> This *must* be false for RTEs other than RTE_RELATION entries.\n> \n> Well, that's true in the parser ...\n> \n>> and also puts it under\n>> Fields valid in all RTEs:\n>> which are both wrong on opposite ends of the spectrum.\n>> I think it would make more sense to group inh under \"Fields valid for a\n>> plain relation RTE\" and then explain the exception for subqueries, like\n>> it is done for several other fields.\n> \n> Dunno. The adjacent \"lateral\" field is also used for only selected\n> RTE kinds.\n\nThe section is\n\n /*\n * Fields valid in all RTEs:\n */\n Alias *alias; /* user-written alias clause, if any */\n Alias *eref; /* expanded reference names */\n bool lateral; /* subquery, function, or values is \nLATERAL? */\n bool inh; /* inheritance requested? */\n bool inFromCl; /* present in FROM clause? */\n List *securityQuals; /* security barrier quals to apply, if \nany */\n\nAccording to my testing, lateral is used for RTE_RELATION, RTE_SUBQUERY, \nRTE_FUNCTION, RTE_TABLEFUNC, RTE_VALUES, which is 5 out of 9 possible. \nSo I think it might be okay to relabel that section (in actuality or \nmentally) as \"valid in several/many/most RTEs\".\n\nBut I'm not sure what reason there would be for having inh there, which \nis better described as \"valid for RTE_RELATION, but also borrowed by \nRTE_SUBQUERY\", which is pretty much exactly what is the case for relid, \nrelkind, etc.\n\n\n\n",
"msg_date": "Sun, 3 Mar 2024 11:02:48 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RangeTblEntry.inh vs. RTE_SUBQUERY"
},
{
"msg_contents": "On 03.03.24 11:02, Peter Eisentraut wrote:\n> On 29.02.24 19:14, Tom Lane wrote:\n>> Peter Eisentraut <[email protected]> writes:\n>>> In nodes/parsenodes.h, it says both\n>>> This *must* be false for RTEs other than RTE_RELATION entries.\n>>\n>> Well, that's true in the parser ...\n>>\n>>> and also puts it under\n>>> Fields valid in all RTEs:\n>>> which are both wrong on opposite ends of the spectrum.\n>>> I think it would make more sense to group inh under \"Fields valid for a\n>>> plain relation RTE\" and then explain the exception for subqueries, like\n>>> it is done for several other fields.\n>>\n>> Dunno. The adjacent \"lateral\" field is also used for only selected\n>> RTE kinds.\n> \n> The section is\n> \n> /*\n> * Fields valid in all RTEs:\n> */\n> Alias *alias; /* user-written alias clause, if any */\n> Alias *eref; /* expanded reference names */\n> bool lateral; /* subquery, function, or values is \n> LATERAL? */\n> bool inh; /* inheritance requested? */\n> bool inFromCl; /* present in FROM clause? */\n> List *securityQuals; /* security barrier quals to apply, if \n> any */\n> \n> According to my testing, lateral is used for RTE_RELATION, RTE_SUBQUERY, \n> RTE_FUNCTION, RTE_TABLEFUNC, RTE_VALUES, which is 5 out of 9 possible. \n> So I think it might be okay to relabel that section (in actuality or \n> mentally) as \"valid in several/many/most RTEs\".\n> \n> But I'm not sure what reason there would be for having inh there, which \n> is better described as \"valid for RTE_RELATION, but also borrowed by \n> RTE_SUBQUERY\", which is pretty much exactly what is the case for relid, \n> relkind, etc.\n\nI have committed the patches for this discussion.\n\nRelated discussion will continue at \nhttps://www.postgresql.org/message-id/flat/[email protected] \n/ https://commitfest.postgresql.org/47/4697/ .\n\n\n",
"msg_date": "Thu, 7 Mar 2024 16:56:13 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RangeTblEntry.inh vs. RTE_SUBQUERY"
}
] |
[
{
"msg_contents": "If XLOG_DBASE_CREATE_FILE_COPY occurs between an incremental backup\nand its reference backup, every relation whose DB OID and tablespace\nOID match the corresponding values in that record should be backed up\nin full. Currently that's not happening, because the WAL summarizer\ndoesn't see the XLOG_DBASE_CREATE_FILE_COPY as referencing any\nparticular relfilenode and so basically ignores it. The same happens\nfor XLOG_DBASE_CREATE_WAL_LOG, but that case is OK because that only\ncovers creating the directory itself, not anything underneath it, and\nthere will be separate WAL records telling us the relfilenodes created\nbelow the new directory and the pages modified therein.\n\nAFAICS, fixing this requires some way of noting in the WAL summary\nfile that an entire directory got blown away. I chose to do that by\nsetting the limit block to 0 for a fake relation with the given DB OID\nand TS OID and relfilenumber 0, which seems natural. Patch with test\ncase attached. The test case in brief is:\n\ninitdb -c summarize_wal=on\n# start the server in $PGDATA\npsql -c 'create database lakh oid = 100000 strategy = file_copy' postgres\npsql -c 'create table t1 (a int)' lakh\npg_basebackup -cfast -Dt1\ndropdb lakh\npsql -c 'create database lakh oid = 100000 strategy = file_copy' postgres\npg_basebackup -cfast -Dt2 --incremental t1/backup_manifest\npg_combinebackup t1 t2 -o result\n# stop the server, restart from the result directory\npsql -c 'select * from t1' lakh\n\nWithout this patch, you get something like:\n\nERROR: could not open file \"base/100000/16388\": No such file or directory\n\n...because the catalog entries from before the database is dropped and\nrecreated manage to end up in pg_combinebackup's output directory,\nwhich they should not.\n\nWith the patch, you correctly get an error about t1 not existing.\n\nI thought about whether there were any other WAL records that have\nsimilar problems to XLOG_DBASE_CREATE_FILE_COPY and didn't come up\nwith anything. If anyone knows of any similar cases, please let me\nknow.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 23 Feb 2024 20:47:52 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "incremental backup mishandles XLOG_DBASE_CREATE_FILE_COPY"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 08:47:52PM +0530, Robert Haas wrote:\n> If XLOG_DBASE_CREATE_FILE_COPY occurs between an incremental backup\n> and its reference backup, every relation whose DB OID and tablespace\n> OID match the corresponding values in that record should be backed up\n> in full. Currently that's not happening, because the WAL summarizer\n> doesn't see the XLOG_DBASE_CREATE_FILE_COPY as referencing any\n> particular relfilenode and so basically ignores it. The same happens\n> for XLOG_DBASE_CREATE_WAL_LOG, but that case is OK because that only\n> covers creating the directory itself, not anything underneath it, and\n> there will be separate WAL records telling us the relfilenodes created\n> below the new directory and the pages modified therein.\n\nXLOG_DBASE_CREATE_WAL_LOG creates PG_VERSION in addition to creating the\ndirectory. I see your patch covers it.\n\n> I thought about whether there were any other WAL records that have\n> similar problems to XLOG_DBASE_CREATE_FILE_COPY and didn't come up\n> with anything. If anyone knows of any similar cases, please let me\n> know.\n\nRegarding records the summarizer potentially can't ignore that don't deal in\nrelfilenodes, these come to mind:\n\nXLOG_DBASE_DROP - covered in this thread's patch\nXLOG_RELMAP_UPDATE\nXLOG_TBLSPC_CREATE\nXLOG_TBLSPC_DROP\nXLOG_XACT_PREPARE\n\nAlso, any record that writes XIDs needs to update nextXid; likewise for other\nID spaces. See the comment at \"XLOG stuff\" in heap_lock_tuple(). Perhaps you\ndon't summarize past a checkpoint, making that irrelevant.\n\nIf walsummarizer.c handles any of the above, my brief look missed it. I also\ndidn't find the string \"clog\" or \"slru\" anywhere in dc21234 \"Add support for\nincremental backup\", 174c480 \"Add a new WAL summarizer process.\", or thread\nhttps://postgr.es/m/flat/CA%2BTgmoYOYZfMCyOXFyC-P%2B-mdrZqm5pP2N7S-r0z3_402h9rsA%40mail.gmail.com\n\"trying again to get incremental backup\". I wouldn't be surprised if you\ntreat clog, pg_filenode.map, and/or 2PC state files as unconditionally\nnon-incremental, in which case some of the above doesn't need explicit\nsummarization code. I stopped looking for that logic, though.\n\n> --- a/src/backend/postmaster/walsummarizer.c\n> +++ b/src/backend/postmaster/walsummarizer.c\n\n> +\t * Technically, this special handling is only needed in the case of\n> +\t * XLOG_DBASE_CREATE_FILE_COPY, because that can create a whole bunch\n> +\t * of relation files in a directory without logging anything\n> +\t * specific to each one. If we didn't mark the whole DB OID/TS OID\n> +\t * combination in some way, then a tablespace that was dropped after\ns/tablespace/database/ I suspect.\n> +\t * the reference backup and recreated using the FILE_COPY method prior\n> +\t * to the incremental backup would look just like one that was never\n> +\t * touched at all, which would be catastrophic.\n\n\n",
"msg_date": "Fri, 23 Feb 2024 20:35:13 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: incremental backup mishandles XLOG_DBASE_CREATE_FILE_COPY"
},
{
"msg_contents": "On Sat, Feb 24, 2024 at 10:05 AM Noah Misch <[email protected]> wrote:\n> Regarding records the summarizer potentially can't ignore that don't deal in\n> relfilenodes, these come to mind:\n>\n> XLOG_DBASE_DROP - covered in this thread's patch\n> XLOG_RELMAP_UPDATE\n> XLOG_TBLSPC_CREATE\n> XLOG_TBLSPC_DROP\n> XLOG_XACT_PREPARE\n\nAt present, only relation data files are ever sent incrementally; I\ndon't think any of these touch those.\n\n> Also, any record that writes XIDs needs to update nextXid; likewise for other\n> ID spaces. See the comment at \"XLOG stuff\" in heap_lock_tuple(). Perhaps you\n> don't summarize past a checkpoint, making that irrelevant.\n\nI'm not quite following this. It's true that we summarize from one\nredo pointer to the next; but also, our summary is only trying to\nascertain which relation data blocks have been modified. Therefore, I\ndon't understand the relevance of nextXid here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 24 Feb 2024 16:16:24 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: incremental backup mishandles XLOG_DBASE_CREATE_FILE_COPY"
},
{
"msg_contents": "On Sat, Feb 24, 2024 at 04:16:24PM +0530, Robert Haas wrote:\n> On Sat, Feb 24, 2024 at 10:05 AM Noah Misch <[email protected]> wrote:\n> > On Fri, Feb 23, 2024 at 08:47:52PM +0530, Robert Haas wrote:\n> > > I thought about whether there were any other WAL records that have\n> > > similar problems to XLOG_DBASE_CREATE_FILE_COPY and didn't come up\n> > > with anything. If anyone knows of any similar cases, please let me\n> > > know.\n> >\n> > Regarding records the summarizer potentially can't ignore that don't deal in\n> > relfilenodes, these come to mind:\n> >\n> > XLOG_DBASE_DROP - covered in this thread's patch\n> > XLOG_RELMAP_UPDATE\n> > XLOG_TBLSPC_CREATE\n> > XLOG_TBLSPC_DROP\n> > XLOG_XACT_PREPARE\n> \n> At present, only relation data files are ever sent incrementally; I\n> don't think any of these touch those.\n\nAgreed, those don't touch relation data files. I think you've got all\nrelation data file mutations. XLOG_DBASE_CREATE_FILE_COPY and XLOG_DBASE_DROP\nare the only record types that touch a relation data file without mentioning\nit in XLogRecordBlockHeader, XACT_XINFO_HAS_RELFILELOCATORS, or an RM_SMGR_ID\nrlocator field.\n\n> > Also, any record that writes XIDs needs to update nextXid; likewise for other\n> > ID spaces. See the comment at \"XLOG stuff\" in heap_lock_tuple(). Perhaps you\n> > don't summarize past a checkpoint, making that irrelevant.\n> \n> I'm not quite following this. It's true that we summarize from one\n> redo pointer to the next; but also, our summary is only trying to\n> ascertain which relation data blocks have been modified. Therefore, I\n> don't understand the relevance of nextXid here.\n\nNo relevance, given incremental backup is incremental with respect to relation\ndata blocks only.\n\n\n",
"msg_date": "Sat, 24 Feb 2024 09:10:12 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: incremental backup mishandles XLOG_DBASE_CREATE_FILE_COPY"
},
{
"msg_contents": "On Sat, Feb 24, 2024 at 12:10 PM Noah Misch <[email protected]> wrote:\n> Agreed, those don't touch relation data files. I think you've got all\n> relation data file mutations. XLOG_DBASE_CREATE_FILE_COPY and XLOG_DBASE_DROP\n> are the only record types that touch a relation data file without mentioning\n> it in XLogRecordBlockHeader, XACT_XINFO_HAS_RELFILELOCATORS, or an RM_SMGR_ID\n> rlocator field.\n\nThanks for the review. I have committed this.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 13:37:39 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: incremental backup mishandles XLOG_DBASE_CREATE_FILE_COPY"
}
] |
[
{
"msg_contents": "I think there are some fields from the RangeTblEntry struct missing in \nthe jumble (function _jumbleRangeTblEntry()). Probably, some of these \nwere really just forgotten, in other cases this might be an intentional \ndecision, but then it might be good to document it. This has come up in \nthread [0] and there is a patch [1], but I figured I'd start a new \nthread here to get the attention of those who know more about \npg_stat_statements.\n\nI think the following fields are missing. (See also attached patch.)\n\n- alias\n\nCurrently, two queries like\n\nSELECT * FROM t1 AS foo\nSELECT * FROM t1 AS bar\n\nare counted together by pg_stat_statements -- that might be ok, but they \nboth get listed under whichever one is run first, so here if you are \nlooking for the \"AS bar\" query, you won't find it.\n\n- join_using_alias\n\nSimilar situation, currently\n\nSELECT * FROM t1 JOIN t2 USING (a, b)\nSELECT * FROM t1 JOIN t2 USING (a, b) AS x\n\nare counted together.\n\n- funcordinality\n\nThis was probably just forgotten. It should be included because the \nWITH ORDINALITY clause changes the query result.\n\n- lateral\n\nAlso probably forgotten. A query specifying LATERAL is clearly \ndifferent from one without it.\n\nThoughts? Anything else missing perhaps?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/[email protected]\n[1]: \nhttps://www.postgresql.org/message-id/attachment/154249/v2-0002-Remove-custom-_jumbleRangeTblEntry.patch",
"msg_date": "Fri, 23 Feb 2024 16:26:53 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "RangeTblEntry jumble omissions"
},
{
"msg_contents": "On 2024-Feb-23, Peter Eisentraut wrote:\n\n> - alias\n> \n> Currently, two queries like\n> \n> SELECT * FROM t1 AS foo\n> SELECT * FROM t1 AS bar\n> \n> are counted together by pg_stat_statements -- that might be ok, but they\n> both get listed under whichever one is run first, so here if you are looking\n> for the \"AS bar\" query, you won't find it.\n\nAnother, similar but not quite: if you do\n\nSET search_path TO foo;\nSELECT * FROM t1;\nSET search_path TO bar;\nSELECT * FROM t1;\n\nand you have both foo.t1 and bar.t1, you'll get two identical-looking\nqueries in pg_stat_statements with different jumble IDs, with no way to\nknow which is which. Not sure if the jumbling of the RTE (which\nincludes the OID of the table in question) itself is to blame, or\nwhether we want to store the relevant schemas with the entry somehow, or\nwhat. Obviously, failing to differentiate them would not be an\nimprovement.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 23 Feb 2024 23:00:41 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RangeTblEntry jumble omissions"
},
{
"msg_contents": "Hi,\n\nOn Fri, Feb 23, 2024 at 04:26:53PM +0100, Peter Eisentraut wrote:\n>\n> - alias\n>\n> Currently, two queries like\n>\n> SELECT * FROM t1 AS foo\n> SELECT * FROM t1 AS bar\n>\n> are counted together by pg_stat_statements -- that might be ok, but they\n> both get listed under whichever one is run first, so here if you are looking\n> for the \"AS bar\" query, you won't find it.\n\nI think this one is intentional. This alias won't change the query behavior or\nthe field names so it's good to avoid extraneous entries. It's true that you\nthen won't find something matching \"AS bar\", but it's not something you can\nrely on anyway.\n\nIf you first execute \"select * from t1 as foo\" and then \"SELECT * FROM t1 AS\nfoo\" then you won't find anything matching \"AS foo\" either. There isn't even\nany guarantee that the stored query text will be jumbled.\n\n> - join_using_alias\n>\n> Similar situation, currently\n>\n> SELECT * FROM t1 JOIN t2 USING (a, b)\n> SELECT * FROM t1 JOIN t2 USING (a, b) AS x\n>\n> are counted together.\n\nIMHO same as above.\n\n> - funcordinality\n>\n> This was probably just forgotten. It should be included because the WITH\n> ORDINALITY clause changes the query result.\n\nAgreed.\n\n> - lateral\n>\n> Also probably forgotten. A query specifying LATERAL is clearly different\n> from one without it.\n\nAgreed.\n\n\n",
"msg_date": "Sat, 24 Feb 2024 07:29:54 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RangeTblEntry jumble omissions"
},
{
"msg_contents": "Julien Rouhaud <[email protected]> writes:\n> On Fri, Feb 23, 2024 at 04:26:53PM +0100, Peter Eisentraut wrote:\n>> - funcordinality\n>> This was probably just forgotten. It should be included because the WITH\n>> ORDINALITY clause changes the query result.\n\n> Agreed.\n\nSeems OK.\n\n>> - lateral\n>> Also probably forgotten. A query specifying LATERAL is clearly different\n>> from one without it.\n\n> Agreed.\n\nNah ... I think that LATERAL should be ignored on essentially the\nsame grounds on which you argue for ignoring aliases. If it\naffects the query's semantics, it's because there is a lateral\nreference in the subject subquery or function, and that reference\nalready contributes to the query hash. If there is no such\nreference, then LATERAL is a noise word. It doesn't help any that\nLATERAL is actually optional for functions, making it certainly a\nnoise word there.\n\nIIRC, the parser+planner cooperatively fix things so that the final\nstate of an RTE's lateral field reflects reality. But if we are\nhashing before that's happened, it's not worth all that much.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Feb 2024 18:52:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RangeTblEntry jumble omissions"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 11:00:41PM +0100, Alvaro Herrera wrote:\n>\n> Another, similar but not quite: if you do\n>\n> SET search_path TO foo;\n> SELECT * FROM t1;\n> SET search_path TO bar;\n> SELECT * FROM t1;\n>\n> and you have both foo.t1 and bar.t1, you'll get two identical-looking\n> queries in pg_stat_statements with different jumble IDs, with no way to\n> know which is which. Not sure if the jumbling of the RTE (which\n> includes the OID of the table in question) itself is to blame, or\n> whether we want to store the relevant schemas with the entry somehow, or\n> what. Obviously, failing to differentiate them would not be an\n> improvement.\n\nYeah that's also a very old known problem. This has been raised multiple times\n(on top of my head [1], [2], [3]). At this point I'm not exactly holding my\nbreath.\n\n[1]: https://www.postgresql.org/message-id/flat/8f54c609-17c6-945b-fe13-8b07c0866420%40dalibo.com\n[2]: https://www.postgresql.org/message-id/flat/9baf5c06-d6ab-c688-010c-843348e3d98c%40gmail.com\n[3]: https://www.postgresql.org/message-id/flat/3aa097d7-7c47-187b-5913-db8366cd4491%40gmail.com\n\n\n",
"msg_date": "Sun, 25 Feb 2024 20:48:59 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RangeTblEntry jumble omissions"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 06:52:54PM -0500, Tom Lane wrote:\n> Julien Rouhaud <[email protected]> writes:\n>> On Fri, Feb 23, 2024 at 04:26:53PM +0100, Peter Eisentraut wrote:\n>>> - funcordinality\n>>> This was probably just forgotten. It should be included because the WITH\n>>> ORDINALITY clause changes the query result.\n> \n>> Agreed.\n> \n> Seems OK.\n\n+1.\n\n>>> - lateral\n>>> Also probably forgotten. A query specifying LATERAL is clearly different\n>>> from one without it.\n> \n>> Agreed.\n> \n> Nah ... I think that LATERAL should be ignored on essentially the\n> same grounds on which you argue for ignoring aliases. If it\n> affects the query's semantics, it's because there is a lateral\n> reference in the subject subquery or function, and that reference\n> already contributes to the query hash. If there is no such\n> reference, then LATERAL is a noise word. It doesn't help any that\n> LATERAL is actually optional for functions, making it certainly a\n> noise word there.\n\nSounds like a fair argument to me.\n\nBtw, I think that you should add a few queries to the tests of\npg_stat_statements to track the change of behavior when you have\naliases, as an effect of the fields added in the jumbling.\n--\nMichael",
"msg_date": "Mon, 26 Feb 2024 10:08:40 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RangeTblEntry jumble omissions"
},
{
"msg_contents": "On 26.02.24 02:08, Michael Paquier wrote:\n> On Fri, Feb 23, 2024 at 06:52:54PM -0500, Tom Lane wrote:\n>> Julien Rouhaud <[email protected]> writes:\n>>> On Fri, Feb 23, 2024 at 04:26:53PM +0100, Peter Eisentraut wrote:\n>>>> - funcordinality\n>>>> This was probably just forgotten. It should be included because the WITH\n>>>> ORDINALITY clause changes the query result.\n>>\n>>> Agreed.\n>>\n>> Seems OK.\n> \n> +1.\n\nOk, I have added funcordinality for the RTE_FUNCTION case, and left the \nothers alone.\n\n\n\n",
"msg_date": "Thu, 29 Feb 2024 14:14:20 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RangeTblEntry jumble omissions"
}
] |
[
{
"msg_contents": "Hi!\n\nThis one comes from C++'s `std::string_view`: being just a `const char*` \nplus the `size`, it's a very convenient type to use in interfaces which \ndon't need an ownership of the data passed in.\n\nUnfortunately, `PQfnumber` expects a null-terminated string, which \n`std::string_view` can not guarantee, and this limitations affects the \ninterfaces built on top of libpq.\n\nWould you be willing to review a patch that adds an `PQfnumber` overload \nthat takes a `field_name` size as well?\n\n\n",
"msg_date": "Sun, 25 Feb 2024 14:33:01 +0300",
"msg_from": "Ivan Trofimov <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpq: PQfnumber overload for not null-terminated strings"
},
{
"msg_contents": "Ivan Trofimov <[email protected]> writes:\n> Would you be willing to review a patch that adds an `PQfnumber` overload \n> that takes a `field_name` size as well?\n\nI'm a little skeptical of this idea. If you need counted strings\nfor PQfnumber, wouldn't you need them for every single other\nstring-based API in libpq as well? That's not a lift that I think\nwe'd want to undertake.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Feb 2024 11:46:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQfnumber overload for not null-terminated strings"
},
{
"msg_contents": "Thanks for the quick reply.\n\n> If you need counted strings\n> for PQfnumber, wouldn't you need them for every single other\n> string-based API in libpq as well?\n\nNo, not really.\n\nThing is, out of all the functions listed in \"34.3.2. Retrieving Query \nResult Information\" and \"34.3.3. Retrieving Other Result Information\" \nPQfnumber is the only (well, technically also PQprint) one that takes a \nstring as an input.\n\nAs far as I know PQfnumber is the only portable way to implement \"give \nme the the value at this row with this 'column_name'\", which is an \nessential feature for any kind of client-side parsing.\nRight now as a library writer in a higher-level language I'm forced to \neither\n* Sacrifice performance to ensure 'column_name' is null-terminated \n(that's what some bindings in Rust do)\n* Sacrifice interface quality by requiring a null-terminated string, \nwhich is not necessary idiomatic (that's what we do)\n* Sacrifice usability by requiring a user to guarantee that the \n'string_view' provided is null-terminated (that's what libpqxx does, for \nexample)\n\nI don't think it's _that_ big of a deal, but could it be QoL improvement \nnice enough to be worth of a tiny addition into libpq interface?\n\n\n",
"msg_date": "Mon, 26 Feb 2024 22:12:30 +0300",
"msg_from": "Ivan Trofimov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQfnumber overload for not null-terminated strings"
},
{
"msg_contents": "Ivan Trofimov <[email protected]> writes:\n>> If you need counted strings\n>> for PQfnumber, wouldn't you need them for every single other\n>> string-based API in libpq as well?\n\n> No, not really.\n\n> Thing is, out of all the functions listed in \"34.3.2. Retrieving Query \n> Result Information\" and \"34.3.3. Retrieving Other Result Information\" \n> PQfnumber is the only (well, technically also PQprint) one that takes a \n> string as an input.\n\nI think that's a mighty myopic definition of which APIs would need\ncounted-string support if we were to make that a thing in libpq.\nJust for starters, why are you only concerned with processing a\nquery result, and not with the functions needed to send the query?\n\n> Right now as a library writer in a higher-level language I'm forced to \n> either\n> * Sacrifice performance to ensure 'column_name' is null-terminated \n> (that's what some bindings in Rust do)\n\nI'd go with that. You would have a very hard time convincing me that\nthe per-query overhead that building a null-terminated string adds\nis even measurable compared to the time needed to send, process, and\nreceive the query.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Feb 2024 15:14:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQfnumber overload for not null-terminated strings"
},
{
"msg_contents": ">> Right now as a library writer in a higher-level language I'm forced to\n>> either\n>> * Sacrifice performance to ensure 'column_name' is null-terminated\n>> (that's what some bindings in Rust do)\n> \n> I'd go with that. You would have a very hard time convincing me that\n> the per-query overhead\n\nI see now that I failed to express myself clearly: it's not a per-query \noverhead, but rather a per-result-field one.\n\n\nGiven a code like this (in pseudo-code)\n\nresult = ExecuteQuery(some_query)\nfor (row in result):\n a = row[\"some_column_name\"]\n b = row[\"some_other_column_name\"]\n ...\n\na field-name string should be null-terminated for every field accessed.\n\n\nThere absolutely are ways to write the same in a more performant way and \navoid repeatedly calling PQfnumber altogether, but that I as a library \nwriter can't control.\n\nIn my quickly-hacked-together test just null-terminating a user-provided \nstring takes ~14% of total CPU time (and PQfnumber itself takes ~30%, \nbut oh well), please see the code and flamegraph attached.",
"msg_date": "Tue, 27 Feb 2024 02:31:02 +0300",
"msg_from": "Ivan Trofimov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQfnumber overload for not null-terminated strings"
},
{
"msg_contents": "On Tue, 27 Feb 2024 at 00:31, Ivan Trofimov <[email protected]> wrote:\n> I see now that I failed to express myself clearly: it's not a per-query\n> overhead, but rather a per-result-field one.\n\nI'm fairly sympathetic to decreasing the overhead of any per-row\noperation. And looking at the code, it doesn't surprise me that\nPQfnumber shows up so big in your profile. I think it would probably\nmake sense to introduce a PQfnumber variant that does not do the\ndowncasing/quote handling (called e.g. PQfnumberRaw).\n\nHowever, I do think you could convert this per-row overhead in your\ncase to per-query overhead by caching the result of PQfnumber for each\nunique C++ string_view. Afaict that should solve your performance\nproblem.\n\n\n",
"msg_date": "Tue, 27 Feb 2024 13:44:14 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQfnumber overload for not null-terminated strings"
},
{
"msg_contents": "> However, I do think you could convert this per-row overhead in your\n> case to per-query overhead by caching the result of PQfnumber for each\n> unique C++ string_view. Afaict that should solve your performance\n> problem.\n\nAbsolutely, you're right.\n\nThe problem here is not that it's impossible to write it in a performant \nway, but rather that it's impossible to do so in a performant _and_ \nclean way given the convenient abstractions wrapper-libraries provide: \nhere's a `Result`, which consists of `Row`s, which in turn consist of \n`Field`s.\nThe most natural and straightforward way to iterate over a `Result` \nwould be in the lines of that loop, and people do write code like that \nbecause it's what they expect to just work given the abstractions (and \nit does, it's just slow).\nCaching the result of PQfnumber could be done, but would result in \nsomewhat of a mess on a call-site.\n\n\nI like your idea of 'PQfnumberRaw': initially i was only concerned about \na null-terminated string requirement affecting my interfaces (because \nusers complained about that to me, \nhttps://github.com/userver-framework/userver/issues/494), but I think \nPQfnumberRaw could solve both problems (PQfnumber being a bottleneck \nwhen called repeatedly and a null-terminated string requirement) \nsimultaneously.\n\n\n",
"msg_date": "Tue, 27 Feb 2024 17:49:02 +0300",
"msg_from": "Ivan Trofimov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQfnumber overload for not null-terminated strings"
},
{
"msg_contents": "On Tue, 27 Feb 2024 at 15:49, Ivan Trofimov <[email protected]> wrote:\n> Caching the result of PQfnumber could be done, but would result in\n> somewhat of a mess on a call-site.\n\nTo be clear I meant your wrapper around libpq could internally cache\nthis, then the call sites of users of your wrapper would not need to\nbe changed. i.e. your Result could contain a cache of\ncolumnname->columnumber mapping that you know because of previous\ncalls to PQfnumber on the same Result.\n\n> I like your idea of 'PQfnumberRaw': initially i was only concerned about\n> a null-terminated string requirement affecting my interfaces (because\n> users complained about that to me,\n> https://github.com/userver-framework/userver/issues/494), but I think\n> PQfnumberRaw could solve both problems (PQfnumber being a bottleneck\n> when called repeatedly and a null-terminated string requirement)\n> simultaneously.\n\nFeel free to send a patch for this.\n\n\n",
"msg_date": "Tue, 27 Feb 2024 19:57:39 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQfnumber overload for not null-terminated strings"
}
] |
[
{
"msg_contents": "Hi,\n\nI met Memoize node failed When I used sqlancer test postgres.\ndatabase0=# explain select t0.c0 from t0 join t5 on t0.c0 = (t5.c0 - t5.c0);\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Nested Loop (cost=0.17..21.20 rows=4 width=32)\n -> Seq Scan on t5 (cost=0.00..1.04 rows=4 width=14)\n -> Memoize (cost=0.17..6.18 rows=1 width=32)\n Cache Key: (t5.c0 - t5.c0)\n Cache Mode: logical\n -> Index Only Scan using t0_c0_key on t0 (cost=0.15..6.17 rows=1\nwidth=32)\n Index Cond: (c0 = (t5.c0 - t5.c0))\n(7 rows)\n\ndatabase0=# select t0.c0 from t0 join t5 on t0.c0 = (t5.c0 - t5.c0);\nERROR: type with OID 2139062143 does not exist\n\nHow to repeat:\nThe attached database0.log (created by sqlancer) included statements to\nrepeat this issue.\nFirstly, create database test;\nthen;\npsql postgres\n\\i /xxx/database0.log\n\nI analyzed aboved issue this weekend. And I found that\nAfter called ResetExprContext() in MemoizeHash_hash(), the data in\nmstate->probeslot was corrputed.\n\nin prepare_probe_slot: the data as below:\n(gdb) p *(DatumGetRangeTypeP(pslot->tts_values[0]))\n$1 = {vl_len_ = 36, rangetypid = 3904}\nafter called ResetExprContext() in MemoizeHash_hash:\n(gdb) p *(DatumGetRangeTypeP(pslot->tts_values[0]))\n$3 = {vl_len_ = 264, rangetypid = 2139062143}\n\nI think in prepare_probe_slot(), should called datumCopy as the attached\npatch does.\n\nAny thoughts? Thanks.\n--\nTender Wang\nOpenPie: https://en.openpie.com/",
"msg_date": "Sun, 25 Feb 2024 21:32:43 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "On 25/2/2024 20:32, Tender Wang wrote:\n> I think in prepare_probe_slot(), should called datumCopy as the attached \n> patch does.\n> \n> Any thoughts? Thanks.\nThanks for the report.\nI think it is better to invent a Runtime Memory Context; likewise, it is \nalready designed in IndexScan and derivatives. Here, you just allocate \nthe value in some upper memory context.\nAlso, I'm curious why such a trivial error hasn't been found for a long time\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 26 Feb 2024 09:52:59 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "On 26/2/2024 09:52, Andrei Lepikhov wrote:\n> On 25/2/2024 20:32, Tender Wang wrote:\n>> I think in prepare_probe_slot(), should called datumCopy as the \n>> attached patch does.\n>>\n>> Any thoughts? Thanks.\n> Thanks for the report.\n> I think it is better to invent a Runtime Memory Context; likewise, it is \n> already designed in IndexScan and derivatives. Here, you just allocate \n> the value in some upper memory context.\n> Also, I'm curious why such a trivial error hasn't been found for a long \n> time\nHmmm. I see the problem (test.sql in attachment for reproduction and \nresults). We only detect it by the number of Hits:\n Cache Key: t1.x, (t1.t)::numeric\n Cache Mode: logical\n Hits: 0 Misses: 30 Evictions: 0 Overflows: 0 Memory Usage: 8kB\n\nWe see no hits in logical mode and 100 hits in binary mode. We see 15 \nhits for both logical and binary mode if parameters are integer numbers \n- no problems with resetting expression context.\n\nYour patch resolves the issue for logical mode - I see 15 hits for \ninteger and complex keys. But I still see 100 hits in binary mode. Maybe \nwe still have a problem?\n\nWhat's more, why the Memoize node doesn't see the problem at all?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional",
"msg_date": "Mon, 26 Feb 2024 12:38:09 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "On 26/2/2024 12:44, Tender Wang wrote:\n> \n> \n> Andrei Lepikhov <[email protected] \n> <mailto:[email protected]>> 于2024年2月26日周一 10:57写道:\n> \n> On 25/2/2024 20:32, Tender Wang wrote:\n> > I think in prepare_probe_slot(), should called datumCopy as the\n> attached\n> > patch does.\n> >\n> > Any thoughts? Thanks.\n> Thanks for the report.\n> I think it is better to invent a Runtime Memory Context; likewise,\n> it is\n> already designed in IndexScan and derivatives. Here, you just allocate\n> the value in some upper memory context.\n> Also, I'm curious why such a trivial error hasn't been found for a\n> long time\n> \n> \n> Make sense. I found MemoizeState already has a MemoryContext, so I used it.\n> I update the patch.\nThis approach is better for me. In the next version of this patch, I \nincluded a test case. I am still unsure about the context chosen and the \nstability of the test case. Richard, you recently fixed some Memoize \nissues, could you look at this problem and patch?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional",
"msg_date": "Mon, 26 Feb 2024 14:54:21 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "Andrei Lepikhov <[email protected]> 于2024年2月26日周一 10:57写道:\n\n> On 25/2/2024 20:32, Tender Wang wrote:\n> > I think in prepare_probe_slot(), should called datumCopy as the attached\n> > patch does.\n> >\n> > Any thoughts? Thanks.\n> Thanks for the report.\n> I think it is better to invent a Runtime Memory Context; likewise, it is\n> already designed in IndexScan and derivatives. Here, you just allocate\n> the value in some upper memory context.\n>\n\n\n> Also, I'm curious why such a trivial error hasn't been found for a long\n> time\n>\n\n I analyze this issue again. I found that the forms of qual in\nMemoize.sql(regress) are all like this:\n\n table1.c0 OP table2.c0\nIf table2.c0 is the param value, the probeslot->tts_values[i] just store\nthe pointer. The memorycontext of this pointer is\nExecutorContext not ExprContext, Reset ExprContext doesn't change the data\nof probeslot->tts_values[i].\nSo such a trivial error hasn't been found before.\n\n-- \n> regards,\n> Andrei Lepikhov\n> Postgres Professional\n>\n>\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nAndrei Lepikhov <[email protected]> 于2024年2月26日周一 10:57写道:On 25/2/2024 20:32, Tender Wang wrote:\n> I think in prepare_probe_slot(), should called datumCopy as the attached \n> patch does.\n> \n> Any thoughts? Thanks.\nThanks for the report.\nI think it is better to invent a Runtime Memory Context; likewise, it is \nalready designed in IndexScan and derivatives. Here, you just allocate \nthe value in some upper memory context. \nAlso, I'm curious why such a trivial error hasn't been found for a long time I analyze this issue again. I found that the forms of qual in Memoize.sql(regress) are all like this: table1.c0 OP table2.c0If table2.c0 is the param value, the probeslot->tts_values[i] just store the pointer. The memorycontext of this pointer isExecutorContext not ExprContext, Reset ExprContext doesn't change the data of probeslot->tts_values[i].So such a trivial error hasn't been found before.\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n-- Tender WangOpenPie: https://en.openpie.com/",
"msg_date": "Mon, 26 Feb 2024 16:14:39 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "On 26/2/2024 15:14, Tender Wang wrote:\n> \n> \n> Andrei Lepikhov <[email protected] \n> <mailto:[email protected]>> 于2024年2月26日周一 10:57写道:\n> \n> On 25/2/2024 20:32, Tender Wang wrote:\n> > I think in prepare_probe_slot(), should called datumCopy as the\n> attached\n> > patch does.\n> >\n> > Any thoughts? Thanks.\n> Thanks for the report.\n> I think it is better to invent a Runtime Memory Context; likewise,\n> it is\n> already designed in IndexScan and derivatives. Here, you just allocate\n> the value in some upper memory context.\n> \n> Also, I'm curious why such a trivial error hasn't been found for a\n> long time\n> \n> I analyze this issue again. I found that the forms of qual in \n> Memoize.sql(regress) are all like this:\n> \n> table1.c0 OP table2.c0\n> If table2.c0 is the param value, the probeslot->tts_values[i] just store \n> the pointer. The memorycontext of this pointer is\n> ExecutorContext not ExprContext, Reset ExprContext doesn't change the \n> data of probeslot->tts_values[i].\n> So such a trivial error hasn't been found before.\nI'm not happy with using table context for the probeslot values. As I \nsee, in the case of a new entry, the cache_lookup copies data from this \nslot. If a match is detected, the allocated probeslot memory piece will \nnot be freed up to hash table reset. Taking this into account, should we \ninvent some new runtime context?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 26 Feb 2024 16:30:50 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 3:54 PM Andrei Lepikhov <[email protected]>\nwrote:\n\n> On 26/2/2024 12:44, Tender Wang wrote:\n> > Make sense. I found MemoizeState already has a MemoryContext, so I used\n> it.\n> > I update the patch.\n> This approach is better for me. In the next version of this patch, I\n> included a test case. I am still unsure about the context chosen and the\n> stability of the test case. Richard, you recently fixed some Memoize\n> issues, could you look at this problem and patch?\n\n\nI looked at this issue a bit. It seems to me what happens is that at\nfirst the memory areas referenced by probeslot->tts_values[] are\nallocated in the per tuple context (see prepare_probe_slot). And then\nin MemoizeHash_hash, after we've calculated the hashkey, we will reset\nthe per tuple context. However, later in MemoizeHash_equal, we still\nneed to reference the values in probeslot->tts_values[], which have been\ncleared.\n\nActually the context would always be reset in MemoizeHash_equal, for\nboth binary and logical mode. So I kind of wonder if it's necessary to\nreset the context in MemoizeHash_hash.\n\nThe ResetExprContext call in MemoizeHash_hash was introduced in\n0b053e78b to fix a memory leak issue.\n\ncommit 0b053e78b5990cd01e7169ee5bd2bb8e4045deea\nAuthor: David Rowley <[email protected]>\nDate: Thu Oct 5 20:30:47 2023 +1300\n\n Fix memory leak in Memoize code\n\nIt seems to me that switching to the per-tuple memory context is\nsufficient to fix the memory leak. Calling ResetExprContext in\nMemoizeHash_hash each time seems too aggressive.\n\nI tried to remove the ResetExprContext call in MemoizeHash_hash and did\nnot see the memory leak with the repro query in [1].\n\ndiff --git a/src/backend/executor/nodeMemoize.c\nb/src/backend/executor/nodeMemoize.c\nindex 18870f10e1..f2f025520d 100644\n--- a/src/backend/executor/nodeMemoize.c\n+++ b/src/backend/executor/nodeMemoize.c\n@@ -207,7 +207,6 @@ MemoizeHash_hash(struct memoize_hash *tb, const\nMemoizeKey *key)\n }\n }\n\n- ResetExprContext(econtext);\n MemoryContextSwitchTo(oldcontext);\n return murmurhash32(hashkey);\n }\n\nLooping in David to have a look.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/83281eed63c74e4f940317186372abfd%40cft.ru\n\nThanks\nRichard\n\nOn Mon, Feb 26, 2024 at 3:54 PM Andrei Lepikhov <[email protected]> wrote:On 26/2/2024 12:44, Tender Wang wrote:\n> Make sense. I found MemoizeState already has a MemoryContext, so I used it.\n> I update the patch.\nThis approach is better for me. In the next version of this patch, I \nincluded a test case. I am still unsure about the context chosen and the \nstability of the test case. Richard, you recently fixed some Memoize \nissues, could you look at this problem and patch?I looked at this issue a bit. It seems to me what happens is that atfirst the memory areas referenced by probeslot->tts_values[] areallocated in the per tuple context (see prepare_probe_slot). And thenin MemoizeHash_hash, after we've calculated the hashkey, we will resetthe per tuple context. However, later in MemoizeHash_equal, we stillneed to reference the values in probeslot->tts_values[], which have beencleared.Actually the context would always be reset in MemoizeHash_equal, forboth binary and logical mode. 
So I kind of wonder if it's necessary toreset the context in MemoizeHash_hash.The ResetExprContext call in MemoizeHash_hash was introduced in0b053e78b to fix a memory leak issue.commit 0b053e78b5990cd01e7169ee5bd2bb8e4045deeaAuthor: David Rowley <[email protected]>Date: Thu Oct 5 20:30:47 2023 +1300 Fix memory leak in Memoize codeIt seems to me that switching to the per-tuple memory context issufficient to fix the memory leak. Calling ResetExprContext inMemoizeHash_hash each time seems too aggressive.I tried to remove the ResetExprContext call in MemoizeHash_hash and didnot see the memory leak with the repro query in [1].diff --git a/src/backend/executor/nodeMemoize.c b/src/backend/executor/nodeMemoize.cindex 18870f10e1..f2f025520d 100644--- a/src/backend/executor/nodeMemoize.c+++ b/src/backend/executor/nodeMemoize.c@@ -207,7 +207,6 @@ MemoizeHash_hash(struct memoize_hash *tb, const MemoizeKey *key) } }- ResetExprContext(econtext); MemoryContextSwitchTo(oldcontext); return murmurhash32(hashkey); }Looping in David to have a look.[1] https://www.postgresql.org/message-id/flat/83281eed63c74e4f940317186372abfd%40cft.ruThanksRichard",
"msg_date": "Mon, 26 Feb 2024 19:34:33 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "On 26/2/2024 18:34, Richard Guo wrote:\n> \n> On Mon, Feb 26, 2024 at 3:54 PM Andrei Lepikhov \n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> On 26/2/2024 12:44, Tender Wang wrote:\n> > Make sense. I found MemoizeState already has a MemoryContext, so\n> I used it.\n> > I update the patch.\n> This approach is better for me. In the next version of this patch, I\n> included a test case. I am still unsure about the context chosen and\n> the\n> stability of the test case. Richard, you recently fixed some Memoize\n> issues, could you look at this problem and patch?\n> \n> \n> I looked at this issue a bit. It seems to me what happens is that at\n> first the memory areas referenced by probeslot->tts_values[] are\n> allocated in the per tuple context (see prepare_probe_slot). And then\n> in MemoizeHash_hash, after we've calculated the hashkey, we will reset\n> the per tuple context. However, later in MemoizeHash_equal, we still\n> need to reference the values in probeslot->tts_values[], which have been\n> cleared.\nAgree\n> \n> Actually the context would always be reset in MemoizeHash_equal, for\n> both binary and logical mode. So I kind of wonder if it's necessary to\n> reset the context in MemoizeHash_hash.\nI can only provide one thought against this solution: what if we have a \nlot of unique hash values, maybe all of them? In that case, we still \nhave a kind of 'leak' David fixed by the commit 0b053e78b5.\nAlso, I have a segfault report of one client. As I see, it was caused by \ntoo long text column in the table slot. As I see, key value, stored in \nthe Memoize hash table, was corrupted, and the most plain reason is this \nbug. Should we add a test on this bug, and what do you think about the \none proposed in v3?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 26 Feb 2024 19:29:12 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "The attached patch is a new version based on v3(not including Andrei's the\ntest case). There is no need to call datumCopy when\nisnull is true.\n\nI have not added a new runtime memoryContext so far. Continue to use\nmstate->tableContext, I'm not sure the memory used of probeslot will affect\nmstate->mem_limit.\nMaybe adding a new memoryContext is better. I think I should spend a little\ntime to learn nodeMemoize.c more deeply.\n\nAndrei Lepikhov <[email protected]> 于2024年2月26日周一 20:29写道:\n\n> On 26/2/2024 18:34, Richard Guo wrote:\n> >\n> > On Mon, Feb 26, 2024 at 3:54 PM Andrei Lepikhov\n> > <[email protected] <mailto:[email protected]>> wrote:\n> >\n> > On 26/2/2024 12:44, Tender Wang wrote:\n> > > Make sense. I found MemoizeState already has a MemoryContext, so\n> > I used it.\n> > > I update the patch.\n> > This approach is better for me. In the next version of this patch, I\n> > included a test case. I am still unsure about the context chosen and\n> > the\n> > stability of the test case. Richard, you recently fixed some Memoize\n> > issues, could you look at this problem and patch?\n> >\n> >\n> > I looked at this issue a bit. It seems to me what happens is that at\n> > first the memory areas referenced by probeslot->tts_values[] are\n> > allocated in the per tuple context (see prepare_probe_slot). And then\n> > in MemoizeHash_hash, after we've calculated the hashkey, we will reset\n> > the per tuple context. However, later in MemoizeHash_equal, we still\n> > need to reference the values in probeslot->tts_values[], which have been\n> > cleared.\n> Agree\n> >\n> > Actually the context would always be reset in MemoizeHash_equal, for\n> > both binary and logical mode. So I kind of wonder if it's necessary to\n> > reset the context in MemoizeHash_hash.\n> I can only provide one thought against this solution: what if we have a\n> lot of unique hash values, maybe all of them? In that case, we still\n> have a kind of 'leak' David fixed by the commit 0b053e78b5.\n> Also, I have a segfault report of one client. As I see, it was caused by\n> too long text column in the table slot. As I see, key value, stored in\n> the Memoize hash table, was corrupted, and the most plain reason is this\n> bug. Should we add a test on this bug, and what do you think about the\n> one proposed in v3?\n>\n> --\n> regards,\n> Andrei Lepikhov\n> Postgres Professional\n>\n>\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/",
"msg_date": "Wed, 28 Feb 2024 14:53:54 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "On 28/2/2024 13:53, Tender Wang wrote:\n> The attached patch is a new version based on v3(not including Andrei's \n> the test case). There is no need to call datumCopy when\n> isnull is true.\n> \n> I have not added a new runtime memoryContext so far. Continue to use \n> mstate->tableContext, I'm not sure the memory used of probeslot will \n> affect mstate->mem_limit.\n> Maybe adding a new memoryContext is better. I think I should spend a \n> little time to learn nodeMemoize.c more deeply.\nI am curious about your reasons to stay with tableContext. In terms of \nmemory allocation, Richard's approach looks better.\nAlso, You don't need to initialize tts_values[i] at all if tts_isnull[i] \nset to true.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 14:25:04 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "I read Memoize code and how other node use ResetExprContext() recently.\n\nThe comments about per_tuple_memory said that :\n\n * ecxt_per_tuple_memory is a short-term context for expression results.\n * As the name suggests, it will typically be reset once per tuple,\n * before we begin to evaluate expressions for that tuple. Each\n * ExprContext normally has its very own per-tuple memory context.\n\nSo ResetExprContext() should called once per tuple, but not in Hash and\nEqual function just as Richard said before.\nIn ExecResult() and ExecProjectSet(), they call ResetExprContext() once\nwhen enter these functions.\nSo I think ExecMemoize() can do the same way.\n\nThe attached patch includes below modifications:\n1.\nWhen I read the code in nodeMemoize.c, I found a typos: outer should be\ninner,\nif I don't misunderstand the intend of Memoize.\n\n2.\nI found that almost executor node call CHECK_FOR_INTERRUPTS(), so I add it.\nIs it right to add it for ExecMemoize()?\n\n3.\nI remove ResetExprContext() from Hash and Equal funciton. And I call it\nwhen enter\nExecMemoize() just like ExecPrejectSet() does.\nExecQualAndReset() is replaed with ExecQual().\n\n4.\nThis patch doesn't include test case. I use the Andrei's test case, but I\ndon't repeat the aboved issue.\nI may need to spend some more time to think about how to repeat this issue\neasily.\n\nSo, what do you think about the one proposed in v5? @Andrei Lepikhov\n<[email protected]> @Richard Guo <[email protected]> @David\nRowley <[email protected]> .\nI don't want to continue to do work based on v3 patch. As Andrei Lepikhov\nsaid, using mstate->tableContext for probeslot\nis not good. v5 looks more simple.",
"msg_date": "Thu, 29 Feb 2024 13:25:20 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "Hi,\n\nWhen I think about how to add a test case for v5 version patch, and I want\nto test if v5 version patch has memory leak.\nThis thread [1] provided a way how to repeat the memory leak, so I used it\nto test v5 patch. I didn't found memory leak on\nv5 patch.\n\nBut I found other interesting issue. When changed whereClause in [1], the\nquery reported below error:\n\n\"ERROR could not find memoization table entry\"\n\nthe query:\nEXPLAIN analyze\nselect sum(q.id_table1)\nfrom (\nSELECT t2.*\nFROM table1 t1\nJOIN table2 t2\nON (t2.id_table1 + t2.id_table1) = t1.id) q;\n\nBut on v5 patch, it didn't report error.\n\nI guess it is the same reason that data in probeslot was reset in Hash\nfunction.\n\nI debug the above query, and get this:\nbefore\n(gdb) p *(DatumGetNumeric(mstate->probeslot->tts_values[0]))\n$1 = {vl_len_ = 48, choice = {n_header = 32770, n_long = {n_sign_dscale =\n32770, n_weight = 60, n_data = 0x564632ebd708}, n_short = {n_header =\n32770, n_data = 0x564632ebd706}}}\nafter\n(gdb) p *(DatumGetNumeric(mstate->probeslot->tts_values[0]))\n$2 = {vl_len_ = 264, choice = {n_header = 32639, n_long = {n_sign_dscale =\n32639, n_weight = 32639, n_data = 0x564632ebd6a8}, n_short = {n_header =\n32639, n_data = 0x564632ebd6a6}}}\n\nSo after call ResetExprContext() in Hash function, the data in probeslot is\ncorrupted. It is not sure what error will happen when executing on\ncorrupted data.\n\nDuring debug, I learned that numeric_add doesn't have type check like\nrangetype, so aboved query will not report \"type with xxx does not exist\".\n\nAnd I realize that the test case added by Andrei Lepikhov in v3 is right.\nSo in v6 patch I add Andrei Lepikhov's test case. Thanks a lot.\n\nNow I think the v6 version patch seems to be complete now.\n\n\n\n[1]\nhttps://www.postgresql.org/message-id/83281eed63c74e4f940317186372abfd%40cft.ru\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/",
"msg_date": "Fri, 1 Mar 2024 15:18:11 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "On 1/3/2024 14:18, Tender Wang wrote:\n> During debug, I learned that numeric_add doesn't have type check like \n> rangetype, so aboved query will not report \"type with xxx does not exist\".\n> \n> And I realize that the test case added by Andrei Lepikhov in v3 is \n> right. So in v6 patch I add Andrei Lepikhov's test case. Thanks a lot.\n> \n> Now I think the v6 version patch seems to be complete now.\nI've passed through the patch, and it looks okay. Although I am afraid \nof the same problems that future changes can cause and how to detect \nthem, it works correctly.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 5 Mar 2024 16:36:29 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "Andrei Lepikhov <[email protected]> 于2024年3月5日周二 17:36写道:\n\n> On 1/3/2024 14:18, Tender Wang wrote:\n> > During debug, I learned that numeric_add doesn't have type check like\n> > rangetype, so aboved query will not report \"type with xxx does not\n> exist\".\n> >\n> > And I realize that the test case added by Andrei Lepikhov in v3 is\n> > right. So in v6 patch I add Andrei Lepikhov's test case. Thanks a lot.\n> >\n> > Now I think the v6 version patch seems to be complete now.\n> I've passed through the patch, and it looks okay. Although I am afraid\n> of the same problems that future changes can cause and how to detect\n> them, it works correctly.\n>\n\nThanks for reviewing it, and I add it to commitfest 2024-07.\n\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nAndrei Lepikhov <[email protected]> 于2024年3月5日周二 17:36写道:On 1/3/2024 14:18, Tender Wang wrote:\n> During debug, I learned that numeric_add doesn't have type check like \n> rangetype, so aboved query will not report \"type with xxx does not exist\".\n> \n> And I realize that the test case added by Andrei Lepikhov in v3 is \n> right. So in v6 patch I add Andrei Lepikhov's test case. Thanks a lot.\n> \n> Now I think the v6 version patch seems to be complete now.\nI've passed through the patch, and it looks okay. Although I am afraid \nof the same problems that future changes can cause and how to detect \nthem, it works correctly.Thanks for reviewing it, and I add it to commitfest 2024-07.-- Tender WangOpenPie: https://en.openpie.com/",
"msg_date": "Wed, 6 Mar 2024 11:10:49 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "On 6/3/2024 10:10, Tender Wang wrote:\n> \n> \n> Andrei Lepikhov <[email protected] \n> <mailto:[email protected]>> 于2024年3月5日周二 17:36写道:\n> \n> On 1/3/2024 14:18, Tender Wang wrote:\n> > During debug, I learned that numeric_add doesn't have type check\n> like\n> > rangetype, so aboved query will not report \"type with xxx does\n> not exist\".\n> >\n> > And I realize that the test case added by Andrei Lepikhov in v3 is\n> > right. So in v6 patch I add Andrei Lepikhov's test case. Thanks\n> a lot.\n> >\n> > Now I think the v6 version patch seems to be complete now.\n> I've passed through the patch, and it looks okay. Although I am afraid\n> of the same problems that future changes can cause and how to detect\n> them, it works correctly.\n> \n> \n> Thanks for reviewing it, and I add it to commitfest 2024-07.\nI think, it is a bug. Should it be fixed (and back-patched) earlier?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Wed, 6 Mar 2024 10:37:11 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "Andrei Lepikhov <[email protected]> 于2024年3月6日周三 11:37写道:\n\n> I think, it is a bug. Should it be fixed (and back-patched) earlier?\n>\n\nAgreed. Need David to review it as he knows this area best.\n\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nAndrei Lepikhov <[email protected]> 于2024年3月6日周三 11:37写道:\nI think, it is a bug. Should it be fixed (and back-patched) earlier?Agreed. Need David to review it as he knows this area best.-- Tender WangOpenPie: https://en.openpie.com/",
"msg_date": "Thu, 7 Mar 2024 10:24:00 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "On Thu, 7 Mar 2024 at 15:24, Tender Wang <[email protected]> wrote:\n>\n> Andrei Lepikhov <[email protected]> 于2024年3月6日周三 11:37写道:\n>> I think, it is a bug. Should it be fixed (and back-patched) earlier?\n>\n> Agreed. Need David to review it as he knows this area best.\n\nThis is on my list of things to do. Just not at the top yet.\n\nDavid\n\n\n",
"msg_date": "Thu, 7 Mar 2024 22:50:54 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "On Thu, 7 Mar 2024 at 22:50, David Rowley <[email protected]> wrote:\n>\n> On Thu, 7 Mar 2024 at 15:24, Tender Wang <[email protected]> wrote:\n> >\n> > Andrei Lepikhov <[email protected]> 于2024年3月6日周三 11:37写道:\n> >> I think, it is a bug. Should it be fixed (and back-patched) earlier?\n> >\n> > Agreed. Need David to review it as he knows this area best.\n>\n> This is on my list of things to do. Just not at the top yet.\n\nI've gone over this patch and I'm happy with the changes to\nnodeMemoize.c. The thing I did change was the newly added test. The\nproblem there was the test was passing for me with and without the\ncode fix. I ended up changing the test so the cache hits and misses\nare reported. That required moving the test to above where the\nwork_mem is set to 64KB so we can be certain the values will all be\ncached and the cache hits are predictable.\n\nMy other changes were just cosmetic.\n\nThanks for working on this fix. I've pushed the patch.\n\nDavid\n\n\n",
"msg_date": "Mon, 11 Mar 2024 18:25:43 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
},
{
"msg_contents": "David Rowley <[email protected]> 于2024年3月11日周一 13:25写道:\n\n> On Thu, 7 Mar 2024 at 22:50, David Rowley <[email protected]> wrote:\n> >\n> > On Thu, 7 Mar 2024 at 15:24, Tender Wang <[email protected]> wrote:\n> > >\n> > > Andrei Lepikhov <[email protected]> 于2024年3月6日周三 11:37写道:\n> > >> I think, it is a bug. Should it be fixed (and back-patched) earlier?\n> > >\n> > > Agreed. Need David to review it as he knows this area best.\n> >\n> > This is on my list of things to do. Just not at the top yet.\n>\n> I've gone over this patch and I'm happy with the changes to\n> nodeMemoize.c. The thing I did change was the newly added test. The\n> problem there was the test was passing for me with and without the\n> code fix. I ended up changing the test so the cache hits and misses\n> are reported. That required moving the test to above where the\n> work_mem is set to 64KB so we can be certain the values will all be\n> cached and the cache hits are predictable.\n>\n> My other changes were just cosmetic.\n>\n> Thanks for working on this fix. I've pushed the patch.\n>\n> David\n>\n\nThanks for pushing the patch.\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nDavid Rowley <[email protected]> 于2024年3月11日周一 13:25写道:On Thu, 7 Mar 2024 at 22:50, David Rowley <[email protected]> wrote:\n>\n> On Thu, 7 Mar 2024 at 15:24, Tender Wang <[email protected]> wrote:\n> >\n> > Andrei Lepikhov <[email protected]> 于2024年3月6日周三 11:37写道:\n> >> I think, it is a bug. Should it be fixed (and back-patched) earlier?\n> >\n> > Agreed. Need David to review it as he knows this area best.\n>\n> This is on my list of things to do. Just not at the top yet.\n\nI've gone over this patch and I'm happy with the changes to\nnodeMemoize.c. The thing I did change was the newly added test. The\nproblem there was the test was passing for me with and without the\ncode fix. I ended up changing the test so the cache hits and misses\nare reported. That required moving the test to above where the\nwork_mem is set to 64KB so we can be certain the values will all be\ncached and the cache hits are predictable.\n\nMy other changes were just cosmetic.\n\nThanks for working on this fix. I've pushed the patch.\n\nDavid\nThanks for pushing the patch.-- Tender WangOpenPie: https://en.openpie.com/",
"msg_date": "Mon, 11 Mar 2024 13:31:26 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"type with xxxx does not exist\" when doing ExecMemoize()"
}
] |
[
{
"msg_contents": "Hi,\n\nRecently we have supported upgrade of subscriptions,but currently\nsubscription OIDs can be changed when a cluster is upgraded using\npg_upgrade. It will be better to preserve them as it will be easier to\ncompare subscription related objects in pg_subscription and\npg_subscription_rel in the old and new clusters.\n\nAttached patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Sun, 25 Feb 2024 21:48:31 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Preserve subscription OIDs during pg_upgrade"
},
{
"msg_contents": "vignesh C <[email protected]> writes:\n> Recently we have supported upgrade of subscriptions,but currently\n> subscription OIDs can be changed when a cluster is upgraded using\n> pg_upgrade. It will be better to preserve them as it will be easier to\n> compare subscription related objects in pg_subscription and\n> pg_subscription_rel in the old and new clusters.\n\nI do not think that's a sufficient argument. For other object types,\nwe only go through these pushups if we *have to* do so because the\nOIDs may appear in user tables or file names. I don't see a reason\nthat subscriptions deserve special treatment.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Feb 2024 11:34:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preserve subscription OIDs during pg_upgrade"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 11:34:35AM -0500, Tom Lane wrote:\n> vignesh C <[email protected]> writes:\n>> Recently we have supported upgrade of subscriptions,but currently\n>> subscription OIDs can be changed when a cluster is upgraded using\n>> pg_upgrade. It will be better to preserve them as it will be easier to\n>> compare subscription related objects in pg_subscription and\n>> pg_subscription_rel in the old and new clusters.\n> \n> I do not think that's a sufficient argument. For other object types,\n> we only go through these pushups if we *have to* do so because the\n> OIDs may appear in user tables or file names. I don't see a reason\n> that subscriptions deserve special treatment.\n\nI think that the idea behind that it that it would then become\npossible to relax the restrictions related to the states of the\nrelations stored in pg_subscription_rel, which can now be only a\n\"ready\" or \"init\" state (see check_old_cluster_subscription_state)\nwhen we begin the upgrade.\n\nI am not sure that it is a good idea to relax that for PG17 at this\nstage of the development cycle, though, as we have already done a lot\nin this area for pg_upgrade and it may require more tweaks during the\nbeta period depending on the feedback received, so I would suggest to\ndo more improvements for the 18 cycle instead once we have a cleaner\npicture of the whole.\n--\nMichael",
"msg_date": "Mon, 26 Feb 2024 09:36:57 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preserve subscription OIDs during pg_upgrade"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 6:07 AM Michael Paquier <[email protected]> wrote:\n> I think that the idea behind that it that it would then become\n> possible to relax the restrictions related to the states of the\n> relations stored in pg_subscription_rel, which can now be only a\n> \"ready\" or \"init\" state (see check_old_cluster_subscription_state)\n> when we begin the upgrade.\n\nHow would it help with that?\n\n> I am not sure that it is a good idea to relax that for PG17 at this\n> stage of the development cycle, though, as we have already done a lot\n> in this area for pg_upgrade and it may require more tweaks during the\n> beta period depending on the feedback received, so I would suggest to\n> do more improvements for the 18 cycle instead once we have a cleaner\n> picture of the whole.\n\nThat's fair.\n\nI want to say that, unlike Tom, I'm basically in favor of preserving\nOIDs in more places across updates. It seems to have little downside\nand improve the understandability of the outcome. But that's separate\nfrom whether it is a good idea to build on that infrastructure in any\nparticular way in the time we have left for this release.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Feb 2024 09:51:40 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preserve subscription OIDs during pg_upgrade"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 09:51:40AM +0530, Robert Haas wrote:\n> > I am not sure that it is a good idea to relax that for PG17 at this\n> > stage of the development cycle, though, as we have already done a lot\n> > in this area for pg_upgrade and it may require more tweaks during the\n> > beta period depending on the feedback received, so I would suggest to\n> > do more improvements for the 18 cycle instead once we have a cleaner\n> > picture of the whole.\n> \n> That's fair.\n> \n> I want to say that, unlike Tom, I'm basically in favor of preserving\n> OIDs in more places across updates. It seems to have little downside\n> and improve the understandability of the outcome. But that's separate\n> from whether it is a good idea to build on that infrastructure in any\n> particular way in the time we have left for this release.\n\nYes, the _minimal_ approach has changed in the past few years to make\npg_upgrade debugging easier. The original design was ultra-conservative\nwhere it could be, considering how radical the core functionality was.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 4 Mar 2024 20:04:14 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preserve subscription OIDs during pg_upgrade"
},
{
"msg_contents": "Hi,\n\n> It will be better to preserve them as it will be easier to\n> compare subscription related objects in pg_subscription and\n> pg_subscription_rel in the old and new clusters.\n\nIMO it would be helpful if you could give a little bit more context on\nwhy/when this is useful. Personally I find it somewhat difficult to\nimagine a case when I really need to compare Oids of subscriptions\nbetween old and new clusters.\n\nIf we commit to such a guarantee it will lay a certain burden on the\ncommunity in the long run and the benefits are not quite clear, to me\nat least. If we are talking about giving such a guarantee only once\nthe value of this is arguably low.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 5 Mar 2024 16:28:26 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preserve subscription OIDs during pg_upgrade"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nIn our Cloud we have a patch, which allows non-superuser role ('mdb_admin')\nto do some superuser things.\nIn particular, we have a patch that allows mdb admin to cancel the\nautovacuum process and some other processes (processes with\napplication_name = 'MDB'), see the attachment.\nThis is needed to allow non-superuser roles to run pg_repack and to cancel\npg_repack.\nWe need to cancel running autovac to run pg_repack (because of locks), and\nwe need to cancel pg_repack sometimes also.\n\nI want to reduce our internal patch size and transfer this logic to\nextension or to core.\nI have found similar threads [1] and [2], but, as far as I understand, they\ndo not solve this particular case.\nI see 2 possible ways to implement this. The first one is to have hool in\npg_signal_backend, and define a hook in extension which can do the thing.\nThe second one is to have a predefined role. Something like a\n`pg_signal_autovacuum` role which can signal running autovac to cancel. But\nI don't see how we can handle specific `application_name` with this\nsolution.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/F9408A5A-B20B-42D2-9E7F-49CD3D1547BC%40enterprisedb.com\n[2]\nhttps://www.postgresql.org/message-id/flat/20220722203735.GB3996698%40nathanxps13",
"msg_date": "Mon, 26 Feb 2024 12:38:40 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 12:38:40PM +0500, Kirill Reshke wrote:\n> I see 2 possible ways to implement this. The first one is to have hool in\n> pg_signal_backend, and define a hook in extension which can do the thing.\n> The second one is to have a predefined role. Something like a\n> `pg_signal_autovacuum` role which can signal running autovac to cancel. But\n> I don't see how we can handle specific `application_name` with this\n> solution.\n\npg_signal_autovacuum seems useful given commit 3a9b18b.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Feb 2024 09:10:47 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Mon, 26 Feb 2024 at 20:10, Nathan Bossart <[email protected]>\nwrote:\n\n> On Mon, Feb 26, 2024 at 12:38:40PM +0500, Kirill Reshke wrote:\n> > I see 2 possible ways to implement this. The first one is to have hool in\n> > pg_signal_backend, and define a hook in extension which can do the thing.\n> > The second one is to have a predefined role. Something like a\n> > `pg_signal_autovacuum` role which can signal running autovac to cancel.\n> But\n> > I don't see how we can handle specific `application_name` with this\n> > solution.\n>\n> pg_signal_autovacuum seems useful given commit 3a9b18b.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\nThank you for your response.\nPlease find a patch attached.\n\nIn patch, pg_signal_autovacuum role with oid 6312 added. I grabbed oid from\nunused_oids script output.\nAlso, tap tests for functionality added. I'm not sure where to place them,\nso I placed them in a separate directory in `src/test/`\nSeems that regression tests for this feature are not possible, am i right?\nAlso, I was thinking of pg_signal_autovacuum vs pg_signal_backend.\nShould pg_signal_autovacuum have power of pg_signal_backend (implicity)? Or\nshould this role have such little scope...",
"msg_date": "Tue, 27 Feb 2024 01:22:31 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Tue, 27 Feb 2024 at 01:22, Kirill Reshke <[email protected]> wrote:\n\n>\n>\n> On Mon, 26 Feb 2024 at 20:10, Nathan Bossart <[email protected]>\n> wrote:\n>\n>> On Mon, Feb 26, 2024 at 12:38:40PM +0500, Kirill Reshke wrote:\n>> > I see 2 possible ways to implement this. The first one is to have hool\n>> in\n>> > pg_signal_backend, and define a hook in extension which can do the\n>> thing.\n>> > The second one is to have a predefined role. Something like a\n>> > `pg_signal_autovacuum` role which can signal running autovac to cancel.\n>> But\n>> > I don't see how we can handle specific `application_name` with this\n>> > solution.\n>>\n>> pg_signal_autovacuum seems useful given commit 3a9b18b.\n>>\n>> --\n>> Nathan Bossart\n>> Amazon Web Services: https://aws.amazon.com\n>\n>\n> Thank you for your response.\n> Please find a patch attached.\n>\n> In patch, pg_signal_autovacuum role with oid 6312 added. I grabbed oid\n> from unused_oids script output.\n> Also, tap tests for functionality added. I'm not sure where to place them,\n> so I placed them in a separate directory in `src/test/`\n> Seems that regression tests for this feature are not possible, am i right?\n> Also, I was thinking of pg_signal_autovacuum vs pg_signal_backend.\n> Should pg_signal_autovacuum have power of pg_signal_backend (implicity)?\n> Or should this role have such little scope...\n>\n> Have a little thought on this, will share.\nDo we need to test the pg_cancel_backend vs autovacuum case at all?\nI think we do. Would it be better to split work into 2 patches: first one\nwith tests against current logic, and second\none with some changes/enhancements which allows to cancel running autovac\nto non-superuser (via `pg_signal_autovacuum` role or some other mechanism)?\n\nOn Tue, 27 Feb 2024 at 01:22, Kirill Reshke <[email protected]> wrote:On Mon, 26 Feb 2024 at 20:10, Nathan Bossart <[email protected]> wrote:On Mon, Feb 26, 2024 at 12:38:40PM +0500, Kirill Reshke wrote:\n> I see 2 possible ways to implement this. The first one is to have hool in\n> pg_signal_backend, and define a hook in extension which can do the thing.\n> The second one is to have a predefined role. Something like a\n> `pg_signal_autovacuum` role which can signal running autovac to cancel. But\n> I don't see how we can handle specific `application_name` with this\n> solution.\n\npg_signal_autovacuum seems useful given commit 3a9b18b.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.comThank you for your response.Please find a patch attached. In patch, pg_signal_autovacuum role with oid 6312 added. I grabbed oid from unused_oids script output. Also, tap tests for functionality added. I'm not sure where to place them, so I placed them in a separate directory in `src/test/`Seems that regression tests for this feature are not possible, am i right?Also, I was thinking of pg_signal_autovacuum vs pg_signal_backend. Should pg_signal_autovacuum have power of pg_signal_backend (implicity)? Or should this role have such little scope...Have a little thought on this, will share.Do we need to test the pg_cancel_backend vs autovacuum case at all?I think we do. Would it be better to split work into 2 patches: first one with tests against current logic, and secondone with some changes/enhancements which allows to cancel running autovac to non-superuser (via `pg_signal_autovacuum` role or some other mechanism)?",
"msg_date": "Tue, 27 Feb 2024 23:59:00 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 01:22:31AM +0500, Kirill Reshke wrote:\n> Also, tap tests for functionality added. I'm not sure where to place them,\n> so I placed them in a separate directory in `src/test/`\n> Seems that regression tests for this feature are not possible, am i right?\n\nIt might be difficult to create reliable tests for pg_signal_autovacuum.\nIf we can, it would probably be easiest to do with a TAP test.\n\n> Also, I was thinking of pg_signal_autovacuum vs pg_signal_backend.\n> Should pg_signal_autovacuum have power of pg_signal_backend (implicity)? Or\n> should this role have such little scope...\n\n-1. I don't see why giving a role privileges of pg_signal_autovacuum\nshould also give them the ability to signal all other non-superuser\nbackends.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Feb 2024 13:12:35 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 11:59:00PM +0500, Kirill Reshke wrote:\n> Do we need to test the pg_cancel_backend vs autovacuum case at all?\n> I think we do. Would it be better to split work into 2 patches: first one\n> with tests against current logic, and second\n> one with some changes/enhancements which allows to cancel running autovac\n> to non-superuser (via `pg_signal_autovacuum` role or some other mechanism)?\n\nIf we need to add tests for pg_signal_backend, I think it's reasonable to\nkeep those in a separate patch from pg_signal_autovacuum.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Feb 2024 13:23:22 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "Hi, \r\n\r\nI'm new to reviewing postgres patches, but I took an interest in reviewing this patch as recommended by Nathan.\r\n\r\nI have the following comments:\r\n\r\n> \tif (!superuser()) {\r\n>\t\tif (!OidIsValid(proc->roleId)) {\r\n>\t\t\tLocalPgBackendStatus *local_beentry;\r\n>\t\t\tlocal_beentry = pgstat_get_local_beentry_by_backend_id(proc->backendId);\r\n>\r\n>\t\t\tif (!(local_beentry && local_beentry->backendStatus.st_backendType == B_AUTOVAC_WORKER && \r\n>\t\t\t\thas_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVACUUM)))\r\n>\t\t\t\t\treturn SIGNAL_BACKEND_NOSUPERUSER;\r\n>\t\t} else {\r\n>\t\t\tif (superuser_arg(proc->roleId))\r\n>\t\t\t\treturn SIGNAL_BACKEND_NOSUPERUSER;\r\n>\r\n>\t\t\t/* Users can signal backends they have role membership in. */\r\n>\t\t\tif (!has_privs_of_role(GetUserId(), proc->roleId) &&\r\n>\t\t\t\t!has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_BACKEND))\r\n>\t\t\t\treturn SIGNAL_BACKEND_NOPERMISSION;\r\n>\t\t}\r\n>\t}\r\n>\r\n1. I would suggest not to do nested if blocks since it's becoming harder to read. Also, does it make sense to have a utilities function in backend_status.c to check if a backend of a given backend id is of a certain backend_type. And should we check if proc->backendId is invalid?\r\n\r\n> ALTER SYSTEM SET autovacuum_vacuum_cost_limit TO 1;\r\n> ALTER SYSTEM SET autovacuum_vacuum_cost_delay TO 100;\r\n> ALTER SYSTEM SET autovacuum_naptime TO 1; \r\n2. Could we set these parameters at the beginning of the test before $node->start with $node->append_conf ? That way we can avoid restarting the node and doing the sleep later on.\r\n\r\n> my $res_pid = $node_primary->safe_psql(\r\n>. 'regress',\r\n>\t\"SELECT pid FROM pg_stat_activity WHERE backend_type = 'autovacuum worker' and datname = 'regress';\"\r\n> );\r\n>\r\n> my ($res_reg_psa_1, $stdout_reg_psa_1, $stderr_reg_psa_1) = $node_primary->psql('regress', qq[\r\n SET ROLE psa_reg_role_1;\r\n> SELECT pg_terminate_backend($res_pid);\r\n> ]);\r\n>\r\n> ok($res_reg_psa_1 != 0, \"should fail for non pg_signal_autovacuum\");\r\n> like($stderr_reg_psa_1, qr/Only roles with the SUPERUSER attribute may terminate processes of roles with the SUPERUSER attribute./, \"matches\");\r\n>\r\n> my ($res_reg_psa_2, $stdout_reg_psa_2, $stderr_reg_psa_2) = $node_primary->psql('regress', qq[\r\n> SET ROLE psa_reg_role_2;\r\n> SELECT pg_terminate_backend($res_pid);\r\n> ]\");\r\n3. Some nits on styling \r\n\r\n4. According to Postgres styles, I believe open brackets should be in a new line \r\n\r\n",
"msg_date": "Sun, 10 Mar 2024 04:13:50 +0000",
"msg_from": "\"Leung, Anthony\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "Another comment that I forgot to mention is that we should also make the documentation change in doc/src/sgml/user-manag.sgml for this new predefined role\r\n\r\nThanks.\r\n\r\n-- \r\nAnthony Leung\r\nAmazon Web Services: https://aws.amazon.com\r\n\r\n",
"msg_date": "Sun, 10 Mar 2024 04:38:59 +0000",
"msg_from": "\"Leung, Anthony\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "I took the liberty of continuing to work on this after chatting with Nathan.\r\n\r\nI've attached the updated patch with some improvements.\r\n\r\nThanks.\r\n\r\n--\r\nAnthony Leung\r\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 1 Apr 2024 14:29:29 +0000",
"msg_from": "\"Leung, Anthony\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Mon, Apr 01, 2024 at 02:29:29PM +0000, Leung, Anthony wrote:\n> I've attached the updated patch with some improvements.\n\nThanks!\n\n+ <row>\n+ <entry>pg_signal_autovacuum</entry>\n+ <entry>Allow terminating backend running autovacuum</entry>\n+ </row>\n\nI think we should be more precise here by calling out the exact types of\nworkers:\n\n\t\"Allow signaling autovacuum worker processes to...\"\n\n- if ((!OidIsValid(proc->roleId) || superuser_arg(proc->roleId)) &&\n- !superuser())\n- return SIGNAL_BACKEND_NOSUPERUSER;\n-\n- /* Users can signal backends they have role membership in. */\n- if (!has_privs_of_role(GetUserId(), proc->roleId) &&\n- !has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_BACKEND))\n- return SIGNAL_BACKEND_NOPERMISSION;\n+ if (!superuser())\n+ {\n+ if (!OidIsValid(proc->roleId))\n+ {\n+ /*\n+ * We only allow user with pg_signal_autovacuum role to terminate\n+ * autovacuum worker as an exception. \n+ */\n+ if (!(pg_stat_is_backend_autovac_worker(proc->backendId) &&\n+ has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVACUUM)))\n+ return SIGNAL_BACKEND_NOSUPERUSER;\n+ }\n+ else\n+ {\n+ if (superuser_arg(proc->roleId))\n+ return SIGNAL_BACKEND_NOSUPERUSER;\n+\n+ /* Users can signal backends they have role membership in. */\n+ if (!has_privs_of_role(GetUserId(), proc->roleId) &&\n+ !has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_BACKEND))\n+ return SIGNAL_BACKEND_NOPERMISSION;\n+ }\n+ }\n\nI don't think we should rely on !OidIsValid(proc->roleId) for signaling\nautovacuum workers. That might not always be true, and I don't see any\nneed to rely on that otherwise. IMHO we should just add a few lines before\nthe existing code, which doesn't need to be changed at all:\n\n\tif (pg_stat_is_backend_autovac_worker(proc->backendId) &&\n\t\t!has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVACUUM))\n\t\treturn SIGNAL_BACKEND_NOAUTOVACUUM;\n\nI also think we need to return something besides SIGNAL_BACKEND_NOSUPERUSER\nin this case. Specifically, we probably need to introduce a new value and\nprovide the relevant error messages in pg_cancel_backend() and\npg_terminate_backend().\n\n+/* ----------\n+ * pg_stat_is_backend_autovac_worker() -\n+ *\n+ * Return whether the backend of the given backend id is of type autovacuum worker.\n+ */\n+bool\n+pg_stat_is_backend_autovac_worker(BackendId beid)\n+{\n+ PgBackendStatus *ret;\n+\n+ Assert(beid != InvalidBackendId);\n+\n+ ret = pgstat_get_beentry_by_backend_id(beid);\n+\n+ if (!ret)\n+ return false;\n+\n+ return ret->st_backendType == B_AUTOVAC_WORKER;\n+}\n\nCan we have this function return the backend type so that we don't have to\ncreate a new function for every possible type? That might be handy in the\nfuture.\n\nI haven't looked too closely, but I'm pretty skeptical that the test suite\nin your patch would be stable. Unfortunately, I don't have any better\nideas at the moment besides not adding a test for this new role.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 1 Apr 2024 15:21:46 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "\n\n> On 2 Apr 2024, at 01:21, Nathan Bossart <[email protected]> wrote:\n> \n> I haven't looked too closely, but I'm pretty skeptical that the test suite\n> in your patch would be stable. Unfortunately, I don't have any better\n> ideas at the moment besides not adding a test for this new role.\n\nWe can add tests just like [0] with injection points.\nI mean replace that \"sleep 1\" with something like \"$node->wait_for_event('autovacuum worker', 'autocauum-runing');\".\nCurrently we have no infrastructure to wait for autovacuum of particular table, but I think it's doable.\nAlso I do not like that test is changing system-wide autovac settings, AFAIR these settings can be set for particular table.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=eeefd4280f6\n\n",
"msg_date": "Tue, 2 Apr 2024 16:35:28 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "> I don't think we should rely on !OidIsValid(proc->roleId) for signaling\r\n> autovacuum workers. That might not always be true, and I don't see any\r\n> need to rely on that otherwise. IMHO we should just add a few lines before\r\n> the existing code, which doesn't need to be changed at all:\r\n> \r\n>\tif (pg_stat_is_backend_autovac_worker(proc->backendId) &&\r\n>\t !has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVACUUM))\r\n>\t return SIGNAL_BACKEND_NOAUTOVACUUM;\r\n\r\nI tried to add them above the existing code. When I test it locally, a user without pg_signal_autovacuum will actually fail at this block because the user is not superuser and !OidIsValid(proc->roleId) is also true in the following:\r\n\r\n\t/*\r\n\t * Only allow superusers to signal superuser-owned backends. Any process\r\n\t * not advertising a role might have the importance of a superuser-owned\r\n\t * backend, so treat it that way.\r\n\t */\r\n\tif ((!OidIsValid(proc->roleId) || superuser_arg(proc->roleId)) &&\r\n\t\t!superuser())\r\n\t\treturn SIGNAL_BACKEND_NOSUPERUSER;\r\n\r\nThis is what Im planning to do - If the backend is autovacuum worker and the user is not superuser or has pg_signal_autovacuum role, we return the new value and provide the relevant error message\t\r\n\r\n /*\r\n\t * If the backend is autovacuum worker, allow user with privileges of the \r\n * pg_signal_autovacuum role to signal the backend.\r\n\t */\r\n\tif (pgstat_get_backend_type(proc->backendId) == B_AUTOVAC_WORKER)\r\n\t{\r\n\t\tif (!has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVACUUM) || !superuser())\r\n\t\t\treturn SIGNAL_BACKEND_NOAUTOVACUUM;\r\n\t}\r\n\t/*\r\n\t * Only allow superusers to signal superuser-owned backends. Any process\r\n\t * not advertising a role might have the importance of a superuser-owned\r\n\t * backend, so treat it that way.\r\n\t*/\r\n\telse if ((!OidIsValid(proc->roleId) || superuser_arg(proc->roleId)) &&\r\n\t\t\t !superuser())\r\n\t{\r\n\t\treturn SIGNAL_BACKEND_NOSUPERUSER;\r\n\t}\r\n\t/* Users can signal backends they have role membership in. */\r\n\telse if (!has_privs_of_role(GetUserId(), proc->roleId) &&\r\n\t\t\t !has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_BACKEND))\r\n\t{\r\n\t\treturn SIGNAL_BACKEND_NOPERMISSION;\r\n\t}\r\n\r\n\r\n> We can add tests just like [0] with injection points.\r\n> I mean replace that \"sleep 1\" with something like \"$node->wait_for_event('autovacuum worker', 'autocauum-runing');\".\r\n> Currently we have no infrastructure to wait for autovacuum of particular table, but I think it's doable.\r\n> Also I do not like that test is changing system-wide autovac settings, AFAIR these settings can be set for particular table.\r\n\r\nThanks for the suggestion. I will take a look at this. Let me also separate the test into a different patch file.\r\n\r\n--\r\nAnthony Leung\r\nAmazon Web Services: https://aws.amazon.com\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Thu, 4 Apr 2024 00:30:51 +0000",
"msg_from": "\"Leung, Anthony\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "Update - the condition should be && \r\n\r\n\tif (pgstat_get_backend_type(proc->backendId) == B_AUTOVAC_WORKER)\r\n\t{\r\n\t\tif (!has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVACUUM) && !superuser())\r\n\t\t\treturn SIGNAL_BACKEND_NOAUTOVACUUM;\r\n\t}\r\n\r\nThanks\r\n--\r\nAnthony Leung\r\nAmazon Web Services: https://aws.amazon.com\r\n\r\n",
"msg_date": "Thu, 4 Apr 2024 00:36:45 +0000",
"msg_from": "\"Leung, Anthony\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Tue, Apr 02, 2024 at 04:35:28PM +0500, Andrey M. Borodin wrote:\n> We can add tests just like [0] with injection points.\n> I mean replace that \"sleep 1\" with something like\n> \"$node->wait_for_event('autovacuum worker', 'autocauum-runing');\".\n> Currently we have no infrastructure to wait for autovacuum of\n> particular table, but I think it's doable.\n> Also I do not like that test is changing system-wide autovac\n> settings, AFAIR these settings can be set for particular table.\n\nYeah, hardcoded sleeps are not really acceptable. On fast machines\nthey eat in global runtime making the whole slower, impacting the CI.\nOn slow machines, that's not going to be stable and we have a lot of\nbuildfarm animals starved on CPU, like the ones running valgrind or\njust because their environment is slow (one of my animals runs on a\nRPI, for example). Note that slow machines have a lot of value\nbecause they're usually better at catching race conditions. Injection\npoints would indeed make the tests more deterministic by controlling\nthe waits and wakeups you'd like to have in the patch's tests.\n\neeefd4280f6e would be a good example of how to implement a test.\n--\nMichael",
"msg_date": "Thu, 4 Apr 2024 10:05:25 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Thu, Apr 04, 2024 at 12:30:51AM +0000, Leung, Anthony wrote:\n>>\tif (pg_stat_is_backend_autovac_worker(proc->backendId) &&\n>>\t !has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVACUUM))\n>>\t return SIGNAL_BACKEND_NOAUTOVACUUM;\n> \n> I tried to add them above the existing code. When I test it locally, a\n> user without pg_signal_autovacuum will actually fail at this block\n> because the user is not superuser and !OidIsValid(proc->roleId) is also\n> true in the following:\n\nGood catch.\n\n> This is what Im planning to do - If the backend is autovacuum worker and\n> the user is not superuser or has pg_signal_autovacuum role, we return the\n> new value and provide the relevant error message\n> \n> /*\n> \t * If the backend is autovacuum worker, allow user with privileges of the \n> * pg_signal_autovacuum role to signal the backend.\n> \t */\n> \tif (pgstat_get_backend_type(proc->backendId) == B_AUTOVAC_WORKER)\n> \t{\n> \t\tif (!has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVACUUM) || !superuser())\n> \t\t\treturn SIGNAL_BACKEND_NOAUTOVACUUM;\n> \t}\n> \t/*\n> \t * Only allow superusers to signal superuser-owned backends. Any process\n> \t * not advertising a role might have the importance of a superuser-owned\n> \t * backend, so treat it that way.\n> \t*/\n> \telse if ((!OidIsValid(proc->roleId) || superuser_arg(proc->roleId)) &&\n> \t\t\t !superuser())\n> \t{\n> \t\treturn SIGNAL_BACKEND_NOSUPERUSER;\n> \t}\n> \t/* Users can signal backends they have role membership in. */\n> \telse if (!has_privs_of_role(GetUserId(), proc->roleId) &&\n> \t\t\t !has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_BACKEND))\n> \t{\n> \t\treturn SIGNAL_BACKEND_NOPERMISSION;\n> \t}\n\nThere's no need for the explicit superuser() check in the\npg_signal_autovacuum section. That's built into has_privs_of_role()\nalready.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 Apr 2024 13:15:33 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "I made some updates based on the feedbacks in v2. This patch only contains the code change for allowing the signaling to av worker with pg_signal_autovacuum. I will send a separate patch for the tap test shortly.\r\n\r\nThanks\r\n\r\n--\r\nAnthony Leung\r\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 4 Apr 2024 20:34:18 +0000",
"msg_from": "\"Leung, Anthony\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "Adding tap test for pg_signal_autovacuum using injection points as a separate patch. I also made a minor change on the original patch.\r\n\r\nThanks.\r\n\r\n--\r\nAnthony \r\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 5 Apr 2024 00:03:05 +0000",
"msg_from": "\"Leung, Anthony\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "\n\n> On 5 Apr 2024, at 05:03, Leung, Anthony <[email protected]> wrote:\n> \n> Adding tap test for pg_signal_autovacuum using injection points as a separate patch. I also made a minor change on the original patch.\n\nThe test looks good, but:\n1. remove references to passcheck :)\n2. detach injection point when it's not needed anymore\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 5 Apr 2024 10:26:58 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Fri, Apr 05, 2024 at 12:03:05AM +0000, Leung, Anthony wrote:\n> Adding tap test for pg_signal_autovacuum using injection points as a\n> separate patch. I also made a minor change on the original patch.\n\n+\tret = pgstat_get_beentry_by_proc_number(procNumber);\n+\n+\tif (!ret)\n+\t\treturn false;\n\nAn invalid BackendType is not false, but B_INVALID.\n\n+{ oid => '6312', oid_symbol => 'ROLE_PG_SIGNAL_AUTOVACUUM',\n\nOIDs in patches under development should use a value in the range\n8000-9999. Newly-assigned OIDs are renumbered after the feature\nfreeze.\n\n+\t/*\n+\t * If the backend is autovacuum worker, allow user with the privileges of\n+\t * pg_signal_autovacuum role to signal the backend.\n+\t */\n+\tif (pgstat_get_backend_type(GetNumberFromPGProc(proc)) == B_AUTOVAC_WORKER)\n+\t{\n+\t\tif (!has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVACUUM))\n+\t\t\treturn SIGNAL_BACKEND_NOAUTOVACUUM;\n+\t}\n\nI was wondering why this is not done after we've checked that we have\na superuser-owned backend, and this is giving me a pause. @Nathan,\nwhy do you think we should not rely on the roleId for an autovacuum\nworker? In core, do_autovacuum() is only called in a process without\na role specified, and I've noticed your remark here:\nhttps://www.postgresql.org/message-id/20240401202146.GA2354284@nathanxps13\nIt's feeling more natural here to check that we have a superuser-owned\nbackend first, and then do a lookup of the process type.\n\nOne thing that we should definitely not do is letting any user calling\npg_signal_backend() know that a given PID maps to an autovacuum\nworker. This information is hidden in pg_stat_activity. And\nactually, doesn't the patch leak this information to all users when\ncalling pg_signal_backend with random PID numbers because of the fact\nthat SIGNAL_BACKEND_NOAUTOVACUUM exists? Any role could guess which\nPIDs are used by an autovacuum worker because of the granularity\nrequired for the error related to pg_signal_autovacuum.\n\n+\tINJECTION_POINT(\"autovacuum-start\");\nPerhaps autovacuum-worker-start is more suited here. I am not sure\nthat the beginning of do_autovacuum() is the optimal location, as what\nmatters is that we've done InitPostgres() to be able to grab the PID\nfrom pg_stat_activity. This location does the job.\n\n+if ($ENV{enable_injection_points} ne 'yes')\n+{\n+\tplan skip_all => 'Injection points not supported by this build';\n+}\n[...]\n+$node->safe_psql('postgres',\n+\t\"SELECT injection_points_attach('autovacuum-start', 'wait');\");\n[...]\n+# Wait until the autovacuum worker starts\n+$node->wait_for_event('autovacuum worker', 'autovacuum-start');\n\nThis integration with injection points looks correct to me.\n\n+# Copyright (c) 2022-2024, PostgreSQL Global Development Group\n[...]\n+# Copyright (c) 2024-2024, PostgreSQL Global Development Group\n\nThese need to be cleaned up.\n\n+# Makefile for src/test/recovery\n+#\n+# src/test/recovery/Makefile\n\nThis is incorrect, twice. No problems for me with using a new path in\nsrc/test/ for that kind of tests. There are no similar locations.\n\n+ INSERT INTO tab_int SELECT * FROM generate_series(1, 1000000);\nA good chunk of the test would be spent on that, but you don't need\nthat many tuples to trigger an autovacuum worker as the short naptime\nis able to do it. I would recommend to reduce that to a minimum.\n\n+# User with signal_backend_role cannot terminate autovacuum worker\n\nNot sure that there is a huge point in checking after a role that\nholds pg_signal_backend. 
An autovacuum worker is not a backend. Only\nthe case of a role not member of pg_signal_autovacuum should be\nenough.\n\n+# Test signaling for pg_signal_autovacuum role. \n\nThis needs a better documentation: the purpose of the test is to\nsignal an autovacuum worker, aka it uses an injection point to ensure\nthat the worker for the whole duration of the test.\n\nIt seems to me that it would be a better practice to wakeup the\ninjection point and detach it before being done with the worker.\nThat's not mandatory but it would encourage the correct flow if this\ncode is copy-pasted around to other tests.\n\n+like($psql_err, qr/ERROR: permission denied to terminate ...\n\nChecking only the ERRROR, and not the DETAIL should be sufficient\nhere.\n\n+# User with pg_signal_backend can terminate autovacuum worker \n+my $terminate_with_pg_signal_av = $node->psql('postgres', qq( \n+ SET ROLE signal_autovacuum_role;\n+ SELECT pg_terminate_backend($av_pid);\n+), stdout => \\$psql_out, stderr => \\$psql_err);\n+\n+ok($terminate_with_pg_signal_av == 0, \"Terminating autovacuum worker should succeed with pg_signal_autovacuum role\");\n\nIs that enough for the validation? How about checking some pattern in\nthe server logs from an offset before running this last query?\n--\nMichael",
"msg_date": "Fri, 5 Apr 2024 14:39:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Fri, Apr 05, 2024 at 02:39:05PM +0900, Michael Paquier wrote:\n> +\t/*\n> +\t * If the backend is autovacuum worker, allow user with the privileges of\n> +\t * pg_signal_autovacuum role to signal the backend.\n> +\t */\n> +\tif (pgstat_get_backend_type(GetNumberFromPGProc(proc)) == B_AUTOVAC_WORKER)\n> +\t{\n> +\t\tif (!has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVACUUM))\n> +\t\t\treturn SIGNAL_BACKEND_NOAUTOVACUUM;\n> +\t}\n> \n> I was wondering why this is not done after we've checked that we have\n> a superuser-owned backend, and this is giving me a pause. @Nathan,\n> why do you think we should not rely on the roleId for an autovacuum\n> worker? In core, do_autovacuum() is only called in a process without\n> a role specified, and I've noticed your remark here:\n> https://www.postgresql.org/message-id/20240401202146.GA2354284@nathanxps13\n> It's feeling more natural here to check that we have a superuser-owned\n> backend first, and then do a lookup of the process type.\n\nI figured since there's no reason to rely on that behavior, we might as\nwell do a bit of future-proofing in case autovacuum workers are ever not\nrun as InvalidOid. It'd be easy enough to fix this code if that ever\nhappened, so I'm not too worried about this.\n\n> One thing that we should definitely not do is letting any user calling\n> pg_signal_backend() know that a given PID maps to an autovacuum\n> worker. This information is hidden in pg_stat_activity. And\n> actually, doesn't the patch leak this information to all users when\n> calling pg_signal_backend with random PID numbers because of the fact\n> that SIGNAL_BACKEND_NOAUTOVACUUM exists? Any role could guess which\n> PIDs are used by an autovacuum worker because of the granularity\n> required for the error related to pg_signal_autovacuum.\n\nHm. I hadn't considered that angle. IIUC right now they'll just get the\ngeneric superuser error for autovacuum workers. I don't know how concerned\nto be about users distinguishing autovacuum workers from other superuser\nbackends, _but_ if roles with pg_signal_autovacuum can't even figure out\nthe PIDs for the autovacuum workers, then this feature seems kind-of\nuseless. Perhaps we should allow roles with privileges of\npg_signal_autovacuum to see the autovacuum workers in pg_stat_activity.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 5 Apr 2024 07:56:56 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Fri, Apr 05, 2024 at 07:56:56AM -0500, Nathan Bossart wrote:\n> On Fri, Apr 05, 2024 at 02:39:05PM +0900, Michael Paquier wrote:\n>> One thing that we should definitely not do is letting any user calling\n>> pg_signal_backend() know that a given PID maps to an autovacuum\n>> worker. This information is hidden in pg_stat_activity. And\n>> actually, doesn't the patch leak this information to all users when\n>> calling pg_signal_backend with random PID numbers because of the fact\n>> that SIGNAL_BACKEND_NOAUTOVACUUM exists? Any role could guess which\n>> PIDs are used by an autovacuum worker because of the granularity\n>> required for the error related to pg_signal_autovacuum.\n> \n> Hm. I hadn't considered that angle. IIUC right now they'll just get the\n> generic superuser error for autovacuum workers. I don't know how concerned\n> to be about users distinguishing autovacuum workers from other superuser\n> backends, _but_ if roles with pg_signal_autovacuum can't even figure out\n> the PIDs for the autovacuum workers, then this feature seems kind-of\n> useless. Perhaps we should allow roles with privileges of\n> pg_signal_autovacuum to see the autovacuum workers in pg_stat_activity.\n\nThere is pg_read_all_stats as well, so I don't see a big issue in\nrequiring to be a member of this role as well for the sake of what's\nproposing here. I'd rather not leak any information at the end for\nanybody calling pg_signal_backend without access to the stats, so\nchecking the backend type after the role sounds kind of a safer\nlong-term approach for me.\n--\nMichael",
"msg_date": "Sat, 6 Apr 2024 08:56:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Sat, Apr 06, 2024 at 08:56:04AM +0900, Michael Paquier wrote:\n> There is pg_read_all_stats as well, so I don't see a big issue in\n> requiring to be a member of this role as well for the sake of what's\n> proposing here.\n\nWell, that tells you quite a bit more than just which PIDs correspond to\nautovacuum workers, but maybe that's good enough for now.\n\n> I'd rather not leak any information at the end for\n> anybody calling pg_signal_backend without access to the stats, so\n> checking the backend type after the role sounds kind of a safer\n> long-term approach for me.\n\nI'm not following what you mean by this. Are you suggesting that we should\nkeep the existing superuser message for the autovacuum workers?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 5 Apr 2024 20:07:51 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Fri, Apr 05, 2024 at 08:07:51PM -0500, Nathan Bossart wrote:\n> On Sat, Apr 06, 2024 at 08:56:04AM +0900, Michael Paquier wrote:\n>> There is pg_read_all_stats as well, so I don't see a big issue in\n>> requiring to be a member of this role as well for the sake of what's\n>> proposing here.\n> \n> Well, that tells you quite a bit more than just which PIDs correspond to\n> autovacuum workers, but maybe that's good enough for now.\n\nThat may be a good initial compromise, for now.\n\n>> I'd rather not leak any information at the end for\n>> anybody calling pg_signal_backend without access to the stats, so\n>> checking the backend type after the role sounds kind of a safer\n>> long-term approach for me.\n> \n> I'm not following what you mean by this. Are you suggesting that we should\n> keep the existing superuser message for the autovacuum workers?\n\nMostly. Just to be clear the patch has the following problem:\n=# CREATE ROLE popo LOGIN;\nCREATE ROLE\n=# CREATE EXTENSION injection_points;\nCREATE EXTENSION\n=# select injection_points_attach('autovacuum-start', 'wait');\n injection_points_attach\n-------------------------\n\n(1 row)\n=# select pid, backend_type from pg_stat_activity\n where wait_event = 'autovacuum-start' LIMIT 1;\n pid | backend_type\n-------+-------------------\n 14163 | autovacuum worker\n(1 row)\n=> \\c postgres popo\nYou are now connected to database \"postgres\" as user \"popo\". \n=> select pg_terminate_backend(14163);\nERROR: 42501: permission denied to terminate autovacuum worker backend\nDETAIL: Only roles with the SUPERUSER attribute or with privileges of\nthe \"pg_signal_autovacuum\" role may terminate autovacuum worker\nbackend\nLOCATION: pg_terminate_backend, signalfuncs.c:267 \n=> select backend_type from pg_stat_activity where pid = 14163;\n backend_type\n--------------\n null\n(1 row)\n\nAnd we should try to reshape things so as we get an ERROR like\n\"permission denied to terminate process\" or \"permission denied to\ncancel query\" for all the error paths, including autovacuum workers \nand backends, so as we never leak any information about the backend\ntypes involved when a role has no permission to issue the signal.\nPerhaps that's the most intuitive thing as well, because autovacuum\nworkers are backends. One thing that we could do is to mention both\npg_signal_backend and pg_signal_autovacuum in the errdetail, and have\nboth cases be handled by SIGNAL_BACKEND_NOPERMISSION on failure.\n--\nMichael",
"msg_date": "Mon, 8 Apr 2024 13:17:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": ">>> There is pg_read_all_stats as well, so I don't see a big issue in\r\n>>> requiring to be a member of this role as well for the sake of what's\r\n>>> proposing here.\r\n>>\r\n>> Well, that tells you quite a bit more than just which PIDs correspond to\r\n>> autovacuum workers, but maybe that's good enough for now.\r\n>\r\n> That may be a good initial compromise, for now.\r\n\r\nSounds good to me. I will update the documentation.\r\n\r\n\r\n> And we should try to reshape things so as we get an ERROR like\r\n> \"permission denied to terminate process\" or \"permission denied to\r\n> cancel query\" for all the error paths, including autovacuum workers \r\n> and backends, so as we never leak any information about the backend\r\n> types involved when a role has no permission to issue the signal.\r\n> Perhaps that's the most intuitive thing as well, because autovacuum\r\n> workers are backends. One thing that we could do is to mention both\r\n> pg_signal_backend and pg_signal_autovacuum in the errdetail, and have\r\n> both cases be handled by SIGNAL_BACKEND_NOPERMISSION on failure.\r\n\r\nI understand your concern that we should avoid exposing the fact that the backend which the user is attempting to terminate is an AV worker unless the user has pg_signal_backend privileges and pg_signal_autovacuum privileges. \r\nBut Im not following how we can re-use SIGNAL_BACKEND_NOPERMISSION for this. If we return SIGNAL_BACKEND_NOPERMISSION here as the following, it'll stay return the \"permission denied to terminate / cancel query\" errmsg and errdetail in pg_cancel/terminate_backend.\r\n\r\n\t/*\r\n\t * If the backend is autovacuum worker, allow user with the privileges of\r\n\t * pg_signal_autovacuum role to signal the backend.\r\n\t */\r\n\tif (pgstat_get_backend_type(GetNumberFromPGProc(proc)) == B_AUTOVAC_WORKER)\r\n\t{\r\n\t\tif (!has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVACUUM))\r\n\t\t\treturn SIGNAL_BACKEND_NOPERMISSION;\r\n\t}\r\n\r\nAre you suggesting that we check if the backend is B_AUTOVAC in pg_cancel/ terminate_backend? That seems a bit unclean to me since pg_cancel_backend & pg_cancel_backend does not access to the procNumber to check the type of the backend.\r\n\r\nIMHO, we can keep SIGNAL_BACKEND_NOAUTOVACUUM but just improve the errmsg / errdetail to not expose that the backend is an AV worker. It'll also be helpful if you can suggest what errdetail we should use here.\r\n\r\nThanks\r\n--\r\nAnthony Leung\r\nAmazon Web Services: https://aws.amazon.com\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Mon, 8 Apr 2024 17:42:05 +0000",
"msg_from": "\"Leung, Anthony\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Mon, Apr 08, 2024 at 05:42:05PM +0000, Leung, Anthony wrote:\n> Are you suggesting that we check if the backend is B_AUTOVAC in\n> pg_cancel/ terminate_backend? That seems a bit unclean to me since\n> pg_cancel_backend & pg_cancel_backend does not access to the\n> procNumber to check the type of the backend.\n> \n> IMHO, we can keep SIGNAL_BACKEND_NOAUTOVACUUM but just improve the\n> errmsg / errdetail to not expose that the backend is an AV\n> worker. It'll also be helpful if you can suggest what errdetail we\n> should use here.\n\nThe thing is that you cannot rely on a lookup of the backend type for\nthe error information, or you open yourself to letting the caller of\npg_cancel_backend or pg_terminate_backend know if a backend is\ncontrolled by a superuser or if a backend is an autovacuum worker.\nAnd they may have no access to this information by default, except if\nthe role is a member of pg_read_all_stats able to scan\npg_stat_activity. An option that I can think of, even if it is not\nthe most elegant ever, would be list all the possible system users\nthat can be used in the errdetail under a single SIGNAL_BACKEND_NO*\nstate.\n\nIn the case of your patch it would mean to mention both\npg_signal_backend and pg_signal_autovacuum.\n\nThe choice of pg_signal_autovacuum is a bit inconsistent, as well,\nbecause autovacuum workers operate like regular backends. This name\ncan also be confused with the autovacuum launcher.\n--\nMichael",
"msg_date": "Tue, 9 Apr 2024 14:53:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "Hi, thanks for looking into this.\n\nOn Tue, 9 Apr 2024 at 08:53, Michael Paquier <[email protected]> wrote:\n\n> On Mon, Apr 08, 2024 at 05:42:05PM +0000, Leung, Anthony wrote:\n> > Are you suggesting that we check if the backend is B_AUTOVAC in\n> > pg_cancel/ terminate_backend? That seems a bit unclean to me since\n> > pg_cancel_backend & pg_cancel_backend does not access to the\n> > procNumber to check the type of the backend.\n> >\n> > IMHO, we can keep SIGNAL_BACKEND_NOAUTOVACUUM but just improve the\n> > errmsg / errdetail to not expose that the backend is an AV\n> > worker. It'll also be helpful if you can suggest what errdetail we\n> > should use here.\n>\n> The thing is that you cannot rely on a lookup of the backend type for\n> the error information, or you open yourself to letting the caller of\n> pg_cancel_backend or pg_terminate_backend know if a backend is\n> controlled by a superuser or if a backend is an autovacuum worker.\n>\n\nGood catch. Thanks. I think we need to update the error message to not\nleak backend type info.\n\n> The choice of pg_signal_autovacuum is a bit inconsistent, as well,\n> because autovacuum workers operate like regular backends. This name\n> can also be confused with the autovacuum launcher.\n\nOk. What would be a good choice? Is `pg_signal_autovacuum_worker` good\nenough?\n\nHi, thanks for looking into this.On Tue, 9 Apr 2024 at 08:53, Michael Paquier <[email protected]> wrote:On Mon, Apr 08, 2024 at 05:42:05PM +0000, Leung, Anthony wrote:\n> Are you suggesting that we check if the backend is B_AUTOVAC in\n> pg_cancel/ terminate_backend? That seems a bit unclean to me since\n> pg_cancel_backend & pg_cancel_backend does not access to the\n> procNumber to check the type of the backend.\n> \n> IMHO, we can keep SIGNAL_BACKEND_NOAUTOVACUUM but just improve the\n> errmsg / errdetail to not expose that the backend is an AV\n> worker. It'll also be helpful if you can suggest what errdetail we\n> should use here.\n\nThe thing is that you cannot rely on a lookup of the backend type for\nthe error information, or you open yourself to letting the caller of\npg_cancel_backend or pg_terminate_backend know if a backend is\ncontrolled by a superuser or if a backend is an autovacuum worker.Good catch. Thanks. I think we need to update the error message to not leak backend type info.> The choice of pg_signal_autovacuum is a bit inconsistent, as well,\n> because autovacuum workers operate like regular backends. This name\n> can also be confused with the autovacuum launcher.Ok. What would be a good choice? Is `pg_signal_autovacuum_worker` good enough?",
"msg_date": "Wed, 10 Apr 2024 00:52:19 +0300",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 12:52:19AM +0300, Kirill Reshke wrote:\n> On Tue, 9 Apr 2024 at 08:53, Michael Paquier <[email protected]> wrote:\n>> The thing is that you cannot rely on a lookup of the backend type for\n>> the error information, or you open yourself to letting the caller of\n>> pg_cancel_backend or pg_terminate_backend know if a backend is\n>> controlled by a superuser or if a backend is an autovacuum worker.\n> \n> Good catch. Thanks. I think we need to update the error message to not\n> leak backend type info.\n\nYep, that's necessary I am afraid.\n\n>> The choice of pg_signal_autovacuum is a bit inconsistent, as well,\n>> because autovacuum workers operate like regular backends. This name\n>> can also be confused with the autovacuum launcher.\n> \n> Ok. What would be a good choice? Is `pg_signal_autovacuum_worker` good\n> enough?\n\nSounds fine to me. Perhaps others have an opinion about that?\n--\nMichael",
"msg_date": "Wed, 10 Apr 2024 07:58:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 07:58:39AM +0900, Michael Paquier wrote:\n> On Wed, Apr 10, 2024 at 12:52:19AM +0300, Kirill Reshke wrote:\n>> On Tue, 9 Apr 2024 at 08:53, Michael Paquier <[email protected]> wrote:\n>>> The thing is that you cannot rely on a lookup of the backend type for\n>>> the error information, or you open yourself to letting the caller of\n>>> pg_cancel_backend or pg_terminate_backend know if a backend is\n>>> controlled by a superuser or if a backend is an autovacuum worker.\n>> \n>> Good catch. Thanks. I think we need to update the error message to not\n>> leak backend type info.\n> \n> Yep, that's necessary I am afraid.\n\nIsn't it relatively easy to discover this same information today via\npg_stat_progress_vacuum? That has the following code:\n\n\t\t/* Value available to all callers */\n\t\tvalues[0] = Int32GetDatum(beentry->st_procpid);\n\t\tvalues[1] = ObjectIdGetDatum(beentry->st_databaseid);\n\nI guess I'm not quite following why we are worried about leaking whether a\nbackend is an autovacuum worker.\n\n>>> The choice of pg_signal_autovacuum is a bit inconsistent, as well,\n>>> because autovacuum workers operate like regular backends. This name\n>>> can also be confused with the autovacuum launcher.\n>> \n>> Ok. What would be a good choice? Is `pg_signal_autovacuum_worker` good\n>> enough?\n> \n> Sounds fine to me. Perhaps others have an opinion about that?\n\nWFM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 10 Apr 2024 10:00:34 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 10:00:34AM -0500, Nathan Bossart wrote:\n> Isn't it relatively easy to discover this same information today via\n> pg_stat_progress_vacuum? That has the following code:\n> \n> \t\t/* Value available to all callers */\n> \t\tvalues[0] = Int32GetDatum(beentry->st_procpid);\n> \t\tvalues[1] = ObjectIdGetDatum(beentry->st_databaseid);\n> \n> I guess I'm not quite following why we are worried about leaking whether a\n> backend is an autovacuum worker.\n\nGood point. I've missed that we make no effort currently to hide any\nPID information from the progress tables. And we can guess more\ncontext data because of the per-table split of the progress tables.\n\nThis choice comes down to b6fb6471f6af that has introduced the\nprogress report facility, so this ship has long sailed it seems. And\nit makes my argument kind of moot.\n--\nMichael",
"msg_date": "Thu, 11 Apr 2024 07:21:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "Posting updated version of this patch with comments above addressed.\n\n1) pg_signal_autovacuum -> pg_signal_autovacuum_worker, as there seems\nto be no objections to that.\n\n2)\nThere are comments on how to write if statement:\n\n> In core, do_autovacuum() is only called in a process without\n> a role specified\n\n> It's feeling more natural here to check that we have a superuser-owned\n> backend first, and then do a lookup of the process type.\n\n> I figured since there's no reason to rely on that behavior, we might as\n> well do a bit of future-proofing in case autovacuum workers are ever not\n> run as InvalidOid.\n\nI have combined them into this:\n\nif ((!OidIsValid(proc->roleId) || superuser_arg(proc->roleId))\n&& pgstat_get_backend_type(GetNumberFromPGProc(proc)) == B_AUTOVAC_WORKER)\n\nThis is both future-proofing and natural, I suppose. Downside of this\nis double checking condition (!OidIsValid(proc->roleId) ||\nsuperuser_arg(proc->roleId)), but i think that is ok for the sake of\nsimplicity.\n\n3) pg_signal_autovacuum_worker Oid changed to random one: 8916\n\n4)\n\n> An invalid BackendType is not false, but B_INVALID.\nfixed, thanks\n\n5)\n\n>>>> There is pg_read_all_stats as well, so I don't see a big issue in\n>>>> requiring to be a member of this role as well for the sake of what's\n>>>> proposing here.\n>>>\n>>> Well, that tells you quite a bit more than just which PIDs correspond to\n>>> autovacuum workers, but maybe that's good enough for now.\n>>\n>> That may be a good initial compromise, for now.\n\n>Sounds good to me. I will update the documentation.\n\n@Anthony if you feel that documentation update adds much value here,\nplease do. Given that we know autovacuum worker PIDs from\npg_stat_progress_vacuum, I don't know how to reflect something about\npg_stat_autovac_worker in doc, and if it is worth it.\n\n6)\n> + INJECTION_POINT(\"autovacuum-start\");\n> Perhaps autovacuum-worker-start is more suited here\n\nfixed, thanks\n\n7)\n\n> +# Copyright (c) 2022-2024, PostgreSQL Global Development Group\n> [...]\n> +# Copyright (c) 2024-2024, PostgreSQL Global Development Group\n\n> These need to be cleaned up.\n\n> +# Makefile for src/test/recovery\n> +#\n> +# src/test/recovery/Makefile\n\n> This is incorrect, twice.\n\nCleaned up, thanks!\n\n8)\n\n> Not sure that there is a huge point in checking after a role that\n> holds pg_signal_backend.\nOk. Removed.\n\nThen:\n\n> +like($psql_err, qr/ERROR: permission denied to terminate ...\n> Checking only the ERRROR, and not the DETAIL should be sufficient\n> here.\n\n\nAfter removing the pg_signal_backend test case we have only one place\nwhere errors check is done. So, I think we should keep DETAIL here to\nensure detail is correct (it differs from regular backend case).\n\n9)\n> +# Test signaling for pg_signal_autovacuum role.\n> This needs a better documentation:\n\nUpdated. Hope now the test documentation helps to understand it.\n\n10)\n\n> +ok($terminate_with_pg_signal_av == 0, \"Terminating autovacuum worker should succeed with pg_signal_autovacuum role\");\n> Is that enough for the validation?\n\nAdded:\nok($node->log_contains(qr/FATAL: terminating autovacuum process due to\nadministrator command/, $offset),\n\"Autovacuum terminates when role is granted with pg_signal_autovacuum_worker\");\n\n11) references to `passcheck` extension removed. 
errors messages rephrased.\n\n12) injection_point_detach added.\n\n13)\n> + INSERT INTO tab_int SELECT * FROM generate_series(1, 1000000);\n> A good chunk of the test would be spent on that, but you don't need\n> that many tuples to trigger an autovacuum worker as the short naptime\n> is able to do it. I would recommend to reduce that to a minimum.\n\n+1\nSingle tuple works.\n\n14)\n\nv3 suffers from segfault:\n2024-04-11 11:28:31.116 UTC [147437] 001_signal_autovacuum.pl LOG:\nstatement: SELECT pg_terminate_backend(147427);\n2024-04-11 11:28:31.116 UTC [147427] FATAL: terminating autovacuum\nprocess due to administrator command\n2024-04-11 11:28:31.116 UTC [147410] LOG: server process (PID 147427)\nwas terminated by signal 11: Segmentation fault\n2024-04-11 11:28:31.116 UTC [147410] LOG: terminating any other\nactive server processes\n2024-04-11 11:28:31.117 UTC [147410] LOG: shutting down because\nrestart_after_crash is off\n2024-04-11 11:28:31.121 UTC [147410] LOG: database system is shut down\n\nThe test doesn't fail because pg_terminate_backend actually meets his\npoint: autovac is killed. But while dying, autovac also receives\nsegfault. Thats because of injections points:\n\n\n(gdb) bt\n#0 0x000056361c3379ea in tas (lock=0x7fbcb9632224 <error: Cannot\naccess memory at address 0x7fbcb9632224>) at\n../../../../src/include/storage/s_lock.h:228\n#1 ConditionVariableCancelSleep () at condition_variable.c:238\n#2 0x000056361c337e4b in ConditionVariableBroadcast\n(cv=0x7fbcb66f498c) at condition_variable.c:310\n#3 0x000056361c330a40 in CleanupProcSignalState (status=<optimized\nout>, arg=<optimized out>) at procsignal.c:240\n#4 0x000056361c328801 in shmem_exit (code=code@entry=1) at ipc.c:276\n#5 0x000056361c3288fc in proc_exit_prepare (code=code@entry=1) at ipc.c:198\n#6 0x000056361c3289bf in proc_exit (code=code@entry=1) at ipc.c:111\n#7 0x000056361c49ffa8 in errfinish (filename=<optimized out>,\nlineno=<optimized out>, funcname=0x56361c654370 <__func__.16>\n\"ProcessInterrupts\") at elog.c:592\n#8 0x000056361bf7191e in ProcessInterrupts () at postgres.c:3264\n#9 0x000056361c3378d7 in ConditionVariableTimedSleep\n(cv=0x7fbcb9632224, timeout=timeout@entry=-1,\nwait_event_info=117440513) at condition_variable.c:196\n#10 0x000056361c337d0b in ConditionVariableTimedSleep\n(wait_event_info=<optimized out>, timeout=-1, cv=<optimized out>) at\ncondition_variable.c:135\n#11 ConditionVariableSleep (cv=<optimized out>,\nwait_event_info=<optimized out>) at condition_variable.c:98\n#12 0x00000000b96347d0 in ?? ()\n#13 0x3a3f1d9baa4f5500 in ?? ()\n#14 0x000056361cc6cbd0 in ?? ()\n#15 0x000056361ccac300 in ?? ()\n#16 0x000056361c62be63 in ?? ()\n#17 0x00007fbcb96347d0 in ?? () at injection_points.c:201 from\n/home/reshke/postgres/tmp_install/home/reshke/postgres/pgbin/lib/injection_points.so\n#18 0x00007fffe4122b10 in ?? ()\n#19 0x00007fffe4122b70 in ?? ()\n#20 0x0000000000000000 in ?? ()\n\ndiscovered because of\n# Release injection point.\n$node->safe_psql('postgres',\n\"SELECT injection_point_detach('autovacuum-worker-start');\");\nadded\n\nv4 also suffers from that. i will try to fix that.",
"msg_date": "Thu, 11 Apr 2024 16:55:59 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 04:55:59PM +0500, Kirill Reshke wrote:\n> Posting updated version of this patch with comments above addressed.\n\nI look for a commitfest entry for this one, but couldn't find it. Would\nyou mind either creating one or, if I've somehow missed it, pointing me to\nthe existing entry?\n\n\thttps://commitfest.postgresql.org/48/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 09:06:59 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 04:55:59PM +0500, Kirill Reshke wrote:\n>> It's feeling more natural here to check that we have a superuser-owned\n>> backend first, and then do a lookup of the process type.\n> \n>> I figured since there's no reason to rely on that behavior, we might as\n>> well do a bit of future-proofing in case autovacuum workers are ever not\n>> run as InvalidOid.\n> \n> I have combined them into this:\n> \n> if ((!OidIsValid(proc->roleId) || superuser_arg(proc->roleId))\n> && pgstat_get_backend_type(GetNumberFromPGProc(proc)) == B_AUTOVAC_WORKER)\n> \n> This is both future-proofing and natural, I suppose. Downside of this\n> is double checking condition (!OidIsValid(proc->roleId) ||\n> superuser_arg(proc->roleId)), but i think that is ok for the sake of\n> simplicity.\n\nIf we want to retain the check, IMO we might as well combine the first two\nblocks like Anthony proposed:\n\n\tif (!OidIsValid(proc->roleId) || superuser_arg(proc->roleId))\n\t{\n\t\tProcNumber procNumber = GetNumberFromPGProc(proc);\n\t\tPGBackendStatus procStatus = pgstat_get_beentry_by_proc_number(procNumber);\n\n\t\tif (procStatus && procStatus->st_backendType == B_AUTOVAC_WORKER &&\n\t\t\t!has_privs_of_role(GetUserId(), ROLE_PG_SIGNAL_AUTOVAC_WORKER))\n\t\t\treturn SIGNAL_BACKEND_NOAUTOVAC;\n\t\telse if (!superuser())\n\t\t\treturn SIGNAL_BACKEND_NOSUPERUSER;\n\t}\n\n+ <row>\n+ <entry>pg_signal_autovacuum_worker</entry>\n+ <entry>Allow signaling autovacuum worker backend to cancel or terminate</entry>\n+ </row>\n\nI think we need to be more specific about what pg_cancel_backend() and\npg_terminate_backend() do for autovacuum workers. The code offers some\nclues:\n\n\t/*\n\t * SIGINT is used to signal canceling the current table's vacuum; SIGTERM\n\t * means abort and exit cleanly, and SIGQUIT means abandon ship.\n\t */\n\tpqsignal(SIGINT, StatementCancelHandler);\n\tpqsignal(SIGTERM, die);\n\n+/* ----------\n+ * pgstat_get_backend_type() -\n+ *\n+ * Return the backend type of the backend for the given proc number.\n+ * ----------\n+ */\n+BackendType\n+pgstat_get_backend_type(ProcNumber procNumber)\n+{\n+\tPgBackendStatus *ret;\n+\n+\tret = pgstat_get_beentry_by_proc_number(procNumber);\n+\n+\tif (!ret)\n+\t\treturn B_INVALID;\n+\n+\treturn ret->st_backendType;\n+}\n\nI'm not sure we really need to introduce a new function for this. I\navoided using it in my example snippet above. But, maybe it'll come in\nhandy down the road...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 09:38:07 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Thu, 11 Apr 2024 at 19:07, Nathan Bossart <[email protected]> wrote:\n>\n> On Thu, Apr 11, 2024 at 04:55:59PM +0500, Kirill Reshke wrote:\n> > Posting updated version of this patch with comments above addressed.\n>\n> I look for a commitfest entry for this one, but couldn't find it. Would\n> you mind either creating one\n\nDone: https://commitfest.postgresql.org/48/4922/\n\n\n",
"msg_date": "Thu, 11 Apr 2024 20:20:56 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 04:55:59PM +0500, Kirill Reshke wrote:\n> The test doesn't fail because pg_terminate_backend actually meets his\n> point: autovac is killed. But while dying, autovac also receives\n> segfault. Thats because of injections points:\n> \n> (gdb) bt\n> #0 0x000056361c3379ea in tas (lock=0x7fbcb9632224 <error: Cannot\n> access memory at address 0x7fbcb9632224>) at\n> ../../../../src/include/storage/s_lock.h:228\n> #1 ConditionVariableCancelSleep () at condition_variable.c:238\n> #2 0x000056361c337e4b in ConditionVariableBroadcast\n> (cv=0x7fbcb66f498c) at condition_variable.c:310\n> #3 0x000056361c330a40 in CleanupProcSignalState (status=<optimized\n> out>, arg=<optimized out>) at procsignal.c:240\n> #4 0x000056361c328801 in shmem_exit (code=code@entry=1) at ipc.c:276\n> #5 0x000056361c3288fc in proc_exit_prepare (code=code@entry=1) at ipc.c:198\n> #6 0x000056361c3289bf in proc_exit (code=code@entry=1) at ipc.c:111\n> #7 0x000056361c49ffa8 in errfinish (filename=<optimized out>,\n> lineno=<optimized out>, funcname=0x56361c654370 <__func__.16>\n> \"ProcessInterrupts\") at elog.c:592\n> #8 0x000056361bf7191e in ProcessInterrupts () at postgres.c:3264\n> #9 0x000056361c3378d7 in ConditionVariableTimedSleep\n> (cv=0x7fbcb9632224, timeout=timeout@entry=-1,\n> wait_event_info=117440513) at condition_variable.c:196\n> #10 0x000056361c337d0b in ConditionVariableTimedSleep\n> (wait_event_info=<optimized out>, timeout=-1, cv=<optimized out>) at\n> condition_variable.c:135\n> #11 ConditionVariableSleep (cv=<optimized out>,\n> wait_event_info=<optimized out>) at condition_variable.c:98\n> \n> discovered because of\n> # Release injection point.\n> $node->safe_psql('postgres',\n> \"SELECT injection_point_detach('autovacuum-worker-start');\");\n> added\n> \n> v4 also suffers from that. i will try to fix that.\n\nI can see this stack trace as well. Capturing a bit more than your\nown stack, this is crashing in the autovacuum worker while waiting on\na condition variable when processing a ProcessInterrupts().\n\nThat may point to a legit bug with condition variables in this\ncontext, actually? From what I can see, issuing a signal on a backend\nprocess waiting with a condition variable is able to process the\ninterrupt correctly.\n--\nMichael",
"msg_date": "Fri, 12 Apr 2024 09:10:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "At Fri, 12 Apr 2024 09:10:35 +0900, Michael Paquier <[email protected]> wrote in \n> On Thu, Apr 11, 2024 at 04:55:59PM +0500, Kirill Reshke wrote:\n> > The test doesn't fail because pg_terminate_backend actually meets his\n> > point: autovac is killed. But while dying, autovac also receives\n> > segfault. Thats because of injections points:\n> > \n> > (gdb) bt\n> > #0 0x000056361c3379ea in tas (lock=0x7fbcb9632224 <error: Cannot\n> > access memory at address 0x7fbcb9632224>) at\n> > ../../../../src/include/storage/s_lock.h:228\n> > #1 ConditionVariableCancelSleep () at condition_variable.c:238\n...\n> > #3 0x000056361c330a40 in CleanupProcSignalState (status=<optimized\nout>, arg=<optimized out>) at procsignal.c:240\n> > #4 0x000056361c328801 in shmem_exit (code=code@entry=1) at ipc.c:276\n> > #9 0x000056361c3378d7 in ConditionVariableTimedSleep\n> > (cv=0x7fbcb9632224, timeout=timeout@entry=-1,\n...\n> I can see this stack trace as well. Capturing a bit more than your\n> own stack, this is crashing in the autovacuum worker while waiting on\n> a condition variable when processing a ProcessInterrupts().\n> \n> That may point to a legit bug with condition variables in this\n> context, actually? From what I can see, issuing a signal on a backend\n> process waiting with a condition variable is able to process the\n> interrupt correctly.\n\nProcSignalInit sets up CleanupProcSignalState to be called via\non_shmem_exit. If the CV is allocated in a dsm segment, shmem_exit\nshould have detached the region for the CV. CV cleanup code should be\ninvoked via before_shmem_exit.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 12 Apr 2024 11:01:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Thu, 11 Apr 2024 at 16:55, Kirill Reshke <[email protected]> wrote:\n\n> 7)\n>\n> > +# Copyright (c) 2022-2024, PostgreSQL Global Development Group\n> > [...]\n> > +# Copyright (c) 2024-2024, PostgreSQL Global Development Group\n>\n> > These need to be cleaned up.\n>\n> > +# Makefile for src/test/recovery\n> > +#\n> > +# src/test/recovery/Makefile\n>\n> > This is incorrect, twice.\n>\n> Cleaned up, thanks!\n\nOh, wait, I did this wrong.\n\nShould i use\n\n+# Copyright (c) 2024-2024, PostgreSQL Global Development Group\n\n(Like in src/test/signals/meson.build &\nsrc/test/signals/t/001_signal_autovacuum.pl)\nor\n\n+#\n+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group\n+# Portions Copyright (c) 1994, Regents of the University of California\n+#\n(Like in src/test/signals/Makefile)\n\nat the beginning of each added file?\n\n\n",
"msg_date": "Fri, 12 Apr 2024 13:32:42 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Fri, Apr 12, 2024 at 01:32:42PM +0500, Kirill Reshke wrote:\n> +#\n> +# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group\n> +# Portions Copyright (c) 1994, Regents of the University of California\n> +#\n> (Like in src/test/signals/Makefile)\n> \n> at the beginning of each added file?\n\nAssuming that these files are merged in 2024, you could just use:\nCopyright (c) 2024, PostgreSQL Global Development Group\n\nSee for example slotsync.c introduced recently in commit ddd5f4f54a02.\n--\nMichael",
"msg_date": "Mon, 15 Apr 2024 13:47:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "I adjusted 0001 based on my upthread feedback.\n\n-- \nnathan",
"msg_date": "Wed, 12 Jun 2024 16:04:06 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "\n\n> On 13 Jun 2024, at 02:04, Nathan Bossart <[email protected]> wrote:\n> \n> I adjusted 0001 based on my upthread feedback.\n\nThis patch looks good to me. Spellchecker is complaining about “signaling” instead of “signalling”, but ISTM it’s OK.\n\nI’ve tried to dig into the test.\nThe problem is CV is allocated in\n\ninj_state = GetNamedDSMSegment(\"injection_points”,\n\nwhich seems to be destroyed in\n\nshmem_exit() calling dsm_backend_shutdown()\n\nThis happens before we broadcast that sleep is over.\nI think this might happen with any wait on injection point if it is pg_terminate_backend()ed.\n\nIs there way to wake up from CV sleep before processing actual termination?\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 14 Jun 2024 12:06:36 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Fri, Jun 14, 2024 at 12:06:36PM +0500, Andrey M. Borodin wrote:\n> This patch looks good to me.\n\nThanks for looking.\n\n> Spellchecker is complaining about \"signaling\" instead of \"signalling\",\n> but ISTM it�s OK.\n\nI think this is an en-US versus en-GB thing. We've standardized on en-US\nfor \"cancel\" (see commits 8c9da14, 21f1e15, and af26857), so IMO we might\nas well do so for \"signal,\" too.\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 14 Jun 2024 15:12:50 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Fri, Jun 14, 2024 at 12:06:36PM +0500, Andrey M. Borodin wrote:\n> I’ve tried to dig into the test.\n> The problem is CV is allocated in\n> \n> inj_state = GetNamedDSMSegment(\"injection_points”,\n> \n> which seems to be destroyed in\n> \n> shmem_exit() calling dsm_backend_shutdown()\n> \n> This happens before we broadcast that sleep is over.\n> I think this might happen with any wait on injection point if it is\n> pg_terminate_backend()ed.\n\nExcept if I am missing something, this is not a problem for a normal\nbackend, for example with one using a `SELECT injection_points_run()`.\n\n> Is there way to wake up from CV sleep before processing actual termination?\n\nI am honestly not sure if this is worth complicating the sigjmp path\nof the autovacuum worker just for the sake of this test. It seems to\nme that it would be simple enough to move the injection point\nautovacuum-worker-start within the transaction block a few lines down\nin do_autovacuum(), no?\n--\nMichael",
"msg_date": "Fri, 21 Jun 2024 13:01:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "\n\n> On 21 Jun 2024, at 09:01, Michael Paquier <[email protected]> wrote:\n> \n> On Fri, Jun 14, 2024 at 12:06:36PM +0500, Andrey M. Borodin wrote:\n>> I’ve tried to dig into the test.\n>> The problem is CV is allocated in\n>> \n>> inj_state = GetNamedDSMSegment(\"injection_points”,\n>> \n>> which seems to be destroyed in\n>> \n>> shmem_exit() calling dsm_backend_shutdown()\n>> \n>> This happens before we broadcast that sleep is over.\n>> I think this might happen with any wait on injection point if it is\n>> pg_terminate_backend()ed.\n> \n> Except if I am missing something, this is not a problem for a normal\n> backend, for example with one using a `SELECT injection_points_run()`.\n\nYes, i’ve tried to get similar error in other CV-sleeps and in injection points of normal backend - everything works just fine. The error is specific to just this test.\n\n>> Is there way to wake up from CV sleep before processing actual termination?\n> \n> I am honestly not sure if this is worth complicating the sigjmp path\n> of the autovacuum worker just for the sake of this test. It seems to\n> me that it would be simple enough to move the injection point\n> autovacuum-worker-start within the transaction block a few lines down\n> in do_autovacuum(), no?\n\nThanks for the pointer, I’ll try this approach!\n\n\nBest regards, Andrey Borodin,\n\n\n\n",
"msg_date": "Fri, 21 Jun 2024 10:31:30 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Fri, Jun 14, 2024 at 03:12:50PM -0500, Nathan Bossart wrote:\n> On Fri, Jun 14, 2024 at 12:06:36PM +0500, Andrey M. Borodin wrote:\n> > This patch looks good to me.\n> \n> Thanks for looking.\n\nWhile double-checking the whole, where I don't have much to say about\n0001, I have fixed a few issues with the test presented upthread and\nstabilized it (CI and my stuff are both OK). I'd suggest to move it\nto test_misc/, because there is no clear category where to put it, and\nwe have another test with injection points there for timeouts so the\nmodule dependency with EXTRA_INSTALL is already cleared.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Fri, 21 Jun 2024 14:36:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Fri, Jun 21, 2024 at 10:31:30AM +0500, Andrey M. Borodin wrote:\n> Thanks for the pointer, I’ll try this approach!\n\nThanks. FWIW, I've put my mind into it, and fixed the thing a few\nminutes ago:\nhttps://www.postgresql.org/message-id/ZnURUaujl39wSoEW%40paquier.xyz\n--\nMichael",
"msg_date": "Fri, 21 Jun 2024 14:38:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "\n\n> On 21 Jun 2024, at 10:36, Michael Paquier <[email protected]> wrote:\n> \n> On Fri, Jun 14, 2024 at 03:12:50PM -0500, Nathan Bossart wrote:\n>> On Fri, Jun 14, 2024 at 12:06:36PM +0500, Andrey M. Borodin wrote:\n>>> This patch looks good to me.\n>> \n>> Thanks for looking.\n> \n> While double-checking the whole, where I don't have much to say about\n> 0001, I have fixed a few issues with the test presented upthread and\n> stabilized it (CI and my stuff are both OK). I'd suggest to move it\n> to test_misc/, because there is no clear category where to put it, and\n> we have another test with injection points there for timeouts so the\n> module dependency with EXTRA_INSTALL is already cleared.\n> \n> What do you think?\n\nThanks Michael!\n\nAll changes look good to me.\n\nI just have one more concern: we do not wakeup() upon test end. I observed that there might happen more autovacuums and start sleeping in injection point. In every case I observed - these autovacuums quit gracefully. But is it guaranteed that test will shut down node even if some of backends are waiting in injection points?\nOr, perhaps, should we always wakeup() after detaching? (in case when new point run might happen)\n\n\nBest regards, Andrey Borodin.\n\n\n\n",
"msg_date": "Fri, 21 Jun 2024 13:44:06 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "I've committed 0001. It looks like 0002 failed CI testing [0], but I\nhaven't investigated why.\n\n[0] https://cirrus-ci.com/task/5668467599212544\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 9 Jul 2024 13:12:59 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "Hi\n\nOn Tue, 9 Jul 2024 at 23:13, Nathan Bossart <[email protected]> wrote:\n>\n> I've committed 0001. It looks like 0002 failed CI testing [0], but I\n> haven't investigated why.\n>\n> [0] https://cirrus-ci.com/task/5668467599212544\n>\n> --\n> nathan\n\nThe problem is the error message has been changed.\n\n# DETAIL: Only roles with privileges of the\n\"pg_signal_autovacuum_worker\" role may terminate autovacuum workers.'\n# doesn't match '(?^:ERROR: permission denied to terminate\nprocess\\nDETAIL: Only roles with privileges of the\n\"pg_signal_autovacuum_worker\" role may terminate autovacuum worker\nprocesses.)'\n# Looks like you failed 1 test of 2.\n\nI changed the test to match the error message.",
"msg_date": "Wed, 10 Jul 2024 10:03:04 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Tue, Jul 09, 2024 at 01:12:59PM -0500, Nathan Bossart wrote:\n> I've committed 0001. It looks like 0002 failed CI testing [0], but I\n> haven't investigated why.\n> \n> [0] https://cirrus-ci.com/task/5668467599212544\n\nNice catch by the CI. This looks like a race condition to me. I\nthink that we should wait for the autovacuum worker to exit, and then\nscan the server logs we expect.\n\nFor this failure, look at the timestamps of the server logs:\n2024-07-08 12:48:23.271 UTC [32697][client backend]\n[006_signal_autovacuum.pl][11/3:0] LOG: statement: SELECT\npg_terminate_backend(32672);\n2024-07-08 12:48:23.275 UTC [32697][client backend]\n[006_signal_autovacuum.pl][:0] LOG: disconnection: session time:\n0:00:00.018 user=postgres database=postgres host=[local]\n2024-07-08 12:48:23.278 UTC [32672][autovacuum worker] FATAL:\nterminating autovacuum process due to administrator command\n\nAnd then the timestamp of the tests:\n[12:48:23.277](0.058s) not ok 2 - autovacuum worker signaled with\npg_signal_autovacuum_worker granted\n\nWe check for the contents of the logs 1ms before they are generated,\nhence failing the lookup check because the test is faster than the\nbackend.\n\nLike what we are doing in 010_pg_basebackup.pl, we could do a\npoll_query_until() until the PID of the autovacuum worker is gone from\npg_stat_activity before fetching the logs as ProcessInterrupts() stuff\nwould be logged before the process exits, say:\n+# Wait for the autovacuum worker to exit before scanning the logs.\n+$node->poll_query_until('postgres',\n+ \"SELECT count(*) = 0 FROM pg_stat_activity \"\n+ . \"WHERE pid = $av_pid AND backend_type = 'autovacuum worker';\");\n\nThat gives something like the attached. Does that look correct to\nyou?\n--\nMichael",
"msg_date": "Wed, 10 Jul 2024 14:14:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Wed, Jul 10, 2024 at 10:03:04AM +0500, Kirill Reshke wrote:\n> The problem is the error message has been changed.\n> \n> # DETAIL: Only roles with privileges of the\n> \"pg_signal_autovacuum_worker\" role may terminate autovacuum workers.'\n> # doesn't match '(?^:ERROR: permission denied to terminate\n> process\\nDETAIL: Only roles with privileges of the\n> \"pg_signal_autovacuum_worker\" role may terminate autovacuum worker\n> processes.)'\n> # Looks like you failed 1 test of 2.\n> \n> I changed the test to match the error message.\n\nThe script has two tests, and the CI is failing for the second test\nwhere we expect the signal to be processed:\n[12:48:23.370] # Failed test 'autovacuum worker signaled with\npg_signal_autovacuum_worker granted'\n[12:48:23.370] # at t/006_signal_autovacuum.pl line 90.\n\nIt is true that the first test where we expect the signal to not go\nthrough also failed as the DETAIL string has been changed, which is\nwhat you've fixed :) \n--\nMichael",
"msg_date": "Wed, 10 Jul 2024 14:24:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "> The script has two tests, and the CI is failing for the second test\n> where we expect the signal to be processed:\n> [12:48:23.370] # Failed test 'autovacuum worker signaled with\n> pg_signal_autovacuum_worker granted'\n> [12:48:23.370] # at t/006_signal_autovacuum.pl line 90.\n\n> --\n> Michael\n\nThat's very strange, because the test works fine on my virtual\nmachine. Also, it seems that it works in Cirrus [0], as there is this\nline:\n\n[autovacuum worker] FATAL: terminating autovacuum process due to\nadministrator command\n\nafter `SET ROLE signal_autovacuum_worker_role;` and `SELECT\npg_terminate_backend` in the log file.\n\nSomehow the `node->log_contains` check does not catch that. Maybe\nthere is some issue with `$offset`? Will try to investigate\n\n[0] https://api.cirrus-ci.com/v1/artifact/task/5668467599212544/log/src/test/modules/test_misc/tmp_check/log/006_signal_autovacuum_node.log\n\n\n",
"msg_date": "Wed, 10 Jul 2024 11:27:54 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "\n\n> On 10 Jul 2024, at 11:27, Kirill Reshke <[email protected]> wrote:\n> \n> That's very strange, because the test works fine on my virtual\n> machine. Also, it seems that it works in Cirrus [0], as there is this\n> line:\n\nSo far I could not reproduce that failure.\nI’ve checkouted 6edec53 from CFbot repository, but it works fine in both Cirrus[0,1,2] and my machines…\nIt seems like we have to rely on intuition to know what happened.\n\n\nBest regards, Andrey Borodin.\n[0] https://github.com/x4m/postgres_g/runs/27266322657\n[1] https://github.com/x4m/postgres_g/runs/27266278325\n[2] https://github.com/x4m/postgres_g/runs/27266052318\n\n",
"msg_date": "Wed, 10 Jul 2024 17:28:34 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "Hi, that's for digging into this. Turns out I completely missed one of\nyour emails today morning.\n\nOn Wed, 10 Jul 2024 at 10:15, Michael Paquier <[email protected]> wrote:\n> And then the timestamp of the tests:\n> [12:48:23.277](0.058s) not ok 2 - autovacuum worker signaled with\n> pg_signal_autovacuum_worker granted\n>\n> We check for the contents of the logs 1ms before they are generated,\n> hence failing the lookup check because the test is faster than the\n> backend.\n>\n> Like what we are doing in 010_pg_basebackup.pl, we could do a\n> poll_query_until() until the PID of the autovacuum worker is gone from\n> pg_stat_activity before fetching the logs as ProcessInterrupts() stuff\n> would be logged before the process exits, say:\n> +# Wait for the autovacuum worker to exit before scanning the logs.\n> +$node->poll_query_until('postgres',\n> + \"SELECT count(*) = 0 FROM pg_stat_activity \"\n> + . \"WHERE pid = $av_pid AND backend_type = 'autovacuum worker';\");\n>\n> That gives something like the attached. Does that look correct to\n> you?\n> --\n> Michael\n\n+1.\n\n\n",
"msg_date": "Wed, 10 Jul 2024 22:57:45 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Wed, Jul 10, 2024 at 10:57:45PM +0500, Kirill Reshke wrote:\n> Hi, that's for digging into this. Turns out I completely missed one of\n> your emails today morning.\n\nDon't worry. Using this domain tends to put my emails in one's spam\nfolder.\n--\nMichael",
"msg_date": "Thu, 11 Jul 2024 08:34:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Wed, Jul 10, 2024 at 02:14:55PM +0900, Michael Paquier wrote:\n> +# Only non-superuser roles granted pg_signal_autovacuum_worker are allowed\n> +# to signal autovacuum workers. This test uses an injection point located\n> +# at the beginning of the autovacuum worker startup.\n\nnitpick: Superuser roles are also allowed to signal autovacuum workers.\nMaybe this should read \"Only roles with privileges of...\"\n\n> +# Create some content and set an aggressive autovacuum.\n> +$node->safe_psql(\n> +\t'postgres', qq(\n> + CREATE TABLE tab_int(i int);\n> + ALTER TABLE tab_int SET (autovacuum_vacuum_cost_limit = 1);\n> + ALTER TABLE tab_int SET (autovacuum_vacuum_cost_delay = 100);\n> +));\n> +\n> +$node->safe_psql(\n> +\t'postgres', qq(\n> + INSERT INTO tab_int VALUES(1);\n> +));\n> +\n> +# Wait until an autovacuum worker starts.\n> +$node->wait_for_event('autovacuum worker', 'autovacuum-worker-start');\n\nI'm not following how this is guaranteed to trigger an autovacuum quickly.\nShouldn't we set autovacuum_vacuum_insert_threshold to 1 so that it is\neligible for autovacuum?\n\n> +# Wait for the autovacuum worker to exit before scanning the logs.\n> +$node->poll_query_until('postgres',\n> +\t\"SELECT count(*) = 0 FROM pg_stat_activity \"\n> +\t. \"WHERE pid = $av_pid AND backend_type = 'autovacuum worker';\");\n\nWFM. Even if the PID is quickly reused, this should work. We just might\nend up waiting a little longer.\n\nIs it worth testing cancellation, too?\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 11 Jul 2024 20:50:57 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Thu, Jul 11, 2024 at 08:50:57PM -0500, Nathan Bossart wrote:\n> I'm not following how this is guaranteed to trigger an autovacuum quickly.\n> Shouldn't we set autovacuum_vacuum_insert_threshold to 1 so that it is\n> eligible for autovacuum?\n\nYou are right, this is not going to influence a faster startup of a\nworker, so we could remove that entirely. On closer look, the main\nbottlebeck is that the test is spending a lot of time waiting on the\nnaptime even if we are using the minimum value of 1s, as the scan of\npg_stat_activity checking for workers keeps looping.\n\n[ ..thinks.. ]\n\nI have one trick in my sleeve for this one to make the launcher more\nresponsive in checking the timestamps of the database list. That's\nnot perfect, but it reduces the wait behind the worker startups by\n400ms (1s previously as of the naptime, 600ms now) with a reload to\nset the launcher's latch after the injection point has been attached.\nThe difference in runtime is noticeable.\n\n>> +# Wait for the autovacuum worker to exit before scanning the logs.\n>> +$node->poll_query_until('postgres',\n>> +\t\"SELECT count(*) = 0 FROM pg_stat_activity \"\n>> +\t. \"WHERE pid = $av_pid AND backend_type = 'autovacuum worker';\");\n> \n> WFM. Even if the PID is quickly reused, this should work. We just might\n> end up waiting a little longer.\n> \n> Is it worth testing cancellation, too?\n\nThe point is to check after pg_signal_backend, so I am not sure it is\nworth the extra cycles for the cancellation.\n\nAttaching the idea, with a fix for the comment you have mentioned and\nappending \"regress_\" the role names for the warnings generated by\n-DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS, while on it.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Fri, 12 Jul 2024 14:21:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Fri, Jul 12, 2024 at 02:21:09PM +0900, Michael Paquier wrote:\n> On Thu, Jul 11, 2024 at 08:50:57PM -0500, Nathan Bossart wrote:\n>> I'm not following how this is guaranteed to trigger an autovacuum quickly.\n>> Shouldn't we set autovacuum_vacuum_insert_threshold to 1 so that it is\n>> eligible for autovacuum?\n> \n> You are right, this is not going to influence a faster startup of a\n> worker, so we could remove that entirely. On closer look, the main\n> bottlebeck is that the test is spending a lot of time waiting on the\n> naptime even if we are using the minimum value of 1s, as the scan of\n> pg_stat_activity checking for workers keeps looping.\n\nI suppose it would be silly to allow even lower values for\nautovacuum_naptime (e.g., by moving it to ConfigureNamesReal and setting\nthe minimum to 0.1).\n\n> I have one trick in my sleeve for this one to make the launcher more\n> responsive in checking the timestamps of the database list. That's\n> not perfect, but it reduces the wait behind the worker startups by\n> 400ms (1s previously as of the naptime, 600ms now) with a reload to\n> set the launcher's latch after the injection point has been attached.\n> The difference in runtime is noticeable.\n\nThat's a neat trick. I was confused why this test generates an autovacuum\nworker at all, but I now see that you are pausing it before we even gather\nthe list of tables that need to be vacuumed.\n\n>> Is it worth testing cancellation, too?\n> \n> The point is to check after pg_signal_backend, so I am not sure it is\n> worth the extra cycles for the cancellation.\n\nAgreed.\n\n> What do you think?\n\nLooks reasonable to me.\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 12 Jul 2024 11:19:05 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Fri, Jul 12, 2024 at 11:19:05AM -0500, Nathan Bossart wrote:\n> I suppose it would be silly to allow even lower values for\n> autovacuum_naptime (e.g., by moving it to ConfigureNamesReal and setting\n> the minimum to 0.1).\n\nI've thought about that as well, and did not mention it as this would\nencourage insanely low naptime values resulting in fork() bursts.\n\n> That's a neat trick. I was confused why this test generates an autovacuum\n> worker at all, but I now see that you are pausing it before we even gather\n> the list of tables that need to be vacuumed.\n\nYep. More aggressive signals aren't going to help. One thing I also\nconsidered here is to manipulate the db list timestamps inside a\nUSE_INJECTION_POINTS block in the launcher to make the spawn more\naggressive. Anyway, with 600ms in detection where I've tested it, I\ncan live with the responsiveness of the patch as proposed.\n\n> Looks reasonable to me.\n\nThanks. I'll see about stressing the buildfarm tomorrow or so, after\nlooking at how the CI reacts.\n--\nMichael",
"msg_date": "Mon, 15 Jul 2024 09:54:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
},
{
"msg_contents": "On Mon, Jul 15, 2024 at 09:54:43AM +0900, Michael Paquier wrote:\n> Thanks. I'll see about stressing the buildfarm tomorrow or so, after\n> looking at how the CI reacts.\n\nThere were a few more things here:\n1) The new test was missing from test_misc/meson.build.\n2) With 1) fixed, the CI has been complaining about the test\nstability, when retrieving the PID of a worker with this query:\nSELECT pid FROM pg_stat_activity WHERE backend_type = 'autovacuum worker'\n\nAnd it's annoying to have missed what's wrong here:\n- We don't check that the PID comes from a worker waiting on an\ninjection point, so it could be a PID of something running, still gone\nonce the signals are processed.\n- No limit check, so we could finish with a list of PIDs while only\none is necessary. Windows was slow enough to spot that, spawning\nmultiple autovacuum workers waiting on the injection point.\n\nAfter improving all that, I have checked again the CI and it was\nhappy, so applied on HEAD. Let's see now how the buildfarm reacts.\n--\nMichael",
"msg_date": "Tue, 16 Jul 2024 10:14:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow non-superuser to cancel superuser tasks."
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI would like to understand why we have code [1] that retrieves\nRecentFlushPtr in WalSndWaitForWal() outside of the loop. We utilize\nRecentFlushPtr later within the loop, but prior to that, we already\nhave [2]. Wouldn't [2] alone be sufficient?\n\nJust to check the impact, I ran 'make check-world' after removing [1],\nand did not see any issue exposed by the test at-least.\n\nAny thoughts?\n\n[1]:\n /* Get a more recent flush pointer. */\n if (!RecoveryInProgress())\n RecentFlushPtr = GetFlushRecPtr(NULL);\n else\n RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n\n[2]:\n /* Update our idea of the currently flushed position. */\n else if (!RecoveryInProgress())\n RecentFlushPtr = GetFlushRecPtr(NULL);\n else\n RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n\nthanks\nShveta\n\n\n",
"msg_date": "Mon, 26 Feb 2024 17:16:39 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regardign RecentFlushPtr in WalSndWaitForWal()"
},
{
"msg_contents": "Hi,\n\nOn Mon, Feb 26, 2024 at 05:16:39PM +0530, shveta malik wrote:\n> Hi hackers,\n> \n> I would like to understand why we have code [1] that retrieves\n> RecentFlushPtr in WalSndWaitForWal() outside of the loop. We utilize\n> RecentFlushPtr later within the loop, but prior to that, we already\n> have [2]. Wouldn't [2] alone be sufficient?\n> \n> Just to check the impact, I ran 'make check-world' after removing [1],\n> and did not see any issue exposed by the test at-least.\n> \n> Any thoughts?\n> \n> [1]:\n> /* Get a more recent flush pointer. */\n> if (!RecoveryInProgress())\n> RecentFlushPtr = GetFlushRecPtr(NULL);\n> else\n> RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n> \n> [2]:\n> /* Update our idea of the currently flushed position. */\n> else if (!RecoveryInProgress())\n> RecentFlushPtr = GetFlushRecPtr(NULL);\n> else\n> RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n> \n\nIt seems to me that [2] alone could be sufficient.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Mar 2024 10:35:52 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regardign RecentFlushPtr in WalSndWaitForWal()"
},
{
"msg_contents": "On Mon, 26 Feb 2024 at 12:46, shveta malik <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> I would like to understand why we have code [1] that retrieves\n> RecentFlushPtr in WalSndWaitForWal() outside of the loop. We utilize\n> RecentFlushPtr later within the loop, but prior to that, we already\n> have [2]. Wouldn't [2] alone be sufficient?\n>\n> Just to check the impact, I ran 'make check-world' after removing [1],\n> and did not see any issue exposed by the test at-least.\n\nYeah, that seems accurate.\n\n> Any thoughts?\n[...]\n> [2]:\n> /* Update our idea of the currently flushed position. */\n> else if (!RecoveryInProgress())\n\nI can't find where this \"else\" of this \"else if\" clause came from, as\nthis piece of code hasn't changed in years. But apart from that, your\nobservation seems accurate, yes.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 1 Mar 2024 12:10:00 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regardign RecentFlushPtr in WalSndWaitForWal()"
},
{
"msg_contents": "On Fri, Mar 1, 2024 at 4:40 PM Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Mon, 26 Feb 2024 at 12:46, shveta malik <[email protected]> wrote:\n> >\n> > Hi hackers,\n> >\n> > I would like to understand why we have code [1] that retrieves\n> > RecentFlushPtr in WalSndWaitForWal() outside of the loop. We utilize\n> > RecentFlushPtr later within the loop, but prior to that, we already\n> > have [2]. Wouldn't [2] alone be sufficient?\n> >\n> > Just to check the impact, I ran 'make check-world' after removing [1],\n> > and did not see any issue exposed by the test at-least.\n>\n> Yeah, that seems accurate.\n>\n> > Any thoughts?\n> [...]\n> > [2]:\n> > /* Update our idea of the currently flushed position. */\n> > else if (!RecoveryInProgress())\n>\n> I can't find where this \"else\" of this \"else if\" clause came from, as\n> this piece of code hasn't changed in years.\n>\n\nRight, I think the quoted code has check \"if (!RecoveryInProgress())\".\n\n>\n But apart from that, your\n> observation seems accurate, yes.\n>\n\nI also find the observation correct and the code has been like that\nsince commit 5a991ef8 [1]. So, let's wait to see if Robert or Andres\nremembers the reason, otherwise, we should probably nuke this code.\n\n\n[1]\ncommit 5a991ef8692ed0d170b44958a81a6bd70e90585c\nAuthor: Robert Haas <[email protected]>\nDate: Mon Mar 10 13:50:28 2014 -0400\n\n Allow logical decoding via the walsender interface.\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 2 Mar 2024 16:44:28 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regardign RecentFlushPtr in WalSndWaitForWal()"
},
{
"msg_contents": "On Sat, Mar 2, 2024 at 4:44 PM Amit Kapila <[email protected]> wrote:\n>\n> Right, I think the quoted code has check \"if (!RecoveryInProgress())\".\n>\n> >\n> But apart from that, your\n> > observation seems accurate, yes.\n> >\n>\n> I also find the observation correct and the code has been like that\n> since commit 5a991ef8 [1]. So, let's wait to see if Robert or Andres\n> remembers the reason, otherwise, we should probably nuke this code.\n\nPlease find the patch attached for the same.\n\nthanks\nShveta",
"msg_date": "Mon, 11 Mar 2024 16:16:50 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regardign RecentFlushPtr in WalSndWaitForWal()"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 4:17 PM shveta malik <[email protected]> wrote:\n>\n> On Sat, Mar 2, 2024 at 4:44 PM Amit Kapila <[email protected]> wrote:\n> >\n> > Right, I think the quoted code has check \"if (!RecoveryInProgress())\".\n> >\n> > >\n> > But apart from that, your\n> > > observation seems accurate, yes.\n> > >\n> >\n> > I also find the observation correct and the code has been like that\n> > since commit 5a991ef8 [1]. So, let's wait to see if Robert or Andres\n> > remembers the reason, otherwise, we should probably nuke this code.\n>\n> Please find the patch attached for the same.\n>\n\nLGTM. I'll push this tomorrow unless I see any comments/objections to\nthis change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 11 Mar 2024 16:36:18 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regardign RecentFlushPtr in WalSndWaitForWal()"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 4:36 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Mar 11, 2024 at 4:17 PM shveta malik <[email protected]> wrote:\n> >\n> >\n> > Please find the patch attached for the same.\n> >\n>\n> LGTM. I'll push this tomorrow unless I see any comments/objections to\n> this change.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 12 Mar 2024 10:58:01 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regardign RecentFlushPtr in WalSndWaitForWal()"
}
] |
[
{
"msg_contents": "Hello Hackers. We’re proposing an improved README for PostgreSQL that\nincludes more helpful links for prospective PostgreSQL contributors and has\na nicer presentation.\n\nAlthough development does not take place on GitHub or GitLab for\nPostgreSQL, many developers might view the PostgreSQL source code using one\nof those mirrors (I do myself). Since both support Markdown files, a\nMarkdown version of the README (as README.md) gets presentational benefits\nthat I think are helpful.\n\nFor a head-to-head comparison of what that looks like, review the current\nREADME and a proposed README.md version below:\n\nCurrent version:\n\nhttps://github.com/andyatkinson/postgres/blob/master/README\n\nMarkdown README.md version on GitHub:\n\nhttps://github.com/andyatkinson/postgres/blob/e88138765750b6f7898089b4016641eee01bf616/README.md\n\n\n---- Feedback Requested ----\n\nSamay Sharma are both interested in the initial developer experience for\nPostgreSQL. We had a chat about the role the README plays in that, while\nit's a small role, we thought this might be a place to start.\n\n\nWe'd love some feedback.\n\nProspective contributors need to know about compilation, the mailing lists,\nand how the commitfest events work. This information is scattered around on\nwiki pages, but we're wondering if more could be brought into the README\nand whether that would help?\n\nIf you do check out the new file, we'd love to know whether you think\nthere's useful additions, or there's content that's missing.\n\nIf there's any kind of feedback or consensus on this thread, I'm happy to\ncreate and send a patch.\n\nThanks for taking a look!\n\nAndrew Atkinson w/ reviews from Samay Sharma\n\nHello Hackers. We’re proposing an improved README for PostgreSQL that includes more helpful links for prospective PostgreSQL contributors and has a nicer presentation.Although development does not take place on GitHub or GitLab for PostgreSQL, many developers might view the PostgreSQL source code using one of those mirrors (I do myself). Since both support Markdown files, a Markdown version of the README (as README.md) gets presentational benefits that I think are helpful.For a head-to-head comparison of what that looks like, review the current README and a proposed README.md version below:Current version:https://github.com/andyatkinson/postgres/blob/master/READMEMarkdown README.md version on GitHub:https://github.com/andyatkinson/postgres/blob/e88138765750b6f7898089b4016641eee01bf616/README.md---- Feedback Requested ----Samay Sharma are both interested in the initial developer experience for PostgreSQL. We had a chat about the role the README plays in that, while it's a small role, we thought this might be a place to start.We'd love some feedback.Prospective contributors need to know about compilation, the mailing lists, and how the commitfest events work. This information is scattered around on wiki pages, but we're wondering if more could be brought into the README and whether that would help?If you do check out the new file, we'd love to know whether you think there's useful additions, or there's content that's missing.If there's any kind of feedback or consensus on this thread, I'm happy to create and send a patch.Thanks for taking a look!Andrew Atkinson w/ reviews from Samay Sharma",
"msg_date": "Mon, 26 Feb 2024 11:31:19 -0600",
"msg_from": "Andrew Atkinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 11:31:19AM -0600, Andrew Atkinson wrote:\n> Hello Hackers. We’re proposing an improved README for PostgreSQL that\n> includes more helpful links for prospective PostgreSQL contributors and has\n> a nicer presentation.\n> \n> Although development does not take place on GitHub or GitLab for\n> PostgreSQL, many developers might view the PostgreSQL source code using one\n> of those mirrors (I do myself). Since both support Markdown files, a\n> Markdown version of the README (as README.md) gets presentational benefits\n> that I think are helpful.\n\nI think this would be nice. If the Markdown version is reasonably readable\nas plain-text, maybe we could avoid maintaining two READMEs files, too.\nBut overall, +1 to modernizing the README a bit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Feb 2024 12:30:18 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> I think this would be nice. If the Markdown version is reasonably readable\n> as plain-text, maybe we could avoid maintaining two READMEs files, too.\n> But overall, +1 to modernizing the README a bit.\n\nPer past track record, we change the top-level README only once every\nthree years or so, so I doubt it'd be too painful to maintain two\nversions of it.\n\nHaving said that, any proposal for this ought to be submitted as\na patch, rather than expecting people to go digging around on\nsome other repo.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Feb 2024 15:30:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "Thanks for the feedback Nathan and Tom. Samay also suggested adding the\npatch. I've added a .patch with the file for consideration.\n\nOn Mon, Feb 26, 2024 at 2:30 PM Tom Lane <[email protected]> wrote:\n\n> Nathan Bossart <[email protected]> writes:\n> > I think this would be nice. If the Markdown version is reasonably\n> readable\n> > as plain-text, maybe we could avoid maintaining two READMEs files, too.\n> > But overall, +1 to modernizing the README a bit.\n>\n> Per past track record, we change the top-level README only once every\n> three years or so, so I doubt it'd be too painful to maintain two\n> versions of it.\n>\n> Having said that, any proposal for this ought to be submitted as\n> a patch, rather than expecting people to go digging around on\n> some other repo.\n>\n> regards, tom lane\n>",
"msg_date": "Mon, 26 Feb 2024 14:40:05 -0600",
"msg_from": "Andrew Atkinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "> On 26 Feb 2024, at 21:30, Tom Lane <[email protected]> wrote:\n> \n> Nathan Bossart <[email protected]> writes:\n>> I think this would be nice. If the Markdown version is reasonably readable\n>> as plain-text, maybe we could avoid maintaining two READMEs files, too.\n>> But overall, +1 to modernizing the README a bit.\n> \n> Per past track record, we change the top-level README only once every\n> three years or so, so I doubt it'd be too painful to maintain two\n> versions of it.\n\nIt wont be, and we kind of already have two since there is another similar\nREADME displayed at https://www.postgresql.org/ftp/. That being said, a\nmajority of those reading the README will likely be new developers accustomed\nto Markdown (or doing so via interfaces such as Github) so going to Markdown\nmight not be a bad idea. We can also render a plain text version with pandoc\nfor release builds should we want to.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 14:46:11 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "+1 on the general idea. Maybe make that COPYRIGHT link go to an absolute\nURI, like all the other links, in case this file gets copied somewhere?\nPerhaps point it to https://www.postgresql.org/about/licence/\nCheers,\nGreg\n\n+1 on the general idea. Maybe make that COPYRIGHT link go to an absolute URI, like all the other links, in case this file gets copied somewhere? Perhaps point it to https://www.postgresql.org/about/licence/ Cheers,Greg",
"msg_date": "Wed, 28 Feb 2024 09:56:57 -0500",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 02:46:11PM +0100, Daniel Gustafsson wrote:\n>> On 26 Feb 2024, at 21:30, Tom Lane <[email protected]> wrote:\n>> Nathan Bossart <[email protected]> writes:\n>>> I think this would be nice. If the Markdown version is reasonably readable\n>>> as plain-text, maybe we could avoid maintaining two READMEs files, too.\n>>> But overall, +1 to modernizing the README a bit.\n>> \n>> Per past track record, we change the top-level README only once every\n>> three years or so, so I doubt it'd be too painful to maintain two\n>> versions of it.\n> \n> It wont be, and we kind of already have two since there is another similar\n> README displayed at https://www.postgresql.org/ftp/. That being said, a\n> majority of those reading the README will likely be new developers accustomed\n> to Markdown (or doing so via interfaces such as Github) so going to Markdown\n> might not be a bad idea. We can also render a plain text version with pandoc\n> for release builds should we want to.\n\nSorry, my suggestion wasn't meant to imply that I have any strong concerns\nabout maintaining two README files. If we can automate generating one or\nthe other, that'd be great, but I don't see that as a prerequisite to\nadding a Markdown version.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Feb 2024 11:02:37 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 09:56:57AM -0500, Greg Sabino Mullane wrote:\n> +1 on the general idea. Maybe make that COPYRIGHT link go to an absolute\n> URI, like all the other links, in case this file gets copied somewhere?\n> Perhaps point it to https://www.postgresql.org/about/licence/\n\nI suspect there will be quite a bit of discussion about what to add to the\nREADME, which is great, but I think we should establish an order of\noperations here. We could either add suggested content to the README and\nthen create an identical Markdown version, or we could create a Markdown\nversion and add content to both afterwards. The former has my vote since\nit seems like it would require less churn. In any case, I think it would\nbe useful to keep the Markdown effort separate from the content effort\nsomehow (e.g., separate threads).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Feb 2024 11:10:48 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> Sorry, my suggestion wasn't meant to imply that I have any strong concerns\n> about maintaining two README files. If we can automate generating one or\n> the other, that'd be great, but I don't see that as a prerequisite to\n> adding a Markdown version.\n\nAgreed, and I'd go so far as to say that adding automation now\nwould be investing work that might well go to waste. When and\nif we get annoyed by the manual labor involved in maintaining\ntwo copies, it'd be time to put work into automating it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Feb 2024 12:17:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "> On 28 Feb 2024, at 18:02, Nathan Bossart <[email protected]> wrote:\n> \n> On Wed, Feb 28, 2024 at 02:46:11PM +0100, Daniel Gustafsson wrote:\n>>> On 26 Feb 2024, at 21:30, Tom Lane <[email protected]> wrote:\n>>> Nathan Bossart <[email protected]> writes:\n>>>> I think this would be nice. If the Markdown version is reasonably readable\n>>>> as plain-text, maybe we could avoid maintaining two READMEs files, too.\n>>>> But overall, +1 to modernizing the README a bit.\n>>> \n>>> Per past track record, we change the top-level README only once every\n>>> three years or so, so I doubt it'd be too painful to maintain two\n>>> versions of it.\n>> \n>> It wont be, and we kind of already have two since there is another similar\n>> README displayed at https://www.postgresql.org/ftp/. That being said, a\n>> majority of those reading the README will likely be new developers accustomed\n>> to Markdown (or doing so via interfaces such as Github) so going to Markdown\n>> might not be a bad idea. We can also render a plain text version with pandoc\n>> for release builds should we want to.\n> \n> Sorry, my suggestion wasn't meant to imply that I have any strong concerns\n> about maintaining two README files. If we can automate generating one or\n> the other, that'd be great, but I don't see that as a prerequisite to\n> adding a Markdown version.\n\nAgreed, and I didn't say we should do it but rather that we can do it based on\nthe toolchain we already have. Personally I think just having a Markdown\nversion is enough, it's become the de facto standard for such documentation for\ngood reasons.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 18:25:32 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 2/28/24 12:25, Daniel Gustafsson wrote:\n>> On 28 Feb 2024, at 18:02, Nathan Bossart <[email protected]> wrote:\n>> \n>> On Wed, Feb 28, 2024 at 02:46:11PM +0100, Daniel Gustafsson wrote:\n>>>> On 26 Feb 2024, at 21:30, Tom Lane <[email protected]> wrote:\n>>>> Nathan Bossart <[email protected]> writes:\n>>>>> I think this would be nice. If the Markdown version is reasonably readable\n>>>>> as plain-text, maybe we could avoid maintaining two READMEs files, too.\n>>>>> But overall, +1 to modernizing the README a bit.\n>>>> \n>>>> Per past track record, we change the top-level README only once every\n>>>> three years or so, so I doubt it'd be too painful to maintain two\n>>>> versions of it.\n>>> \n>>> It wont be, and we kind of already have two since there is another similar\n>>> README displayed at https://www.postgresql.org/ftp/. That being said, a\n>>> majority of those reading the README will likely be new developers accustomed\n>>> to Markdown (or doing so via interfaces such as Github) so going to Markdown\n>>> might not be a bad idea. We can also render a plain text version with pandoc\n>>> for release builds should we want to.\n>> \n>> Sorry, my suggestion wasn't meant to imply that I have any strong concerns\n>> about maintaining two README files. If we can automate generating one or\n>> the other, that'd be great, but I don't see that as a prerequisite to\n>> adding a Markdown version.\n> \n> Agreed, and I didn't say we should do it but rather that we can do it based on\n> the toolchain we already have. Personally I think just having a Markdown\n> version is enough, it's become the de facto standard for such documentation for\n> good reasons.\n\n+1\n\nMarkdown is pretty readable as text, I'm not sure why we need both.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 12:43:24 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 2024-Feb-28, Joe Conway wrote:\n\n> Markdown is pretty readable as text, I'm not sure why we need both.\n\n*IF* people don't go overboard, yes. I agree, but let's keep an eye so\nthat it doesn't become an unreadable mess. I've seen some really\nhorrible markdown files that I'm sure most of you would object to.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 28 Feb 2024 19:51:02 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Feb 28, 2024, at 1:51 PM, Alvaro Herrera <[email protected]> wrote:\n \n> *IF* people don't go overboard, yes. I agree, but let's keep an eye so\n> that it doesn't become an unreadable mess. I've seen some really\n> horrible markdown files that I'm sure most of you would object to.\n\nMarkdown++\n\nIME the keys to decent-looking Markdown are:\n\n1. Wrapping lines to a legible width (76-80 chars)\n2. Link references rather than inline links\n\nI try to follow these for my blog; posts end up looking like this:\n\nhttps://justatheory.com/2024/02/extension-metadata-typology.text\n\n(Append `.text` to any post to see the raw(ish) Markdown.\n\nBest,\n\nDavid\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 13:54:59 -0500",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "> On 28 Feb 2024, at 19:51, Alvaro Herrera <[email protected]> wrote:\n> \n> On 2024-Feb-28, Joe Conway wrote:\n> \n>> Markdown is pretty readable as text, I'm not sure why we need both.\n> \n> *IF* people don't go overboard, yes. I agree, but let's keep an eye so\n> that it doesn't become an unreadable mess. I've seen some really\n> horrible markdown files that I'm sure most of you would object to.\n\nAbsolutely, I agree. Considering the lengths we go to to keep our code readable I’m not worried, I expect that a markdown README would end up pretty close to the txt version we have today.\n\n./daniel\n\n",
"msg_date": "Wed, 28 Feb 2024 20:07:30 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 01:54:59PM -0500, David E. Wheeler wrote:\n> On Feb 28, 2024, at 1:51 PM, Alvaro Herrera <[email protected]> wrote:\n>> *IF* people don't go overboard, yes. I agree, but let's keep an eye so\n>> that it doesn't become an unreadable mess. I've seen some really\n>> horrible markdown files that I'm sure most of you would object to.\n> \n> Markdown++\n> \n> IME the keys to decent-looking Markdown are:\n> \n> 1. Wrapping lines to a legible width (76-80 chars)\n> 2. Link references rather than inline links\n> \n> I try to follow these for my blog; posts end up looking like this:\n> \n> https://justatheory.com/2024/02/extension-metadata-typology.text\n> \n> (Append `.text` to any post to see the raw(ish) Markdown.\n\nHere is what converting the current README to Markdown with no other\ncontent changes might look like.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 28 Feb 2024 13:08:03 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 01:08:03PM -0600, Nathan Bossart wrote:\n> -PostgreSQL Database Management System\n> -=====================================\n> +# PostgreSQL Database Management System\n\nThis change can be omitted, which makes the conversion even simpler.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 28 Feb 2024 13:30:36 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Wed, Feb 28, 2024 at 01:08:03PM -0600, Nathan Bossart wrote:\n>> -PostgreSQL Database Management System\n>> -=====================================\n>> +# PostgreSQL Database Management System\n\n> This change can be omitted, which makes the conversion even simpler.\n\nThat's a pretty convincing proof-of-concept. Let's just do this,\nand then make sure to keep the file legible as plain text.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Feb 2024 14:36:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "I've grabbed Nathan's patch, and pushed it to GitHub simply to preview the\nrendered Markdown there. This isn't intended to be reviewed, this is just\nfor anyone else that's interested in easily seeing the HTML version of the\nMarkdown file compared with the earlier one.\n\nNathan's direct conversion:\nhttps://github.com/postgres/postgres/blob/9c0f1dd350ee29ad3ade2816c4338b7ca5186bba/README.md\n\nOriginal email version with more sections and content:\nhttps://github.com/andyatkinson/postgres/blob/e88138765750b6f7898089b4016641eee01bf616/README.md\n\nI agree that starting with the direct conversion is reasonable. Markdown\n\"modernizes\" the file using a popular plain text file format that's\nrenderable.\n\nHowever, I also think it would be cool to get some input on what the most\nuseful 2-3 content items are for new developers and make any additions\npossible there. In writing this, I had an idea to ask about whether this\ntopic could be covered as an upcoming PostgreSQL community blog post\nseries. In theory, we could gather a variety of perspectives that way. That\ncould make it less subjective if we see several people independently\nsuggesting a particular wiki page for example, for inclusion in the README.\nI'll pursue that outside the mailing list and report back!\n\n\n\nOn Wed, Feb 28, 2024 at 1:36 PM Tom Lane <[email protected]> wrote:\n\n> Nathan Bossart <[email protected]> writes:\n> > On Wed, Feb 28, 2024 at 01:08:03PM -0600, Nathan Bossart wrote:\n> >> -PostgreSQL Database Management System\n> >> -=====================================\n> >> +# PostgreSQL Database Management System\n>\n> > This change can be omitted, which makes the conversion even simpler.\n>\n> That's a pretty convincing proof-of-concept. Let's just do this,\n> and then make sure to keep the file legible as plain text.\n>\n> regards, tom lane\n>\n\nI've grabbed Nathan's patch, and pushed it to GitHub simply to preview the rendered Markdown there. This isn't intended to be reviewed, this is just for anyone else that's interested in easily seeing the HTML version of the Markdown file compared with the earlier one.Nathan's direct conversion:https://github.com/postgres/postgres/blob/9c0f1dd350ee29ad3ade2816c4338b7ca5186bba/README.mdOriginal email version with more sections and content:https://github.com/andyatkinson/postgres/blob/e88138765750b6f7898089b4016641eee01bf616/README.mdI agree that starting with the direct conversion is reasonable. Markdown \"modernizes\" the file using a popular plain text file format that's renderable.However, I also think it would be cool to get some input on what the most useful 2-3 content items are for new developers and make any additions possible there. In writing this, I had an idea to ask about whether this topic could be covered as an upcoming PostgreSQL community blog post series. In theory, we could gather a variety of perspectives that way. That could make it less subjective if we see several people independently suggesting a particular wiki page for example, for inclusion in the README. 
I'll pursue that outside the mailing list and report back!On Wed, Feb 28, 2024 at 1:36 PM Tom Lane <[email protected]> wrote:Nathan Bossart <[email protected]> writes:\n> On Wed, Feb 28, 2024 at 01:08:03PM -0600, Nathan Bossart wrote:\n>> -PostgreSQL Database Management System\n>> -=====================================\n>> +# PostgreSQL Database Management System\n\n> This change can be omitted, which makes the conversion even simpler.\n\nThat's a pretty convincing proof-of-concept. Let's just do this,\nand then make sure to keep the file legible as plain text.\n\n regards, tom lane",
"msg_date": "Wed, 28 Feb 2024 14:07:34 -0600",
"msg_from": "Andrew Atkinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "> On 28 Feb 2024, at 20:36, Tom Lane <[email protected]> wrote:\n> \n> Nathan Bossart <[email protected]> writes:\n>> On Wed, Feb 28, 2024 at 01:08:03PM -0600, Nathan Bossart wrote:\n>>> -PostgreSQL Database Management System\n>>> -=====================================\n>>> +# PostgreSQL Database Management System\n> \n>> This change can be omitted, which makes the conversion even simpler.\n> \n> That's a pretty convincing proof-of-concept. Let's just do this,\n> and then make sure to keep the file legible as plain text.\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 21:13:56 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 2/28/24 14:36, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> On Wed, Feb 28, 2024 at 01:08:03PM -0600, Nathan Bossart wrote:\n>>> -PostgreSQL Database Management System\n>>> -=====================================\n>>> +# PostgreSQL Database Management System\n> \n>> This change can be omitted, which makes the conversion even simpler.\n> \n> That's a pretty convincing proof-of-concept. Let's just do this,\n> and then make sure to keep the file legible as plain text.\n\n+1\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 15:17:34 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 02:07:34PM -0600, Andrew Atkinson wrote:\n> I've grabbed Nathan's patch, and pushed it to GitHub simply to preview the\n> rendered Markdown there. This isn't intended to be reviewed, this is just\n> for anyone else that's interested in easily seeing the HTML version of the\n> Markdown file compared with the earlier one.\n> \n> Nathan's direct conversion:\n> https://github.com/postgres/postgres/blob/9c0f1dd350ee29ad3ade2816c4338b7ca5186bba/README.md\n> \n> Original email version with more sections and content:\n> https://github.com/andyatkinson/postgres/blob/e88138765750b6f7898089b4016641eee01bf616/README.md\n> \n> I agree that starting with the direct conversion is reasonable. Markdown\n> \"modernizes\" the file using a popular plain text file format that's\n> renderable.\n\nThanks. I'll commit this shortly.\n\n> However, I also think it would be cool to get some input on what the most\n> useful 2-3 content items are for new developers and make any additions\n> possible there. In writing this, I had an idea to ask about whether this\n> topic could be covered as an upcoming PostgreSQL community blog post\n> series. In theory, we could gather a variety of perspectives that way. That\n> could make it less subjective if we see several people independently\n> suggesting a particular wiki page for example, for inclusion in the README.\n> I'll pursue that outside the mailing list and report back!\n\nI see many projects have files like SECURITY.md, CODE_OF_CONDUCT.md, and\nCONTRIBUTING.md, and I think it would be relatively easy to add content to\neach of those for PostgreSQL, even if they just link elsewhere.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Feb 2024 14:21:49 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 02:21:49PM -0600, Nathan Bossart wrote:\n> On Wed, Feb 28, 2024 at 02:07:34PM -0600, Andrew Atkinson wrote:\n>> I agree that starting with the direct conversion is reasonable. Markdown\n>> \"modernizes\" the file using a popular plain text file format that's\n>> renderable.\n> \n> Thanks. I'll commit this shortly.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Feb 2024 14:59:16 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "Looks good!\n\nPresentation of Markdown file as HTML on mirrors I know of:\nhttps://github.com/postgres/postgres/blob/master/README.md\nhttps://gitlab.com/postgres/postgres/-/blob/master/README.md\n\n\n\nOn Wed, Feb 28, 2024 at 2:59 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Wed, Feb 28, 2024 at 02:21:49PM -0600, Nathan Bossart wrote:\n> > On Wed, Feb 28, 2024 at 02:07:34PM -0600, Andrew Atkinson wrote:\n> >> I agree that starting with the direct conversion is reasonable. Markdown\n> >> \"modernizes\" the file using a popular plain text file format that's\n> >> renderable.\n> >\n> > Thanks. I'll commit this shortly.\n>\n> Committed.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>\n\nLooks good!Presentation of Markdown file as HTML on mirrors I know of:https://github.com/postgres/postgres/blob/master/README.mdhttps://gitlab.com/postgres/postgres/-/blob/master/README.mdOn Wed, Feb 28, 2024 at 2:59 PM Nathan Bossart <[email protected]> wrote:On Wed, Feb 28, 2024 at 02:21:49PM -0600, Nathan Bossart wrote:\n> On Wed, Feb 28, 2024 at 02:07:34PM -0600, Andrew Atkinson wrote:\n>> I agree that starting with the direct conversion is reasonable. Markdown\n>> \"modernizes\" the file using a popular plain text file format that's\n>> renderable.\n> \n> Thanks. I'll commit this shortly.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 28 Feb 2024 15:43:27 -0600",
"msg_from": "Andrew Atkinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 28.02.24 20:30, Nathan Bossart wrote:\n> On Wed, Feb 28, 2024 at 01:08:03PM -0600, Nathan Bossart wrote:\n>> -PostgreSQL Database Management System\n>> -=====================================\n>> +# PostgreSQL Database Management System\n> \n> This change can be omitted, which makes the conversion even simpler.\n\nThe committed README.md contains trailing whitespace on one line:\n\n General documentation about this version of PostgreSQL can be found at:$\n-https://www.postgresql.org/docs/devel/$\n+<https://www.postgresql.org/docs/devel/> $\n\nIf this is intentional (it could be, since trailing whitespace is \npotentially significant in Markdown), then please add something to \n.gitattributes. Otherwise, please fix that.\n\n\n\n",
"msg_date": "Thu, 21 Mar 2024 14:42:27 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 11:55 AM David E. Wheeler <[email protected]>\nwrote:\n\n>\n> IME the keys to decent-looking Markdown are:\n>\n> 1. Wrapping lines to a legible width (76-80 chars)\n> 2. Link references rather than inline links\n\n\n+1 on Markdown including David's suggestions above. Agree that without\nproper guidelines,\nmd files can become messy looking.\n\nRoberto\n\nOn Wed, Feb 28, 2024 at 11:55 AM David E. Wheeler <[email protected]> wrote:\nIME the keys to decent-looking Markdown are:\n\n1. Wrapping lines to a legible width (76-80 chars)\n2. Link references rather than inline links +1 on Markdown including David's suggestions above. Agree that without proper guidelines,md files can become messy looking.Roberto",
"msg_date": "Thu, 21 Mar 2024 07:52:38 -0600",
"msg_from": "Roberto Mello <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Thu, Mar 21, 2024 at 02:42:27PM +0100, Peter Eisentraut wrote:\n> The committed README.md contains trailing whitespace on one line:\n> \n> General documentation about this version of PostgreSQL can be found at:$\n> -https://www.postgresql.org/docs/devel/$\n> +<https://www.postgresql.org/docs/devel/> $\n> \n> If this is intentional (it could be, since trailing whitespace is\n> potentially significant in Markdown), then please add something to\n> .gitattributes. Otherwise, please fix that.\n\nI added that to maintain the line break that was in the non-Markdown\nversion. I'd rather match the style of the following paragraph (patch\nattached) than mess with .gitattributes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 21 Mar 2024 09:11:30 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "> On 21 Mar 2024, at 15:11, Nathan Bossart <[email protected]> wrote:\n> \n> On Thu, Mar 21, 2024 at 02:42:27PM +0100, Peter Eisentraut wrote:\n>> The committed README.md contains trailing whitespace on one line:\n>> \n>> General documentation about this version of PostgreSQL can be found at:$\n>> -https://www.postgresql.org/docs/devel/$\n>> +<https://www.postgresql.org/docs/devel/> $\n>> \n>> If this is intentional (it could be, since trailing whitespace is\n>> potentially significant in Markdown), then please add something to\n>> .gitattributes. Otherwise, please fix that.\n> \n> I added that to maintain the line break that was in the non-Markdown\n> version. I'd rather match the style of the following paragraph (patch\n> attached) than mess with .gitattributes.\n\n+1, this looks better IMHO.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 21 Mar 2024 15:24:17 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Thu, Mar 21, 2024 at 03:24:17PM +0100, Daniel Gustafsson wrote:\n>> On 21 Mar 2024, at 15:11, Nathan Bossart <[email protected]> wrote:\n>> I added that to maintain the line break that was in the non-Markdown\n>> version. I'd rather match the style of the following paragraph (patch\n>> attached) than mess with .gitattributes.\n> \n> +1, this looks better IMHO.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Mar 2024 10:20:32 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 2024-02-28 14:59:16 -0600, Nathan Bossart wrote:\n> On Wed, Feb 28, 2024 at 02:21:49PM -0600, Nathan Bossart wrote:\n> > On Wed, Feb 28, 2024 at 02:07:34PM -0600, Andrew Atkinson wrote:\n> >> I agree that starting with the direct conversion is reasonable. Markdown\n> >> \"modernizes\" the file using a popular plain text file format that's\n> >> renderable.\n> > \n> > Thanks. I'll commit this shortly.\n> \n> Committed.\n\nI was out while this was proposed and committed. Just wanted to say: Thanks!\nIt was high time that we added this...\n\n\n",
"msg_date": "Thu, 21 Mar 2024 10:24:12 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Thu, Mar 21, 2024 at 10:24:12AM -0700, Andres Freund wrote:\n> I was out while this was proposed and committed. Just wanted to say: Thanks!\n> It was high time that we added this...\n\nDefinitely. I hope we are able to build on this in the near future.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Mar 2024 13:39:37 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 02:21:49PM -0600, Nathan Bossart wrote:\n> I see many projects have files like SECURITY.md, CODE_OF_CONDUCT.md, and\n> CONTRIBUTING.md, and I think it would be relatively easy to add content to\n> each of those for PostgreSQL, even if they just link elsewhere.\n\nHere's a first attempt at this. You can see some of the effects of these\nfiles at [0]. More information about these files is available at [1] [2]\n[3].\n\nI figured we'd want to keep these pretty bare-bones to avoid duplicating\ninformation that's already available at postgresql.org, but perhaps it's\nworth filling them out a bit more. Anyway, I just wanted to gauge interest\nin this stuff.\n\n[0] https://github.com/nathan-bossart/postgres/tree/special-files\n[1] https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository\n[2] https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/adding-a-code-of-conduct-to-your-project\n[3] https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/setting-guidelines-for-repository-contributors\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 16 Apr 2024 21:36:09 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 17.04.24 04:36, Nathan Bossart wrote:\n> On Wed, Feb 28, 2024 at 02:21:49PM -0600, Nathan Bossart wrote:\n>> I see many projects have files like SECURITY.md, CODE_OF_CONDUCT.md, and\n>> CONTRIBUTING.md, and I think it would be relatively easy to add content to\n>> each of those for PostgreSQL, even if they just link elsewhere.\n> Here's a first attempt at this. You can see some of the effects of these\n> files at [0]. More information about these files is available at [1] [2]\n> [3].\n> \n> I figured we'd want to keep these pretty bare-bones to avoid duplicating\n> information that's already available at postgresql.org, but perhaps it's\n> worth filling them out a bit more. Anyway, I just wanted to gauge interest\n> in this stuff.\n\nI don't know, I find these files kind of \"yelling\". It's fine to have a \ncouple, but now it's getting a bit much, and there are more that could \nbe added.\n\nIf we want to enhance the GitHub experience, we can also add these files \nto the organization instead: \nhttps://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/creating-a-default-community-health-file\n\n\n\n",
"msg_date": "Sun, 12 May 2024 17:17:42 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Sun, May 12, 2024 at 05:17:42PM +0200, Peter Eisentraut wrote:\n> I don't know, I find these files kind of \"yelling\". It's fine to have a\n> couple, but now it's getting a bit much, and there are more that could be\n> added.\n\nI'm not sure what you mean by this. Do you mean that the contents are too\nblunt? That there are too many files? Something else?\n\n> If we want to enhance the GitHub experience, we can also add these files to\n> the organization instead: https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/creating-a-default-community-health-file\n\nThis was the intent of my patch. There might be a few others that we could\nuse, but I figured we could start with the low-hanging fruit that would\nhave the most impact on the GitHub experience.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 May 2024 10:26:09 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 2024-May-13, Nathan Bossart wrote:\n\n> > If we want to enhance the GitHub experience, we can also add these files to\n> > the organization instead: https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/creating-a-default-community-health-file\n> \n> This was the intent of my patch. There might be a few others that we could\n> use, but I figured we could start with the low-hanging fruit that would\n> have the most impact on the GitHub experience.\n\nCan't we add these two lines per topic to the README.md?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No hay hombre que no aspire a la plenitud, es decir,\nla suma de experiencias de que un hombre es capaz\"\n\n\n",
"msg_date": "Mon, 13 May 2024 17:43:45 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Mon, May 13, 2024 at 05:43:45PM +0200, Alvaro Herrera wrote:\n> Can't we add these two lines per topic to the README.md?\n\nThat would be fine with me, too. The multiple-files approach is perhaps a\nbit more tailored to GitHub, but there's something to be said for keeping\nthis information centralized.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 May 2024 10:55:53 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 13.05.24 17:43, Alvaro Herrera wrote:\n> On 2024-May-13, Nathan Bossart wrote:\n> \n>>> If we want to enhance the GitHub experience, we can also add these files to\n>>> the organization instead: https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/creating-a-default-community-health-file\n>>\n>> This was the intent of my patch. There might be a few others that we could\n>> use, but I figured we could start with the low-hanging fruit that would\n>> have the most impact on the GitHub experience.\n> \n> Can't we add these two lines per topic to the README.md?\n\nThe point of these special file names is that GitHub will produce \nspecial links to them. If you look at Nathan's tree\n\nhttps://github.com/nathan-bossart/postgres/tree/special-files\n\nand scroll down to the README display, you will see links for \"Code of \nConduct\", \"License\", and \"Security\" across the top.\n\nWhether it's worth having these files just to produce these links is the \ndebate.\n\n\n\n",
"msg_date": "Tue, 14 May 2024 09:55:49 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 13.05.24 17:26, Nathan Bossart wrote:\n> On Sun, May 12, 2024 at 05:17:42PM +0200, Peter Eisentraut wrote:\n>> I don't know, I find these files kind of \"yelling\". It's fine to have a\n>> couple, but now it's getting a bit much, and there are more that could be\n>> added.\n> \n> I'm not sure what you mean by this. Do you mean that the contents are too\n> blunt? That there are too many files? Something else?\n\nI mean the all-caps file names, cluttering up the top-level directory.\n\n>> If we want to enhance the GitHub experience, we can also add these files to\n>> the organization instead: https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/creating-a-default-community-health-file\n> \n> This was the intent of my patch. There might be a few others that we could\n> use, but I figured we could start with the low-hanging fruit that would\n> have the most impact on the GitHub experience.\n\nMy point is, in order to get that enhanced GitHub experience, you don't \nactually have to commit these files into the individual source code \nrepository. You can add them to the organization and they will apply to \nall repositories under the organization. This is explained at the above \nlink.\n\nHowever, I don't think these files are actually that useful. People can \ngo to the web site to find out about things about the PostgreSQL \ncommunity. We don't need to add bunch of $X.md files that just say, \nessentially, got to postgresql.org/$X.\n\n\n",
"msg_date": "Tue, 14 May 2024 10:05:01 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Tue, May 14, 2024 at 10:05:01AM +0200, Peter Eisentraut wrote:\n> On 13.05.24 17:26, Nathan Bossart wrote:\n>> On Sun, May 12, 2024 at 05:17:42PM +0200, Peter Eisentraut wrote:\n>> > I don't know, I find these files kind of \"yelling\". It's fine to have a\n>> > couple, but now it's getting a bit much, and there are more that could be\n>> > added.\n>> \n>> I'm not sure what you mean by this. Do you mean that the contents are too\n>> blunt? That there are too many files? Something else?\n> \n> I mean the all-caps file names, cluttering up the top-level directory.\n\nIt looks like we could also put these files in .github/ or docs/ to avoid\nthe clutter.\n\n>> > If we want to enhance the GitHub experience, we can also add these files to\n>> > the organization instead: https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/creating-a-default-community-health-file\n>> \n>> This was the intent of my patch. There might be a few others that we could\n>> use, but I figured we could start with the low-hanging fruit that would\n>> have the most impact on the GitHub experience.\n> \n> My point is, in order to get that enhanced GitHub experience, you don't\n> actually have to commit these files into the individual source code\n> repository. You can add them to the organization and they will apply to all\n> repositories under the organization. This is explained at the above link.\n\nOh, I apologize, my brain skipped over the word \"organization\" in your\nmessage.\n\n> However, I don't think these files are actually that useful. People can go\n> to the web site to find out about things about the PostgreSQL community. We\n> don't need to add bunch of $X.md files that just say, essentially, got to\n> postgresql.org/$X.\n\nThat's a reasonable stance. I think the main argument in favor of these\nextra files is to make things a tad more accessible to folks who are\naccustomed to using GitHub when contributing to open-source projects, but\nyou're right that this information is already pretty easy to find.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 May 2024 10:54:48 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Tue, May 14, 2024 at 10:05:01AM +0200, Peter Eisentraut wrote:\n>> My point is, in order to get that enhanced GitHub experience, you don't\n>> actually have to commit these files into the individual source code\n>> repository. You can add them to the organization and they will apply to all\n>> repositories under the organization. This is explained at the above link.\n\n> Oh, I apologize, my brain skipped over the word \"organization\" in your\n> message.\n\nFWIW, I'd vote against doing it that way, because then\nmaintaining/updating those files would only be possible for whoever\nowns the github repo. I don't have a position on whether we want\nthese additional files or not; but if we do, I think the best answer\nis to stick 'em under .github/ where they are out of the way but yet\nupdatable by any committer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 May 2024 12:06:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 2024-May-14, Tom Lane wrote:\n\n> I don't have a position on whether we want\n> these additional files or not; but if we do, I think the best answer\n> is to stick 'em under .github/ where they are out of the way but yet\n> updatable by any committer.\n\n+1 for .github/, that was my first reaction as well after reading the\nlink Peter posted.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Los trabajadores menos efectivos son sistematicamente llevados al lugar\ndonde pueden hacer el menor daño posible: gerencia.\" (El principio Dilbert)\n\n\n",
"msg_date": "Tue, 14 May 2024 18:12:26 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Tue, May 14, 2024 at 06:12:26PM +0200, Alvaro Herrera wrote:\n> On 2024-May-14, Tom Lane wrote:\n> \n>> I don't have a position on whether we want\n>> these additional files or not; but if we do, I think the best answer\n>> is to stick 'em under .github/ where they are out of the way but yet\n>> updatable by any committer.\n> \n> +1 for .github/, that was my first reaction as well after reading the\n> link Peter posted.\n\nHere's an updated patch that uses .github/.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 14 May 2024 12:33:21 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 14.05.24 19:33, Nathan Bossart wrote:\n> On Tue, May 14, 2024 at 06:12:26PM +0200, Alvaro Herrera wrote:\n>> On 2024-May-14, Tom Lane wrote:\n>>\n>>> I don't have a position on whether we want\n>>> these additional files or not; but if we do, I think the best answer\n>>> is to stick 'em under .github/ where they are out of the way but yet\n>>> updatable by any committer.\n>>\n>> +1 for .github/, that was my first reaction as well after reading the\n>> link Peter posted.\n> \n> Here's an updated patch that uses .github/.\n\nI'm fine with putting them under .github/.\n\nI think for CONTRIBUTING.md, a better link would be \n<https://www.postgresql.org/developer/>.\n\n\n\n",
"msg_date": "Wed, 15 May 2024 07:23:19 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Wed, May 15, 2024 at 07:23:19AM +0200, Peter Eisentraut wrote:\n> I think for CONTRIBUTING.md, a better link would be\n> <https://www.postgresql.org/developer/>.\n\nWFM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 15 May 2024 14:34:09 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On 15.05.24 21:34, Nathan Bossart wrote:\n> On Wed, May 15, 2024 at 07:23:19AM +0200, Peter Eisentraut wrote:\n>> I think for CONTRIBUTING.md, a better link would be\n>> <https://www.postgresql.org/developer/>.\n> \n> WFM\n\nThis patch version looks good to me.\n\n\n\n",
"msg_date": "Thu, 16 May 2024 12:07:32 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "+Andres\n\nOn Thu, May 16, 2024 at 12:07:32PM +0200, Peter Eisentraut wrote:\n> This patch version looks good to me.\n\nAt pgconf.dev, Andres opined that it would be better to put these files in\nthe top-level directory so that they would be more visible to non-GitHub\nusers. I personally don't have any strong opinion on the matter, but I\nwill note that even though the files I have staged for commit are pretty\nbare-bones, I do think we should eventually build on their content so that\nthey are more than just links to the website. My goal here is to get\nsomething basic in place so that future discussion can be focused on the\ncontent.\n\nTom, Alvaro, and Peter have all expressed a preference to use the .github/\ndirectory, so at the moment I still intend to proceed with the v3 patch\nunless further discussion changes things.\n\n-- \nnathan\n\n\n",
"msg_date": "Mon, 3 Jun 2024 10:10:58 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "On Mon, Jun 03, 2024 at 10:10:58AM -0500, Nathan Bossart wrote:\n> Tom, Alvaro, and Peter have all expressed a preference to use the .github/\n> directory, so at the moment I still intend to proceed with the v3 patch\n> unless further discussion changes things.\n\nCommitted.\n\nWe could also add a GOVERNANCE.md file that points to the new\nhttps://www.postgresql.org/about/governance/ page, but I couldn't find\nwhere that gets displayed on GitHub, so AFAICT it would just be buried in\n.github/.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 2 Jul 2024 13:11:04 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> We could also add a GOVERNANCE.md file that points to the new\n> https://www.postgresql.org/about/governance/ page, but I couldn't find\n> where that gets displayed on GitHub, so AFAICT it would just be buried in\n> .github/.\n\nNot much point then. IMV this subdirectory is just there to provide\nthings that GitHub displays specially.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Jul 2024 14:15:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: An improved README experience for PostgreSQL"
}
] |
[
{
"msg_contents": "Per recent discussion[1], plpgsql returns fairly unhelpful \"syntax\nerror\" messages when a %TYPE or %ROWTYPE construct references a\nnonexistent object. Here's a quick little finger exercise to try\nto improve that.\n\nThe basic point is that plpgsql_parse_wordtype and friends are\ndesigned to return NULL rather than failing (at least when it's\neasy to do so), but that leaves the caller without enough info\nto deliver a good error message. There is only one caller,\nand it has no use at all for this behavior, so let's just\nchange those functions to throw appropriate errors. Amusingly,\nplpgsql_parse_wordrowtype was already behaving that way, and\nplpgsql_parse_cwordrowtype did so in more cases than not,\nso we didn't even have a consistent \"return NULL\" story.\n\nAlong the way I got rid of plpgsql_parse_cwordtype's restriction\non what relkinds can be referenced. I don't really see the\npoint of that --- as long as the relation has the desired\ncolumn, the column's type is surely well-defined.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/88b574f4-cc08-46c5-826b-020849e5a356%40gelassene-pferde.biz",
"msg_date": "Mon, 26 Feb 2024 15:02:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Better error messages for %TYPE and %ROWTYPE in plpgsql"
},
{
"msg_contents": "po 26. 2. 2024 v 21:02 odesílatel Tom Lane <[email protected]> napsal:\n\n> Per recent discussion[1], plpgsql returns fairly unhelpful \"syntax\n> error\" messages when a %TYPE or %ROWTYPE construct references a\n> nonexistent object. Here's a quick little finger exercise to try\n> to improve that.\n>\n> The basic point is that plpgsql_parse_wordtype and friends are\n> designed to return NULL rather than failing (at least when it's\n> easy to do so), but that leaves the caller without enough info\n> to deliver a good error message. There is only one caller,\n> and it has no use at all for this behavior, so let's just\n> change those functions to throw appropriate errors. Amusingly,\n> plpgsql_parse_wordrowtype was already behaving that way, and\n> plpgsql_parse_cwordrowtype did so in more cases than not,\n> so we didn't even have a consistent \"return NULL\" story.\n>\n> Along the way I got rid of plpgsql_parse_cwordtype's restriction\n> on what relkinds can be referenced. I don't really see the\n> point of that --- as long as the relation has the desired\n> column, the column's type is surely well-defined.\n>\n\n+1\n\nPavel\n\n\n> regards, tom lane\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/88b574f4-cc08-46c5-826b-020849e5a356%40gelassene-pferde.biz\n>\n>\n\npo 26. 2. 2024 v 21:02 odesílatel Tom Lane <[email protected]> napsal:Per recent discussion[1], plpgsql returns fairly unhelpful \"syntax\nerror\" messages when a %TYPE or %ROWTYPE construct references a\nnonexistent object. Here's a quick little finger exercise to try\nto improve that.\n\nThe basic point is that plpgsql_parse_wordtype and friends are\ndesigned to return NULL rather than failing (at least when it's\neasy to do so), but that leaves the caller without enough info\nto deliver a good error message. There is only one caller,\nand it has no use at all for this behavior, so let's just\nchange those functions to throw appropriate errors. Amusingly,\nplpgsql_parse_wordrowtype was already behaving that way, and\nplpgsql_parse_cwordrowtype did so in more cases than not,\nso we didn't even have a consistent \"return NULL\" story.\n\nAlong the way I got rid of plpgsql_parse_cwordtype's restriction\non what relkinds can be referenced. I don't really see the\npoint of that --- as long as the relation has the desired\ncolumn, the column's type is surely well-defined.+1Pavel\n\n regards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/88b574f4-cc08-46c5-826b-020849e5a356%40gelassene-pferde.biz",
"msg_date": "Mon, 26 Feb 2024 21:10:08 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better error messages for %TYPE and %ROWTYPE in plpgsql"
},
{
"msg_contents": "\n\n> Per recent discussion[1], plpgsql returns fairly unhelpful \"syntax\n> error\" messages when a %TYPE or %ROWTYPE construct references a\n> nonexistent object. Here's a quick little finger exercise to try\n> to improve that.\n\nLooks this modify the error message, I want to know how ould we treat\nerror-message-compatible issue during minor / major upgrade. I'm not\nsure if my question is too inconceivable, I ask this because one of my\npatch [1] has blocked on this kind of issue [only] for 2 months and\nactaully the error-message-compatible requirement was metioned by me at\nthe first and resolve it by adding a odd parameter. Then the odd\nparameter blocked the whole process. \n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Tue, 27 Feb 2024 08:40:17 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better error messages for %TYPE and %ROWTYPE in plpgsql"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 5:46 PM Andy Fan <[email protected]> wrote:\n\n> > Per recent discussion[1], plpgsql returns fairly unhelpful \"syntax\n> > error\" messages when a %TYPE or %ROWTYPE construct references a\n> > nonexistent object. Here's a quick little finger exercise to try\n> > to improve that.\n>\n> Looks this modify the error message, I want to know how ould we treat\n> error-message-compatible issue during minor / major upgrade.\n>\n\nThere is no bug here so no back-patch; and we are not yet past feature\nfreeze for v17.\n\nDavid J.\n\nOn Mon, Feb 26, 2024 at 5:46 PM Andy Fan <[email protected]> wrote:> Per recent discussion[1], plpgsql returns fairly unhelpful \"syntax\n> error\" messages when a %TYPE or %ROWTYPE construct references a\n> nonexistent object. Here's a quick little finger exercise to try\n> to improve that.\n\nLooks this modify the error message, I want to know how ould we treat\nerror-message-compatible issue during minor / major upgrade.There is no bug here so no back-patch; and we are not yet past feature freeze for v17.David J.",
"msg_date": "Mon, 26 Feb 2024 17:54:05 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better error messages for %TYPE and %ROWTYPE in plpgsql"
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Mon, Feb 26, 2024 at 5:46 PM Andy Fan <[email protected]> wrote:\n>> Looks this modify the error message,\n\nWell, yeah, that's sort of the point.\n\n>> I want to know how ould we treat\n>> error-message-compatible issue during minor / major upgrade.\n\n> There is no bug here so no back-patch; and we are not yet past feature\n> freeze for v17.\n\nIndeed, I did not intend this for back-patch. However, I'm having\na hard time seeing the errors in question as something you'd have\nautomated handling for, so I don't grasp why you would think\nthere's a compatibility hazard.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Feb 2024 20:27:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Better error messages for %TYPE and %ROWTYPE in plpgsql"
},
{
"msg_contents": "\n\"David G. Johnston\" <[email protected]> writes:\n\n> On Mon, Feb 26, 2024 at 5:46 PM Andy Fan <[email protected]> wrote:\n>\n> > Per recent discussion[1], plpgsql returns fairly unhelpful \"syntax\n> > error\" messages when a %TYPE or %ROWTYPE construct references a\n> > nonexistent object. Here's a quick little finger exercise to try\n> > to improve that.\n>\n> Looks this modify the error message, I want to know how ould we treat\n> error-message-compatible issue during minor / major upgrade.\n>\n> There is no bug here so no back-patch; and we are not yet past feature freeze for v17.\n\nAcutally I didn't asked about back-patch. I meant error message is an\npart of user interface, if we change a error message, the end\nuser may be impacted, at least in theory. for example, end-user has some\ncode like this:\n\nString errMsg = ex.getErrorMsg();\n\nif (errMsg.includes(\"a-target-string\"))\n{\n // do sth.\n}\n\nSo if the error message is changed, the above code may be broken.\n\nI have little experience on this, so I want to know the policy we are\nusing, for the background which I said in previous reply.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Tue, 27 Feb 2024 09:49:00 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better error messages for %TYPE and %ROWTYPE in plpgsql"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 6:54 PM Andy Fan <[email protected]> wrote:\n\n>\n> \"David G. Johnston\" <[email protected]> writes:\n>\n> > On Mon, Feb 26, 2024 at 5:46 PM Andy Fan <[email protected]> wrote:\n> >\n> > > Per recent discussion[1], plpgsql returns fairly unhelpful \"syntax\n> > > error\" messages when a %TYPE or %ROWTYPE construct references a\n> > > nonexistent object. Here's a quick little finger exercise to try\n> > > to improve that.\n> >\n> > Looks this modify the error message, I want to know how ould we treat\n> > error-message-compatible issue during minor / major upgrade.\n> >\n> > There is no bug here so no back-patch; and we are not yet past feature\n> freeze for v17.\n>\n> Acutally I didn't asked about back-patch.\n\n\nWhat else should I be understanding when you write the words \"minor\nupgrade\"?\n\n\n> So if the error message is changed, the above code may be broken.\n>\n>\nA fair point to bring up, and is change-specific. User-facing error\nmessages should be informative and where they are not changing them is\nreasonable. Runtime errors probably need more restraint since they are\nmore likely to be in a production monitoring alerting system but anything\nthat is reporting what amounts to a syntax error should be reasonable to\nchange and not expect people to be writing production code looking for\nthem. This seems to fall firmly into the \"badly written code\"/syntax\ncategory.\n\nDavid J.\n\nOn Mon, Feb 26, 2024 at 6:54 PM Andy Fan <[email protected]> wrote:\n\"David G. Johnston\" <[email protected]> writes:\n\n> On Mon, Feb 26, 2024 at 5:46 PM Andy Fan <[email protected]> wrote:\n>\n> > Per recent discussion[1], plpgsql returns fairly unhelpful \"syntax\n> > error\" messages when a %TYPE or %ROWTYPE construct references a\n> > nonexistent object. Here's a quick little finger exercise to try\n> > to improve that.\n>\n> Looks this modify the error message, I want to know how ould we treat\n> error-message-compatible issue during minor / major upgrade.\n>\n> There is no bug here so no back-patch; and we are not yet past feature freeze for v17.\n\nAcutally I didn't asked about back-patch.What else should I be understanding when you write the words \"minor upgrade\"? \nSo if the error message is changed, the above code may be broken.A fair point to bring up, and is change-specific. User-facing error messages should be informative and where they are not changing them is reasonable. Runtime errors probably need more restraint since they are more likely to be in a production monitoring alerting system but anything that is reporting what amounts to a syntax error should be reasonable to change and not expect people to be writing production code looking for them. This seems to fall firmly into the \"badly written code\"/syntax category.David J.",
"msg_date": "Mon, 26 Feb 2024 19:01:55 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better error messages for %TYPE and %ROWTYPE in plpgsql"
}
] |
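For readers following this thread, a minimal reproduction of the situation being discussed (a %TYPE reference to a nonexistent object) could look like the sketch below; the table and column names are hypothetical, and before the patch proposed above the failure surfaces only as a generic syntax-style error rather than one naming the missing object:

    DO $$
    DECLARE
        -- missing_table does not exist, so resolving %TYPE fails here
        v missing_table.missing_column%TYPE;
    BEGIN
        NULL;
    END
    $$;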
[
{
"msg_contents": "Hello,\n\nI have a small documentation patch to the HOT updates page\n<https://www.postgresql.org/docs/current/storage-hot.html>to add references\nto summary (BRIN) indexes not blocking HOT updates\n<https://www.postgresql.org/message-id/CAFp7QwpMRGcDAQumN7onN9HjrJ3u4X3ZRXdGFT0K5G2JWvnbWg@mail.gmail.com>,\ncommitted 19d8e2308b.\n\nThanks,\nElizabeth Christensen",
"msg_date": "Mon, 26 Feb 2024 14:29:37 -0600",
"msg_from": "Elizabeth Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] updates to docs about HOT updates for BRIN"
},
{
"msg_contents": "Greetings,\n\n* Elizabeth Christensen ([email protected]) wrote:\n> I have a small documentation patch to the HOT updates page\n> <https://www.postgresql.org/docs/current/storage-hot.html>to add references\n> to summary (BRIN) indexes not blocking HOT updates\n> <https://www.postgresql.org/message-id/CAFp7QwpMRGcDAQumN7onN9HjrJ3u4X3ZRXdGFT0K5G2JWvnbWg@mail.gmail.com>,\n> committed 19d8e2308b.\n\nSounds good to me, though the attached patch didn't want to apply, and\nit isn't immediately clear to me why, but perhaps you could try saving\nthe patch from a different editor and re-sending it?\n\nThanks,\n\nStephen",
"msg_date": "Mon, 26 Feb 2024 17:21:36 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] updates to docs about HOT updates for BRIN"
},
{
"msg_contents": "> On Feb 26, 2024, at 4:21 PM, Stephen Frost <[email protected]> wrote:\n> \n> Greetings,\n> \n> * Elizabeth Christensen ([email protected]) wrote:\n>> I have a small documentation patch to the HOT updates page\n>> <https://www.postgresql.org/docs/current/storage-hot.html>to add references\n>> to summary (BRIN) indexes not blocking HOT updates\n>> <https://www.postgresql.org/message-id/CAFp7QwpMRGcDAQumN7onN9HjrJ3u4X3ZRXdGFT0K5G2JWvnbWg@mail.gmail.com>,\n>> committed 19d8e2308b.\n> \n> Sounds good to me, though the attached patch didn't want to apply, and\n> it isn't immediately clear to me why, but perhaps you could try saving\n> the patch from a different editor and re-sending it?\n> \n> Thanks,\n> \n> Stephen\n\nThanks Stephen, attempt #2 here. \n\nElizabeth",
"msg_date": "Mon, 26 Feb 2024 16:25:39 -0600",
"msg_from": "Elizabeth Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] updates to docs about HOT updates for BRIN"
},
{
"msg_contents": "Greetings,\n\n* Elizabeth Christensen ([email protected]) wrote:\n> > On Feb 26, 2024, at 4:21 PM, Stephen Frost <[email protected]> wrote:\n> > * Elizabeth Christensen ([email protected]) wrote:\n> >> I have a small documentation patch to the HOT updates page\n> >> <https://www.postgresql.org/docs/current/storage-hot.html>to add references\n> >> to summary (BRIN) indexes not blocking HOT updates\n> >> <https://www.postgresql.org/message-id/CAFp7QwpMRGcDAQumN7onN9HjrJ3u4X3ZRXdGFT0K5G2JWvnbWg@mail.gmail.com>,\n> >> committed 19d8e2308b.\n> > \n> > Sounds good to me, though the attached patch didn't want to apply, and\n> > it isn't immediately clear to me why, but perhaps you could try saving\n> > the patch from a different editor and re-sending it?\n> \n> Thanks Stephen, attempt #2 here. \n\nHere's an updated patch which tries to improve on the wording a bit by\nhaving it be a bit more consistent. Would certainly welcome feedback on\nit though, of course. This is a tricky bit of language to write and\na complex optimization to explain.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 26 Feb 2024 18:28:42 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] updates to docs about HOT updates for BRIN"
},
{
"msg_contents": "Hello,\n\nOn 2024-Feb-26, Stephen Frost wrote:\n\n> Here's an updated patch which tries to improve on the wording a bit by\n> having it be a bit more consistent. Would certainly welcome feedback on\n> it though, of course. This is a tricky bit of language to write and\n> a complex optimization to explain.\n\nShould we try to explain what is a \"summarizing\" index is? Right now\nthe only way to know is to look at the source code or attach a debugger\nand see if IndexAmRoutine->amsummarizing is true. Maybe we can just say\n\"currently the only in-core summarizing index is BRIN\" somewhere in the\npage. (The patch's proposal to say \"... such as BRIN\" strikes me as too\nvague.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n https://postgr.es/m/[email protected]\n\n\n",
"msg_date": "Tue, 27 Feb 2024 13:52:02 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] updates to docs about HOT updates for BRIN"
},
{
"msg_contents": "Greetings,\n\n* Alvaro Herrera ([email protected]) wrote:\n> On 2024-Feb-26, Stephen Frost wrote:\n> > Here's an updated patch which tries to improve on the wording a bit by\n> > having it be a bit more consistent. Would certainly welcome feedback on\n> > it though, of course. This is a tricky bit of language to write and\n> > a complex optimization to explain.\n> \n> Should we try to explain what is a \"summarizing\" index is? Right now\n> the only way to know is to look at the source code or attach a debugger\n> and see if IndexAmRoutine->amsummarizing is true. Maybe we can just say\n> \"currently the only in-core summarizing index is BRIN\" somewhere in the\n> page. (The patch's proposal to say \"... such as BRIN\" strikes me as too\n> vague.)\n\nNot sure about explaining what one is, but I'd be fine adding that\nlanguage. I was disappointed to see that there's no way to figure out\nthe value of amsummarizing for an access method other than looking at\nthe code. Not as part of this specific patch, but I'd generally support\nhaving a way to that information at the SQL level (or perhaps everything\nfrom IndexAmRoutine?).\n\nAttached is an updated patch which drops the 'such as' and adds a\nsentence mentioning that BRIN is the only in-core summarizing index.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 27 Feb 2024 09:48:54 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] updates to docs about HOT updates for BRIN"
},
{
"msg_contents": "On Tue, 2024-02-27 at 09:48 -0500, Stephen Frost wrote:\n> Attached is an updated patch which drops the 'such as' and adds a\n> sentence mentioning that BRIN is the only in-core summarizing index.\n\nThe original patch reads more clearly to me. In v4, summarizing (the\nexception) feels like it's dominating the description.\n\nAlso, is it standard practice to backport this kind of doc update? I\nordinarily wouldn't be inclined to do so, but v4 seems targeted at 16\nas well.\n\nAttached my own suggested wording that hopefully addresses Stephen and\nAlvaro's concerns. I agree that it's tricky to write so I took a more\nminimalist approach:\n\n * I got rid of the \"In summary\" sentence because (a) it's confusing\nnow that we're talking about summarizing indexes; and (b) it's not\nsummarizing anything, it's just redundant.\n\n * I removed the mention partial or expression indexes. It's a bit\nredundant and doesn't seem especially helpful in this context.\n\nIf this is agreeable I can commit it.\n\nRegards,\n\tJeff Davis",
"msg_date": "Mon, 04 Mar 2024 15:32:19 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] updates to docs about HOT updates for BRIN"
},
{
"msg_contents": "\n\n> On Mar 4, 2024, at 5:32 PM, Jeff Davis <[email protected]> wrote:\n\n> Attached my own suggested wording that hopefully addresses Stephen and\n> Alvaro's concerns. I agree that it's tricky to write so I took a more\n> minimalist approach:\n> \n> If this is agreeable I can commit it.\n> \n> Regards,\n> \tJeff Davis\n> \n> <v5-0001-docs-Update-HOT-update-docs-for-19d8e2308b.patch>\n\n\nLooks good to me. Thanks!\n\nElizabeth\n\n\n",
"msg_date": "Tue, 5 Mar 2024 09:53:06 -0600",
"msg_from": "Elizabeth Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] updates to docs about HOT updates for BRIN"
},
{
"msg_contents": "Greetings,\n\n* Jeff Davis ([email protected]) wrote:\n> On Tue, 2024-02-27 at 09:48 -0500, Stephen Frost wrote:\n> > Attached is an updated patch which drops the 'such as' and adds a\n> > sentence mentioning that BRIN is the only in-core summarizing index.\n> \n> The original patch reads more clearly to me. In v4, summarizing (the\n> exception) feels like it's dominating the description.\n> \n> Also, is it standard practice to backport this kind of doc update? I\n> ordinarily wouldn't be inclined to do so, but v4 seems targeted at 16\n> as well.\n\nI do think this change should be back-ported to when the change\nhappened, otherwise the documentation won't reflect what's in the\nproduct for that version...\n\n> Attached my own suggested wording that hopefully addresses Stephen and\n> Alvaro's concerns. I agree that it's tricky to write so I took a more\n> minimalist approach:\n\n> * I got rid of the \"In summary\" sentence because (a) it's confusing\n> now that we're talking about summarizing indexes; and (b) it's not\n> summarizing anything, it's just redundant.\n\n> * I removed the mention partial or expression indexes. It's a bit\n> redundant and doesn't seem especially helpful in this context.\n\nJust to point it out- the \"In summary\" did provide a bit of a summary,\nbefore the 'partial or expression indexes' bit was removed. That said,\nI tend to still agree with these changes as I feel that users will\ngenerally be able to infer that this applies to partial and expression\nindexes without it having to be spelled out to them.\n\n> If this is agreeable I can commit it.\n\nGreat, thanks!\n\nStephen",
"msg_date": "Tue, 5 Mar 2024 11:32:17 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] updates to docs about HOT updates for BRIN"
},
{
"msg_contents": "On Tue, 2024-03-05 at 09:53 -0600, Elizabeth Christensen wrote:\n> Looks good to me. Thanks!\n\nThank you, committed.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 05 Mar 2024 10:09:47 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] updates to docs about HOT updates for BRIN"
}
] |
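To connect the committed wording to observable behavior, here is a hedged illustration of the point the new text makes, namely that an update touching only a column covered by a summarizing (BRIN) index can still be a HOT update; the table name is made up, the update must still find free space on the page, and the statistics view may lag slightly:

    CREATE TABLE brin_hot_demo (id int PRIMARY KEY, ts timestamptz);
    CREATE INDEX brin_hot_demo_ts ON brin_hot_demo USING brin (ts);
    INSERT INTO brin_hot_demo VALUES (1, now());
    UPDATE brin_hot_demo SET ts = now() WHERE id = 1;   -- ts is indexed only by BRIN
    SELECT n_tup_upd, n_tup_hot_upd
    FROM pg_stat_user_tables
    WHERE relname = 'brin_hot_demo';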
[
{
"msg_contents": "IMO, the routine eval_const_expressions_mutator contains some stale code:\n\ncase T_SubPlan:\ncase T_AlternativeSubPlan:\n /*\n * Return a SubPlan unchanged --- too late to do anything with it.\n *\n * XXX should we ereport() here instead? Probably this routine\n * should never be invoked after SubPlan creation.\n */\n return node;\n\nAt least, this code could be achieved with estimate_expression_value(). \nSo, we should fix the comment. But maybe we need to do a bit more. \nAccording to the mutator idea, only the Query node is returned \nunchanged. If the Subplan node is on top of the expression, the call \nabove returns the same node, which may be unconventional.\nI'm not totally sure about the impossibility of constantifying SubPlan: \nwe already have InitPlans for uncorrelated subplans. Maybe something \nabout parameters that can be estimated as constants at this level and, \nas a result, allow to return a Const instead of SubPlan?\nBut at least we can return a flat copy of the SubPplan node just for the \nconvention — the same thing for the AlternativeSubPlan. See the patch in \nthe attachment.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional",
"msg_date": "Tue, 27 Feb 2024 11:08:31 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "The const expression evaluation routine should always return a copy"
},
{
"msg_contents": "Andrei Lepikhov <[email protected]> writes:\n> IMO, the routine eval_const_expressions_mutator contains some stale code:\n\nI'd like to push back against the idea that eval_const_expressions\nis expected to return a freshly-copied tree. Its API specification\ncontains no such claim. It's true that it appears to do that\neverywhere but here, but I think that's an implementation detail\nthat callers had better not depend on. It's not hard to imagine\nsomeone trying to improve its performance by avoiding unnecessary\ncopying.\n\nAlso, your proposed patch introduces a great deal of schizophrenia,\nbecause SubPlan has substructure. What's the point of making a copy\nof the SubPlan node itself, if the testexpr and args aren't copied?\nBut we shouldn't modify those, because as the comment states, it's\na bit late to be doing so.\n\nI agree that the comment claiming we can't get here is outdated,\nbut I'm unexcited about changing the code's behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Feb 2024 16:19:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The const expression evaluation routine should always return a\n copy"
},
{
"msg_contents": "On 28/2/2024 04:19, Tom Lane wrote:\n> Andrei Lepikhov <[email protected]> writes:\n>> IMO, the routine eval_const_expressions_mutator contains some stale code:\n> \n> I'd like to push back against the idea that eval_const_expressions\n> is expected to return a freshly-copied tree. Its API specification\n> contains no such claim. It's true that it appears to do that\n> everywhere but here, but I think that's an implementation detail\n> that callers had better not depend on. It's not hard to imagine\n> someone trying to improve its performance by avoiding unnecessary\n> copying.\nThanks for the explanation. I was just such a developer of extensions \nwho had looked into the internals of the eval_const_expressions, found a \ncall for a '_mutator' function, and made such a mistake :).\nI agree that freeing the return node value doesn't provide a sensible \nbenefit because the underlying bushy tree was copied during mutation. \nWhat's more, it makes even less sense with the selectivity context \ncoming shortly (I hope).\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 10:24:47 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The const expression evaluation routine should always return a\n copy"
}
] |
[
{
"msg_contents": "Hello,\n\nThe date_bin() function has a bug where it returns an incorrect binned date\nwhen both of the following are true:\n1) the origin timestamp is before the source timestamp\n2) the origin timestamp is exactly equivalent to some valid binned date in\nthe set of binned dates that date_bin() can return given a specific stride\nand source timestamp.\n\nFor example, consider the following function call:\ndate_bin('30 minutes'::interval, '2024-01-01 15:00:00'::timestamp,\n'2024-01-01 17:00:00'::timestamp);\n\nThis function call will return '2024-01-01 14:30:00' instead of '2024-01-01\n15:00:00' despite '2024-01-01 15:00:00' being the valid binned date for the\ntimestamp '2024-01-01 15:00:00'. This commit fixes that by editing the\ntimestamp_bin() function in timestamp.c file.\n\nThe reason this happens is that the code in timestamp_bin() that allows for\ncorrect date binning when source timestamp < origin timestamp subtracts one\nstride in all cases.\nHowever, that is not valid for this case when the source timestamp is\nexactly equivalent to a valid binned date as in the example mentioned above.\n\nTo account for this edge, we simply add another condition in the if\nstatement to not perform the subtraction by one stride interval if the time\ndifference is divisible by the stride.\n\nBest regards,\nMoaaz Assali",
"msg_date": "Tue, 27 Feb 2024 12:42:26 +0400",
"msg_from": "Moaaz Assali <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix for edge case in date_bin() function"
},
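For reference, the reported edge case can be exercised directly with the call from the report above; on a server without the fix, the result comes back one stride too early:

    SELECT date_bin('30 minutes'::interval,
                    '2024-01-01 15:00:00'::timestamp,
                    '2024-01-01 17:00:00'::timestamp);
    -- reported (buggy) result:  2024-01-01 14:30:00
    -- expected result:          2024-01-01 15:00:00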
{
"msg_contents": "> On 27 Feb 2024, at 09:42, Moaaz Assali <[email protected]> wrote:\n\n> To account for this edge, we simply add another condition in the if statement to not perform the subtraction by one stride interval if the time difference is divisible by the stride.\n\nI only skimmed the patch, but I recommend adding a testcase to it to keep the\nregression from reappearing. src/test/regress/sql/timestamp.sql might be a\ngood candidate testsuite.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 27 Feb 2024 09:48:12 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix for edge case in date_bin() function"
},
{
"msg_contents": "Hello Daniel,\n\nI have added a test case for this in timestamp.sql and timestamp.out, and\ntests pass when using the bug fix patch in the first email.\nI have attached a new patch in this email below with the new tests only\n(doesn't include the bug fix).\n\nP.S. I forgot to CC the mailing list in my previous reply. This is just a\ncopy of it.\n\nBest regards,\nMoaaz Assali\n\nOn Tue, Feb 27, 2024 at 12:48 PM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 27 Feb 2024, at 09:42, Moaaz Assali <[email protected]> wrote:\n>\n> > To account for this edge, we simply add another condition in the if\n> statement to not perform the subtraction by one stride interval if the time\n> difference is divisible by the stride.\n>\n> I only skimmed the patch, but I recommend adding a testcase to it to keep\n> the\n> regression from reappearing. src/test/regress/sql/timestamp.sql might be a\n> good candidate testsuite.\n>\n> --\n> Daniel Gustafsson\n>\n>",
"msg_date": "Tue, 27 Feb 2024 14:27:46 +0400",
"msg_from": "Moaaz Assali <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix for edge case in date_bin() function"
},
{
"msg_contents": "Moaaz Assali <[email protected]> writes:\n> The date_bin() function has a bug where it returns an incorrect binned date\n> when both of the following are true:\n> 1) the origin timestamp is before the source timestamp\n> 2) the origin timestamp is exactly equivalent to some valid binned date in\n> the set of binned dates that date_bin() can return given a specific stride\n> and source timestamp.\n\nHmm, yeah. The \"stride_usecs > 1\" test seems like it's a partial\nattempt to account for this that is probably redundant given the\nadditional condition. Also, can we avoid computing tm_diff %\nstride_usecs twice? Maybe the compiler is smart enough to remove the\ncommon subexpression, but it's a mighty expensive computation if not.\n\nI'm also itching a bit over whether there are integer-overflow\nhazards here. Maybe the range of timestamp is constrained enough\nthat there aren't, but I didn't look hard.\n\nAlso, whatever we do here, surely timestamptz_bin() has the\nsame problem(s).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Feb 2024 12:29:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix for edge case in date_bin() function"
},
{
"msg_contents": "I wrote:\n> Hmm, yeah. The \"stride_usecs > 1\" test seems like it's a partial\n> attempt to account for this that is probably redundant given the\n> additional condition. Also, can we avoid computing tm_diff %\n> stride_usecs twice? Maybe the compiler is smart enough to remove the\n> common subexpression, but it's a mighty expensive computation if not.\n\nI think we could do it like this:\n\n tm_diff = timestamp - origin;\n tm_modulo = tm_diff % stride_usecs;\n tm_delta = tm_diff - tm_modulo;\n /* We want to round towards -infinity when tm_diff is negative */\n if (tm_modulo < 0)\n tm_delta -= stride_usecs;\n\nExcluding tm_modulo == 0 from the correction is what fixes the\nproblem.\n\n> I'm also itching a bit over whether there are integer-overflow\n> hazards here. Maybe the range of timestamp is constrained enough\n> that there aren't, but I didn't look hard.\n\nHmm, it's not. For instance this triggers the overflow check in\ntimestamp_mi:\n\nregression=# select '294000-01-01'::timestamp - '4714-11-24 00:00:00 BC';\nERROR: interval out of range\nregression=# \\errverbose \nERROR: 22008: interval out of range\nLOCATION: timestamp_mi, timestamp.c:2832\n\nSo we ought to guard the subtraction that produces tm_diff similarly.\nI suspect it's also possible to overflow int64 while computing\nstride_usecs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Feb 2024 14:13:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix for edge case in date_bin() function"
},
{
"msg_contents": "Yeah you are right about the integer overflow.\n\nConsider this query: select date_bin('15 minutes'::interval, timestamp\n'294276-12-30 10:24:00', timestamp '4000-12-20 23:00:00 BC');\n\nIt will return 294276-12-30 10:31:49.551616 when it should be 294276-12-30\n10:15:00, which happens because source timestamp is close to INT64_MAX and\norigin timestamp is negative, causing an integer overflow.\nSo, the subsequent calculations are wrong.\n\nI fixed the integer overflow and original bug and added test cases in my 3\npatches in this reply below:\n\nv2-0001:\n- Fixed both the original bug discussed and the integer overflow issues\n(and used your suggestion to store the modulo result).\n- Any timestamp used will output the correct date_bin() result.\n- I have used INT64 -> UINT64 mapping in order to ensure no integer\noverflows are possible.\n- The only additional cost is 3 subtractions/addition to do the INT64 ->\nUINT64 mapping for timestamp, origin and final result.\n- It is probably possible to tackle the integer overflow issue without\ncasting but, from my attempts, it seemed much more convoluted and complex.\n- Implemented the fix for both timestamp_bin() and timestamptz_bin().\n\nv2-0002:\n- Added multiple test cases in timestamp.sql for integer overflow by\ntesting with timestamps around INT64_MIN and INT64_MAX.\n- Added test case for the original bug where source timestamp is a valid\nbinned date already and does not require the additional stride interval\nsubtraction.\n\nv2-0003:\n- Exactly the same as v2-0002 but for timestamptz.sql\n\n\nAlso, I would like to create a new patch on the 2024-03 commitfest, but\nsince I just created my account yesterday I get this error:\n\"The site you are trying to log in to (commitfest.postgresql.org) requires\na cool-off period between account creation and logging in. You have not\npassed the cool off period yet.\"\n\nHow long is the cool off period, so that I can create a new patch in the\ncommitfest before submissions close after tomorrow.\nAlternatively, is it possible for someone to open a new patch on my behalf\nlinking this email thread, so it can be added to the 2024-03 commitfest?\n\n\nBest regards,\nMoaaz Assali\n\n\nOn Tue, Feb 27, 2024 at 11:13 PM Tom Lane <[email protected]> wrote:\n\n> I wrote:\n> > Hmm, yeah. The \"stride_usecs > 1\" test seems like it's a partial\n> > attempt to account for this that is probably redundant given the\n> > additional condition. Also, can we avoid computing tm_diff %\n> > stride_usecs twice? Maybe the compiler is smart enough to remove the\n> > common subexpression, but it's a mighty expensive computation if not.\n>\n> I think we could do it like this:\n>\n> tm_diff = timestamp - origin;\n> tm_modulo = tm_diff % stride_usecs;\n> tm_delta = tm_diff - tm_modulo;\n> /* We want to round towards -infinity when tm_diff is negative */\n> if (tm_modulo < 0)\n> tm_delta -= stride_usecs;\n>\n> Excluding tm_modulo == 0 from the correction is what fixes the\n> problem.\n>\n> > I'm also itching a bit over whether there are integer-overflow\n> > hazards here. Maybe the range of timestamp is constrained enough\n> > that there aren't, but I didn't look hard.\n>\n> Hmm, it's not. 
For instance this triggers the overflow check in\n> timestamp_mi:\n>\n> regression=# select '294000-01-01'::timestamp - '4714-11-24 00:00:00 BC';\n> ERROR: interval out of range\n> regression=# \\errverbose\n> ERROR: 22008: interval out of range\n> LOCATION: timestamp_mi, timestamp.c:2832\n>\n> So we ought to guard the subtraction that produces tm_diff similarly.\n> I suspect it's also possible to overflow int64 while computing\n> stride_usecs.\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 28 Feb 2024 20:11:00 +0400",
"msg_from": "Moaaz Assali <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix for edge case in date_bin() function"
},
{
"msg_contents": "Moaaz Assali <[email protected]> writes:\n> - I have used INT64 -> UINT64 mapping in order to ensure no integer\n> overflows are possible.\n\nI don't think I trust this idea, and in any case it doesn't remove\nall the overflow hazards: the reduction of the stride interval to\nmicroseconds can overflow, and the final subtraction of the stride\ncan too. I changed it to just do the straightforward thing\n(throwing error if the pg_xxx_s64_overflow routines detect error),\nand pushed it. Thanks for the report and patch!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Feb 2024 14:06:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix for edge case in date_bin() function"
},
{
"msg_contents": "Hello Tom,\n\nThanks for the quick patch!\n\nYou're right. The stride_usecs calculation, tm_delta += ustride_usecs, and\nfinal result calculations can overflow and need a guard.\n\nHowever, I don't see the issue with the INT64 -> UINT64 mapping. The\ncurrent implementation results in integer overflows (errors instead after\nthe recent patch) even for valid timestamps where the result of date_bin()\nis also another valid timestamp.\n\nOn the other hand, the INT64 -> UINT64 mapping solves this issue and allows\nthe input of any valid source and origin timestamps as long as the stride\nchosen doesn't output invalid timestamps that cannot be represented by\nTimestamp(tz) type anyways. Since all INT64 values can be mapped 1-to-1 in\nUINT64, I don't see where the problem is.\n\nBest regards,\nMoaaz Assali\n\nHello Tom,Thanks for the quick patch!You're right. The stride_usecs calculation, tm_delta += ustride_usecs, and final result calculations can overflow and need a guard.However, I don't see the issue with the INT64 -> UINT64 mapping. The current implementation results in integer overflows (errors instead after the recent patch) even for valid timestamps where the result of date_bin() is also another valid timestamp. On the other hand, the INT64 -> UINT64 mapping solves this issue and allows the input of any valid source and origin timestamps as long as the stride chosen doesn't output invalid timestamps that cannot be represented by Timestamp(tz) type anyways. Since all INT64 values can be mapped 1-to-1 in UINT64, I don't see where the problem is.Best regards,Moaaz Assali",
"msg_date": "Thu, 29 Feb 2024 10:53:42 +0400",
"msg_from": "Moaaz Assali <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix for edge case in date_bin() function"
},
{
"msg_contents": "Moaaz Assali <[email protected]> writes:\n> However, I don't see the issue with the INT64 -> UINT64 mapping. The\n> current implementation results in integer overflows (errors instead after\n> the recent patch) even for valid timestamps where the result of date_bin()\n> is also another valid timestamp.\n\n> On the other hand, the INT64 -> UINT64 mapping solves this issue and allows\n> the input of any valid source and origin timestamps as long as the stride\n> chosen doesn't output invalid timestamps that cannot be represented by\n> Timestamp(tz) type anyways. Since all INT64 values can be mapped 1-to-1 in\n> UINT64, I don't see where the problem is.\n\nWhat I don't like about it is that it's complicated (and you didn't\nmake any effort whatsoever to make the code intelligible or self-\ndocumenting), and that complication has zero real-world benefit.\nThe only way to hit an overflow in this subtraction is with dates\nwell beyond 200000 AD. If you are actually dealing with such dates\n(maybe you're an astronomer or a geologist), then timestamp[tz] isn't\nthe data type for you, because you probably need orders of magnitude\nwider range than it's got.\n\nNow I'll freely admit that the pg_xxx_yyy_overflow() functions are\nnotationally klugy, but they're well documented and they're something\nthat people would need to understand anyway for a lot of other places\nin Postgres. So I think there's less cognitive load for readers of\nthe code in the let's-throw-an-error approach than in writing one-off\nmagic code that in the end can avoid only one of the three possible\noverflow cases in this function.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Feb 2024 12:04:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix for edge case in date_bin() function"
}
] |
[
{
"msg_contents": "Dear Postgres Community,\n\nI hope this email finds you well. I am reaching out to seek clarification\non an issue I am encountering with logical replication in PostgreSQL.\n\nMy specific question pertains to determining the appropriate LSN (Log\nSequence Number) from which to start logical replication. Allow me to\nprovide detailed context for better understanding:\n\nDuring the process of performing a parallel pg_basebackup, I concurrently\nexecute DML queries. As part of the pg_basebackup command, I utilize the\noption create-slot to create a replication slot. Subsequently, upon\ncompletion of the base backup, I initiate logical replication using the\nrestart_lsn obtained during the execution of the pg_basebackup command. My\nintention is to ensure consistency between two database clusters.\n\nHowever, I am encountering errors during this process. Specifically, I\nreceive the following error message on the source side:\n\n\"\"\"\n2024-02-27 16:20:09.271 IST [2838457] ERROR: duplicate key value violates\nunique constraint \"table_15_36_pkey\"\n2024-02-27 16:20:09.271 IST [2838457] DETAIL: Key (col_1, col_2)=(23,\n2024-02-27 15:14:24.332557) already exists.\n2024-02-27 16:20:09.272 IST [2834967] LOG: background worker \"logical\nreplication worker\" (PID 2838457) exited with exit code 1\nUpon analysis, it appears that the errors stem from starting the logical\nreplication with an incorrect LSN, one that has already been applied to the\ntarget side, leading to duplicate key conflicts.\n\"\"\"\n\nIn light of this issue, I seek guidance on determining the appropriate LSN\nfrom which to commence logical replication.\n\nTo further clarify my problem:\n\n1)I have a source machine and a target machine.\n2) I perform a backup from the source to the target using pg_basebackup.\n3) Prior to initiating the base backup, I create logical replication slots\non the source machine.\n4) During the execution of pg_basebackup, DML queries are executed, and I\naim to replicate this data on the target machine.\n5) My dilemma lies in determining the correct LSN to begin the logical\nreplication process.\nYour insights and guidance on this matter would be immensely appreciated.\nThank you for your time and assistance.\n\nWarm regards,\nPradeep\n\nDear Postgres Community,I hope this email finds you well. I am reaching out to seek clarification on an issue I am encountering with logical replication in PostgreSQL.My specific question pertains to determining the appropriate LSN (Log Sequence Number) from which to start logical replication. Allow me to provide detailed context for better understanding:During the process of performing a parallel pg_basebackup, I concurrently execute DML queries. As part of the pg_basebackup command, I utilize the option create-slot to create a replication slot. Subsequently, upon completion of the base backup, I initiate logical replication using the restart_lsn obtained during the execution of the pg_basebackup command. My intention is to ensure consistency between two database clusters.However, I am encountering errors during this process. Specifically, I receive the following error message on the source side:\"\"\"2024-02-27 16:20:09.271 IST [2838457] ERROR: duplicate key value violates unique constraint \"table_15_36_pkey\" 2024-02-27 16:20:09.271 IST [2838457] DETAIL: Key (col_1, col_2)=(23, 2024-02-27 15:14:24.332557) already exists. 
2024-02-27 16:20:09.272 IST [2834967] LOG: background worker \"logical replication worker\" (PID 2838457) exited with exit code 1Upon analysis, it appears that the errors stem from starting the logical replication with an incorrect LSN, one that has already been applied to the target side, leading to duplicate key conflicts.\"\"\"In light of this issue, I seek guidance on determining the appropriate LSN from which to commence logical replication.To further clarify my problem:1)I have a source machine and a target machine.2) I perform a backup from the source to the target using pg_basebackup.3) Prior to initiating the base backup, I create logical replication slots on the source machine.4) During the execution of pg_basebackup, DML queries are executed, and I aim to replicate this data on the target machine.5) My dilemma lies in determining the correct LSN to begin the logical replication process.Your insights and guidance on this matter would be immensely appreciated. Thank you for your time and assistance.Warm regards,Pradeep",
"msg_date": "Tue, 27 Feb 2024 17:20:07 +0530",
"msg_from": "Pradeep Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seeking Clarification on Logical Replication Start LSN"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 5:56 PM Pradeep Kumar <[email protected]> wrote:\n>\n> Dear Postgres Community,\n>\n> I hope this email finds you well. I am reaching out to seek clarification on an issue I am encountering with logical replication in PostgreSQL.\n>\n> My specific question pertains to determining the appropriate LSN (Log Sequence Number) from which to start logical replication. Allow me to provide detailed context for better understanding:\n>\n> During the process of performing a parallel pg_basebackup, I concurrently execute DML queries. As part of the pg_basebackup command, I utilize the option create-slot to create a replication slot. Subsequently, upon completion of the base backup, I initiate logical replication using the restart_lsn obtained during the execution of the pg_basebackup command. My intention is to ensure consistency between two database clusters.\n>\n> However, I am encountering errors during this process. Specifically, I receive the following error message on the source side:\n>\n> \"\"\"\n> 2024-02-27 16:20:09.271 IST [2838457] ERROR: duplicate key value violates unique constraint \"table_15_36_pkey\"\n> 2024-02-27 16:20:09.271 IST [2838457] DETAIL: Key (col_1, col_2)=(23, 2024-02-27 15:14:24.332557) already exists.\n> 2024-02-27 16:20:09.272 IST [2834967] LOG: background worker \"logical replication worker\" (PID 2838457) exited with exit code 1\n> Upon analysis, it appears that the errors stem from starting the logical replication with an incorrect LSN, one that has already been applied to the target side, leading to duplicate key conflicts.\n> \"\"\"\n>\n> In light of this issue, I seek guidance on determining the appropriate LSN from which to commence logical replication.\n>\n> To further clarify my problem:\n>\n> 1)I have a source machine and a target machine.\n> 2) I perform a backup from the source to the target using pg_basebackup.\n> 3) Prior to initiating the base backup, I create logical replication slots on the source machine.\n> 4) During the execution of pg_basebackup, DML queries are executed, and I aim to replicate this data on the target machine.\n> 5) My dilemma lies in determining the correct LSN to begin the logical replication process.\n>\n\nI think the reason of the problem you are seeing is pg_basebackup also\nincludes the WAL generated during backup if you specify -X method. See\n[1]. Now, as you have created a logical slot before starting backup,\ndata duplication is possible. I don't see a very straightforward way\nbut you might be able to achieve your desired purpose if somehow\nidentify the last WAL location copied in backup and use that as your\nstarting point for logical replication.\n\n[1] - https://www.postgresql.org/docs/devel/app-pgbasebackup.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 2 Mar 2024 17:05:52 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seeking Clarification on Logical Replication Start LSN"
}
] |
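One way to act on the suggestion above (start logical decoding only after the WAL position already contained in the base backup) is sketched below. This is only a hedged outline: the slot name and the LSN are placeholders, and the backup-end LSN would have to be taken from the backup itself, for example the write-ahead log end point reported by pg_basebackup, rather than from restart_lsn:

    -- on the source, before running pg_basebackup (as in the report)
    SELECT pg_create_logical_replication_slot('my_slot', 'pgoutput');

    -- once the backup has finished, skip the WAL it already contains
    SELECT pg_replication_slot_advance('my_slot', '0/3000000'::pg_lsn);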
[
{
"msg_contents": "Hi Hackers,\n\nThe current descriptions for server_ca.config and client_ca.config are \nnot so accurate. For example, one of the descriptions in \nserver_ca.config states, \"This certificate is used to sign server \ncertificates. It is self-signed.\" However, the server_ca.crt and \nclient_ca.crt are actually signed by the root_ca.crt, which is the only \nself-signed certificate. Therefore, it would be more accurate to change \nit to \"This certificate is used to sign server certificates. It is an \nIntermediate CA.\"\n\nAttached is a patch attempting to fix the description issue.\n\nBest regards,\n\nDavid",
"msg_date": "Tue, 27 Feb 2024 11:38:37 -0800",
"msg_from": "David Zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong description in server_ca.config and client_ca.config"
},
{
"msg_contents": "> On 27 Feb 2024, at 20:38, David Zhang <[email protected]> wrote:\n> \n> Hi Hackers,\n> \n> The current descriptions for server_ca.config and client_ca.config are not so accurate. For example, one of the descriptions in server_ca.config states, \"This certificate is used to sign server certificates. It is self-signed.\" However, the server_ca.crt and client_ca.crt are actually signed by the root_ca.crt, which is the only self-signed certificate.\n\nIIRC the intent was to say it isn't signed by an official CA, but I agree it's\nmisleading.\n\n> Therefore, it would be more accurate to change it to \"This certificate is used to sign server certificates. It is an Intermediate CA.\"\n\nAgreed. We should perhaps add the \"This certificate is self-signed\" sentence\nto root_ca.conf as well while at it, it's currently only mentioned in\nsslfiles.mk and adding it to the config would make the documentation more\nconsistent.\n\n> Attached is a patch attempting to fix the description issue.\n\nThanks, I'll have another look and will apply.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 14:29:25 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong description in server_ca.config and client_ca.config"
}
] |
[
{
"msg_contents": "Any objections to removing the ./configure --with-CC option? It's been \ndeprecated since commit cb292206c5 from July 2000:\n\n> # For historical reasons you can also use --with-CC to specify the C compiler\n> # to use, although the standard way to do this is to set the CC environment\n> # variable.\n> PGAC_ARG_REQ(with, CC, [CMD], [set compiler (deprecated)], [CC=$with_CC])\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 28 Feb 2024 01:03:27 +0400",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove --with-CC autoconf option"
},
{
"msg_contents": "> On 27 Feb 2024, at 22:03, Heikki Linnakangas <[email protected]> wrote:\n\n> Any objections to removing the ./configure --with-CC option? It's been deprecated since commit cb292206c5 from July 2000:\n\nNone, and removing it will chip away further at getting autoconf and meson\nfully in sync so +1.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 27 Feb 2024 22:09:24 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove --with-CC autoconf option"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nI spent some time debugging an issue with standby not being able to\ncontinue streaming after failover.\n\nThe problem manifests itself by following messages in the log:\nLOG: received SIGHUP, reloading configuration files\nLOG: parameter \"primary_conninfo\" changed to \"port=58669\nhost=/tmp/dn20WVmNqF\"\nLOG: restored log file \"000000010000000000000003\" from archive\nLOG: invalid magic number 0000 in WAL segment 000000010000000000000003,\nLSN 0/301A000, offset 106496\nLOG: fetching timeline history file for timeline 2 from primary server\nLOG: started streaming WAL from primary at 0/3000000 on timeline 1\nLOG: replication terminated by primary server\nDETAIL: End of WAL reached on timeline 1 at 0/3019158.\nFATAL: terminating walreceiver process due to administrator command\nLOG: restored log file \"00000002.history\" from archive\nLOG: new target timeline is 2\nLOG: restored log file \"000000020000000000000003\" from archive\nLOG: invalid magic number 0000 in WAL segment 000000020000000000000003,\nLSN 0/301A000, offset 106496\nLOG: started streaming WAL from primary at 0/3000000 on timeline 2\nLOG: invalid magic number 0000 in WAL segment 000000020000000000000003,\nLSN 0/301A000, offset 106496\nFATAL: terminating walreceiver process due to administrator command\nLOG: waiting for WAL to become available at 0/301A04E\nLOG: restored log file \"000000020000000000000003\" from archive\nLOG: invalid magic number 0000 in WAL segment 000000020000000000000003,\nLSN 0/301A000, offset 106496\nLOG: invalid magic number 0000 in WAL segment 000000020000000000000003,\nLSN 0/301A000, offset 106496\nLOG: waiting for WAL to become available at 0/301A04E\nLOG: restored log file \"000000020000000000000003\" from archive\nLOG: invalid magic number 0000 in WAL segment 000000020000000000000003,\nLSN 0/301A000, offset 106496\nLOG: invalid magic number 0000 in WAL segment 000000020000000000000003,\nLSN 0/301A000, offset 106496\n\nThe problem happens when standbys received only the first part of the WAL\nrecord that spans multiple pages.\nIn this case the promoted standby discards the first part of the WAL record\nand writes END_OF_RECOVERY instead. If in addition to that someone will\ncall pg_switch_wal(), then there are chances that SWITCH record will also\nfit to the page where the discarded part was settling, As a result the\nother standby (that wasn't promoted) will infinitely try making attempts to\ndecode WAL record span on multiple pages by reading the next page, which is\nfilled with zero bytes. And, this next page will never be written, because\nthe new primary will be writing to the new WAL file after pg_switch_wal().\n\nRestart of the stuck standby fixes the problem, because it will be first\nreading the history file and therefore will never read the incomplete WAL\nfile from the old timeline. That is, all major versions starting from v13\nare impacted (including the master branch), because they allow changing of\nprimary_conninfo GUC with reload.\n\nPlease find attached the TAP test that reproduces the problem.\n\nTo be honest, I don't know yet how to fix it nicely. I am thinking about\nreturning XLREAD_FAIL from XLogPageRead() if it suddenly switched to a new\ntimeline while trying to read a page and if this page is invalid.\n\n-- \nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Wed, 28 Feb 2024 11:19:41 +0100",
"msg_from": "Alexander Kukushkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 11:19:41AM +0100, Alexander Kukushkin wrote:\n> I spent some time debugging an issue with standby not being able to\n> continue streaming after failover.\n>\n> The problem happens when standbys received only the first part of the WAL\n> record that spans multiple pages.\n> In this case the promoted standby discards the first part of the WAL record\n> and writes END_OF_RECOVERY instead. If in addition to that someone will\n> call pg_switch_wal(), then there are chances that SWITCH record will also\n> fit to the page where the discarded part was settling, As a result the\n> other standby (that wasn't promoted) will infinitely try making attempts to\n> decode WAL record span on multiple pages by reading the next page, which is\n> filled with zero bytes. And, this next page will never be written, because\n> the new primary will be writing to the new WAL file after pg_switch_wal().\n\nWow. Have you seen that in an actual production environment?\n\nI was just trying your TAP test to see it looping on a single record\nas you mentioned:\n2024-02-29 12:57:44.884 JST [2555] LOG: invalid magic number 0000 in\nWAL segment 000000020000000000000003, LSN 0/301A000, offset 106496\n2024-02-29 12:57:44.884 JST [2555] LOG: invalid magic number 0000 in\nWAL segment 000000020000000000000003, LSN 0/301A000, offset 106496 \n\n> Restart of the stuck standby fixes the problem, because it will be first\n> reading the history file and therefore will never read the incomplete WAL\n> file from the old timeline. That is, all major versions starting from v13\n> are impacted (including the master branch), because they allow changing of\n> primary_conninfo GUC with reload.\n\nStill that's not nice at a large scale, because you would not know\nabout the problem until your monitoring tools raise alarms because\nsome nodes in your cluster setup decide to lag behind.\n\n> Please find attached the TAP test that reproduces the problem.\n\nmy $start_page = start_of_page($end_lsn);\nmy $wal_file = write_wal($primary, $TLI, $start_page,\n \"\\x00\" x $WAL_BLOCK_SIZE);\n# copy the file we just \"hacked\" to the archive\ncopy($wal_file, $primary->archive_dir);\n\nSo you are emulating a failure by filling with zeros the second page\nwhere the last emit_message() generated a record, and the page before\nthat includes the continuation record. Then abuse of WAL archiving to\nforce the replay of the last record. That's kind of cool.\n\n> To be honest, I don't know yet how to fix it nicely. I am thinking about\n> returning XLREAD_FAIL from XLogPageRead() if it suddenly switched to a new\n> timeline while trying to read a page and if this page is invalid.\n\nHmm. I suspect that you may be right on a TLI change when reading a\npage. There are a bunch of side cases with continuation records and\nheader validation around XLogReaderValidatePageHeader(). Perhaps you\nhave an idea of patch to show your point?\n\nNit. In your test, it seems to me that you should not call directly\nset_standby_mode and enable_restoring, just rely on has_restoring with\nthe standby option included.\n--\nMichael",
"msg_date": "Thu, 29 Feb 2024 14:05:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "At Thu, 29 Feb 2024 14:05:15 +0900, Michael Paquier <[email protected]> wrote in \n> On Wed, Feb 28, 2024 at 11:19:41AM +0100, Alexander Kukushkin wrote:\n> > I spent some time debugging an issue with standby not being able to\n> > continue streaming after failover.\n> >\n> > The problem happens when standbys received only the first part of the WAL\n> > record that spans multiple pages.\n> > In this case the promoted standby discards the first part of the WAL record\n> > and writes END_OF_RECOVERY instead. If in addition to that someone will\n> > call pg_switch_wal(), then there are chances that SWITCH record will also\n> > fit to the page where the discarded part was settling, As a result the\n> > other standby (that wasn't promoted) will infinitely try making attempts to\n> > decode WAL record span on multiple pages by reading the next page, which is\n> > filled with zero bytes. And, this next page will never be written, because\n> > the new primary will be writing to the new WAL file after pg_switch_wal().\n\nIn the first place, it's important to note that we do not guarantee\nthat an async standby can always switch its replication connection to\nthe old primary or another sibling standby. This is due to the\nvariations in replication lag among standbys. pg_rewind is required to\nadjust such discrepancies.\n\nI might be overlooking something, but I don't understand how this\noccurs without purposefully tweaking WAL files. The repro script\npushes an incomplete WAL file to the archive as a non-partial\nsegment. This shouldn't happen in the real world.\n\nIn the repro script, the replication connection of the second standby\nis switched from the old primary to the first standby after its\npromotion. After the switching, replication is expected to continue\nfrom the beginning of the last replayed segment. But with the script,\nthe second standby copies the intentionally broken file, which differs\nfrom the data that should be received via streaming. A similar problem\nto the issue here was seen at segment boundaries, before we introduced\nthe XLP_FIRST_IS_OVERWRITE_CONTRECORD flag, which prevents overwriting\na WAL file that is already archived. However, in this case, the second\nstandby won't see the broken record because it cannot be in a\nnon-partial segment in the archive, and the new primary streams\nEND_OF_RECOVERY instead of the broken record.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 29 Feb 2024 16:18:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "Hi Michael,\n\nOn Thu, 29 Feb 2024 at 06:05, Michael Paquier <[email protected]> wrote:\n\n>\n> Wow. Have you seen that in an actual production environment?\n>\n\nYes, we see it regularly, and it is reproducible in test environments as\nwell.\n\n\n> my $start_page = start_of_page($end_lsn);\n> my $wal_file = write_wal($primary, $TLI, $start_page,\n> \"\\x00\" x $WAL_BLOCK_SIZE);\n> # copy the file we just \"hacked\" to the archive\n> copy($wal_file, $primary->archive_dir);\n>\n> So you are emulating a failure by filling with zeros the second page\n> where the last emit_message() generated a record, and the page before\n> that includes the continuation record. Then abuse of WAL archiving to\n> force the replay of the last record. That's kind of cool.\n>\n\nRight, at this point it is easier than to cause an artificial crash on the\nprimary after it finished writing just one page.\n\n\n> > To be honest, I don't know yet how to fix it nicely. I am thinking about\n> > returning XLREAD_FAIL from XLogPageRead() if it suddenly switched to a\n> new\n> > timeline while trying to read a page and if this page is invalid.\n>\n> Hmm. I suspect that you may be right on a TLI change when reading a\n> page. There are a bunch of side cases with continuation records and\n> header validation around XLogReaderValidatePageHeader(). Perhaps you\n> have an idea of patch to show your point?\n>\n\nNot yet, but hopefully I will get something done next week.\n\n\n>\n> Nit. In your test, it seems to me that you should not call directly\n> set_standby_mode and enable_restoring, just rely on has_restoring with\n> the standby option included.\n>\n\nThanks, I'll look into it.\n\n-- \nRegards,\n--\nAlexander Kukushkin\n\nHi Michael,On Thu, 29 Feb 2024 at 06:05, Michael Paquier <[email protected]> wrote:\n\nWow. Have you seen that in an actual production environment?Yes, we see it regularly, and it is reproducible in test environments as well. \nmy $start_page = start_of_page($end_lsn);\nmy $wal_file = write_wal($primary, $TLI, $start_page,\n \"\\x00\" x $WAL_BLOCK_SIZE);\n# copy the file we just \"hacked\" to the archive\ncopy($wal_file, $primary->archive_dir);\n\nSo you are emulating a failure by filling with zeros the second page\nwhere the last emit_message() generated a record, and the page before\nthat includes the continuation record. Then abuse of WAL archiving to\nforce the replay of the last record. That's kind of cool.Right, at this point it is easier than to cause an artificial crash on the primary after it finished writing just one page. \n> To be honest, I don't know yet how to fix it nicely. I am thinking about\n> returning XLREAD_FAIL from XLogPageRead() if it suddenly switched to a new\n> timeline while trying to read a page and if this page is invalid.\n\nHmm. I suspect that you may be right on a TLI change when reading a\npage. There are a bunch of side cases with continuation records and\nheader validation around XLogReaderValidatePageHeader(). Perhaps you\nhave an idea of patch to show your point?Not yet, but hopefully I will get something done next week. \n\nNit. In your test, it seems to me that you should not call directly\nset_standby_mode and enable_restoring, just rely on has_restoring with\nthe standby option included.Thanks, I'll look into it. -- Regards,--Alexander Kukushkin",
"msg_date": "Thu, 29 Feb 2024 17:36:29 +0100",
"msg_from": "Alexander Kukushkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
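Since much of this discussion hinges on the claim that both standbys received and flushed exactly the same amount of WAL, it may be worth noting the usual way to check that; a small sketch, run on each standby, would be:

    SELECT pg_last_wal_receive_lsn() AS received,
           pg_last_wal_replay_lsn()  AS replayed;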
{
"msg_contents": "Hi Kyotaro,\n\nOn Thu, 29 Feb 2024 at 08:18, Kyotaro Horiguchi <[email protected]>\nwrote:\n\nIn the first place, it's important to note that we do not guarantee\n> that an async standby can always switch its replication connection to\n> the old primary or another sibling standby. This is due to the\n> variations in replication lag among standbys. pg_rewind is required to\n> adjust such discrepancies.\n>\n\nSure, I know. But in this case the async standby received and flushed\nabsolutely the same amount of WAL as the promoted one.\n\n\n>\n> I might be overlooking something, but I don't understand how this\n> occurs without purposefully tweaking WAL files. The repro script\n> pushes an incomplete WAL file to the archive as a non-partial\n> segment. This shouldn't happen in the real world.\n>\n\nIt easily happens if the primary crashed and standbys didn't receive\nanother page with continuation record.\n\nIn the repro script, the replication connection of the second standby\n> is switched from the old primary to the first standby after its\n> promotion. After the switching, replication is expected to continue\n> from the beginning of the last replayed segment.\n\n\nWell, maybe, but apparently the standby is busy trying to decode a record\nthat spans multiple pages, and it is just infinitely waiting for the next\npage to arrive. Also, the restart \"fixes\" the problem, because indeed it is\nreading the file from the beginning.\n\n\n> But with the script,\n> the second standby copies the intentionally broken file, which differs\n> from the data that should be received via streaming.\n\n\nAs I already said, this is a simple way to emulate the primary crash while\nstandbys receiving WAL.\nIt could easily happen that the record spans on multiple pages is not fully\nreceived and flushed.\n\n-- \nRegards,\n--\nAlexander Kukushkin\n\nHi Kyotaro,On Thu, 29 Feb 2024 at 08:18, Kyotaro Horiguchi <[email protected]> wrote:\nIn the first place, it's important to note that we do not guarantee\nthat an async standby can always switch its replication connection to\nthe old primary or another sibling standby. This is due to the\nvariations in replication lag among standbys. pg_rewind is required to\nadjust such discrepancies.Sure, I know. But in this case the async standby received and flushed absolutely the same amount of WAL as the promoted one. \n\nI might be overlooking something, but I don't understand how this\noccurs without purposefully tweaking WAL files. The repro script\npushes an incomplete WAL file to the archive as a non-partial\nsegment. This shouldn't happen in the real world.It easily happens if the primary crashed and standbys didn't receive another page with continuation record.\nIn the repro script, the replication connection of the second standby\nis switched from the old primary to the first standby after its\npromotion. After the switching, replication is expected to continue\nfrom the beginning of the last replayed segment.Well, maybe, but apparently the standby is busy trying to decode a record that spans multiple pages, and it is just infinitely waiting for the next page to arrive. Also, the restart \"fixes\" the problem, because indeed it is reading the file from the beginning. 
But with the script,\nthe second standby copies the intentionally broken file, which differs\nfrom the data that should be received via streaming.As I already said, this is a simple way to emulate the primary crash while standbys receiving WAL.It could easily happen that the record spans on multiple pages is not fully received and flushed.-- Regards,--Alexander Kukushkin",
"msg_date": "Thu, 29 Feb 2024 17:44:25 +0100",
"msg_from": "Alexander Kukushkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 05:44:25PM +0100, Alexander Kukushkin wrote:\n> On Thu, 29 Feb 2024 at 08:18, Kyotaro Horiguchi <[email protected]>\n> wrote:\n>> In the first place, it's important to note that we do not guarantee\n>> that an async standby can always switch its replication connection to\n>> the old primary or another sibling standby. This is due to the\n>> variations in replication lag among standbys. pg_rewind is required to\n>> adjust such discrepancies.\n> \n> Sure, I know. But in this case the async standby received and flushed\n> absolutely the same amount of WAL as the promoted one.\n\nUgh. If it can happen transparently to the user without the user\nknowing directly about it, that does not sound good to me. I did not\nlook very closely at monitoring tools available out there, but if both\nstandbys flushed the same WAL locations a rewind should not be\nrequired. It is not something that monitoring tools would be able to\ndetect because they just look at LSNs.\n\n>> In the repro script, the replication connection of the second standby\n>> is switched from the old primary to the first standby after its\n>> promotion. After the switching, replication is expected to continue\n>> from the beginning of the last replayed segment.\n> \n> Well, maybe, but apparently the standby is busy trying to decode a record\n> that spans multiple pages, and it is just infinitely waiting for the next\n> page to arrive. Also, the restart \"fixes\" the problem, because indeed it is\n> reading the file from the beginning.\n\nWhat happens if the continuation record spawns across multiple segment\nfiles boundaries in this case? We would go back to the beginning of\nthe segment where the record spawning across multiple segments began,\nright? (I may recall this part of contrecords incorrectly, feel free\nto correct me if necessary.)\n\n>> But with the script,\n>> the second standby copies the intentionally broken file, which differs\n>> from the data that should be received via streaming.\n> \n> As I already said, this is a simple way to emulate the primary crash while\n> standbys receiving WAL.\n> It could easily happen that the record spans on multiple pages is not fully\n> received and flushed.\n\nI think that's OK to do so at test level to force a test in the\nbackend, FWIW, because that's cheaper, and 039_end_of_wal.pl has\nproved that this can be designed to be cheap and stable across the\nbuildfarm fleet.\n\nFor anything like that, the result had better have solid test\ncoverage, where perhaps we'd better refactor some of the routines of\n039_end_of_wal.pl into a module to use them here, rather than\nduplicate the code. The other test has a few assumptions with the\ncalculation of page boundaries, and I'd rather not duplicate that\nacross the tree.\n\nSeeing that Alexander is a maintainer of Patroni, which is very\nprobably used by his employer across a large set of PostgreSQL\ninstances, if he says that he's seen that in the field, that's good\nenough for me to respond to the problem, especially if reconnecting a\nstandby to a promoted node where both flushed the same LSN. Now the\nlevel of response also depends on the invasiness of the change, and we\nneed a very careful evaluation here.\n--\nMichael",
"msg_date": "Fri, 1 Mar 2024 08:17:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "At Fri, 1 Mar 2024 08:17:04 +0900, Michael Paquier <[email protected]> wrote in \n> On Thu, Feb 29, 2024 at 05:44:25PM +0100, Alexander Kukushkin wrote:\n> > On Thu, 29 Feb 2024 at 08:18, Kyotaro Horiguchi <[email protected]>\n> > wrote:\n> >> In the first place, it's important to note that we do not guarantee\n> >> that an async standby can always switch its replication connection to\n> >> the old primary or another sibling standby. This is due to the\n> >> variations in replication lag among standbys. pg_rewind is required to\n> >> adjust such discrepancies.\n> > \n> > Sure, I know. But in this case the async standby received and flushed\n> > absolutely the same amount of WAL as the promoted one.\n> \n> Ugh. If it can happen transparently to the user without the user\n> knowing directly about it, that does not sound good to me. I did not\n> look very closely at monitoring tools available out there, but if both\n> standbys flushed the same WAL locations a rewind should not be\n> required. It is not something that monitoring tools would be able to\n> detect because they just look at LSNs.\n> \n> >> In the repro script, the replication connection of the second standby\n> >> is switched from the old primary to the first standby after its\n> >> promotion. After the switching, replication is expected to continue\n> >> from the beginning of the last replayed segment.\n> > \n> > Well, maybe, but apparently the standby is busy trying to decode a record\n> > that spans multiple pages, and it is just infinitely waiting for the next\n> > page to arrive. Also, the restart \"fixes\" the problem, because indeed it is\n> > reading the file from the beginning.\n> \n> What happens if the continuation record spawns across multiple segment\n> files boundaries in this case? We would go back to the beginning of\n> the segment where the record spawning across multiple segments began,\n> right? (I may recall this part of contrecords incorrectly, feel free\n> to correct me if necessary.)\n\nAfter reading this, I came up with a possibility that walreceiver\nrecovers more quickly than the calling interval to\nWaitForWALtoBecomeAvailable(). If walreceiver disconnects after a call\nto the function WaitForWAL...(), and then somehow recovers the\nconnection before the next call, the function doesn't notice the\ndisconnection and returns XLREAD_SUCCESS wrongly. 
If this assumption\nis correct, the correct fix might be for us to return XLREAD_FAIL when\nreconnection happens after the last call to the WaitForWAL...()\nfunction.\n\n> >> But with the script,\n> >> the second standby copies the intentionally broken file, which differs\n> >> from the data that should be received via streaming.\n> > \n> > As I already said, this is a simple way to emulate the primary crash while\n> > standbys receiving WAL.\n> > It could easily happen that the record spans on multiple pages is not fully\n> > received and flushed.\n> \n> I think that's OK to do so at test level to force a test in the\n> backend, FWIW, because that's cheaper, and 039_end_of_wal.pl has\n> proved that this can be designed to be cheap and stable across the\n> buildfarm fleet.\n\nYeah, I agree that it clearly illustrates the final state after the\nissue happened, but if my assumption above is correct, the test\ndoesn't manifest the real issue.\n\n> For anything like that, the result had better have solid test\n> coverage, where perhaps we'd better refactor some of the routines of\n> 039_end_of_wal.pl into a module to use them here, rather than\n> duplicate the code. The other test has a few assumptions with the\n> calculation of page boundaries, and I'd rather not duplicate that\n> across the tree.\n\nI agree to the point.\n\n> Seeing that Alexander is a maintainer of Patroni, which is very\n> probably used by his employer across a large set of PostgreSQL\n> instances, if he says that he's seen that in the field, that's good\n> enough for me to respond to the problem, especially if reconnecting a\n> standby to a promoted node where both flushed the same LSN. Now the\n> level of response also depends on the invasiness of the change, and we\n> need a very careful evaluation here.\n\nI don't mean to say that we should respond with DNF to this \"issue\" at\nall. I simply wanted to make the real issue clear.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 01 Mar 2024 10:29:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "At Fri, 01 Mar 2024 10:29:12 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> After reading this, I came up with a possibility that walreceiver\n> recovers more quickly than the calling interval to\n> WaitForWALtoBecomeAvailable(). If walreceiver disconnects after a call\n> to the function WaitForWAL...(), and then somehow recovers the\n> connection before the next call, the function doesn't notice the\n> disconnection and returns XLREAD_SUCCESS wrongly. If this assumption\n> is correct, the correct fix might be for us to return XLREAD_FAIL when\n> reconnection happens after the last call to the WaitForWAL...()\n> function.\n\nThat's my stupid. The function performs reconnection by itself.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 01 Mar 2024 12:04:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "At Fri, 01 Mar 2024 12:04:31 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> At Fri, 01 Mar 2024 10:29:12 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> > After reading this, I came up with a possibility that walreceiver\n> > recovers more quickly than the calling interval to\n> > WaitForWALtoBecomeAvailable(). If walreceiver disconnects after a call\n> > to the function WaitForWAL...(), and then somehow recovers the\n> > connection before the next call, the function doesn't notice the\n> > disconnection and returns XLREAD_SUCCESS wrongly. If this assumption\n> > is correct, the correct fix might be for us to return XLREAD_FAIL when\n> > reconnection happens after the last call to the WaitForWAL...()\n> > function.\n> \n> That's my stupid. The function performs reconnection by itself.\n\nAnyway, our current policy here is to avoid record-rereads beyond\nsource switches. However, fixing this seems to require that source\nswitches cause record rereads unless some additional information is\navailable to know if replay LSN needs to back up.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 01 Mar 2024 12:37:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "At Fri, 01 Mar 2024 12:37:55 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> Anyway, our current policy here is to avoid record-rereads beyond\n> source switches. However, fixing this seems to require that source\n> switches cause record rereads unless some additional information is\n> available to know if replay LSN needs to back up.\n\nIt seems to me that the error messages are related to commit 0668719801.\n\nXLogPageRead:\n> * Check the page header immediately, so that we can retry immediately if\n> * it's not valid. This may seem unnecessary, because ReadPageInternal()\n> * validates the page header anyway, and would propagate the failure up to\n> * ReadRecord(), which would retry. However, there's a corner case with\n> * continuation records, if a record is split across two pages such that\n> * we would need to read the two pages from different sources. For\n...\n>\tif (StandbyMode &&\n>\t\t!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n>\t{\n>\t\t/*\n>\t\t * Emit this error right now then retry this page immediately. Use\n>\t\t * errmsg_internal() because the message was already translated.\n>\t\t */\n>\t\tif (xlogreader->errormsg_buf[0])\n>\t\t\tereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),\n>\t\t\t\t\t(errmsg_internal(\"%s\", xlogreader->errormsg_buf)));\n\nThis code intends to prevent a page header error from causing a record\nreread, when a record is required to be read from multiple sources. We\ncould restrict this to only fire at segment boundaries. At segment\nboundaries, we won't let LSN back up by using XLP_FIRST_IS_CONTRECORD.\n\nHaving thought up to this point, I now believe that we should\ncompletely prevent LSN from going back in any case. One drawback is\nthat the fix cannot be back-patched.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 01 Mar 2024 13:16:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "Hello Michael, Kyotaro,\n\nPlease find attached the patch fixing the problem and the updated TAP test\nthat addresses Nit.\n\n-- \nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Tue, 5 Mar 2024 09:36:44 +0100",
"msg_from": "Alexander Kukushkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "At Tue, 5 Mar 2024 09:36:44 +0100, Alexander Kukushkin <[email protected]> wrote in \n> Please find attached the patch fixing the problem and the updated TAP test\n> that addresses Nit.\n\nRecord-level retries happen when the upper layer detects errors. In my\nprevious mail, I cited code that is intended to prevent this at\nsegment boundaries. However, the resulting code applies to all page\nboundaries, since we judged that the difference doen't significanty\naffects the outcome.\n\n> * Check the page header immediately, so that we can retry immediately if\n> * it's not valid. This may seem unnecessary, because ReadPageInternal()\n> * validates the page header anyway, and would propagate the failure up to\n\nSo, the following (tentative) change should also work.\n\nxlogrecovery.c:\n@@ -3460,8 +3490,10 @@ retry:\n \t * responsible for the validation.\n \t */\n \tif (StandbyMode &&\n+\t\ttargetPagePtr % 0x100000 == 0 &&\n \t\t!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n \t{\n\nThus, I managed to reproduce precisely the same situation as you\ndescribed utilizing your script with modifications and some core\ntweaks, and with the change above, I saw that the behavior was\nfixed. However, for reasons unclear to me, it shows another issue, and\nI am running out of time and need more caffeine. I'll continue\ninvestigating this tomorrow.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 06 Mar 2024 17:57:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "Hi Kyotaro,\n\nOh, now I understand what you mean. Is the retry supposed to happen only\nwhen we are reading the very first page from the WAL file?\n\nOn Wed, 6 Mar 2024 at 09:57, Kyotaro Horiguchi <[email protected]>\nwrote:\n\n>\n> xlogrecovery.c:\n> @@ -3460,8 +3490,10 @@ retry:\n> * responsible for the validation.\n> */\n> if (StandbyMode &&\n> + targetPagePtr % 0x100000 == 0 &&\n> !XLogReaderValidatePageHeader(xlogreader, targetPagePtr,\n> readBuf))\n> {\n>\n>\nHmm, I think you meant to use wal_segment_size, because 0x100000 is just\n1MB. As a result, currently it works for you by accident.\n\n\n> Thus, I managed to reproduce precisely the same situation as you\n> described utilizing your script with modifications and some core\n> tweaks, and with the change above, I saw that the behavior was\n> fixed. However, for reasons unclear to me, it shows another issue, and\n> I am running out of time and need more caffeine. I'll continue\n> investigating this tomorrow.\n>\n\nThank you for spending your time on it!\n\n-- \nRegards,\n--\nAlexander Kukushkin\n\nHi Kyotaro,Oh, now I understand what you mean. Is the retry supposed to happen only when we are reading the very first page from the WAL file?On Wed, 6 Mar 2024 at 09:57, Kyotaro Horiguchi <[email protected]> wrote:\nxlogrecovery.c:\n@@ -3460,8 +3490,10 @@ retry:\n * responsible for the validation.\n */\n if (StandbyMode &&\n+ targetPagePtr % 0x100000 == 0 &&\n !XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n {\nHmm, I think you meant to use wal_segment_size, because 0x100000 is just 1MB. As a result, currently it works for you by accident. \nThus, I managed to reproduce precisely the same situation as you\ndescribed utilizing your script with modifications and some core\ntweaks, and with the change above, I saw that the behavior was\nfixed. However, for reasons unclear to me, it shows another issue, and\nI am running out of time and need more caffeine. I'll continue\ninvestigating this tomorrow.\nThank you for spending your time on it!-- Regards,--Alexander Kukushkin",
"msg_date": "Wed, 6 Mar 2024 11:34:29 +0100",
"msg_from": "Alexander Kukushkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
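To make the wal_segment_size point above concrete, here is a small, compilable sketch of the boundary test under discussion, assuming the intent is to special-case only pages that begin a WAL segment. It mirrors the XLogSegmentOffset() arithmetic from xlog_internal.h; the typedef and the helper name are illustrative stand-ins, not the committed fix.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t XLogRecPtr;	/* stand-in for the PostgreSQL typedef */

/*
 * True when the target page is the first page of a WAL segment, using the
 * configured segment size rather than a hard-coded 0x100000 (1MB).
 * wal_segment_size must be a power of two, as PostgreSQL requires.
 */
static bool
page_starts_segment(XLogRecPtr targetPagePtr, uint64_t wal_segment_size)
{
	return (targetPagePtr & (wal_segment_size - 1)) == 0;
}

int
main(void)
{
	uint64_t	seg = 16 * 1024 * 1024;	/* default 16MB WAL segments */

	/* prints "1 0": segment start vs. a 1MB-aligned page mid-segment */
	printf("%d %d\n",
		   (int) page_starts_segment(3 * seg, seg),
		   (int) page_starts_segment(3 * seg + 0x100000, seg));
	return 0;
}

With 1MB (0x100000) in place of the real segment size, the second call would also return true, which is the accident being pointed out above.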
{
"msg_contents": "At Wed, 6 Mar 2024 11:34:29 +0100, Alexander Kukushkin <[email protected]> wrote in \n> Hmm, I think you meant to use wal_segment_size, because 0x100000 is just\n> 1MB. As a result, currently it works for you by accident.\n\nOh, I once saw the fix work, but seems not to be working after some\npoint. The new issue was a corruption of received WAL records on the\nfirst standby, and it may be related to the setting.\n\n> > Thus, I managed to reproduce precisely the same situation as you\n> > described utilizing your script with modifications and some core\n> > tweaks, and with the change above, I saw that the behavior was\n> > fixed. However, for reasons unclear to me, it shows another issue, and\n> > I am running out of time and need more caffeine. I'll continue\n> > investigating this tomorrow.\n> >\n> \n> Thank you for spending your time on it!\n\nYou're welcome, but I aplogize for the delay in the work..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 11 Mar 2024 16:43:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 04:43:32PM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 6 Mar 2024 11:34:29 +0100, Alexander Kukushkin <[email protected]> wrote in \n>> Thank you for spending your time on it!\n> \n> You're welcome, but I aplogize for the delay in the work..\n\nThanks for spending time on this. Everybody is busy with the last\ncommit fest, and the next minor release set is planned for May so\nthere should still be time even after the feature freeze:\nhttps://www.postgresql.org/developer/roadmap/\n\nI should be able to come back to this thread around the beginning of\nApril. If you are able to do some progress in-between, that would be\nsurely helpful.\n\nThanks,\n--\nMichael",
"msg_date": "Tue, 12 Mar 2024 08:20:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "At Mon, 11 Mar 2024 16:43:32 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> Oh, I once saw the fix work, but seems not to be working after some\n> point. The new issue was a corruption of received WAL records on the\n> first standby, and it may be related to the setting.\n\nI identified the cause of the second issue. When I tried to replay the\nissue, the second standby accidentally received the old timeline's\nlast page-spanning record till the end while the first standby was\npromoting (but it had not been read by recovery). In addition to that,\non the second standby, there's a time window where the timeline\nincreased but the first segment of the new timeline is not available\nyet. In this case, the second standby successfully reads the\npage-spanning record in the old timeline even after the second standby\nnoticed that the timeline ID has been increased, thanks to the\nrobustness of XLogFileReadAnyTLI().\n\nI think the primary change to XLogPageRead that I suggested is correct\n(assuming the use of wal_segment_size instead of the\nconstant). However, still XLogFileReadAnyTLI() has a chance to read\nthe segment from the old timeline after the second standby notices a\ntimeline switch, leading to the second issue. The second issue was\nfixed by preventing XLogFileReadAnyTLI from reading segments from\nolder timelines than those suggested by the latest timeline\nhistory. (In other words, disabling the \"AnyTLI\" part).\n\nI recall that there was a discussion for commit 4bd0ad9e44, about the\nobjective of allowing reading segments from older timelines than the\ntimeline history suggests. In my faint memory, we concluded to\npostpone making the decision to remove the feature due to uncertainity\nabout the objective. If there's no clear reason to continue using\nXLogFileReadAnyTLI(), I suggest we stop its use and instead adopt\nXLogFileReadOnTLHistory(), which reads segments that align precisely\nwith the timeline history.\n\nOf course, regardless of the changes above, if recovery on the second\nstandby had reached the end of the page-spanning record before\nredirection to the first standby, it would need pg_rewind to connect\nto the first standby.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 13 Mar 2024 11:56:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "Hi Kyotaro,\n\nOn Wed, 13 Mar 2024 at 03:56, Kyotaro Horiguchi <[email protected]>\nwrote:\n\nI identified the cause of the second issue. When I tried to replay the\n> issue, the second standby accidentally received the old timeline's\n> last page-spanning record till the end while the first standby was\n> promoting (but it had not been read by recovery). In addition to that,\n> on the second standby, there's a time window where the timeline\n> increased but the first segment of the new timeline is not available\n> yet. In this case, the second standby successfully reads the\n> page-spanning record in the old timeline even after the second standby\n> noticed that the timeline ID has been increased, thanks to the\n> robustness of XLogFileReadAnyTLI().\n>\n\nHmm, I don't think it could really be prevented.\nThere are always chances that the standby that is not ahead of other\nstandbys could be promoted due to reasons like:\n1. HA configuration doesn't let certain nodes to be promoted.\n2. This is an async standby (name isn't listed in\nsynchronous_standby_names) and it was ahead of promoted sync standby. No\ndata loss from the client point of view.\n\n\n> Of course, regardless of the changes above, if recovery on the second\n> standby had reached the end of the page-spanning record before\n> redirection to the first standby, it would need pg_rewind to connect\n> to the first standby.\n>\n\nCorrect, IMO pg_rewind is a right way of solving it.\n\nRegards,\n--\nAlexander Kukushkin\n\nHi Kyotaro,On Wed, 13 Mar 2024 at 03:56, Kyotaro Horiguchi <[email protected]> wrote:\nI identified the cause of the second issue. When I tried to replay the\nissue, the second standby accidentally received the old timeline's\nlast page-spanning record till the end while the first standby was\npromoting (but it had not been read by recovery). In addition to that,\non the second standby, there's a time window where the timeline\nincreased but the first segment of the new timeline is not available\nyet. In this case, the second standby successfully reads the\npage-spanning record in the old timeline even after the second standby\nnoticed that the timeline ID has been increased, thanks to the\nrobustness of XLogFileReadAnyTLI().Hmm, I don't think it could really be prevented.There are always chances that the standby that is not ahead of other standbys could be promoted due to reasons like:1. HA configuration doesn't let certain nodes to be promoted.2. This is an async standby (name isn't listed in synchronous_standby_names) and it was ahead of promoted sync standby. No data loss from the client point of view. \nOf course, regardless of the changes above, if recovery on the second\nstandby had reached the end of the page-spanning record before\nredirection to the first standby, it would need pg_rewind to connect\nto the first standby.Correct, IMO pg_rewind is a right way of solving it.Regards,--Alexander Kukushkin",
"msg_date": "Fri, 15 Mar 2024 08:20:15 +0100",
"msg_from": "Alexander Kukushkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "On Wed, 13 Mar 2024 at 04:56, Kyotaro Horiguchi <[email protected]> wrote:\n>\n> At Mon, 11 Mar 2024 16:43:32 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in\n> > Oh, I once saw the fix work, but seems not to be working after some\n> > point. The new issue was a corruption of received WAL records on the\n> > first standby, and it may be related to the setting.\n>\n> I identified the cause of the second issue. When I tried to replay the\n> issue, the second standby accidentally received the old timeline's\n> last page-spanning record till the end while the first standby was\n> promoting (but it had not been read by recovery). In addition to that,\n> on the second standby, there's a time window where the timeline\n> increased but the first segment of the new timeline is not available\n> yet. In this case, the second standby successfully reads the\n> page-spanning record in the old timeline even after the second standby\n> noticed that the timeline ID has been increased, thanks to the\n> robustness of XLogFileReadAnyTLI().\n>\n> I think the primary change to XLogPageRead that I suggested is correct\n> (assuming the use of wal_segment_size instead of the\n> constant). However, still XLogFileReadAnyTLI() has a chance to read\n> the segment from the old timeline after the second standby notices a\n> timeline switch, leading to the second issue. The second issue was\n> fixed by preventing XLogFileReadAnyTLI from reading segments from\n> older timelines than those suggested by the latest timeline\n> history. (In other words, disabling the \"AnyTLI\" part).\n>\n> I recall that there was a discussion for commit 4bd0ad9e44, about the\n> objective of allowing reading segments from older timelines than the\n> timeline history suggests. In my faint memory, we concluded to\n> postpone making the decision to remove the feature due to uncertainity\n> about the objective. If there's no clear reason to continue using\n> XLogFileReadAnyTLI(), I suggest we stop its use and instead adopt\n> XLogFileReadOnTLHistory(), which reads segments that align precisely\n> with the timeline history.\n\n\nThis sounds very similar to the problem described in [1]. And I think\nboth will be resolved by that change.\n\n[1] https://postgr.es/m/CANwKhkMN3QwAcvuDZHb6wsvLRtkweBiYso-KLFykkQVWuQLcOw%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 15 Mar 2024 12:43:41 +0200",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "Hi Michael and Kyotaro,\n\nNow that beta1 was released I hope you are not so busy and hence would like\nto follow up on this problem.\n\nRegards,\n--\nAlexander Kukushkin\n\nHi Michael and Kyotaro,Now that beta1 was released I hope you are not so busy and hence would like to follow up on this problem.Regards,--Alexander Kukushkin",
"msg_date": "Tue, 4 Jun 2024 16:16:43 +0200",
"msg_from": "Alexander Kukushkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
},
{
"msg_contents": "On Tue, Jun 04, 2024 at 04:16:43PM +0200, Alexander Kukushkin wrote:\n> Now that beta1 was released I hope you are not so busy and hence would like\n> to follow up on this problem.\n\nI am still working on something for the v18 cycle that I'd like to\npresent before the beginning of the next commit fest, so I am a bit\nbusy to get that out first. Fingers crossed to not have open items to\nlook at.. This thread is one of the things I have marked as an item\nto look at, yes.\n--\nMichael",
"msg_date": "Wed, 5 Jun 2024 14:09:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite loop in XLogPageRead() on standby"
}
] |
[
{
"msg_contents": "Improve performance of subsystems on top of SLRU\n\nMore precisely, what we do here is make the SLRU cache sizes\nconfigurable with new GUCs, so that sites with high concurrency and big\nranges of transactions in flight (resp. multixacts/subtransactions) can\nbenefit from bigger caches. In order for this to work with good\nperformance, two additional changes are made:\n\n1. the cache is divided in \"banks\" (to borrow terminology from CPU\n caches), and algorithms such as eviction buffer search only affect\n one specific bank. This forestalls the problem that linear searching\n for a specific buffer across the whole cache takes too long: we only\n have to search the specific bank, whose size is small. This work is\n authored by Andrey Borodin.\n\n2. Change the locking regime for the SLRU banks, so that each bank uses\n a separate LWLock. This allows for increased scalability. This work\n is authored by Dilip Kumar. (A part of this was previously committed as\n d172b717c6f4.)\n\nSpecial care is taken so that the algorithms that can potentially\ntraverse more than one bank release one bank's lock before acquiring the\nnext. This should happen rarely, but particularly clog.c's group commit\nfeature needed code adjustment to cope with this. I (Álvaro) also added\nlots of comments to make sure the design is sound.\n\nThe new GUCs match the names introduced by bcdfa5f2e2f2 in the\npg_stat_slru view.\n\nThe default values for these parameters are similar to the previous\nsizes of each SLRU. commit_ts, clog and subtrans accept value 0, which\nmeans to adjust by dividing shared_buffers by 512 (so 2MB for every 1GB\nof shared_buffers), with a cap of 8MB. (A new slru.c function\nSimpleLruAutotuneBuffers() was added to support this.) The cap was\npreviously 1MB for clog, so for sites with more than 512MB of shared\nmemory the total memory used increases, which is likely a good tradeoff.\nHowever, other SLRUs (notably multixact ones) retain smaller sizes and\ndon't support a configured value of 0. These values based on\nshared_buffers may need to be revisited, but that's an easy change.\n\nThere was some resistance to adding these new GUCs: it would be better\nto adjust to memory pressure automatically somehow, for example by\nstealing memory from shared_buffers (where the caches can grow and\nshrink naturally). 
However, doing that seems to be a much larger\nproject and one which has made virtually no progress in several years,\nand because this is such a pain point for so many users, here we take\nthe pragmatic approach.\n\nAuthor: Andrey Borodin <[email protected]>\nAuthor: Dilip Kumar <[email protected]>\nReviewed-by: Amul Sul, Gilles Darold, Anastasia Lubennikova,\n Ivan Lazarev, Robert Haas, Thomas Munro, Tomas Vondra,\n Yura Sokolov, Васильев Дмитрий (Dmitry Vasiliev).\nDiscussion: https://postgr.es/m/[email protected]\nDiscussion: https://postgr.es/m/CAFiTN-vzDvNz=ExGXz6gdyjtzGixKSqs0mKHMmaQ8sOSEFZ33A@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/53c2a97a92665be6bd7d70bd62ae6158fe4db96e\n\nModified Files\n--------------\ndoc/src/sgml/config.sgml | 139 +++++++++\ndoc/src/sgml/monitoring.sgml | 9 +-\nsrc/backend/access/transam/clog.c | 243 +++++++++++-----\nsrc/backend/access/transam/commit_ts.c | 88 ++++--\nsrc/backend/access/transam/multixact.c | 190 +++++++++----\nsrc/backend/access/transam/slru.c | 357 +++++++++++++++++-------\nsrc/backend/access/transam/subtrans.c | 110 ++++++--\nsrc/backend/commands/async.c | 61 ++--\nsrc/backend/storage/lmgr/lwlock.c | 9 +-\nsrc/backend/storage/lmgr/lwlocknames.txt | 14 +-\nsrc/backend/storage/lmgr/predicate.c | 34 ++-\nsrc/backend/utils/activity/wait_event_names.txt | 15 +-\nsrc/backend/utils/init/globals.c | 9 +\nsrc/backend/utils/misc/guc_tables.c | 78 ++++++\nsrc/backend/utils/misc/postgresql.conf.sample | 9 +\nsrc/include/access/clog.h | 1 -\nsrc/include/access/commit_ts.h | 1 -\nsrc/include/access/multixact.h | 4 -\nsrc/include/access/slru.h | 86 ++++--\nsrc/include/access/subtrans.h | 3 -\nsrc/include/commands/async.h | 5 -\nsrc/include/miscadmin.h | 8 +\nsrc/include/storage/lwlock.h | 7 +\nsrc/include/storage/predicate.h | 4 -\nsrc/include/utils/guc_hooks.h | 11 +\nsrc/test/modules/test_slru/test_slru.c | 35 +--\n26 files changed, 1177 insertions(+), 353 deletions(-)",
"msg_date": "Wed, 28 Feb 2024 16:07:13 +0000",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: Improve performance of subsystems on top of SLRU"
},
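As a concrete reading of the autotuning rule described in the commit message above (value 0 for commit_ts, clog and subtrans), here is a self-contained sketch; the function name echoes SimpleLruAutotuneBuffers() but the body is reconstructed only from the numbers quoted in the commit message, so treat it as an illustration rather than the committed implementation.

#include <stdio.h>

/*
 * "0 means autotune": one SLRU buffer (8kB page) per 512 shared_buffers
 * pages, i.e. about 2MB of SLRU per 1GB of shared_buffers, capped at 8MB
 * (1024 pages of 8kB).
 */
static int
slru_autotune_buffers(int shared_buffers_pages)
{
	int		nbuffers = shared_buffers_pages / 512;

	if (nbuffers > 1024)		/* 8MB cap, assuming 8kB pages */
		nbuffers = 1024;
	return nbuffers;
}

int
main(void)
{
	/* 1GB of shared_buffers = 131072 pages of 8kB -> 256 buffers (2MB) */
	printf("%d\n", slru_autotune_buffers(131072));
	/* 8GB of shared_buffers exceeds the 8MB cap -> 1024 buffers */
	printf("%d\n", slru_autotune_buffers(8 * 131072));
	return 0;
}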
{
"msg_contents": "On 2024-Feb-28, Alvaro Herrera wrote:\n\n> Improve performance of subsystems on top of SLRU\n\nCoverity had the following complaint about this commit:\n\n________________________________________________________________________________________________________\n*** CID NNNNNNN: Control flow issues (DEADCODE)\n/srv/coverity/git/pgsql-git/postgresql/src/backend/access/transam/multixact.c: 1375 in GetMultiXactIdMembers()\n1369 * and acquire the lock of the new bank.\n1370 */\n1371 lock = SimpleLruGetBankLock(MultiXactOffsetCtl, pageno);\n1372 if (lock != prevlock)\n1373 {\n1374 if (prevlock != NULL)\n>>> CID 1592913: Control flow issues (DEADCODE) \n>>> Execution cannot reach this statement: \"LWLockRelease(prevlock);\". \n1375 LWLockRelease(prevlock);\n1376 LWLockAcquire(lock, LW_EXCLUSIVE);\n1377 prevlock = lock;\n1378 }\n1379\n1380 slotno = SimpleLruReadPage(MultiXactOffsetCtl, pageno, true, multi);\n\nAnd I think it's correct that this is somewhat bogus, or at least\nconfusing: the only way to have control back here on line 1371 after\nhaving executed once is via the \"goto retry\" line below; and there we\nrelease \"prevlock\" and set it to NULL beforehand, so it's impossible for\nprevlock to be NULL. Looking closer I think this code is all confused,\nso I suggest to rework it as shown in the attached patch.\n\nI'll have a look at the other places where we use this \"prevlock\" coding\npattern tomorrow.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Sun, 3 Mar 2024 15:29:22 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Improve performance of subsystems on top of SLRU"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> And I think it's correct that this is somewhat bogus, or at least\n> confusing: the only way to have control back here on line 1371 after\n> having executed once is via the \"goto retry\" line below; and there we\n> release \"prevlock\" and set it to NULL beforehand, so it's impossible for\n> prevlock to be NULL. Looking closer I think this code is all confused,\n> so I suggest to rework it as shown in the attached patch.\n\nThis is certainly simpler, but I notice that it holds the current\nLWLock across the line\n\n \tptr = (MultiXactMember *) palloc(length * sizeof(MultiXactMember));\n\nwhere the old code did not. Could the palloc take long enough that\nholding the lock is bad?\n\nAlso, with this coding the \"lock = NULL;\" assignment just before\n\"goto retry\" is a dead store. Not sure if Coverity or other static\nanalyzers would whine about that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 03 Mar 2024 16:14:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Improve performance of subsystems on top of SLRU"
},
{
"msg_contents": "On Mon, Mar 4, 2024 at 1:56 AM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Feb-28, Alvaro Herrera wrote:\n>\n> > Improve performance of subsystems on top of SLRU\n>\n> Coverity had the following complaint about this commit:\n>\n> ________________________________________________________________________________________________________\n> *** CID NNNNNNN: Control flow issues (DEADCODE)\n> /srv/coverity/git/pgsql-git/postgresql/src/backend/access/transam/multixact.c: 1375 in GetMultiXactIdMembers()\n> 1369 * and acquire the lock of the new bank.\n> 1370 */\n> 1371 lock = SimpleLruGetBankLock(MultiXactOffsetCtl, pageno);\n> 1372 if (lock != prevlock)\n> 1373 {\n> 1374 if (prevlock != NULL)\n> >>> CID 1592913: Control flow issues (DEADCODE)\n> >>> Execution cannot reach this statement: \"LWLockRelease(prevlock);\".\n> 1375 LWLockRelease(prevlock);\n> 1376 LWLockAcquire(lock, LW_EXCLUSIVE);\n> 1377 prevlock = lock;\n> 1378 }\n> 1379\n> 1380 slotno = SimpleLruReadPage(MultiXactOffsetCtl, pageno, true, multi);\n>\n> And I think it's correct that this is somewhat bogus, or at least\n> confusing: the only way to have control back here on line 1371 after\n> having executed once is via the \"goto retry\" line below; and there we\n> release \"prevlock\" and set it to NULL beforehand, so it's impossible for\n> prevlock to be NULL. Looking closer I think this code is all confused,\n> so I suggest to rework it as shown in the attached patch.\n>\n> I'll have a look at the other places where we use this \"prevlock\" coding\n> pattern tomorrow.\n\n\n+ /* Acquire the bank lock for the page we need. */\n lock = SimpleLruGetBankLock(MultiXactOffsetCtl, pageno);\n- if (lock != prevlock)\n- {\n- if (prevlock != NULL)\n- LWLockRelease(prevlock);\n- LWLockAcquire(lock, LW_EXCLUSIVE);\n- prevlock = lock;\n- }\n+ LWLockAcquire(lock, LW_EXCLUSIVE);\n\nThis part is definitely an improvement.\n\nI am not sure about the other changes, I mean that makes the code much\nsimpler but now we are not releasing the 'MultiXactOffsetCtl' related\nbank lock, and later in the following loop, we are comparing that lock\nagainst 'MultiXactMemberCtl' related bank lock. This makes code\nsimpler because now in the loop we are sure that we are always holding\nthe lock but I do not like comparing the bank locks for 2 different\nSLRUs, although there is no problem as there would not be a common\nlock address, anyway, I do not have any strong objection to what you\nhave done here.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 11:44:48 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Improve performance of subsystems on top of SLRU"
},
{
"msg_contents": "On 2024-Mar-03, Tom Lane wrote:\n\n> This is certainly simpler, but I notice that it holds the current\n> LWLock across the line\n> \n> \tptr = (MultiXactMember *) palloc(length * sizeof(MultiXactMember));\n> \n> where the old code did not. Could the palloc take long enough that\n> holding the lock is bad?\n\nHmm, I guess most of the time it shouldn't be much of a problem (if the\nlength is small so we can palloc without malloc'ing); but it could be in\nthe worst case. But the fix is simple: just release the lock before.\nThere's no correctness argument for holding it all the way down. I was\njust confused about how the original code worked.\n\n> Also, with this coding the \"lock = NULL;\" assignment just before\n> \"goto retry\" is a dead store. Not sure if Coverity or other static\n> analyzers would whine about that.\n\nOh, right. I removed that.\n\nOn 2024-Mar-04, Dilip Kumar wrote:\n\n> I am not sure about the other changes, I mean that makes the code much\n> simpler but now we are not releasing the 'MultiXactOffsetCtl' related\n> bank lock, and later in the following loop, we are comparing that lock\n> against 'MultiXactMemberCtl' related bank lock. This makes code\n> simpler because now in the loop we are sure that we are always holding\n> the lock but I do not like comparing the bank locks for 2 different\n> SLRUs, although there is no problem as there would not be a common\n> lock address,\n\nTrue. This can be addressed in the same way Tom's first comment is:\njust release the lock before entering the second loop, and setting lock\nto NULL. This brings the code to a similar state than before, except\nthat the additional LWLock * variables are in a tighter scope. That's\nin 0001.\n\n\nNow, I had a look at the other users of slru.c and noticed in subtrans.c\nthat StartupSUBTRANS we have some duplicate code that I think could be\nrewritten by making the \"while\" block test the condition at the end\ninstead of at the start; changed that in 0002. I'll leave this one for\nlater, because I want to add some test code for it -- right now it's\npretty much test-uncovered code.\n\n\nI also looked at slru.c for uses of shared->bank_locks and noticed a\ncouple that could be made simpler. That's 0003 here.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"If it is not right, do not do it.\nIf it is not true, do not say it.\" (Marcus Aurelius, Meditations)",
"msg_date": "Mon, 4 Mar 2024 16:17:58 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Improve performance of subsystems on top of SLRU"
},
{
"msg_contents": "FWIW there's a stupid bug in 0002, which is fixed here. I'm writing a\nsimple test for it.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Now I have my system running, not a byte was off the shelf;\nIt rarely breaks and when it does I fix the code myself.\nIt's stable, clean and elegant, and lightning fast as well,\nAnd it doesn't cost a nickel, so Bill Gates can go to hell.\"",
"msg_date": "Mon, 4 Mar 2024 18:36:55 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Improve performance of subsystems on top of SLRU"
}
] |
[
{
"msg_contents": "Hackers,\n\nThis patch adds checkpoint/redo LSNs to recovery error messages where \nthey may be useful for debugging.\n\nWhen backup_label is present the LSNs are already output in a log \nmessage, but it still seems like a good idea to repeat them.\n\nWhen backup_label is not present, the checkpoint LSN is not logged \nunless backup recovery is in progress or the checkpoint is found. In the \ncase where a backup is restored but the backup_label is missing, the \ncheckpoint LSN is not logged so it is useful for debugging to have it in \nthe error message.\n\nRegards,\n-David",
"msg_date": "Thu, 29 Feb 2024 10:53:15 +1300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add checkpoint/redo LSNs to recovery errors."
},
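As a rough illustration of what David's change produces at runtime, here is a tiny standalone program that formats an LSN the way the patched messages do; the LSN_FORMAT_ARGS definition below is a local stand-in mirroring the %X/%X convention, and the sample LSN is made up.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t XLogRecPtr;

/* Local stand-in mirroring the %X/%X convention used by the patch. */
#define LSN_FORMAT_ARGS(lsn) \
	((unsigned int) ((lsn) >> 32)), ((unsigned int) (lsn))

int
main(void)
{
	XLogRecPtr	checkPointLoc = UINT64_C(0x16B374D48);	/* arbitrary example */

	/* prints "could not locate a valid checkpoint record at 1/6B374D48" */
	printf("could not locate a valid checkpoint record at %X/%X\n",
		   LSN_FORMAT_ARGS(checkPointLoc));
	return 0;
}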
{
"msg_contents": "On Thu, Feb 29, 2024 at 10:53:15AM +1300, David Steele wrote:\n> This patch adds checkpoint/redo LSNs to recovery error messages where they\n> may be useful for debugging.\n\nThanks for following up on that!\n\n> When backup_label is not present, the checkpoint LSN is not logged unless\n> backup recovery is in progress or the checkpoint is found. In the case where\n> a backup is restored but the backup_label is missing, the checkpoint LSN is\n> not logged so it is useful for debugging to have it in the error message.\n\n ereport(PANIC,\n- (errmsg(\"could not locate a valid checkpoint record\")));\n+ (errmsg(\"could not locate a valid checkpoint record at %X/%X\",\n+ LSN_FORMAT_ARGS(CheckPointLoc))));\n }\n\nI've seen this one in the field occasionally, so that's really a\nwelcome addition IMO.\n\nI've scanned a bit xlogrecovery.c, and I have spotted a couple of that\ncould gain more information.\n\n ereport(PANIC,\n (errmsg(\"invalid redo record in shutdown checkpoint\")));\n[...]\n ereport(PANIC,\n (errmsg(\"invalid redo in checkpoint record\")));\nThese two could mention CheckPointLoc and checkPoint.redo.\n\n ereport(PANIC,\n (errmsg(\"invalid next transaction ID\")));\nPerhaps some XID information could be added here?\n\n ereport(FATAL,\n (errmsg(\"WAL ends before consistent recovery point\")));\n[...]\n ereport(FATAL,\n (errmsg(\"WAL ends before end of online backup\"),\n\nThese two are in xlog.c, and don't mention backupStartPoint for the\nfirst one. Perhaps there's a point in adding some information about\nEndOfLog and LocalMinRecoveryPoint as well?\n--\nMichael",
"msg_date": "Thu, 29 Feb 2024 12:42:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add checkpoint/redo LSNs to recovery errors."
},
{
"msg_contents": "On 2/29/24 16:42, Michael Paquier wrote:\n> On Thu, Feb 29, 2024 at 10:53:15AM +1300, David Steele wrote:\n>> This patch adds checkpoint/redo LSNs to recovery error messages where they\n>> may be useful for debugging.\n> \n> I've scanned a bit xlogrecovery.c, and I have spotted a couple of that\n> could gain more information.\n> \n> ereport(PANIC,\n> (errmsg(\"invalid redo record in shutdown checkpoint\")));\n> [...]\n> ereport(PANIC,\n> (errmsg(\"invalid redo in checkpoint record\")));\n> These two could mention CheckPointLoc and checkPoint.redo.\n> \n> ereport(PANIC,\n> (errmsg(\"invalid next transaction ID\")));\n> Perhaps some XID information could be added here?\n> \n> ereport(FATAL,\n> (errmsg(\"WAL ends before consistent recovery point\")));\n> [...]\n> ereport(FATAL,\n> (errmsg(\"WAL ends before end of online backup\"),\n> \n> These two are in xlog.c, and don't mention backupStartPoint for the\n> first one. Perhaps there's a point in adding some information about\n> EndOfLog and LocalMinRecoveryPoint as well?\n\nFor now I'd like to just focus on these three messages that are related \nto a missing backup_label or a misconfiguration of restore_command when \nbackup_label is present.\n\nNo doubt there are many other recovery log messages that could be \nimproved, but I'd rather not bog down the patch.\n\nRegards,\n-David\n\n\n",
"msg_date": "Sun, 10 Mar 2024 16:58:19 +1300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add checkpoint/redo LSNs to recovery errors."
},
{
"msg_contents": "On Sun, Mar 10, 2024 at 04:58:19PM +1300, David Steele wrote:\n> No doubt there are many other recovery log messages that could be improved,\n> but I'd rather not bog down the patch.\n\nFair argument, and these additions are useful when taken\nindependently. I've applied your suggestions for now.\n--\nMichael",
"msg_date": "Mon, 11 Mar 2024 09:23:58 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add checkpoint/redo LSNs to recovery errors."
}
] |
[
{
"msg_contents": "While checking some recently pushed changes [1] I noticed\ndocumentation [2] that includes the abbreviation \"aka\".\n\nIMO it is preferable to avoid informal abbreviations like \"aka\" in the\ndocuments, because not everyone will understand the meaning.\nFurthermore, I think this is reinforced by the fact this was the\n*only* example of \"aka\" that I could find in all of the .sgml. Indeed,\nassuming that \"aka\" is short for \"also known as\" then the sentence\nstill doesn't seem correct even after those words are substituted.\n\nHEAD\nFor the synchronization to work, it is mandatory to have a physical\nreplication slot between the primary and the standby aka\nprimary_slot_name should be configured on the standby, and\nhot_standby_feedback must be enabled on the standby.\n\nSUGGESTION\nFor the synchronization to work, it is mandatory to have a physical\nreplication slot between the primary and the standby (i.e.,\nprimary_slot_name should be configured on the standby), and\nhot_standby_feedback must be enabled on the standby.\n\n~\n\nI found that the \"aka\" was introduced in v86-0001 [3]. So my\nreplacement text above restores to something similar to how it was in\nv85-0001.\n\nPSA a patch for the same.\n\n----------\n[1] https://github.com/postgres/postgres/commit/ddd5f4f54a026db6a6692876d0d44aef902ab686#diff-29c2d2e0480177b04f9c3d82c1454f8c00a11b8e761a9c9f5f4f6d61e6f19252\n[2] https://www.postgresql.org/docs/devel/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS-SYNCHRONIZATION\n[3] [1] https://www.postgresql.org/message-id/OS0PR01MB5716E581B4227DDEB4DE6C30944F2%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 29 Feb 2024 16:51:50 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "DOCS: Avoid using abbreviation \"aka\""
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 04:51:50PM +1100, Peter Smith wrote:\n> HEAD\n> For the synchronization to work, it is mandatory to have a physical\n> replication slot between the primary and the standby aka\n> primary_slot_name should be configured on the standby, and\n> hot_standby_feedback must be enabled on the standby.\n> \n> SUGGESTION\n> For the synchronization to work, it is mandatory to have a physical\n> replication slot between the primary and the standby (i.e.,\n> primary_slot_name should be configured on the standby), and\n> hot_standby_feedback must be enabled on the standby.\n\nI agree that this is not a good practice in user-visible docs, and\nthat your suggested is more pleasant to read.\n--\nMichael",
"msg_date": "Thu, 29 Feb 2024 15:32:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DOCS: Avoid using abbreviation \"aka\""
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 12:02 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Feb 29, 2024 at 04:51:50PM +1100, Peter Smith wrote:\n> > HEAD\n> > For the synchronization to work, it is mandatory to have a physical\n> > replication slot between the primary and the standby aka\n> > primary_slot_name should be configured on the standby, and\n> > hot_standby_feedback must be enabled on the standby.\n> >\n> > SUGGESTION\n> > For the synchronization to work, it is mandatory to have a physical\n> > replication slot between the primary and the standby (i.e.,\n> > primary_slot_name should be configured on the standby), and\n> > hot_standby_feedback must be enabled on the standby.\n>\n> I agree that this is not a good practice in user-visible docs, and\n> that your suggested is more pleasant to read.\n>\n\n+1. LGTM as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 29 Feb 2024 14:42:08 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DOCS: Avoid using abbreviation \"aka\""
},
{
"msg_contents": "On Thu, 2024-02-29 at 16:51 +1100, Peter Smith wrote:\n> While checking some recently pushed changes [1] I noticed\n> documentation [2] that includes the abbreviation \"aka\".\n> \n> IMO it is preferable to avoid informal abbreviations like \"aka\" in the\n> documents, because not everyone will understand the meaning.\n> Furthermore, I think this is reinforced by the fact this was the\n> *only* example of \"aka\" that I could find in all of the .sgml. Indeed,\n> assuming that \"aka\" is short for \"also known as\" then the sentence\n> still doesn't seem correct even after those words are substituted.\n\n+1\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 29 Feb 2024 10:14:36 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DOCS: Avoid using abbreviation \"aka\""
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 02:42:08PM +0530, Amit Kapila wrote:\n> +1. LGTM as well.\n\nThis has been introduced by ddd5f4f54a02, so if you wish to fix it\nyourself, please feel free. If you'd prefer that I take care of it,\nI'm OK to do so as well.\n--\nMichael",
"msg_date": "Fri, 1 Mar 2024 07:55:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DOCS: Avoid using abbreviation \"aka\""
},
{
"msg_contents": "On Fri, Mar 1, 2024 at 4:25 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Feb 29, 2024 at 02:42:08PM +0530, Amit Kapila wrote:\n> > +1. LGTM as well.\n>\n> This has been introduced by ddd5f4f54a02, so if you wish to fix it\n> yourself, please feel free. If you'd prefer that I take care of it,\n> I'm OK to do so as well.\n>\n\nI wanted to wait for two or three days to see if any other fixes in\ndocs, typos, or cosmetic stuff are reported in this functionality then\nI can combine and push them. However, there is no harm in pushing them\nseparately, so if you want to go ahead please feel free to do so.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Mar 2024 11:08:21 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DOCS: Avoid using abbreviation \"aka\""
},
{
"msg_contents": "On Fri, Mar 01, 2024 at 11:08:21AM +0530, Amit Kapila wrote:\n> I wanted to wait for two or three days to see if any other fixes in\n> docs, typos, or cosmetic stuff are reported in this functionality then\n> I can combine and push them. However, there is no harm in pushing them\n> separately, so if you want to go ahead please feel free to do so.\n\nNah, feel free to :)\n--\nMichael",
"msg_date": "Fri, 1 Mar 2024 14:59:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DOCS: Avoid using abbreviation \"aka\""
},
{
"msg_contents": "On Fri, Mar 1, 2024 at 11:29 AM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Mar 01, 2024 at 11:08:21AM +0530, Amit Kapila wrote:\n> > I wanted to wait for two or three days to see if any other fixes in\n> > docs, typos, or cosmetic stuff are reported in this functionality then\n> > I can combine and push them. However, there is no harm in pushing them\n> > separately, so if you want to go ahead please feel free to do so.\n>\n> Nah, feel free to :)\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Mar 2024 10:02:36 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DOCS: Avoid using abbreviation \"aka\""
},
{
"msg_contents": "On Thu, Mar 07, 2024 at 10:02:36AM +0530, Amit Kapila wrote:\n> Pushed.\n\nThanks.\n--\nMichael",
"msg_date": "Thu, 7 Mar 2024 14:19:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DOCS: Avoid using abbreviation \"aka\""
}
] |
[
{
"msg_contents": "Hi all,\n\nIt's been brought to me that an extension may finish by breaking the\nassumptions ProcessUtility() relies on when calling\nstandard_ProcessUtility(), causing breakages when passing down data to\ncascading utility hooks.\n\nIsn't the state of the arguments given something we should check not\nonly in the main entry point ProcessUtility() but also in\nstandard_ProcessUtility(), to prevent issues if an extension\nincorrectly manipulates the arguments it needs to pass down to other\nmodules that use the utility hook, like using a NULL query string?\n\nSee the attached for the idea.\nThanks,\n--\nMichael",
"msg_date": "Thu, 29 Feb 2024 16:20:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Propagate sanity checks of ProcessUtility() to\n standard_ProcessUtility()?"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 3:21 PM Michael Paquier <[email protected]> wrote:\n>\n> Hi all,\n>\n> It's been brought to me that an extension may finish by breaking the\n> assumptions ProcessUtility() relies on when calling\n> standard_ProcessUtility(), causing breakages when passing down data to\n> cascading utility hooks.\n>\n> Isn't the state of the arguments given something we should check not\n> only in the main entry point ProcessUtility() but also in\n> standard_ProcessUtility(), to prevent issues if an extension\n> incorrectly manipulates the arguments it needs to pass down to other\n> modules that use the utility hook, like using a NULL query string?\n>\n> See the attached for the idea.\n\nwhy not just shovel these to standard_ProcessUtility.\nso ProcessUtility will looking consistent with (in format)\n * ExecutorStart()\n * ExecutorRun()\n * ExecutorFinish()\n * ExecutorEnd()\n\n\n",
"msg_date": "Thu, 29 Feb 2024 16:10:26 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Propagate sanity checks of ProcessUtility() to\n standard_ProcessUtility()?"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 04:10:26PM +0800, jian he wrote:\n> why not just shovel these to standard_ProcessUtility.\n> so ProcessUtility will looking consistent with (in format)\n> * ExecutorStart()\n> * ExecutorRun()\n> * ExecutorFinish()\n> * ExecutorEnd()\n\nThat's one of the points of the change: checking that only in\nstandard_ProcessUtility() may not be sufficient for utility hooks that\ndon't call standard_ProcessUtility(), so you'd stil want one in\nProcessUtility().\n--\nMichael",
"msg_date": "Fri, 1 Mar 2024 11:05:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Propagate sanity checks of ProcessUtility() to\n standard_ProcessUtility()?"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 2:21 AM Michael Paquier <[email protected]> wrote:\n> It's been brought to me that an extension may finish by breaking the\n> assumptions ProcessUtility() relies on when calling\n> standard_ProcessUtility(), causing breakages when passing down data to\n> cascading utility hooks.\n>\n> Isn't the state of the arguments given something we should check not\n> only in the main entry point ProcessUtility() but also in\n> standard_ProcessUtility(), to prevent issues if an extension\n> incorrectly manipulates the arguments it needs to pass down to other\n> modules that use the utility hook, like using a NULL query string?\n\nI can't imagine a scenario where this change saves more than 5 minutes\nof debugging, so I'd rather leave things the way they are. If you do\nthis, then people will see the macro and have to go look at what it\ndoes, whereas right now, they can see the assertions themselves, which\nis better.\n\nThe usual pattern for using hooks like this is to call the next\nimplementation, or the standard implementation, and pass down the\narguments that you got. If you try to do that and mess it up, you're\ngoing to get a crash really, really quickly and it shouldn't be very\ndifficult at all to figure out what you did wrong. Honestly, that\ndoesn't seem like it would be hard even without the assertions: for\nthe most part, a quick glance at the stack backtrace ought to be\nenough. If you're trying to do something more sophisticated, like\nmutating the node tree before passing it down, the chances of your\nmistakes getting caught by these assertions are pretty darn low, since\nthey're very superficial checks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2024 15:53:58 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Propagate sanity checks of ProcessUtility() to\n standard_ProcessUtility()?"
},
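As an aside to the pass-through pattern described above, a minimal hypothetical extension hook along those lines is sketched below. It is not code from this thread; the names my_ProcessUtility and prev_ProcessUtility are placeholders, while the signature follows ProcessUtility_hook_type from tcop/utility.h as of PostgreSQL 14 and later.

```c
#include "postgres.h"
#include "fmgr.h"
#include "tcop/utility.h"

PG_MODULE_MAGIC;

static ProcessUtility_hook_type prev_ProcessUtility = NULL;

/* Forward the arguments unchanged to the next hook or the standard code. */
static void
my_ProcessUtility(PlannedStmt *pstmt, const char *queryString,
				  bool readOnlyTree, ProcessUtilityContext context,
				  ParamListInfo params, QueryEnvironment *queryEnv,
				  DestReceiver *dest, QueryCompletion *qc)
{
	/* extension-specific work would go here */

	if (prev_ProcessUtility)
		prev_ProcessUtility(pstmt, queryString, readOnlyTree, context,
							params, queryEnv, dest, qc);
	else
		standard_ProcessUtility(pstmt, queryString, readOnlyTree, context,
								params, queryEnv, dest, qc);
}

void
_PG_init(void)
{
	prev_ProcessUtility = ProcessUtility_hook;
	ProcessUtility_hook = my_ProcessUtility;
}
```

Getting this chain wrong (for example, passing a NULL query string down) is exactly the kind of mistake the proposed assertions would catch, which is why the discussion centers on how much such checks actually buy.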
{
"msg_contents": "On Fri, May 17, 2024 at 03:53:58PM -0400, Robert Haas wrote:\n> The usual pattern for using hooks like this is to call the next\n> implementation, or the standard implementation, and pass down the\n> arguments that you got. If you try to do that and mess it up, you're\n> going to get a crash really, really quickly and it shouldn't be very\n> difficult at all to figure out what you did wrong. Honestly, that\n> doesn't seem like it would be hard even without the assertions: for\n> the most part, a quick glance at the stack backtrace ought to be\n> enough. If you're trying to do something more sophisticated, like\n> mutating the node tree before passing it down, the chances of your\n> mistakes getting caught by these assertions are pretty darn low, since\n> they're very superficial checks.\n\nPerhaps, still that would be something.\n\nHmm. We've had in the past bugs where DDL paths were playing with the\nmanipulation of Querys in core, corrupting their contents. It seems\nlike what you would mean is to have a check at the *end* of\nProcessUtility() that does an equalFuncs() on the Query, comparing it\nwith a copy of it taken at its beginning, all that hidden within two\nUSE_ASSERT_CHECKING blocks, when we'd expect the tree to not change.\nWith readOnlyTree, that would be just changing from one policy to\nanother, which is not really interesting at this stage based on how\nProcessUtility() is shaped.\n--\nMichael",
"msg_date": "Sat, 18 May 2024 11:11:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Propagate sanity checks of ProcessUtility() to\n standard_ProcessUtility()?"
},
{
"msg_contents": "On Fri, May 17, 2024 at 10:11 PM Michael Paquier <[email protected]> wrote:\n> On Fri, May 17, 2024 at 03:53:58PM -0400, Robert Haas wrote:\n> > The usual pattern for using hooks like this is to call the next\n> > implementation, or the standard implementation, and pass down the\n> > arguments that you got. If you try to do that and mess it up, you're\n> > going to get a crash really, really quickly and it shouldn't be very\n> > difficult at all to figure out what you did wrong. Honestly, that\n> > doesn't seem like it would be hard even without the assertions: for\n> > the most part, a quick glance at the stack backtrace ought to be\n> > enough. If you're trying to do something more sophisticated, like\n> > mutating the node tree before passing it down, the chances of your\n> > mistakes getting caught by these assertions are pretty darn low, since\n> > they're very superficial checks.\n>\n> Perhaps, still that would be something.\n>\n> Hmm. We've had in the past bugs where DDL paths were playing with the\n> manipulation of Querys in core, corrupting their contents. It seems\n> like what you would mean is to have a check at the *end* of\n> ProcessUtility() that does an equalFuncs() on the Query, comparing it\n> with a copy of it taken at its beginning, all that hidden within two\n> USE_ASSERT_CHECKING blocks, when we'd expect the tree to not change.\n> With readOnlyTree, that would be just changing from one policy to\n> another, which is not really interesting at this stage based on how\n> ProcessUtility() is shaped.\n\nI don't think I meant to imply that. I think what I feel is that\nthere's no real problem here, and that the patch sort of superficially\nlooks useful, but isn't really. I'd suggest just dropping it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 May 2024 13:11:15 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Propagate sanity checks of ProcessUtility() to\n standard_ProcessUtility()?"
}
] |
[
{
"msg_contents": "Hi!\n\nI'd like to suggest two independent patches to improve performance of type cache \ncleanup. I found a case where type cache cleanup was a reason for low \nperformance. In short, customer makes 100 thousand temporary tables in one \ntransaction.\n\n1 mapRelType.patch\n It just adds a local map between relation and its type as it was suggested in \ncomment above TypeCacheRelCallback(). Unfortunately, using syscache here was \nimpossible because this call back could be called outside transaction and it \nmakes impossible catalog lookups.\n\n2 hash_seq_init_with_hash_value.patch\n TypeCacheTypCallback() loop over type hash to find entry with given hash \nvalue. Here there are two problems: 1) there isn't interface to dynahash to \nsearch entry with given hash value and 2) hash value calculation algorithm is \ndiffer from system cache. But coming hashvalue is came from system cache. Patch \nis addressed to both issues. It suggests hash_seq_init_with_hash_value() call \nwhich inits hash sequential scan over the single bucket which could contain \nentry with given hash value, and hash_seq_search() will iterate only over such \nentries. Anf patch changes hash algorithm to match syscache. Actually, patch \nmakes small refactoring of dynahash, it makes common function hash_do_lookup() \nwhich does initial lookup in hash.\n\nSome artificial performance test is in attachment, command to test is 'time psql \n< custom_types_and_array.sql', here I show only last rollback time and total \nexecution time:\n1) master 92d2ab7554f92b841ea71bcc72eaa8ab11aae662\nTime: 33353,288 ms (00:33,353)\npsql < custom_types_and_array.sql 0,82s user 0,71s system 1% cpu 1:28,36 total\n\n2) mapRelType.patch\nTime: 7455,581 ms (00:07,456)\npsql < custom_types_and_array.sql 1,39s user 1,19s system 6% cpu 41,220 total\n\n3) hash_seq_init_with_hash_value.patch\nTime: 24975,886 ms (00:24,976)\npsql < custom_types_and_array.sql 1,33s user 1,25s system 3% cpu 1:19,77 total\n\n4) both\nTime: 89,446 ms\npsql < custom_types_and_array.sql 0,72s user 0,52s system 10% cpu 12,137 total\n\n\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW: http://www.sigaev.ru/",
"msg_date": "Thu, 29 Feb 2024 13:26:07 +0300",
"msg_from": "Teodor Sigaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "type cache cleanup improvements"
},
{
"msg_contents": "Hi Teodor,\n\n> I'd like to suggest two independent patches to improve performance of type cache\n> cleanup. I found a case where type cache cleanup was a reason for low\n> performance. In short, customer makes 100 thousand temporary tables in one\n> transaction.\n>\n> 1 mapRelType.patch\n> It just adds a local map between relation and its type as it was suggested in\n> comment above TypeCacheRelCallback(). Unfortunately, using syscache here was\n> impossible because this call back could be called outside transaction and it\n> makes impossible catalog lookups.\n>\n> 2 hash_seq_init_with_hash_value.patch\n> TypeCacheTypCallback() loop over type hash to find entry with given hash\n> value. Here there are two problems: 1) there isn't interface to dynahash to\n> search entry with given hash value and 2) hash value calculation algorithm is\n> differ from system cache. But coming hashvalue is came from system cache. Patch\n> is addressed to both issues. It suggests hash_seq_init_with_hash_value() call\n> which inits hash sequential scan over the single bucket which could contain\n> entry with given hash value, and hash_seq_search() will iterate only over such\n> entries. Anf patch changes hash algorithm to match syscache. Actually, patch\n> makes small refactoring of dynahash, it makes common function hash_do_lookup()\n> which does initial lookup in hash.\n>\n> Some artificial performance test is in attachment, command to test is 'time psql\n> < custom_types_and_array.sql', here I show only last rollback time and total\n> execution time:\n> 1) master 92d2ab7554f92b841ea71bcc72eaa8ab11aae662\n> Time: 33353,288 ms (00:33,353)\n> psql < custom_types_and_array.sql 0,82s user 0,71s system 1% cpu 1:28,36 total\n>\n> 2) mapRelType.patch\n> Time: 7455,581 ms (00:07,456)\n> psql < custom_types_and_array.sql 1,39s user 1,19s system 6% cpu 41,220 total\n>\n> 3) hash_seq_init_with_hash_value.patch\n> Time: 24975,886 ms (00:24,976)\n> psql < custom_types_and_array.sql 1,33s user 1,25s system 3% cpu 1:19,77 total\n>\n> 4) both\n> Time: 89,446 ms\n> psql < custom_types_and_array.sql 0,72s user 0,52s system 10% cpu 12,137 total\n\nThese changes look very promising. Unfortunately the proposed patches\nconflict with each other regardless the order of applying:\n\n```\nerror: patch failed: src/backend/utils/cache/typcache.c:356\nerror: src/backend/utils/cache/typcache.c: patch does not apply\n```\n\nSo it's difficult to confirm case 4, not to mention the fact that we\nare unable to test the patches on cfbot.\n\nCould you please rebase the patches against the recent master branch\n(in any order) and submit the result of `git format-patch` ?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Mar 2024 16:01:53 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "Hi!\n\nThank you for interesting in it!\n\n> These changes look very promising. Unfortunately the proposed patches\n> conflict with each other regardless the order of applying:\n> \n> ```\n> error: patch failed: src/backend/utils/cache/typcache.c:356\n> error: src/backend/utils/cache/typcache.c: patch does not apply\n> ```\nTry increase -F option of patch.\n\nAnyway, union of both patches in attachment\n\n\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW: http://www.sigaev.ru/",
"msg_date": "Tue, 5 Mar 2024 12:51:37 +0300",
"msg_from": "Teodor Sigaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "Hi,\n\n> Thank you for interesting in it!\n>\n> > These changes look very promising. Unfortunately the proposed patches\n> > conflict with each other regardless the order of applying:\n> >\n> > ```\n> > error: patch failed: src/backend/utils/cache/typcache.c:356\n> > error: src/backend/utils/cache/typcache.c: patch does not apply\n> > ```\n> Try increase -F option of patch.\n>\n> Anyway, union of both patches in attachment\n\nThanks for the quick update.\n\nI tested the patch on an Intel MacBook. A release build was used with\nmy typical configuration, TWIMC see single-install-meson.sh [1]. The\nspeedup I got on the provided benchmark is about 150 times. cfbot\nseems to be happy with the patch.\n\nI would like to tweak the patch a little bit - change some comments,\nadd some Asserts, etc. Don't you mind?\n\n[1]: https://github.com/afiskon/pgscripts/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 5 Mar 2024 15:31:45 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "\n\n> I would like to tweak the patch a little bit - change some comments,\n> add some Asserts, etc. Don't you mind?\nYou are welcome!\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW: http://www.sigaev.ru/\n\n\n",
"msg_date": "Tue, 5 Mar 2024 17:16:30 +0300",
"msg_from": "Teodor Sigaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "Hi,\n\n> > I would like to tweak the patch a little bit - change some comments,\n> > add some Asserts, etc. Don't you mind?\n> You are welcome!\n\nThanks. PFA the updated patch with some tweaks by me. I added the\ncommit message as well.\n\nOne thing that I couldn't immediately figure out is why 0 hash value\nis treated as a magic invalid value in TypeCacheTypCallback():\n\n```\n- hash_seq_init(&status, TypeCacheHash);\n+ if (hashvalue == 0)\n+ hash_seq_init(&status, TypeCacheHash);\n+ else\n+ hash_seq_init_with_hash_value(&status, TypeCacheHash,\nhashvalue);\n```\n\nIs there anything that prevents the actual hash value from being zero?\nI don't think so, but maybe I missed something.\n\nIf zero is indeed an invalid hash value I would like to reference the\ncorresponding code. If zero is a correct hash value we should either\nchange this by adding something like `if(!hash) hash++` or use an\nadditional boolean argument here.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 8 Mar 2024 18:31:45 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "Aleksander Alekseev <[email protected]> writes:\n> One thing that I couldn't immediately figure out is why 0 hash value\n> is treated as a magic invalid value in TypeCacheTypCallback():\n\nI've not read this patch, but IIRC in some places we have a convention\nthat hash value zero is passed for an sinval reset event (that is,\n\"flush all cache entries\").\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Mar 2024 10:35:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "Yep, exacly. One time from 2^32 we reset whole cache instead of one (or several) \nentry with hash value = 0.\n\nOn 08.03.2024 18:35, Tom Lane wrote:\n> Aleksander Alekseev <[email protected]> writes:\n>> One thing that I couldn't immediately figure out is why 0 hash value\n>> is treated as a magic invalid value in TypeCacheTypCallback():\n> \n> I've not read this patch, but IIRC in some places we have a convention\n> that hash value zero is passed for an sinval reset event (that is,\n> \"flush all cache entries\").\n> \n> \t\t\tregards, tom lane\n> \n> \n\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW: http://www.sigaev.ru/\n\n\n",
"msg_date": "Fri, 8 Mar 2024 23:27:07 +0300",
"msg_from": "Teodor Sigaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: type cache cleanup improvements"
},
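To make the convention concrete, a simplified sketch of an invalidation callback using the function proposed in this thread is shown below. It is written as if it lived inside typcache.c (TypeCacheHash and TypeCacheEntry are private to that file), the flag reset is reduced to a placeholder, and the callback name is hypothetical.

```c
/*
 * Simplified syscache invalidation callback: hashvalue == 0 means an
 * sinval reset, i.e. flush everything; any other value lets us scan only
 * the one bucket that could hold matching entries, using the
 * hash_seq_init_with_hash_value() API proposed in this thread.
 */
static void
SketchTypCacheCallback(Datum arg, int cacheid, uint32 hashvalue)
{
	HASH_SEQ_STATUS status;
	TypeCacheEntry *typentry;

	if (hashvalue == 0)
		hash_seq_init(&status, TypeCacheHash);
	else
		hash_seq_init_with_hash_value(&status, TypeCacheHash, hashvalue);

	while ((typentry = hash_seq_search(&status)) != NULL)
	{
		/* reset whatever cached state this entry carries (placeholder) */
		typentry->flags = 0;
	}
}
```

A real entry whose hash value happens to be 0, or two entries colliding in one bucket, only causes extra invalidation, which is harmless for correctness.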
{
"msg_contents": "Hi,\n\n> Yep, exacly. One time from 2^32 we reset whole cache instead of one (or several)\n> entry with hash value = 0.\n\nGot it. Here is an updated patch where I added a corresponding comment.\n\nNow the patch LGTM. I'm going to change its status to RfC unless\nanyone wants to review it too.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 11 Mar 2024 15:24:57 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "> Got it. Here is an updated patch where I added a corresponding comment.\nThank you!\n\nPlaying around I found one more place which could easily modified with \nhash_seq_init_with_hash_value() call.\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW: http://www.sigaev.ru/",
"msg_date": "Tue, 12 Mar 2024 18:55:41 +0300",
"msg_from": "Teodor Sigaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 06:55:41PM +0300, Teodor Sigaev wrote:\n> Playing around I found one more place which could easily modified with\n> hash_seq_init_with_hash_value() call.\n\nI think that this patch should be split for clarity, as there are a\nfew things that are independently useful. I guess something like\nthat:\n- Introduction of hash_initial_lookup(), that simplifies 3 places of\ndynahash.c where the same code is used. The routine should be\ninlined.\n- The split in hash_seq_search to force a different type of search is\nweird, complicating the dynahash interface by hiding what seems like a\nsearch mode. Rather than hasHashvalue that's hidden in the middle of\nHASH_SEQ_STATUS, could it be better to have an entirely different API\nfor the search? That should be a patch on its own, as well.\n- The typcache changes.\n--\nMichael",
"msg_date": "Wed, 13 Mar 2024 14:47:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "> I think that this patch should be split for clarity, as there are a\n> few things that are independently useful. I guess something like\n> that:\nDone, all patches should be applied consequentially.\n\n > - The typcache changes.\n01-map_rel_to_type.v5.patch adds map relation to its type\n\n> - Introduction of hash_initial_lookup(), that simplifies 3 places of\n> dynahash.c where the same code is used. The routine should be\n> inlined.\n> - The split in hash_seq_search to force a different type of search is\n> weird, complicating the dynahash interface by hiding what seems like a\n> search mode. Rather than hasHashvalue that's hidden in the middle of\n> HASH_SEQ_STATUS, could it be better to have an entirely different API\n> for the search? That should be a patch on its own, as well.\n\n02-hash_seq_init_with_hash_value.v5.patch - introduces a \nhash_seq_init_with_hash_value() method. hash_initial_lookup() is marked as \ninline, but I suppose, modern compilers are smart enough to inline it automatically.\n\nUsing separate interface for scanning hash with hash value will make scan code \nmore ugly in case when we need to use special value of hash value as it is done \nin cache's scans. Look, instead of this simplified code:\n if (hashvalue == 0)\n hash_seq_init(&status, TypeCacheHash);\n else\n hash_seq_init_with_hash_value(&status, TypeCacheHash, hashvalue);\n while ((typentry = hash_seq_search(&status)) != NULL) {\n ...\n }\nwe will need to code something like that:\n if (hashvalue == 0)\n {\n hash_seq_init(&status, TypeCacheHash);\n\n \twhile ((typentry = hash_seq_search(&status)) != NULL) {\n \t\t...\n \t}\n }\n else\n {\n hash_seq_init_with_hash_value(&status, TypeCacheHash, hashvalue);\n \twhile ((typentry = hash_seq_search_with_hash_value(&status)) != NULL) {\n \t\t...\n \t}\n }\nOr I didn't understand you.\n\nI thought about integrate check inside existing loop in hash_seq_search() :\n+ rerun:\n if ((curElem = status->curEntry) != NULL)\n {\n /* Continuing scan of curBucket... */\n status->curEntry = curElem->link;\n if (status->curEntry == NULL) /* end of this bucket */\n+\t{\n+\t if (status->hasHashvalue)\n+\t\thash_seq_term(status);\t\t\n\t\n ++status->curBucket;\n+\t}\n+\telse if (status->hasHashvalue && status->hashvalue !=\n+ curElem->hashvalue)\n+\t\tgoto rerun;\n return (void *) ELEMENTKEY(curElem);\n }\n\nBut for me it looks weird and adds some checks which will takes some CPU time.\n\n\n03-att_with_hash_value.v5.patch - adds usage of previous patch.\n\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW: http://www.sigaev.ru/",
"msg_date": "Wed, 13 Mar 2024 16:40:38 +0300",
"msg_from": "Teodor Sigaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 04:40:38PM +0300, Teodor Sigaev wrote:\n> Done, all patches should be applied consequentially.\n\nOne thing that first pops out to me is that we can do the refactor of\nhash_initial_lookup() as an independent piece, without the extra paths\nintroduced. But rather than returning the bucket hash and have the\nbucket number as an in/out argument of hash_initial_lookup(), there is\nan argument for reversing them: hash_search_with_hash_value() does not\ncare about the bucket number.\n\n> 02-hash_seq_init_with_hash_value.v5.patch - introduces a\n> hash_seq_init_with_hash_value() method. hash_initial_lookup() is marked as\n> inline, but I suppose, modern compilers are smart enough to inline it\n> automatically.\n\nLikely so, though that does not hurt to show the intention to the\nreader.\n\nSo I would like to suggest the attached patch for this first piece.\nWhat do you think?\n\nIt may also be an idea to use `git format-patch` when generating a\nseries of patches. That makes for easier reviews.\n--\nMichael",
"msg_date": "Thu, 14 Mar 2024 16:42:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "> One thing that first pops out to me is that we can do the refactor of\n> hash_initial_lookup() as an independent piece, without the extra paths\n> introduced. But rather than returning the bucket hash and have the\n> bucket number as an in/out argument of hash_initial_lookup(), there is\n> an argument for reversing them: hash_search_with_hash_value() does not\n> care about the bucket number.\nOk, no problem\n\n> \n>> 02-hash_seq_init_with_hash_value.v5.patch - introduces a\n>> hash_seq_init_with_hash_value() method. hash_initial_lookup() is marked as\n>> inline, but I suppose, modern compilers are smart enough to inline it\n>> automatically.\n> \n> Likely so, though that does not hurt to show the intention to the\n> reader.\nAgree\n\n> \n> So I would like to suggest the attached patch for this first piece.\n> What do you think?\nI have not any objections\n\n> \n> It may also be an idea to use `git format-patch` when generating a\n> series of patches. That makes for easier reviews.\nThanks, will try\n\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW: http://www.sigaev.ru/\n\n\n",
"msg_date": "Thu, 14 Mar 2024 16:27:43 +0300",
"msg_from": "Teodor Sigaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 04:27:43PM +0300, Teodor Sigaev wrote:\n>> So I would like to suggest the attached patch for this first piece.\n>> What do you think?\n>\n> I have not any objections\n\nOkay, I've applied this piece for now. Not sure I'll have much room\nto look at the rest.\n--\nMichael",
"msg_date": "Fri, 15 Mar 2024 07:58:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "> Okay, I've applied this piece for now. Not sure I'll have much room\n> to look at the rest.\n\nThank you very much!\n\nRest of patches, rebased.\n\n-- \nTeodor Sigaev E-mail: [email protected]\n WWW: http://www.sigaev.ru/",
"msg_date": "Fri, 15 Mar 2024 13:57:13 +0300",
"msg_from": "Teodor Sigaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "> Rest of patches, rebased.\n\nHello,\nI read and tested only the first patch so far. Creation of temp\ntables and rollback in your script work 3-4 times faster with\n0001-type-cache.patch on my windows laptop.\n\nIn the patch I found a copy of the comment \"If it's domain over\ncomposite, reset flags...\". Can we move the reset flags operation\nand its comment into the invalidateCompositeTypeCacheEntry()\nfunction? This simplify the TypeCacheRelCallback() func, but\nadds two more IF statements when we need to clean up a cache\nentry for a specific relation. (diff attached).\n--\nRoman Zharkov",
"msg_date": "Fri, 29 Mar 2024 07:49:55 +0300",
"msg_from": "\n =?utf-8?q?=D0=96=D0=B0=D1=80=D0=BA=D0=BE=D0=B2_=D0=A0=D0=BE=D0=BC=D0=B0=D0=BD?=\n <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?utf-8?q?Re=3A?= type cache cleanup improvements"
},
{
"msg_contents": "On 3/15/24 17:57, Teodor Sigaev wrote:\n>> Okay, I've applied this piece for now. Not sure I'll have much room\n>> to look at the rest.\n> \n> Thank you very much!\nI have spent some time reviewing this feature. I think we can discuss \nand apply it step-by-step. So, the 0001-* patch is at this moment.\nThe feature addresses the issue of TypCache being bloated by intensive \nusage of non-standard types and domains. It adds significant overhead \nduring relcache invalidation by thoroughly scanning this hash table.\nIMO, this feature will be handy soon, as we already see some patches \nwhere TypCache is intensively used for storing composite types—for \nexample, look into solutions proposed in [1].\nOne of my main concerns with this feature is the possibility of lost \nentries, which could be mistakenly used by relations with the same oid \nin the future. This seems particularly possible in cases with multiple \ntemporary tables. The author has attempted to address this by replacing \nthe typrelid and type_id fields in the mapRelType on each call of \nlookup_type_cache. However, I believe we could further improve this by \nremoving the entry from mapRelType on invalidation, thus avoiding this \npotential issue.\nWhile reviewing the patch, I made some minor changes (see attachment) \nthat you're free to adopt or reject. However, it's crucial that the \npatch includes a detailed explanation, not just a single sentence, to \nensure everyone understands the changes.\nUpon closer inspection, I noticed that the current implementation only \ninvalidates the cache entry. While this is acceptable for standard \ntypes, it may not be sufficient to maintain numerous custom types (as in \nthe example in the initial letter) or in cases where whole-row vars are \nheavily used. In such scenarios, removing the entry and reducing the \nhash table's size might be more efficient.\nIn toto, the 0001-* patch looks good, and I would be glad to see it in \nthe core.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/CAKcux6ktu-8tefLWtQuuZBYFaZA83vUzuRd7c1YHC-yEWyYFpg%40mail.gmail.com\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional",
"msg_date": "Wed, 3 Apr 2024 13:07:27 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "Hi!\n\nOn Wed, Apr 3, 2024 at 9:07 AM Andrei Lepikhov\n<[email protected]> wrote:\n> On 3/15/24 17:57, Teodor Sigaev wrote:\n> >> Okay, I've applied this piece for now. Not sure I'll have much room\n> >> to look at the rest.\n> >\n> > Thank you very much!\n> I have spent some time reviewing this feature. I think we can discuss\n> and apply it step-by-step. So, the 0001-* patch is at this moment.\n> The feature addresses the issue of TypCache being bloated by intensive\n> usage of non-standard types and domains. It adds significant overhead\n> during relcache invalidation by thoroughly scanning this hash table.\n> IMO, this feature will be handy soon, as we already see some patches\n> where TypCache is intensively used for storing composite types—for\n> example, look into solutions proposed in [1].\n> One of my main concerns with this feature is the possibility of lost\n> entries, which could be mistakenly used by relations with the same oid\n> in the future. This seems particularly possible in cases with multiple\n> temporary tables. The author has attempted to address this by replacing\n> the typrelid and type_id fields in the mapRelType on each call of\n> lookup_type_cache. However, I believe we could further improve this by\n> removing the entry from mapRelType on invalidation, thus avoiding this\n> potential issue.\n> While reviewing the patch, I made some minor changes (see attachment)\n> that you're free to adopt or reject. However, it's crucial that the\n> patch includes a detailed explanation, not just a single sentence, to\n> ensure everyone understands the changes.\n> Upon closer inspection, I noticed that the current implementation only\n> invalidates the cache entry. While this is acceptable for standard\n> types, it may not be sufficient to maintain numerous custom types (as in\n> the example in the initial letter) or in cases where whole-row vars are\n> heavily used. In such scenarios, removing the entry and reducing the\n> hash table's size might be more efficient.\n> In toto, the 0001-* patch looks good, and I would be glad to see it in\n> the core.\n\nI've revised the patchset. First of all, I've re-ordered the patches.\n\n0001-0002 (former 0002-0003)\nComprises hash_search_with_hash_value() function and its application\nto avoid full hash iteration in InvalidateAttoptCacheCallback() and\nTypeCacheTypCallback(). I think this is quite straightforward\noptimization without negative side effects. I've revised comments,\ncommit message and did some code beautification. I'm going to push\nthis if no objections.\n\n0003 (former 0001)\nI've revised this patch. I think main concerns expressed in the\nthread about this path is that we don't have invalidation mechanism\nfor relid => typid map. Finally due to oid wraparound same relids\ncould get reused. That could lead to invalid entries in the map about\nexisting relids and typeids. This is rather messy, but I don't think\nthis could cause a material bug. The maps items are used only for\ncache invalidation. Extra invalidation doesn't cause a bug. If type\nwith same relid will be cached, then correspoding map item will be\noverridden, so no missing invalidation. However, I see the following\nreasons for keeping consistent state of relid => typid map.\n\n1) As the main use-case for this optimization is flood of temporary\ntables, it would be nice not let relid => typid map bloat in this\ncase. I see that TypeCacheHash would get bloated, because its entries\nare never deleted. 
However, I would prefer to not get this situation\neven worse.\n2) In future we may find some more use-cases for relid => typid map\nbesides cache invalidation. Keeping that in consistent state could be\nadvantage then.\n\nIn the attached patch, I'm keeping relid => typid map when\ncorresponding typentry have either TCFLAGS_HAVE_PG_TYPE_DATA, or\nTCFLAGS_OPERATOR_FLAGS, or tupdesc. Thus, when temporary table gets\ndeleted, we would invalidate the map item.\n\nIt will be also nice to get rid of iteration over all the cached\ndomain types in TypeCacheRelCallback(). However, this typically\nshouldn't be a problem since domain types are less tended to bloat.\nDomain types are created manually, unlike composite types which are\nautomatically created for every temporary table. We will probably\nneed to optimize this in future, but I don't feel this to be necessary\nin present patch.\n\nI think the revised 0003 requires review.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Mon, 5 Aug 2024 04:16:07 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
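For readers following along, the shape of such a relation-OID-to-composite-type-OID map in dynahash terms could look roughly like the sketch below. The entry layout and function names here are illustrative assumptions rather than the actual patch; only the table name RelIdToTypeIdCacheHash is taken from the thread.

```c
#include "postgres.h"
#include "utils/hsearch.h"

/* Hypothetical entry: the relation OID is the hash key (HASH_BLOBS). */
typedef struct RelIdToTypeIdCacheEntry
{
	Oid			relid;			/* pg_class OID of the composite's relation */
	Oid			composite_typid;	/* pg_type OID of the composite type */
} RelIdToTypeIdCacheEntry;

static HTAB *RelIdToTypeIdCacheHash = NULL;

static void
build_rel_to_type_map(void)
{
	HASHCTL		ctl;

	ctl.keysize = sizeof(Oid);
	ctl.entrysize = sizeof(RelIdToTypeIdCacheEntry);
	RelIdToTypeIdCacheHash = hash_create("RelIdToTypeIdCacheHash", 64,
										 &ctl, HASH_ELEM | HASH_BLOBS);
}

/* Record the mapping when a composite type's cache entry is filled in. */
static void
remember_rel_type(Oid relid, Oid typid)
{
	RelIdToTypeIdCacheEntry *entry;
	bool		found;

	entry = (RelIdToTypeIdCacheEntry *)
		hash_search(RelIdToTypeIdCacheHash, &relid, HASH_ENTER, &found);
	entry->composite_typid = typid;
}

/* On a relcache inval for relid, look up the single type to invalidate. */
static Oid
lookup_rel_type(Oid relid)
{
	RelIdToTypeIdCacheEntry *entry;
	bool		found;

	entry = (RelIdToTypeIdCacheEntry *)
		hash_search(RelIdToTypeIdCacheHash, &relid, HASH_FIND, &found);
	return found ? entry->composite_typid : InvalidOid;
}
```

With a map like this, TypeCacheRelCallback() can go straight to the affected TypeCacheEntry instead of walking the whole TypeCacheHash, which is where the large speedup on the temporary-table workload comes from; the cleanup discipline discussed above is about deleting these map entries once the corresponding typentry has nothing left to clear.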
{
"msg_contents": "On Mon, Aug 5, 2024 at 4:16 AM Alexander Korotkov <[email protected]> wrote:\n> I've revised the patchset. First of all, I've re-ordered the patches.\n>\n> 0001-0002 (former 0002-0003)\n> Comprises hash_search_with_hash_value() function and its application\n> to avoid full hash iteration in InvalidateAttoptCacheCallback() and\n> TypeCacheTypCallback(). I think this is quite straightforward\n> optimization without negative side effects. I've revised comments,\n> commit message and did some code beautification. I'm going to push\n> this if no objections.\n>\n> 0003 (former 0001)\n> I've revised this patch. I think main concerns expressed in the\n> thread about this path is that we don't have invalidation mechanism\n> for relid => typid map. Finally due to oid wraparound same relids\n> could get reused. That could lead to invalid entries in the map about\n> existing relids and typeids. This is rather messy, but I don't think\n> this could cause a material bug. The maps items are used only for\n> cache invalidation. Extra invalidation doesn't cause a bug. If type\n> with same relid will be cached, then correspoding map item will be\n> overridden, so no missing invalidation. However, I see the following\n> reasons for keeping consistent state of relid => typid map.\n>\n> 1) As the main use-case for this optimization is flood of temporary\n> tables, it would be nice not let relid => typid map bloat in this\n> case. I see that TypeCacheHash would get bloated, because its entries\n> are never deleted. However, I would prefer to not get this situation\n> even worse.\n> 2) In future we may find some more use-cases for relid => typid map\n> besides cache invalidation. Keeping that in consistent state could be\n> advantage then.\n>\n> In the attached patch, I'm keeping relid => typid map when\n> corresponding typentry have either TCFLAGS_HAVE_PG_TYPE_DATA, or\n> TCFLAGS_OPERATOR_FLAGS, or tupdesc. Thus, when temporary table gets\n> deleted, we would invalidate the map item.\n>\n> It will be also nice to get rid of iteration over all the cached\n> domain types in TypeCacheRelCallback(). However, this typically\n> shouldn't be a problem since domain types are less tended to bloat.\n> Domain types are created manually, unlike composite types which are\n> automatically created for every temporary table. We will probably\n> need to optimize this in future, but I don't feel this to be necessary\n> in present patch.\n>\n> I think the revised 0003 requires review.\n\nThe rebased remaining patch is attached.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Tue, 20 Aug 2024 22:00:53 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Tue, 20 Aug 2024 at 23:01, Alexander Korotkov <[email protected]>\nwrote:\n\n> On Mon, Aug 5, 2024 at 4:16 AM Alexander Korotkov <[email protected]>\n> wrote:\n> > I've revised the patchset. First of all, I've re-ordered the patches.\n> >\n> > 0001-0002 (former 0002-0003)\n> > Comprises hash_search_with_hash_value() function and its application\n> > to avoid full hash iteration in InvalidateAttoptCacheCallback() and\n> > TypeCacheTypCallback(). I think this is quite straightforward\n> > optimization without negative side effects. I've revised comments,\n> > commit message and did some code beautification. I'm going to push\n> > this if no objections.\n> >\n> > 0003 (former 0001)\n> > I've revised this patch. I think main concerns expressed in the\n> > thread about this path is that we don't have invalidation mechanism\n> > for relid => typid map. Finally due to oid wraparound same relids\n> > could get reused. That could lead to invalid entries in the map about\n> > existing relids and typeids. This is rather messy, but I don't think\n> > this could cause a material bug. The maps items are used only for\n> > cache invalidation. Extra invalidation doesn't cause a bug. If type\n> > with same relid will be cached, then correspoding map item will be\n> > overridden, so no missing invalidation. However, I see the following\n> > reasons for keeping consistent state of relid => typid map.\n> >\n> > 1) As the main use-case for this optimization is flood of temporary\n> > tables, it would be nice not let relid => typid map bloat in this\n> > case. I see that TypeCacheHash would get bloated, because its entries\n> > are never deleted. However, I would prefer to not get this situation\n> > even worse.\n> > 2) In future we may find some more use-cases for relid => typid map\n> > besides cache invalidation. Keeping that in consistent state could be\n> > advantage then.\n> >\n> > In the attached patch, I'm keeping relid => typid map when\n> > corresponding typentry have either TCFLAGS_HAVE_PG_TYPE_DATA, or\n> > TCFLAGS_OPERATOR_FLAGS, or tupdesc. Thus, when temporary table gets\n> > deleted, we would invalidate the map item.\n> >\n> > It will be also nice to get rid of iteration over all the cached\n> > domain types in TypeCacheRelCallback(). However, this typically\n> > shouldn't be a problem since domain types are less tended to bloat.\n> > Domain types are created manually, unlike composite types which are\n> > automatically created for every temporary table. We will probably\n> > need to optimize this in future, but I don't feel this to be necessary\n> > in present patch.\n> >\n> > I think the revised 0003 requires review.\n>\n> The rebased remaining patch is attached.\n>\nI've looked at patch v8.\n\n1.\nIn function check_insert_rel_type_cache() the block:\n\n+#ifdef USE_ASSERT_CHECKING\n+\n+ /*\n+ * In assert-enabled builds otherwise check for\nRelIdToTypeIdCacheHash\n+ * entry if it should exist.\n+ */\n+ if (!(typentry->flags & TCFLAGS_OPERATOR_FLAGS) &&\n+ typentry->tupDesc == NULL)\n+ {\n+ bool found;\n+\n+ (void) hash_search(RelIdToTypeIdCacheHash,\n+ &typentry->typrelid,\n+ HASH_FIND, &found);\n+ Assert(found);\n+ }\n+#endif\n\nAs I understand it does HASH_FIND after the same value just inserted by\nHASH_ENT\nER above under the same if condition:\n\nif (!(typentry->flags & TCFLAGS_OPERATOR_FLAGS) &&\n+ typentry->tupDesc == NULL)\n\nWhy do we need to do this re-check HASH_ENTER? 
Also I see \"otherwise\" in\ncomment in a quoted block, but if condition is the same.\n\n2.\nFor function check_delete_rel_type_cache():\nI'd modify the block:\n+#ifdef USE_ASSERT_CHECKING\n+\n+ /*\n+ * In assert-enabled builds otherwise check for\nRelIdToTypeIdCacheHash\n+ * entry if it should exist.\n+ */\n+ if ((typentry->flags & TCFLAGS_HAVE_PG_TYPE_DATA) ||\n+ (typentry->flags & TCFLAGS_OPERATOR_FLAGS) ||\n+ typentry->tupDesc != NULL)\n+ {\n+ bool found;\n+\n+ (void) hash_search(RelIdToTypeIdCacheHash,\n+ &typentry->typrelid,\n+ HASH_FIND, &found);\n+ Assert(found);\n+ }\n+#endif\n\nas:\n+\n+ /*\n+ * In assert-enabled builds otherwise check for\nRelIdToTypeIdCacheHash\n+ * entry if it should exist.\n+ */\n+ else\n+{\n+ #ifdef USE_ASSERT_CHECKING\n+ bool found;\n+\n+ (void) hash_search(RelIdToTypeIdCacheHash,\n+ &typentry->typrelid,\n+ HASH_FIND, &found);\n+ Assert(found);\n+#endif\n+}\n\n3. I think check_delete_rel_type_cache and check_insert_rel_type_cache are\nbetter to be renamed to be more clear, though I don't have exact proposals\nyet,\n4. I haven't looked into comments, though I'd recommend oid -> OID\nreplacement in the comments.\n\nThank you for working on this patchset!\n\nRegards,\nPavel Borisov\nSupabase",
"msg_date": "Wed, 21 Aug 2024 17:28:13 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "Hi, Pavel!\n\n\nOn Wed, Aug 21, 2024 at 4:28 PM Pavel Borisov <[email protected]> wrote:\n> I've looked at patch v8.\n>\n> 1.\n> In function check_insert_rel_type_cache() the block:\n>\n> +#ifdef USE_ASSERT_CHECKING\n> +\n> + /*\n> + * In assert-enabled builds otherwise check for RelIdToTypeIdCacheHash\n> + * entry if it should exist.\n> + */\n> + if (!(typentry->flags & TCFLAGS_OPERATOR_FLAGS) &&\n> + typentry->tupDesc == NULL)\n> + {\n> + bool found;\n> +\n> + (void) hash_search(RelIdToTypeIdCacheHash,\n> + &typentry->typrelid,\n> + HASH_FIND, &found);\n> + Assert(found);\n> + }\n> +#endif\n>\n> As I understand it does HASH_FIND after the same value just inserted by HASH_ENT\n> ER above under the same if condition:\n>\n> if (!(typentry->flags & TCFLAGS_OPERATOR_FLAGS) &&\n> + typentry->tupDesc == NULL)\n>\n> Why do we need to do this re-check HASH_ENTER? Also I see \"otherwise\" in comment in a quoted block, but if condition is the same.\n\nYep, these are remains from one of my previous attempt. No sense to\ncheck for HASH_FIND right after HASH_ENTER. Removed.\n\n> 2.\n> For function check_delete_rel_type_cache():\n> I'd modify the block:\n> +#ifdef USE_ASSERT_CHECKING\n> +\n> + /*\n> + * In assert-enabled builds otherwise check for RelIdToTypeIdCacheHash\n> + * entry if it should exist.\n> + */\n> + if ((typentry->flags & TCFLAGS_HAVE_PG_TYPE_DATA) ||\n> + (typentry->flags & TCFLAGS_OPERATOR_FLAGS) ||\n> + typentry->tupDesc != NULL)\n> + {\n> + bool found;\n> +\n> + (void) hash_search(RelIdToTypeIdCacheHash,\n> + &typentry->typrelid,\n> + HASH_FIND, &found);\n> + Assert(found);\n> + }\n> +#endif\n>\n> as:\n> +\n> + /*\n> + * In assert-enabled builds otherwise check for RelIdToTypeIdCacheHash\n> + * entry if it should exist.\n> + */\n> + else\n> +{\n> + #ifdef USE_ASSERT_CHECKING\n> + bool found;\n> +\n> + (void) hash_search(RelIdToTypeIdCacheHash,\n> + &typentry->typrelid,\n> + HASH_FIND, &found);\n> + Assert(found);\n> +#endif\n> +}\n\nChanged in the way you proposed, except I put the comment inside the\n#ifdef. I this it's easier to understand this way.\n\n> 3. I think check_delete_rel_type_cache and check_insert_rel_type_cache are better to be renamed to be more clear, though I don't have exact proposals yet,\n\nRenamed to delete_rel_type_cache_if_needed and\ninsert_rel_type_cache_if_needed. I've checked that\n\n> 4. I haven't looked into comments, though I'd recommend oid -> OID replacement in the comments.\n\nI've changed oid -> OID in the comments and in the commit message.\n\n> Thank you for working on this patchset!\n\nThank you for review!\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Wed, 21 Aug 2024 18:28:57 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Wed, 21 Aug 2024 at 19:29, Alexander Korotkov <[email protected]>\nwrote:\n\n> Hi, Pavel!\n>\n>\n> On Wed, Aug 21, 2024 at 4:28 PM Pavel Borisov <[email protected]>\n> wrote:\n> > I've looked at patch v8.\n> >\n> > 1.\n> > In function check_insert_rel_type_cache() the block:\n> >\n> > +#ifdef USE_ASSERT_CHECKING\n> > +\n> > + /*\n> > + * In assert-enabled builds otherwise check for\n> RelIdToTypeIdCacheHash\n> > + * entry if it should exist.\n> > + */\n> > + if (!(typentry->flags & TCFLAGS_OPERATOR_FLAGS) &&\n> > + typentry->tupDesc == NULL)\n> > + {\n> > + bool found;\n> > +\n> > + (void) hash_search(RelIdToTypeIdCacheHash,\n> > + &typentry->typrelid,\n> > + HASH_FIND, &found);\n> > + Assert(found);\n> > + }\n> > +#endif\n> >\n> > As I understand it does HASH_FIND after the same value just inserted by\n> HASH_ENT\n> > ER above under the same if condition:\n> >\n> > if (!(typentry->flags & TCFLAGS_OPERATOR_FLAGS) &&\n> > + typentry->tupDesc == NULL)\n> >\n> > Why do we need to do this re-check HASH_ENTER? Also I see \"otherwise\" in\n> comment in a quoted block, but if condition is the same.\n>\n> Yep, these are remains from one of my previous attempt. No sense to\n> check for HASH_FIND right after HASH_ENTER. Removed.\n>\n> > 2.\n> > For function check_delete_rel_type_cache():\n> > I'd modify the block:\n> > +#ifdef USE_ASSERT_CHECKING\n> > +\n> > + /*\n> > + * In assert-enabled builds otherwise check for\n> RelIdToTypeIdCacheHash\n> > + * entry if it should exist.\n> > + */\n> > + if ((typentry->flags & TCFLAGS_HAVE_PG_TYPE_DATA) ||\n> > + (typentry->flags & TCFLAGS_OPERATOR_FLAGS) ||\n> > + typentry->tupDesc != NULL)\n> > + {\n> > + bool found;\n> > +\n> > + (void) hash_search(RelIdToTypeIdCacheHash,\n> > + &typentry->typrelid,\n> > + HASH_FIND, &found);\n> > + Assert(found);\n> > + }\n> > +#endif\n> >\n> > as:\n> > +\n> > + /*\n> > + * In assert-enabled builds otherwise check for\n> RelIdToTypeIdCacheHash\n> > + * entry if it should exist.\n> > + */\n> > + else\n> > +{\n> > + #ifdef USE_ASSERT_CHECKING\n> > + bool found;\n> > +\n> > + (void) hash_search(RelIdToTypeIdCacheHash,\n> > + &typentry->typrelid,\n> > + HASH_FIND, &found);\n> > + Assert(found);\n> > +#endif\n> > +}\n>\n> Changed in the way you proposed, except I put the comment inside the\n> #ifdef. I this it's easier to understand this way.\n>\n> > 3. I think check_delete_rel_type_cache and check_insert_rel_type_cache\n> are better to be renamed to be more clear, though I don't have exact\n> proposals yet,\n>\n> Renamed to delete_rel_type_cache_if_needed and\n> insert_rel_type_cache_if_needed. I've checked that\n>\n> > 4. I haven't looked into comments, though I'd recommend oid -> OID\n> replacement in the comments.\n>\n> I've changed oid -> OID in the comments and in the commit message.\n>\n> > Thank you for working on this patchset!\n>\n> Thank you for review!\n>\n\nLooked at v9:\nPatch looks good to me. I'd only suggest comments changes:\n\n\"The map from relation's OID to the corresponding composite type OID\" ->\n\"The mapping of relation's OID to the corresponding composite type OID\"\n\"We're keeping the map entry when corresponding typentry have either\nTCFLAGS_HAVE_PG_TYPE_DATA, or TCFLAGS_OPERATOR_FLAGS, or tupdesc. 
That is\nwe're keeping map entry if the entry has something to clear.\" -> \"We're\nkeeping the map entry when the corresponding typentry has something to\nclear i.e it has either TCFLAGS_HAVE_PG_TYPE_DATA, or\nTCFLAGS_OPERATOR_FLAGS, or tupdesc.\"\n\"Invalidate particular TypeCacheEntry on Relcache inval callback\" - remove\nextra tabs before. Maybe also add empty line above.\n\"Typically shouldn't be a problem\" -> \"Typically this shouldn't affect\nperformance\"\n\"Relid = 0, so we need\" -> \"Relid is invalid. By convention we need\"\n\"if cleaned TCFLAGS_HAVE_PG_TYPE_DATA flag\" -> \"if we cleaned\nTCFLAGS_HAVE_PG_TYPE_DATA flag previously\"\n\"+/*\n+ * Delete entry RelIdToTypeIdCacheHash if needed after resetting of the\n+ * TCFLAGS_HAVE_PG_TYPE_DATA flag, or any of TCFLAGS_OPERATOR_FLAGS flags,\n+ * or tupDesc if needed.\" - remove one \"if needed\"\n\nRegards,\nPavel Borisov\nSupabase",
"msg_date": "Thu, 22 Aug 2024 14:02:21 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "On 21/8/2024 17:28, Alexander Korotkov wrote:\n> \n> I've changed oid -> OID in the comments and in the commit message.\nI passed through the patch again: no objections and +1 to the changes of \ncomments proposed by Pavel.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Thu, 22 Aug 2024 14:02:42 +0200",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "Hi!\n\nOn Thu, Aug 22, 2024 at 1:02 PM Pavel Borisov <[email protected]> wrote:\nLooked at v9:\n> Patch looks good to me. I'd only suggest comments changes:\n>\n> \"The map from relation's OID to the corresponding composite type OID\" -> \"The mapping of relation's OID to the corresponding composite type OID\"\n> \"We're keeping the map entry when corresponding typentry have either TCFLAGS_HAVE_PG_TYPE_DATA, or TCFLAGS_OPERATOR_FLAGS, or tupdesc. That is we're keeping map entry if the entry has something to clear.\" -> \"We're keeping the map entry when the corresponding typentry has something to clear i.e it has either TCFLAGS_HAVE_PG_TYPE_DATA, or TCFLAGS_OPERATOR_FLAGS, or tupdesc.\"\n> \"Invalidate particular TypeCacheEntry on Relcache inval callback\" - remove extra tabs before. Maybe also add empty line above.\n> \"Typically shouldn't be a problem\" -> \"Typically this shouldn't affect performance\"\n> \"Relid = 0, so we need\" -> \"Relid is invalid. By convention we need\"\n> \"if cleaned TCFLAGS_HAVE_PG_TYPE_DATA flag\" -> \"if we cleaned TCFLAGS_HAVE_PG_TYPE_DATA flag previously\"\n> \"+/*\n> + * Delete entry RelIdToTypeIdCacheHash if needed after resetting of the\n> + * TCFLAGS_HAVE_PG_TYPE_DATA flag, or any of TCFLAGS_OPERATOR_FLAGS flags,\n> + * or tupDesc if needed.\" - remove one \"if needed\"\n\nThank you for your feedback. I've integrated all your edits except\nthe formatting change of InvalidateCompositeTypeCacheEntry() header\ncomment. I think the functions below have the same formatting of\nheader comments, and it's not necessary to change format.\n\nIf no objections, I'm planning to push this after reverting PARTITION\nSPLIT/MERGE.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Thu, 22 Aug 2024 19:52:32 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "Hello Alexander,\n\n22.08.2024 19:52, Alexander Korotkov wrotd:\n> If no objections, I'm planning to push this after reverting PARTITION\n> SPLIT/MERGE.\n>\n\nPlease try to perform `make check` against a CLOBBER_CACHE_ALWAYS build.\ntrilobite failed it:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2024-08-25%2005%3A22%3A07\n\nand I'm observing the same locally:\n...\n#5 0x00005636d37555f8 in ExceptionalCondition (conditionName=0x5636d39b1940 \"found\",\n fileName=0x5636d39b1308 \"typcache.c\", lineNumber=3077) at assert.c:66\n#6 0x00005636d37554a4 in delete_rel_type_cache_if_needed (typentry=0x5636d41d5d10) at typcache.c:3077\n#7 0x00005636d3754063 in InvalidateCompositeTypeCacheEntry (typentry=0x5636d41d5d10) at typcache.c:2355\n#8 0x00005636d37541d3 in TypeCacheRelCallback (arg=0, relid=0) at typcache.c:2441\n...\n\n(gdb) f 6\n#6 0x00005636d37554a4 in delete_rel_type_cache_if_needed (typentry=0x5636d41d5d10) at typcache.c:3077\n3077 Assert(found);\n(gdb) p found\n$1 = false\n\n(This Assert is introduced by c14d4acb8.)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 25 Aug 2024 22:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "On Sun, Aug 25, 2024 at 10:00 PM Alexander Lakhin <[email protected]> wrote:\n> 22.08.2024 19:52, Alexander Korotkov wrotd:\n> > If no objections, I'm planning to push this after reverting PARTITION\n> > SPLIT/MERGE.\n> >\n>\n> Please try to perform `make check` against a CLOBBER_CACHE_ALWAYS build.\n> trilobite failed it:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2024-08-25%2005%3A22%3A07\n>\n> and I'm observing the same locally:\n> ...\n> #5 0x00005636d37555f8 in ExceptionalCondition (conditionName=0x5636d39b1940 \"found\",\n> fileName=0x5636d39b1308 \"typcache.c\", lineNumber=3077) at assert.c:66\n> #6 0x00005636d37554a4 in delete_rel_type_cache_if_needed (typentry=0x5636d41d5d10) at typcache.c:3077\n> #7 0x00005636d3754063 in InvalidateCompositeTypeCacheEntry (typentry=0x5636d41d5d10) at typcache.c:2355\n> #8 0x00005636d37541d3 in TypeCacheRelCallback (arg=0, relid=0) at typcache.c:2441\n> ...\n>\n> (gdb) f 6\n> #6 0x00005636d37554a4 in delete_rel_type_cache_if_needed (typentry=0x5636d41d5d10) at typcache.c:3077\n> 3077 Assert(found);\n> (gdb) p found\n> $1 = false\n>\n> (This Assert is introduced by c14d4acb8.)\n\nThank you for noticing. I'm checking this.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Sun, 25 Aug 2024 22:21:19 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "On Sun, Aug 25, 2024 at 10:21 PM Alexander Korotkov\n<[email protected]> wrote:\n> On Sun, Aug 25, 2024 at 10:00 PM Alexander Lakhin <[email protected]> wrote:\n> > 22.08.2024 19:52, Alexander Korotkov wrotd:\n> > > If no objections, I'm planning to push this after reverting PARTITION\n> > > SPLIT/MERGE.\n> > >\n> >\n> > Please try to perform `make check` against a CLOBBER_CACHE_ALWAYS build.\n> > trilobite failed it:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2024-08-25%2005%3A22%3A07\n> >\n> > and I'm observing the same locally:\n> > ...\n> > #5 0x00005636d37555f8 in ExceptionalCondition (conditionName=0x5636d39b1940 \"found\",\n> > fileName=0x5636d39b1308 \"typcache.c\", lineNumber=3077) at assert.c:66\n> > #6 0x00005636d37554a4 in delete_rel_type_cache_if_needed (typentry=0x5636d41d5d10) at typcache.c:3077\n> > #7 0x00005636d3754063 in InvalidateCompositeTypeCacheEntry (typentry=0x5636d41d5d10) at typcache.c:2355\n> > #8 0x00005636d37541d3 in TypeCacheRelCallback (arg=0, relid=0) at typcache.c:2441\n> > ...\n> >\n> > (gdb) f 6\n> > #6 0x00005636d37554a4 in delete_rel_type_cache_if_needed (typentry=0x5636d41d5d10) at typcache.c:3077\n> > 3077 Assert(found);\n> > (gdb) p found\n> > $1 = false\n> >\n> > (This Assert is introduced by c14d4acb8.)\n>\n> Thank you for noticing. I'm checking this.\n\nI didn't take into account that TypeCacheEntry could be invalidated\nwhile lookup_type_cache() does syscache lookups. When I realized that\nI was curious on how does it currently work. It appears that type\ncache invalidation mostly only clears the flags while values are\nremaining in place and still available for lookup_type_cache() caller.\nTypeCacheEntry.tupDesc is invalidated directly, and it has guarantee\nto survive only because we don't do any syscache lookups for composite\ndata types later in lookup_type_cache(). I'm becoming less fan of how\nthis works... I think these aspects needs to be at least documented\nin details.\n\nRegarding c14d4acb8, it appears to require redesign. I'm going to revert it.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Mon, 26 Aug 2024 00:22:03 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "On 25/8/2024 23:22, Alexander Korotkov wrote:\n> On Sun, Aug 25, 2024 at 10:21 PM Alexander Korotkov\n>>> (This Assert is introduced by c14d4acb8.)\n>>\n>> Thank you for noticing. I'm checking this.\n> \n> I didn't take into account that TypeCacheEntry could be invalidated\n> while lookup_type_cache() does syscache lookups. When I realized that\n> I was curious on how does it currently work. It appears that type\n> cache invalidation mostly only clears the flags while values are\n> remaining in place and still available for lookup_type_cache() caller.\n> TypeCacheEntry.tupDesc is invalidated directly, and it has guarantee\n> to survive only because we don't do any syscache lookups for composite\n> data types later in lookup_type_cache(). I'm becoming less fan of how\n> this works... I think these aspects needs to be at least documented\n> in details.\n> \n> Regarding c14d4acb8, it appears to require redesign. I'm going to revert it.\nSorry, but I don't understand your point.\nLet's refocus on the problem at hand. The issue arose when the \nTypeCacheTypCallback and the TypeCacheRelCallback were executed in \nsequence within InvalidateSystemCachesExtended.\nThe first callback cleaned the flags TCFLAGS_HAVE_PG_TYPE_DATA and \nTCFLAGS_CHECKED_DOMAIN_CONSTRAINTS. But the call of the second callback \nchecks the typentry->tupDesc and, because it wasn't NULL, attempted to \nremove this record a second time.\nI think there is no case for redesign, but we have a mess in \ninsertion/deletion conditions.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 08:37:50 +0200",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
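A minimal sketch of the point Andrei makes above: the map-entry deletion has to tolerate the case where an earlier callback in the same invalidation burst already removed the entry, so one way to express it is a plain lookup-and-remove with no Assert on the result. RelIdToTypeIdCacheHash and the relation-OID key come from the patch under discussion; the function name below is illustrative, not the patch's code, and the thread goes on to debate whether a more principled fix is needed.

/*
 * Illustrative only: drop the relid -> composite-type mapping if it is
 * still there, and treat "not found" as normal, since a previous callback
 * (e.g. TypeCacheTypCallback followed by TypeCacheRelCallback) may already
 * have cleaned it up.
 */
static void
delete_rel_type_cache_entry(Oid relid)
{
    bool        found;

    (void) hash_search(RelIdToTypeIdCacheHash, &relid, HASH_REMOVE, &found);
    /* 'found' may legitimately be false here; no Assert(found) */
}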
{
"msg_contents": "On Mon, Aug 26, 2024 at 9:37 AM Andrei Lepikhov <[email protected]> wrote:\n> On 25/8/2024 23:22, Alexander Korotkov wrote:\n> > On Sun, Aug 25, 2024 at 10:21 PM Alexander Korotkov\n> >>> (This Assert is introduced by c14d4acb8.)\n> >>\n> >> Thank you for noticing. I'm checking this.\n> >\n> > I didn't take into account that TypeCacheEntry could be invalidated\n> > while lookup_type_cache() does syscache lookups. When I realized that\n> > I was curious on how does it currently work. It appears that type\n> > cache invalidation mostly only clears the flags while values are\n> > remaining in place and still available for lookup_type_cache() caller.\n> > TypeCacheEntry.tupDesc is invalidated directly, and it has guarantee\n> > to survive only because we don't do any syscache lookups for composite\n> > data types later in lookup_type_cache(). I'm becoming less fan of how\n> > this works... I think these aspects needs to be at least documented\n> > in details.\n> >\n> > Regarding c14d4acb8, it appears to require redesign. I'm going to revert it.\n> Sorry, but I don't understand your point.\n> Let's refocus on the problem at hand. The issue arose when the\n> TypeCacheTypCallback and the TypeCacheRelCallback were executed in\n> sequence within InvalidateSystemCachesExtended.\n> The first callback cleaned the flags TCFLAGS_HAVE_PG_TYPE_DATA and\n> TCFLAGS_CHECKED_DOMAIN_CONSTRAINTS. But the call of the second callback\n> checks the typentry->tupDesc and, because it wasn't NULL, attempted to\n> remove this record a second time.\n> I think there is no case for redesign, but we have a mess in\n> insertion/deletion conditions.\n\nYes, it's possible to repair the current approach. But we need to do\nthis correct, not just \"not failing with current usages\". Then we\nneed to call insert_rel_type_cache_if_needed() not just when we set\nTCFLAGS_HAVE_PG_TYPE_DATA flag, but every time we set any of\nTCFLAGS_OPERATOR_FLAGS or tupDesc. That's a lot of places, not as\nsimple and elegant as it was planned. This is why I wonder if there\nis a better approach.\n\nSecondly, I'm not terribly happy with current state of type cache.\nThe caller of lookup_type_cache() might get already invalidated data.\nThis probably OK, because caller probably hold locks on dependent\nobjects to guarantee that relevant properties of type actually\npersists. At very least this should be documented, but it doesn't\nseem so. Setting of tupdesc is sensitive to its order of execution.\nThat feels quite fragile to me, and not documented either. I think\nthis area needs improvements before we push additional functionality\nthere.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Mon, 26 Aug 2024 11:26:26 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 11:26 AM Alexander Korotkov\n<[email protected]> wrote:\n>\n> On Mon, Aug 26, 2024 at 9:37 AM Andrei Lepikhov <[email protected]> wrote:\n> > On 25/8/2024 23:22, Alexander Korotkov wrote:\n> > > On Sun, Aug 25, 2024 at 10:21 PM Alexander Korotkov\n> > >>> (This Assert is introduced by c14d4acb8.)\n> > >>\n> > >> Thank you for noticing. I'm checking this.\n> > >\n> > > I didn't take into account that TypeCacheEntry could be invalidated\n> > > while lookup_type_cache() does syscache lookups. When I realized that\n> > > I was curious on how does it currently work. It appears that type\n> > > cache invalidation mostly only clears the flags while values are\n> > > remaining in place and still available for lookup_type_cache() caller.\n> > > TypeCacheEntry.tupDesc is invalidated directly, and it has guarantee\n> > > to survive only because we don't do any syscache lookups for composite\n> > > data types later in lookup_type_cache(). I'm becoming less fan of how\n> > > this works... I think these aspects needs to be at least documented\n> > > in details.\n> > >\n> > > Regarding c14d4acb8, it appears to require redesign. I'm going to revert it.\n> > Sorry, but I don't understand your point.\n> > Let's refocus on the problem at hand. The issue arose when the\n> > TypeCacheTypCallback and the TypeCacheRelCallback were executed in\n> > sequence within InvalidateSystemCachesExtended.\n> > The first callback cleaned the flags TCFLAGS_HAVE_PG_TYPE_DATA and\n> > TCFLAGS_CHECKED_DOMAIN_CONSTRAINTS. But the call of the second callback\n> > checks the typentry->tupDesc and, because it wasn't NULL, attempted to\n> > remove this record a second time.\n> > I think there is no case for redesign, but we have a mess in\n> > insertion/deletion conditions.\n>\n> Yes, it's possible to repair the current approach. But we need to do\n> this correct, not just \"not failing with current usages\". Then we\n> need to call insert_rel_type_cache_if_needed() not just when we set\n> TCFLAGS_HAVE_PG_TYPE_DATA flag, but every time we set any of\n> TCFLAGS_OPERATOR_FLAGS or tupDesc. That's a lot of places, not as\n> simple and elegant as it was planned. This is why I wonder if there\n> is a better approach.\n>\n> Secondly, I'm not terribly happy with current state of type cache.\n> The caller of lookup_type_cache() might get already invalidated data.\n> This probably OK, because caller probably hold locks on dependent\n> objects to guarantee that relevant properties of type actually\n> persists. At very least this should be documented, but it doesn't\n> seem so. Setting of tupdesc is sensitive to its order of execution.\n> That feels quite fragile to me, and not documented either. I think\n> this area needs improvements before we push additional functionality\n> there.\n\nI see fdd965d074 added a proper handling for concurrent invalidation\nfor relation cache. If a concurrent invalidation occurs, we retry\nbuilding a relation descriptor. Thus, we end up with returning of a\nvalid relation descriptor to caller. I wonder if we can take the same\napproach to type cache. That would make the whole type cache more\nconsistent and less fragile. Also, this patch will be simpler.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Thu, 29 Aug 2024 12:01:56 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
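The retry idea borrowed from the relation cache can be pictured roughly as below. This is only a sketch of the general pattern, not how fdd965d074 is actually implemented; the counter variable and the fill_type_cache_entry() helper are hypothetical names.

/* Hypothetical counter bumped by the typcache invalidation callbacks. */
static uint64 typcache_inval_count = 0;

static void
fill_entry_with_retry(TypeCacheEntry *typentry, int flags)
{
    for (;;)
    {
        uint64      count_before = typcache_inval_count;

        /* May do syscache lookups and hence accept invalidation messages. */
        fill_type_cache_entry(typentry, flags);

        if (count_before == typcache_inval_count)
            break;      /* nothing changed under us; entry is consistent */
        /* otherwise loop and rebuild the requested parts from scratch */
    }
}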
{
"msg_contents": "On 29/8/2024 11:01, Alexander Korotkov wrote:\n> On Mon, Aug 26, 2024 at 11:26 AM Alexander Korotkov\n>> Secondly, I'm not terribly happy with current state of type cache.\n>> The caller of lookup_type_cache() might get already invalidated data.\n>> This probably OK, because caller probably hold locks on dependent\n>> objects to guarantee that relevant properties of type actually\n>> persists. At very least this should be documented, but it doesn't\n>> seem so. Setting of tupdesc is sensitive to its order of execution.\n>> That feels quite fragile to me, and not documented either. I think\n>> this area needs improvements before we push additional functionality\n>> there.\n> \n> I see fdd965d074 added a proper handling for concurrent invalidation\n> for relation cache. If a concurrent invalidation occurs, we retry\n> building a relation descriptor. Thus, we end up with returning of a\n> valid relation descriptor to caller. I wonder if we can take the same\n> approach to type cache. That would make the whole type cache more\n> consistent and less fragile. Also, this patch will be simpler.\nI think I understand the solution from the commit fdd965d074.\nJust for the record, you mentioned invalidation inside the \nlookup_type_cache above. Passing through the code, I found the only \nplace for such a case - the call of the GetDefaultOpClass, which \ntriggers the opening of the relation pg_opclass, which can cause an \nAcceptInvalidationMessages call. Did you mean this case, or does a wider \nfield of cases exist here?\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Sat, 31 Aug 2024 21:33:23 +0200",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "On Sat, Aug 31, 2024 at 10:33 PM Andrei Lepikhov <[email protected]> wrote:\n> On 29/8/2024 11:01, Alexander Korotkov wrote:\n> > On Mon, Aug 26, 2024 at 11:26 AM Alexander Korotkov\n> >> Secondly, I'm not terribly happy with current state of type cache.\n> >> The caller of lookup_type_cache() might get already invalidated data.\n> >> This probably OK, because caller probably hold locks on dependent\n> >> objects to guarantee that relevant properties of type actually\n> >> persists. At very least this should be documented, but it doesn't\n> >> seem so. Setting of tupdesc is sensitive to its order of execution.\n> >> That feels quite fragile to me, and not documented either. I think\n> >> this area needs improvements before we push additional functionality\n> >> there.\n> >\n> > I see fdd965d074 added a proper handling for concurrent invalidation\n> > for relation cache. If a concurrent invalidation occurs, we retry\n> > building a relation descriptor. Thus, we end up with returning of a\n> > valid relation descriptor to caller. I wonder if we can take the same\n> > approach to type cache. That would make the whole type cache more\n> > consistent and less fragile. Also, this patch will be simpler.\n> I think I understand the solution from the commit fdd965d074.\n> Just for the record, you mentioned invalidation inside the\n> lookup_type_cache above. Passing through the code, I found the only\n> place for such a case - the call of the GetDefaultOpClass, which\n> triggers the opening of the relation pg_opclass, which can cause an\n> AcceptInvalidationMessages call. Did you mean this case, or does a wider\n> field of cases exist here?\n\nI've tried to implement handling of concurrent invalidation similar to\ncommit fdd965d074. However that appears to be more difficult that I\nthought, because for some datatypes like arrays, ranges etc we might\nneed fill the element type and reference it. So, I decided to\ncontinue with the current approach but borrowing some ideas from\nfdd965d074. The revised patchset attached.\n\n0001 - adds comment about concurrent invalidation handling\n0002 - revised c14d4acb8. Now we track type oids, whose\nTypeCacheEntry's filing is in-progress. Add entry to\nRelIdToTypeIdCacheHash at the end of lookup_type_cache() or on the\ntransaction abort. During invalidation don't assert\nRelIdToTypeIdCacheHash to be here if TypeCacheEntry is in-progress.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Fri, 13 Sep 2024 02:38:40 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
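The in-progress bookkeeping described above could look roughly like the sketch below. The list variable and the begin/finish helpers are illustrative names; insert_rel_type_cache_if_needed() is the function named earlier in the thread. The abort path here is simplified to just forgetting the in-progress list, and a real version would also need to pick a suitable memory context for it.

static List *in_progress_type_ids = NIL;    /* type OIDs still being filled */

static void
begin_type_cache_fill(Oid typid)
{
    in_progress_type_ids = lappend_oid(in_progress_type_ids, typid);
}

static void
finish_type_cache_fill(TypeCacheEntry *typentry)
{
    /* Entry is fully built: now it is safe to add the relid -> typid map. */
    insert_rel_type_cache_if_needed(typentry);
    in_progress_type_ids = list_delete_oid(in_progress_type_ids,
                                           typentry->type_id);
}

/* Registered once with RegisterXactCallback(type_cache_xact_cb, NULL). */
static void
type_cache_xact_cb(XactEvent event, void *arg)
{
    if (event == XACT_EVENT_ABORT && in_progress_type_ids != NIL)
    {
        list_free(in_progress_type_ids);
        in_progress_type_ids = NIL;
    }
}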
{
"msg_contents": "On 13/9/2024 01:38, Alexander Korotkov wrote:\n> I've tried to implement handling of concurrent invalidation similar to\n> commit fdd965d074. However that appears to be more difficult that I\n> thought, because for some datatypes like arrays, ranges etc we might\n> need fill the element type and reference it. So, I decided to\n> continue with the current approach but borrowing some ideas from\n> fdd965d074. The revised patchset attached.\nLet me rephrase the issue in more straightforward terms to ensure we are \nall clear on the problem:\nThe critical problem of the typcache lookup on not-yet-locked data is \nthat it can lead to an inconsistent state of the TypEntry, potentially \ncausing disruptions in the DBMS's operations, correct?\nLet's exemplify this statement. By filling typentry's lt_opr, eq_opr, \nand gt_opr fields, we access the AMOPSTRATEGY cache. One operation can \nsuccessfully fetch data from the cache, but another can miss data and \ntouch the catalogue table, causing invalidations. In this case, we can \nget an inconsistent set of operators. Do I understand the problem \nstatement correctly?\n\nIf this view is correct, your derived approach should work fine if all \nnecessary callbacks are registered. I see that at least AMOPSTRATEGY and \nPROCOID were missed at the moment of the typcache initialization.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Wed, 18 Sep 2024 16:10:08 +0200",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 5:10 PM Andrei Lepikhov <[email protected]> wrote:\n> On 13/9/2024 01:38, Alexander Korotkov wrote:\n> > I've tried to implement handling of concurrent invalidation similar to\n> > commit fdd965d074. However that appears to be more difficult that I\n> > thought, because for some datatypes like arrays, ranges etc we might\n> > need fill the element type and reference it. So, I decided to\n> > continue with the current approach but borrowing some ideas from\n> > fdd965d074. The revised patchset attached.\n> Let me rephrase the issue in more straightforward terms to ensure we are\n> all clear on the problem:\n> The critical problem of the typcache lookup on not-yet-locked data is\n> that it can lead to an inconsistent state of the TypEntry, potentially\n> causing disruptions in the DBMS's operations, correct?\n> Let's exemplify this statement. By filling typentry's lt_opr, eq_opr,\n> and gt_opr fields, we access the AMOPSTRATEGY cache. One operation can\n> successfully fetch data from the cache, but another can miss data and\n> touch the catalogue table, causing invalidations. In this case, we can\n> get an inconsistent set of operators. Do I understand the problem\n> statement correctly?\n\nActually, I didn't research much if there is a material problem. So,\nI didn't try to concurrently delete some operator class members\nconcurrently to lookup_type_cache(). There are probably some bugs,\nbut they likely have low impact in practice, given that type/opclass\nchanges are very rare.\n\nYet I was concentrated on why do lookup_type_cache() returns\nTypeCacheEntry filled with whatever caller asked given there could be\nconcurrent invalidations.\n\nSo, my approach was to\n1) Document how we currently handle concurrent invalidations.\n2) Maintain RelIdToTypeIdCacheHash correctly with concurrent invalidations.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Mon, 23 Sep 2024 13:53:47 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: type cache cleanup improvements"
}
] |
[
{
"msg_contents": "Currently, cancel request key is a 32-bit token, which isn't very much \nentropy. If you want to cancel another session's query, you can \nbrute-force it. In most environments, an unauthorized cancellation of a \nquery isn't very serious, but it nevertheless would be nice to have more \nprotection from it. The attached patch makes it longer. It is an \noptional protocol feature, so it's fully backwards-compatible with \nclients that don't support longer keys.\n\nIf the client requests the \"_pq_.extended_query_cancel\" protocol \nfeature, the server will generate a longer 256-bit cancellation key. \nHowever, the new longer key length is no longer hardcoded in the \nprotocol. The client is expected to deal with variable length keys, up \nto some reasonable upper limit (TODO: document the maximum). This \nflexibility allows e.g. a connection pooler to add more information to \nthe cancel key, which could be useful. If the client doesn't request the \nprotocol feature, the server generates a 32-bit key like before.\n\nOne complication with this was that because we no longer know how long \nthe key should be, 4-bytes or something longer, until the backend has \nperformed the protocol negotiation, we cannot generate the key in the \npostmaster before forking the process anymore. The first patch here \nchanges things so that the cancellation key is generated later, in the \nbackend, and the backend advertises the key in the PMSignalState array. \nThis is similar to how this has always worked in EXEC_BACKEND mode with \nthe ShmemBackendArray, but instead of having a separate array, I added \nfields to the PMSignalState slots. This removes a bunch of \nEXEC_BACKEND-specific code, which is nice.\n\nAny thoughts on this? Documentation is still missing, and there's one \nTODO on adding a portable time-constant memcmp() function; I'll add \nthose if there's agreement on this otherwise.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 29 Feb 2024 23:25:43 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Make query cancellation keys longer"
},
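One piece mentioned above only as a TODO is the portable time-constant comparison for the received cancel key. A standalone sketch of the idea (not the helper that would actually land in src/port) is below; its run time depends only on the length, never on where the first mismatching byte is.

#include <stddef.h>

static int
cancel_key_equal(const void *a, const void *b, size_t len)
{
    const unsigned char *pa = a;
    const unsigned char *pb = b;
    unsigned char diff = 0;

    for (size_t i = 0; i < len; i++)
        diff |= pa[i] ^ pb[i];

    return diff == 0;           /* 1 if equal, 0 if not */
}

Generating the longer secret itself is the easy part, since the existing pg_strong_random(buf, len) primitive already provides cryptographically strong bytes of arbitrary length; the interesting part is comparing the key without leaking timing.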
{
"msg_contents": "This is a preliminary review. I'll look at this more closely soon.\n\nOn Thu, 29 Feb 2024 at 22:26, Heikki Linnakangas <[email protected]> wrote:\n> If the client requests the \"_pq_.extended_query_cancel\" protocol\n> feature, the server will generate a longer 256-bit cancellation key.\n\nHuge +1 for this general idea. This is a problem I ran into with\nPgBouncer, and had to make some concessions when fitting the\ninformation I wanted into the available bits. Also from a security\nperspective I don't think the current amount of bits have stood the\ntest of time.\n\n+ ADD_STARTUP_OPTION(\"_pq_.extended_query_cancel\", \"\");\n\nSince this parameter doesn't actually take a value (just empty\nstring). I think this behaviour makes more sense as a minor protocol\nversion bump instead of a parameter.\n\n+ if (strcmp(conn->workBuffer.data, \"_pq_.extended_query_cancel\") == 0)\n+ {\n+ /* that's ok */\n+ continue;\n+ }\n\nPlease see this thread[1], which in the first few patches makes\npqGetNegotiateProtocolVersion3 actually usable for extending the\nprotocol. I started that, because very proposed protocol change that's\nproposed on the list has similar changes to\npqGetNegotiateProtocolVersion3 and I think we shouldn't make these\nchanges ad-hoc hacked into the current code, but actually do them once\nin a way that makes sense for all protocol changes.\n\n> Documentation is still missing\n\nI think at least protocol message type documentation would be very\nhelpful in reviewing, because that is really a core part of this\nchange. Based on the current code I think it should have a few\nchanges:\n\n+ int cancel_key_len = 5 + msgLength - (conn->inCursor - conn->inStart);\n+\n+ conn->be_cancel_key = malloc(cancel_key_len);\n+ if (conn->be_cancel_key == NULL)\n\nThis is using the message length to determine the length of the cancel\nkey in BackendKey. That is not something we generally do in the\nprotocol. It's even documented: \"Notice that although each message\nincludes a byte count at the beginning, the message format is defined\nso that the message end can be found without reference to the byte\ncount.\" So I think the patch should be changed to include the length\nof the cancel key explicitly in the message.\n\n[1]: https://www.postgresql.org/message-id/flat/CAGECzQSr2%3DJPJHNN06E_jTF2%2B0E60K%3DhotyBw5wY%3Dq9Wvmt7DQ%40mail.gmail.com#359e4222eb161da37124be1a384f8d92\n\n\n",
"msg_date": "Fri, 1 Mar 2024 06:19:53 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On 29.02.24 22:25, Heikki Linnakangas wrote:\n> Currently, cancel request key is a 32-bit token, which isn't very much \n> entropy. If you want to cancel another session's query, you can \n> brute-force it. In most environments, an unauthorized cancellation of a \n> query isn't very serious, but it nevertheless would be nice to have more \n> protection from it. The attached patch makes it longer. It is an \n> optional protocol feature, so it's fully backwards-compatible with \n> clients that don't support longer keys.\n\nMy intuition would be to make this a protocol version bump, not an \noptional feature. I think this is something that everyone should \neventually be using, not a niche feature that you explicitly want to \nopt-in for.\n\n> One complication with this was that because we no longer know how long \n> the key should be, 4-bytes or something longer, until the backend has \n> performed the protocol negotiation, we cannot generate the key in the \n> postmaster before forking the process anymore.\n\nMaybe this would be easier if it's a protocol version number change, \nsince that is sent earlier than protocol extensions?\n\n\n\n",
"msg_date": "Fri, 1 Mar 2024 15:19:23 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On Fri, 1 Mar 2024 at 15:19, Peter Eisentraut <[email protected]> wrote:\n> > One complication with this was that because we no longer know how long\n> > the key should be, 4-bytes or something longer, until the backend has\n> > performed the protocol negotiation, we cannot generate the key in the\n> > postmaster before forking the process anymore.\n>\n> Maybe this would be easier if it's a protocol version number change,\n> since that is sent earlier than protocol extensions?\n\nProtocol version and protocol extensions are both sent in the\nStartupMessage, so the same complication applies. (But I do agree that\na protocol version bump is more appropriate for this type of change)\n\n\n",
"msg_date": "Sun, 3 Mar 2024 07:59:15 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On Fri, 1 Mar 2024 at 06:19, Jelte Fennema-Nio <[email protected]> wrote:\n> This is a preliminary review. I'll look at this more closely soon.\n\nSome more thoughts after looking some more at the proposed changes\n\n+#define EXTENDED_CANCEL_REQUEST_CODE PG_PROTOCOL(1234,5677)\n\nnit: I think the code should be 1234,5679 because it's the newer\nmessage type, so to me it makes more sense if the code is a larger\nnumber.\n\n+ * FIXME: we used to use signal_child. I believe kill() is\n+ * maybe even more correct, but verify that.\n\nsignal_child seems to be the correct one, not kill. signal_child has\nthis relevant comment explaining why it's better than plain kill:\n\n * On systems that have setsid(), each child process sets itself up as a\n * process group leader. For signals that are generally interpreted in the\n * appropriate fashion, we signal the entire process group not just the\n * direct child process. This allows us to, for example, SIGQUIT a blocked\n * archive_recovery script, or SIGINT a script being run by a backend via\n * system().\n\n+SendCancelRequest(int backendPID, int32 cancelAuthCode)\n\nI think this name of the function is quite confusing, it's not sending\na cancel request, it is processing one. It sends a SIGINT.\n\n> While we're at it, switch to using atomics in pmsignal.c for the state\n> field. That feels easier to reason about than volatile\n> pointers.\n\nI feel like this refactor would benefit from being a separate commit.\nThat would make it easier to follow which change to pmsignal.c is\nnecessary for what.\n\n+ MyCancelKeyLength = (MyProcPort != NULL &&\nMyProcPort->extended_query_cancel) ? MAX_CANCEL_KEY_LENGTH : 4;\n\nI think we should be doing this check the opposite way, i.e. only fall\nback to the smaller key when explicitly requested:\n\n+ MyCancelKeyLength = (MyProcPort != NULL &&\nMyProcPort->old_query_cancel) ? 4 : MAX_CANCEL_KEY_LENGTH;\n\nThat way we'd get the more secure, longer key length for non-backend\nprocesses such as background workers.\n\n+ case EOF:\n+ /* We'll come back when there is more data */\n+ return PGRES_POLLING_READING;\n\nNice catch, I'll go steal this for my patchset which adds all the\nnecessary changes to be able to do a protocol bump[1].\n\n+ int be_pid; /* PID of backend --- needed for XX cancels */\n\nSeems like you accidentally added XX to the comment in this line.\n\n[1]: https://www.postgresql.org/message-id/flat/CAGECzQSr2%3DJPJHNN06E_jTF2%2B0E60K%3DhotyBw5wY%3Dq9Wvmt7DQ%40mail.gmail.com#359e4222eb161da37124be1a384f8d92\n\n\n",
"msg_date": "Sun, 3 Mar 2024 15:27:35 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
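The signal_child-versus-kill point above boils down to the small sketch below: with setsid() each backend is a process group leader, so signalling the negative PID reaches children such as an archive_recovery script or something run via system(), which a plain kill(pid, ...) would miss. A later message in the thread settles on essentially this shape.

#include <signal.h>
#include <sys/types.h>

static void
signal_backend_for_cancel(pid_t backend_pid)
{
#ifdef HAVE_SETSID
    kill(-backend_pid, SIGINT);     /* signal the whole process group */
#else
    kill(backend_pid, SIGINT);      /* no process groups: just the backend */
#endif
}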
{
"msg_contents": "On Sun, 3 Mar 2024 at 15:27, Jelte Fennema-Nio <[email protected]> wrote:\n> + case EOF:\n> + /* We'll come back when there is more data */\n> + return PGRES_POLLING_READING;\n>\n> Nice catch, I'll go steal this for my patchset which adds all the\n> necessary changes to be able to do a protocol bump[1].\n\nActually, it turns out your change to return PGRES_POLLING_READING on\nEOF is incorrect (afaict). A little bit above there is this code\ncomment above a check to see if the whole body was received:\n\n * Can't process if message body isn't all here yet.\n *\n * After this check passes, any further EOF during parsing\n * implies that the server sent a bad/truncated message.\n * Reading more bytes won't help in that case, so don't return\n * PGRES_POLLING_READING after this point.\n\nSo I'll leave my patchset as is.\n\n\n",
"msg_date": "Sun, 3 Mar 2024 18:27:56 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On Fri, Mar 1, 2024 at 03:19:23PM +0100, Peter Eisentraut wrote:\n> On 29.02.24 22:25, Heikki Linnakangas wrote:\n> > Currently, cancel request key is a 32-bit token, which isn't very much\n> > entropy. If you want to cancel another session's query, you can\n> > brute-force it. In most environments, an unauthorized cancellation of a\n> > query isn't very serious, but it nevertheless would be nice to have more\n> > protection from it. The attached patch makes it longer. It is an\n> > optional protocol feature, so it's fully backwards-compatible with\n> > clients that don't support longer keys.\n> \n> My intuition would be to make this a protocol version bump, not an optional\n> feature. I think this is something that everyone should eventually be\n> using, not a niche feature that you explicitly want to opt-in for.\n\nAgreed.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 5 Mar 2024 20:12:18 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On 01/03/2024 07:19, Jelte Fennema-Nio wrote:\n> I think this behaviour makes more sense as a minor protocol version\n> bump instead of a parameter.\nOk, the consensus is to bump the minor protocol version for this. Works \nfor me.\n\n> + if (strcmp(conn->workBuffer.data, \"_pq_.extended_query_cancel\") == 0)\n> + {\n> + /* that's ok */\n> + continue;\n> + }\n> \n> Please see this thread[1], which in the first few patches makes\n> pqGetNegotiateProtocolVersion3 actually usable for extending the\n> protocol. I started that, because very proposed protocol change that's\n> proposed on the list has similar changes to\n> pqGetNegotiateProtocolVersion3 and I think we shouldn't make these\n> changes ad-hoc hacked into the current code, but actually do them once\n> in a way that makes sense for all protocol changes.\n\nThanks, I will take a look and respond on that thread.\n\n>> Documentation is still missing\n> \n> I think at least protocol message type documentation would be very\n> helpful in reviewing, because that is really a core part of this\n> change.\n\nAdded some documentation. There's more work to be done there, but at \nleast the message type descriptions are now up-to-date.\n\n> Based on the current code I think it should have a few changes:\n> \n> + int cancel_key_len = 5 + msgLength - (conn->inCursor - conn->inStart);\n> +\n> + conn->be_cancel_key = malloc(cancel_key_len);\n> + if (conn->be_cancel_key == NULL)\n> \n> This is using the message length to determine the length of the cancel\n> key in BackendKey. That is not something we generally do in the\n> protocol. It's even documented: \"Notice that although each message\n> includes a byte count at the beginning, the message format is defined\n> so that the message end can be found without reference to the byte\n> count.\" So I think the patch should be changed to include the length\n> of the cancel key explicitly in the message.\n\nThe nice thing about relying on the message length was that we could \njust redefine the CancelRequest message to have a variable length \nsecret, and old CancelRequest with 4-byte secret was compatible with the \nnew definition too. But it doesn't matter much, so added an explicit \nlength field.\n\nFWIW I don't think that restriction makes sense. Any code that parses \nthe messages needs to have the message length available, and I don't \nthink it helps with sanity checking that much. I think the documentation \nis a little anachronistic. The real reason that all the message types \ninclude enough information to find the message end is that in protocol \nversion 2, there was no message length field. The only exception that \ndoesn't have that property is CopyData, and it's no coincidence that it \nwas added in protocol version 3.\n\nOn 03/03/2024 16:27, Jelte Fennema-Nio wrote:\n> On Fri, 1 Mar 2024 at 06:19, Jelte Fennema-Nio <[email protected]> wrote:\n>> This is a preliminary review. I'll look at this more closely soon.\n> \n> Some more thoughts after looking some more at the proposed changes\n> \n> +#define EXTENDED_CANCEL_REQUEST_CODE PG_PROTOCOL(1234,5677)\n> \n> nit: I think the code should be 1234,5679 because it's the newer\n> message type, so to me it makes more sense if the code is a larger\n> number.\n\nUnfortunately 1234,5679 already in use by NEGOTIATE_SSL_CODE. That's why \nI went the other direction. If we want to add this to \"the end\", it \nneeds to be 1234,5681. But I wanted to keep the cancel requests together.\n\n> + * FIXME: we used to use signal_child. 
I believe kill() is\n> + * maybe even more correct, but verify that.\n> \n> signal_child seems to be the correct one, not kill. signal_child has\n> this relevant comment explaining why it's better than plain kill:\n> \n> * On systems that have setsid(), each child process sets itself up as a\n> * process group leader. For signals that are generally interpreted in the\n> * appropriate fashion, we signal the entire process group not just the\n> * direct child process. This allows us to, for example, SIGQUIT a blocked\n> * archive_recovery script, or SIGINT a script being run by a backend via\n> * system().\n\nI changed it to signal the process group if HAVE_SETSID, like \npg_signal_backend() does. We don't need to signal both the process group \nand the process itself like signal_child() does, because we don't have \nthe race condition with recently-forked children that signal_child() \ntalks about.\n\n> +SendCancelRequest(int backendPID, int32 cancelAuthCode)\n> \n> I think this name of the function is quite confusing, it's not sending\n> a cancel request, it is processing one. It sends a SIGINT.\n\nHeh, well, it's sending the cancel request signal to the right backend, \nbut I see your point. Renamed to ProcessCancelRequest.\n\n>> While we're at it, switch to using atomics in pmsignal.c for the state\n>> field. That feels easier to reason about than volatile\n>> pointers.\n> \n> I feel like this refactor would benefit from being a separate commit.\n> That would make it easier to follow which change to pmsignal.c is\n> necessary for what.\n\nPoint taken. I didn't do that yet, but it makes sense.\n\n> + MyCancelKeyLength = (MyProcPort != NULL &&\n> MyProcPort->extended_query_cancel) ? MAX_CANCEL_KEY_LENGTH : 4;\n> \n> I think we should be doing this check the opposite way, i.e. only fall\n> back to the smaller key when explicitly requested:\n> \n> + MyCancelKeyLength = (MyProcPort != NULL &&\n> MyProcPort->old_query_cancel) ? 4 : MAX_CANCEL_KEY_LENGTH;\n> \n> That way we'd get the more secure, longer key length for non-backend\n> processes such as background workers.\n\n+1, fixed.\n\nOn 03/03/2024 19:27, Jelte Fennema-Nio wrote:\n> On Sun, 3 Mar 2024 at 15:27, Jelte Fennema-Nio <[email protected]> wrote:\n>> + case EOF:\n>> + /* We'll come back when there is more data */\n>> + return PGRES_POLLING_READING;\n>>\n>> Nice catch, I'll go steal this for my patchset which adds all the\n>> necessary changes to be able to do a protocol bump[1].\n> \n> Actually, it turns out your change to return PGRES_POLLING_READING on\n> EOF is incorrect (afaict). A little bit above there is this code\n> comment above a check to see if the whole body was received:\n> \n> * Can't process if message body isn't all here yet.\n> *\n> * After this check passes, any further EOF during parsing\n> * implies that the server sent a bad/truncated message.\n> * Reading more bytes won't help in that case, so don't return\n> * PGRES_POLLING_READING after this point.\n> \n> So I'll leave my patchset as is.\n\nYep, thanks.\n\nOne consequence of this patch that I didn't mention earlier is that it \nmakes libpq incompatible with server version 9.2 and below, because the \nminor version negotiation was introduced in version 9.3. We could teach \nlibpq to disconnect and reconnect with minor protocol version 3.0, if \nconnecting with 3.1 fails, but IMHO it's better to not complicate this \nand accept the break in backwards-compatibility. 9.3 was released in \n2013. 
We dropped pg_dump support for versions older than 9.2 a few years \nago, this raises the bar for pg_dump to 9.3 as well.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Sat, 9 Mar 2024 00:20:19 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make query cancellation keys longer"
},
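For the explicit-length variant Heikki describes above, the libpq side of BackendKeyData could be parsed along these lines. This is a hedged sketch: pqGetInt()/pqGetnchar() are existing libpq-internal helpers and be_cancel_key follows the patch excerpts quoted earlier, but the separate length field, its sanity limit, and the be_cancel_key_len field are assumptions, and later messages in the thread reconsider whether an explicit length is needed at all.

static int
getBackendKeyData(PGconn *conn)
{
    int         key_len;

    if (pqGetInt(&conn->be_pid, 4, conn))
        return EOF;
    if (pqGetInt(&key_len, 4, conn))
        return EOF;
    if (key_len <= 0 || key_len > 256)      /* assumed upper bound */
        return EOF;                         /* real code would set an error */

    conn->be_cancel_key = malloc(key_len);
    if (conn->be_cancel_key == NULL)
        return EOF;                         /* real code would report OOM */
    if (pqGetnchar(conn->be_cancel_key, key_len, conn))
        return EOF;
    conn->be_cancel_key_len = key_len;      /* assumed field */

    return 0;
}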
{
"msg_contents": "On Fri, 8 Mar 2024 at 23:20, Heikki Linnakangas <[email protected]> wrote:\n> Added some documentation. There's more work to be done there, but at\n> least the message type descriptions are now up-to-date.\n\nThanks, that's very helpful.\n\n> The nice thing about relying on the message length was that we could\n> just redefine the CancelRequest message to have a variable length\n> secret, and old CancelRequest with 4-byte secret was compatible with the\n> new definition too. But it doesn't matter much, so I added an explicit\n> length field.\n>\n> FWIW I don't think that restriction makes sense. Any code that parses\n> the messages needs to have the message length available, and I don't\n> think it helps with sanity checking that much. I think the documentation\n> is a little anachronistic. The real reason that all the message types\n> include enough information to find the message end is that in protocol\n> version 2, there was no message length field. The only exception that\n> doesn't have that property is CopyData, and it's no coincidence that it\n> was added in protocol version 3.\n\nHmm, looking at the current code, I do agree that not introducing a\nnew message would simplify both client and server implementation. Now\nclients need to do different things depending on if the server\nsupports 3.1 or 3.0 (sending CancelRequestExtended instead of\nCancelRequest and having to parse BackendKeyData differently). And I\nalso agree that the extra length field doesn't add much in regards to\nsanity checking (for the CancelRequest and BackendKeyData message\ntypes at least). So, sorry for the back and forth on this, but I now\nagree with you that we should not add the length field. I think one\nreason I didn't see the benefit before was because the initial patch\n0002 was still introducing a CancelRequestExtended message type. If we\ncan get rid of this message type by not adding a length, then I think\nthat's worth losing the sanity checking.\n\n> Unfortunately 1234,5679 already in use by NEGOTIATE_SSL_CODE. That's why\n> I went the other direction. If we want to add this to \"the end\", it\n> needs to be 1234,5681. But I wanted to keep the cancel requests together.\n\nFair enough, I didn't realise that. This whole point is moot anyway if\nwe're not introducing CancelRequestExtended\n\n> We could teach\n> libpq to disconnect and reconnect with minor protocol version 3.0, if\n> connecting with 3.1 fails, but IMHO it's better to not complicate this\n> and accept the break in backwards-compatibility.\n\nYeah, implementing automatic reconnection seems a bit overkill to me\ntoo. But it might be nice to add a connection option that causes libpq\nto use protocol 3.0. Having to install an old libpq to connect to an\nold server seems quite annoying. Especially since I think that many\nother types of servers that implement the postgres protocol have not\nimplemented the minor version negotiation.\n\nI at least know PgBouncer[1] and pgcat[2] have not, but probably other\nserver implementations like CockroachDB and Google Spanner have this\nproblem too.\n\nI'll try to add such a fallback connection option to my patchset soon.\n\n[1]: https://github.com/pgbouncer/pgbouncer/pull/1007\n[2]: https://github.com/postgresml/pgcat/issues/706\n\n\n",
"msg_date": "Sat, 9 Mar 2024 13:32:38 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On Fri, Mar 8, 2024 at 5:20 PM Heikki Linnakangas <[email protected]> wrote:\n> The nice thing about relying on the message length was that we could\n> just redefine the CancelRequest message to have a variable length\n> secret, and old CancelRequest with 4-byte secret was compatible with the\n> new definition too. But it doesn't matter much, so added an explicit\n> length field.\n\nI think I liked your original idea better than this one.\n\n> One consequence of this patch that I didn't mention earlier is that it\n> makes libpq incompatible with server version 9.2 and below, because the\n> minor version negotiation was introduced in version 9.3. We could teach\n> libpq to disconnect and reconnect with minor protocol version 3.0, if\n> connecting with 3.1 fails, but IMHO it's better to not complicate this\n> and accept the break in backwards-compatibility. 9.3 was released in\n> 2013. We dropped pg_dump support for versions older than 9.2 a few years\n> ago, this raises the bar for pg_dump to 9.3 as well.\n\nI think we shouldn't underestimate the impact of bumping the minor\nprotocol version. Minor version negotiation is probably not supported\nby all drivers and Jelte says that it's not supported by any poolers,\nso for anybody but direct libpq users, there will be some breakage.\nNow, on the one hand, as Jelte has said, there's little value in\nhaving a protocol version if we're too afraid to make use of it. But\non the other hand, is this problem serious enough to justify the\nbreakage we'll cause? I'm not sure. It seems pretty silly to be using\na 32-bit value for this in 2024, but I'm sure some people aren't going\nto like it, and they may not all have noticed this thread. On the\nthird hand, if we do this, it may help to unblock a bunch of other\npending patches that also want to do protocol-related things.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Jun 2024 11:25:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "Here's a new version of the first patch. In the previous version, I \nadded the pid cancellation key to pmsignal.c, but on second thoughts, I \nthink procsignal.c is a better place. The ProcSignal array already \ncontains the pid, we just need to add the cancellation key there.\n\nThis first patch just refactors the current code, without changing the \nprotocol or length of the cancellation key. I'd like to get this \nreviewed and committed first, and get back to the protocol changes after \nthat.\n\nWe currently don't do any locking on the ProcSignal array. For query \ncancellations, that's good because a query cancel packet is processed \nwithout having a PGPROC entry, so we cannot take LWLocks. We could use \nspinlocks though. In this patch, I used memory barriers to ensure that \nwe load/store the pid and the cancellation key in a sensible order, so \nthat you cannot e.g. send a cancellation signal to a backend that's just \nstarting up and hasn't advertised its cancellation key in ProcSignal \nyet. But I think this might be simpler and less error-prone by just \nadding a spinlock to each ProcSignal slot. That would also fix the \nexisting race condition where we might set the pss_signalFlags flag for \na slot, when the process concurrently terminates and the slot is reused \nfor a different process. Because of that, we currently have this caveat: \n\"... all the signals are such that no harm is done if they're mistakenly \nfired\". With a spinlock, we could eliminate that race.\n\nThoughts?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 4 Jul 2024 13:32:37 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make query cancellation keys longer"
},
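A rough picture of the per-slot layout being weighed above, with the spinlock option rather than bare memory barriers. The struct, constant, and field names are illustrative assumptions, not the committed layout; the real ProcSignalSlot carries more than this, and the helper would live next to ProcSignalInit().

#define MAX_CANCEL_KEY_LENGTH 32        /* 256 bits, per the proposal */

typedef struct CancelSlotSketch
{
    pid_t       pss_pid;            /* owning process, or 0 if free */
    slock_t     pss_mutex;          /* protects pid and cancel key */
    int         pss_cancel_key_len;
    uint8       pss_cancel_key[MAX_CANCEL_KEY_LENGTH];
} CancelSlotSketch;

/* Called once at backend startup, after the key has been generated. */
static void
advertise_cancel_key(CancelSlotSketch *slot, const uint8 *key, int len)
{
    SpinLockAcquire(&slot->pss_mutex);
    memcpy(slot->pss_cancel_key, key, len);
    slot->pss_cancel_key_len = len;
    slot->pss_pid = MyProcPid;
    SpinLockRelease(&slot->pss_mutex);
}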
{
"msg_contents": "On 04/07/2024 13:32, Heikki Linnakangas wrote:\n> Here's a new version of the first patch. \n\nSorry, forgot attachment.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 4 Jul 2024 13:35:11 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On Thu, 4 Jul 2024 at 12:35, Heikki Linnakangas <[email protected]> wrote:\n>\n> On 04/07/2024 13:32, Heikki Linnakangas wrote:\n> > Here's a new version of the first patch.\n>\n> Sorry, forgot attachment.\n\nIt seems you undid the following earlier change. Was that on purpose?\nIf not, did you undo any other earlier changes by accident?\n\n> > +SendCancelRequest(int backendPID, int32 cancelAuthCode)\n> >\n> > I think this name of the function is quite confusing, it's not sending\n> > a cancel request, it is processing one. It sends a SIGINT.\n>\n> Heh, well, it's sending the cancel request signal to the right backend,\n> but I see your point. Renamed to ProcessCancelRequest.\n\n\n",
"msg_date": "Thu, 4 Jul 2024 12:50:20 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On 04/07/2024 13:50, Jelte Fennema-Nio wrote:\n> On Thu, 4 Jul 2024 at 12:35, Heikki Linnakangas <[email protected]> wrote:\n>>\n>> On 04/07/2024 13:32, Heikki Linnakangas wrote:\n>>> Here's a new version of the first patch.\n>>\n>> Sorry, forgot attachment.\n> \n> It seems you undid the following earlier change. Was that on purpose?\n> If not, did you undo any other earlier changes by accident?\n> \n>>> +SendCancelRequest(int backendPID, int32 cancelAuthCode)\n>>>\n>>> I think this name of the function is quite confusing, it's not sending\n>>> a cancel request, it is processing one. It sends a SIGINT.\n>>\n>> Heh, well, it's sending the cancel request signal to the right backend,\n>> but I see your point. Renamed to ProcessCancelRequest.\n\nAh, I made that change as part of the second patch earlier, so I didn't \nconsider it now.\n\nI don't feel strongly about it, but I think SendCancelRequest() actually \nfeels a little better, in procsignal.c. It's more consistent with the \nexisting SendProcSignal() function.\n\nThere was indeed another change in the second patch that I missed:\n\n> +\t\t\t\t/* If we have setsid(), signal the backend's whole process group */\n> +#ifdef HAVE_SETSID\n> +\t\t\t\tkill(-backendPID, SIGINT);\n> +#else\n> \t\t\t\tkill(backendPID, SIGINT);\n> +#endif\n\nI'm not sure that's really required, when sending SIGINT to a backend \nprocess. A backend process shouldn't have any child processes, and if it \ndid, it's not clear what good SIGINT will do them. But I guess it makes \nsense to do it anyway, for consistency with pg_cancel_backend(), which \nalso signals the whole process group.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 4 Jul 2024 14:35:00 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On Thu, 4 Jul 2024 at 12:32, Heikki Linnakangas <[email protected]> wrote:\n> We currently don't do any locking on the ProcSignal array. For query\n> cancellations, that's good because a query cancel packet is processed\n> without having a PGPROC entry, so we cannot take LWLocks. We could use\n> spinlocks though. In this patch, I used memory barriers to ensure that\n> we load/store the pid and the cancellation key in a sensible order, so\n> that you cannot e.g. send a cancellation signal to a backend that's just\n> starting up and hasn't advertised its cancellation key in ProcSignal\n> yet. But I think this might be simpler and less error-prone by just\n> adding a spinlock to each ProcSignal slot. That would also fix the\n> existing race condition where we might set the pss_signalFlags flag for\n> a slot, when the process concurrently terminates and the slot is reused\n> for a different process. Because of that, we currently have this caveat:\n> \"... all the signals are such that no harm is done if they're mistakenly\n> fired\". With a spinlock, we could eliminate that race.\n\nI think a spinlock would make this thing a whole concurrency stuff a\nlot easier to reason about.\n\n+ slot->pss_cancel_key_valid = false;\n+ slot->pss_cancel_key = 0;\n\nIf no spinlock is added, I think these accesses should still be made\natomic writes. Otherwise data-races on those fields are still\npossible, resulting in undefined behaviour. The memory barriers you\nadded don't prevent that afaict. With atomic operations there are\nstill race conditions, but no data-races.\n\nActually it seems like that same argument applies to the already\nexisting reading/writing of pss_pid: it's written/read using\nnon-atomic operations so data-races can occur and thus undefined\nbehaviour too.\n\n- volatile pid_t pss_pid;\n+ pid_t pss_pid;\n\nWhy remove the volatile modifier?\n\n\n",
"msg_date": "Thu, 4 Jul 2024 14:20:39 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "Hi,\n\nI don't have any immediate feedback regarding this patch, but I'm\nwondering about one thing related to cancellations - we talk cancelling\na query, but we really target a PID (or a particular backend, no matter\nhow we identify it).\n\nI occasionally want to only cancel a particular query, but I don't think\nthat's really possible - the query may complete meanwhile, and the\nbackend may even get used for a different user connection (e.g. with a\nconnection pool)? Or am I missing something important?\n\nAnyway, I wonder if making the cancellation key longer (or variable\nlength) might be useful for this too - it would allow including some\nsort of optional \"query ID\" in the request, not just the PID. (Maybe\nst_xact_start_timestamp would work?)\n\nObviously, that's not up to this patch, but it's somewhat related.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 4 Jul 2024 14:43:20 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On Thu, 4 Jul 2024 at 14:43, Tomas Vondra <[email protected]> wrote:\n> I don't have any immediate feedback regarding this patch, but I'm\n> wondering about one thing related to cancellations - we talk cancelling\n> a query, but we really target a PID (or a particular backend, no matter\n> how we identify it).\n>\n> I occasionally want to only cancel a particular query, but I don't think\n> that's really possible - the query may complete meanwhile, and the\n> backend may even get used for a different user connection (e.g. with a\n> connection pool)? Or am I missing something important?\n\nNo, you're not missing anything. Having the target of the cancel\nrequest be the backend instead of a specific query is really annoying\nand can cause all kinds of race conditions. I had to redesign and\ncomplicate the cancellation logic in PgBouncer significantly, to make\nsure that one client could not cancel a connection from another client\nanymore: https://github.com/pgbouncer/pgbouncer/pull/717\n\n> Anyway, I wonder if making the cancellation key longer (or variable\n> length) might be useful for this too - it would allow including some\n> sort of optional \"query ID\" in the request, not just the PID. (Maybe\n> st_xact_start_timestamp would work?)\n\nYeah, some query ID would be necessary. I think we'd want a dedicated\nfield for it though. Instead of encoding it in the secret. Getting the\nxact_start_timestamp would require communication with the server to\nget it, which you would like to avoid since the server might be\nunresponsive. So I think a command counter that both sides keep track\nof would be better. This counter could then be incremented after every\nQuery and Sync message.\n\n> Obviously, that's not up to this patch, but it's somewhat related.\n\nYeah, let's postpone more discussion on this until we have the\ncurrently proposed much simpler change in, which only changes the\nsecret length.\n\n\n",
"msg_date": "Thu, 4 Jul 2024 15:31:49 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On 04/07/2024 15:20, Jelte Fennema-Nio wrote:\n> On Thu, 4 Jul 2024 at 12:32, Heikki Linnakangas <[email protected]> wrote:\n>> We currently don't do any locking on the ProcSignal array. For query\n>> cancellations, that's good because a query cancel packet is processed\n>> without having a PGPROC entry, so we cannot take LWLocks. We could use\n>> spinlocks though. In this patch, I used memory barriers to ensure that\n>> we load/store the pid and the cancellation key in a sensible order, so\n>> that you cannot e.g. send a cancellation signal to a backend that's just\n>> starting up and hasn't advertised its cancellation key in ProcSignal\n>> yet. But I think this might be simpler and less error-prone by just\n>> adding a spinlock to each ProcSignal slot. That would also fix the\n>> existing race condition where we might set the pss_signalFlags flag for\n>> a slot, when the process concurrently terminates and the slot is reused\n>> for a different process. Because of that, we currently have this caveat:\n>> \"... all the signals are such that no harm is done if they're mistakenly\n>> fired\". With a spinlock, we could eliminate that race.\n> \n> I think a spinlock would make this thing a whole concurrency stuff a\n> lot easier to reason about.\n> \n> + slot->pss_cancel_key_valid = false;\n> + slot->pss_cancel_key = 0;\n> \n> If no spinlock is added, I think these accesses should still be made\n> atomic writes. Otherwise data-races on those fields are still\n> possible, resulting in undefined behaviour. The memory barriers you\n> added don't prevent that afaict. With atomic operations there are\n> still race conditions, but no data-races.\n> \n> Actually it seems like that same argument applies to the already\n> existing reading/writing of pss_pid: it's written/read using\n> non-atomic operations so data-races can occur and thus undefined\n> behaviour too.\n\nOk, here's a version with spinlocks.\n\nI went back and forth on what exactly is protected by the spinlock. I \nkept the \"volatile sig_atomic_t\" type for pss_signalFlags, so that it \ncan still be safely read without holding the spinlock in \nCheckProcSignal, but all the functions that set the flags now hold the \nspinlock. That removes the race condition that you might set the flag \nfor wrong slot, if the target backend exits and the slot is recycled. \nThe race condition was harmless and there were comments to note it, but \nit doesn't occur anymore with the spinlock.\n\n(Note: Thomas's \"Interrupts vs signals\" patch will remove ps_signalFlags \naltogether. I'm looking forward to that.)\n\n> - volatile pid_t pss_pid;\n> + pid_t pss_pid;\n> \n> Why remove the volatile modifier?\n\nBecause I introduced a memory barrier to ensure the reads/writes of \npss_pid become visible to other processes in right order. That makes the \n'volatile' unnecessary IIUC. With the spinlock, the point is moot however.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 24 Jul 2024 19:12:21 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On 24/07/2024 19:12, Heikki Linnakangas wrote:\n> On 04/07/2024 15:20, Jelte Fennema-Nio wrote:\n>> On Thu, 4 Jul 2024 at 12:32, Heikki Linnakangas <[email protected]> wrote:\n>>> We currently don't do any locking on the ProcSignal array. For query\n>>> cancellations, that's good because a query cancel packet is processed\n>>> without having a PGPROC entry, so we cannot take LWLocks. We could use\n>>> spinlocks though. In this patch, I used memory barriers to ensure that\n>>> we load/store the pid and the cancellation key in a sensible order, so\n>>> that you cannot e.g. send a cancellation signal to a backend that's just\n>>> starting up and hasn't advertised its cancellation key in ProcSignal\n>>> yet. But I think this might be simpler and less error-prone by just\n>>> adding a spinlock to each ProcSignal slot. That would also fix the\n>>> existing race condition where we might set the pss_signalFlags flag for\n>>> a slot, when the process concurrently terminates and the slot is reused\n>>> for a different process. Because of that, we currently have this caveat:\n>>> \"... all the signals are such that no harm is done if they're mistakenly\n>>> fired\". With a spinlock, we could eliminate that race.\n>>\n>> I think a spinlock would make this thing a whole concurrency stuff a\n>> lot easier to reason about.\n>>\n>> + slot->pss_cancel_key_valid = false;\n>> + slot->pss_cancel_key = 0;\n>>\n>> If no spinlock is added, I think these accesses should still be made\n>> atomic writes. Otherwise data-races on those fields are still\n>> possible, resulting in undefined behaviour. The memory barriers you\n>> added don't prevent that afaict. With atomic operations there are\n>> still race conditions, but no data-races.\n>>\n>> Actually it seems like that same argument applies to the already\n>> existing reading/writing of pss_pid: it's written/read using\n>> non-atomic operations so data-races can occur and thus undefined\n>> behaviour too.\n> \n> Ok, here's a version with spinlocks.\n> \n> I went back and forth on what exactly is protected by the spinlock. I\n> kept the \"volatile sig_atomic_t\" type for pss_signalFlags, so that it\n> can still be safely read without holding the spinlock in\n> CheckProcSignal, but all the functions that set the flags now hold the\n> spinlock. That removes the race condition that you might set the flag\n> for wrong slot, if the target backend exits and the slot is recycled.\n> The race condition was harmless and there were comments to note it, but\n> it doesn't occur anymore with the spinlock.\n> \n> (Note: Thomas's \"Interrupts vs signals\" patch will remove ps_signalFlags\n> altogether. I'm looking forward to that.)\n> \n>> - volatile pid_t pss_pid;\n>> + pid_t pss_pid;\n>>\n>> Why remove the volatile modifier?\n> \n> Because I introduced a memory barrier to ensure the reads/writes of\n> pss_pid become visible to other processes in right order. That makes the\n> 'volatile' unnecessary IIUC. With the spinlock, the point is moot however.\n\nI:\n- fixed a few comments,\n- fixed a straightforward failure with EXEC_BACKEND (ProcSignal needs to \nbe passed down in BackendParameters now),\n- put back the snippet to signal the whole process group if supported, \nwhich you pointed out earlier\n\nand committed this refactoring patch.\n\nThanks for the review!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 29 Jul 2024 16:19:15 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make query cancellation keys longer"
},
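To make the locking arrangement discussed above easier to picture, here is a rough sketch of a per-slot spinlock protecting the PID and cancel key, in the spirit of the refactoring described in the thread but not copied from the committed code. The field names follow the ones mentioned above; the mutex field name, the helper function, and the int32 key type (the actual patch makes the key longer) are assumptions for illustration only.

    typedef struct ProcSignalSlot
    {
        pid_t       pss_pid;
        bool        pss_cancel_key_valid;
        int32       pss_cancel_key;
        volatile sig_atomic_t pss_signalFlags[NUM_PROCSIGNALS];
        slock_t     pss_mutex;      /* protects the fields above */
    } ProcSignalSlot;

    /* Advertise this backend's PID and cancel key under the slot's spinlock. */
    static void
    ProcSignalSetCancelKey(ProcSignalSlot *slot, pid_t pid, int32 cancel_key)
    {
        SpinLockAcquire(&slot->pss_mutex);
        slot->pss_pid = pid;
        slot->pss_cancel_key = cancel_key;
        slot->pss_cancel_key_valid = true;
        SpinLockRelease(&slot->pss_mutex);
    }

The sending side would take the same spinlock while comparing the key and setting the signal flag, which is what removes the old harmless race on pss_signalFlags.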
{
"msg_contents": "+ * See if we have a matching backend. Reading the pss_pid and\n+ * pss_cancel_key fields is racy, a backend might die and remove itself\n+ * from the array at any time. The probability of the cancellation key\n+ * matching wrong process is miniscule, however, so we can live with that.\n+ * PIDs are reused too, so sending the signal based on PID is inherently\n+ * racy anyway, although OS's avoid reusing PIDs too soon.\n\nJust BTW, we know that Windows sometimes recycles PIDs very soon,\nsometimes even immediately, to the surprise of us Unix hackers. It can\nmake for some very confusing build farm animal logs. My patch will\npropose to change that particular thing to proc numbers anyway so I'm\nnot asking for a change here, I just didn't want that assumption to go\nun-footnoted. I suppose that's actually (another) good reason to want\nto widen the cancellation key, so that we don't have to worry about\nproc number allocation order being less protective than traditional\nUnix PID allocation...\n\n\n",
"msg_date": "Mon, 12 Aug 2024 23:45:44 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "I'm back to working on the main patch here, to make cancellation keys \nlonger. New rebased version attached, with all the FIXMEs and TODOs from \nthe earlier version fixed. There was a lot of bitrot, too.\n\nThe first patch now introduces timingsafe_bcmp(), a function borrowed \nfrom OpenBSD to perform a constant-time comparison. There's a configure \ncheck to use the function from the OS if it's available, and includes a \ncopy of OpenBSD's implementation otherwise. Similar functions exist with \ndifferent names in OpenSSL (CRYPTO_memcmp) and NetBSD \n(consttime_memequal), but it's a pretty simple function so I don't think \nwe need to work too hard to pick up those other native implementations.\n\nI used it for checking if the cancellation key matches, now that it's \nnot a single word anymore. It feels paranoid to worry about timing \nattacks here, a few instructions is unlikely to give enough signal to an \nattacker and query cancellation is not a very interesting target anyway. \nBut better safe than sorry. You can still get information about whether \na backend with the given PID exists at all, the constant-time comparison \nonly applies to comparing the key. We probably should be using this in \nsome other places in the backend, but I haven't gone around looking for \nthem.\n\n> Hmm, looking at the current code, I do agree that not introducing a\n> new message would simplify both client and server implementation. Now\n> clients need to do different things depending on if the server\n> supports 3.1 or 3.0 (sending CancelRequestExtended instead of\n> CancelRequest and having to parse BackendKeyData differently). And I\n> also agree that the extra length field doesn't add much in regards to\n> sanity checking (for the CancelRequest and BackendKeyData message\n> types at least). So, sorry for the back and forth on this, but I now\n> agree with you that we should not add the length field. I think one\n> reason I didn't see the benefit before was because the initial patch\n> 0002 was still introducing a CancelRequestExtended message type. If we\n> can get rid of this message type by not adding a length, then I think\n> that's worth losing the sanity checking.\n\nOk, I went back to the original scheme that just redefines the secret \nkey in the CancelRequest message to be variable length, with the length \ndeduced from the message length.\n\n>> We could teach\n>> libpq to disconnect and reconnect with minor protocol version 3.0, if\n>> connecting with 3.1 fails, but IMHO it's better to not complicate this\n>> and accept the break in backwards-compatibility.\n> \n> Yeah, implementing automatic reconnection seems a bit overkill to me\n> too. But it might be nice to add a connection option that causes libpq\n> to use protocol 3.0. Having to install an old libpq to connect to an\n> old server seems quite annoying.\n\nAdded a \"protocol_version\" libpq option for that. It defaults to \"auto\", \nbut you can set it to \"3.1\" or \"3.0\" to force the version. It makes it \neasier to test that the backwards-compatibility works, too.\n\n> Especially since I think that many other types of servers that\n> implement the postgres protocol have not implemented the minor\n> version negotiation.\n> \n> I at least know PgBouncer[1] and pgcat[2] have not, but probably\n> other server implementations like CockroachDB and Google Spanner have\n> this problem too.\nGood point.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 15 Aug 2024 20:13:49 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make query cancellation keys longer"
},
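For reference, the constant-time comparison borrowed from OpenBSD has essentially the following shape (a sketch; the imported copy may differ cosmetically). The loop inspects every byte regardless of where the first mismatch occurs, so the comparison time leaks nothing about how much of the key matched.

    #include <stddef.h>

    int
    timingsafe_bcmp(const void *b1, const void *b2, size_t n)
    {
        const unsigned char *p1 = b1;
        const unsigned char *p2 = b2;
        int         ret = 0;

        for (; n > 0; n--)
            ret |= *p1++ ^ *p2++;
        return (ret != 0);
    }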
{
"msg_contents": "On Thu, Aug 15, 2024 at 1:13 PM Heikki Linnakangas <[email protected]> wrote:\n> Added a \"protocol_version\" libpq option for that. It defaults to \"auto\",\n> but you can set it to \"3.1\" or \"3.0\" to force the version. It makes it\n> easier to test that the backwards-compatibility works, too.\n\nOver on the \"Add new protocol message to change GUCs for usage with\nfuture protocol-only GUCs\" there is a lot of relevant discussion about\nhow bumping the protocol version should work. This thread shouldn't\nignore all that discussion. Just to take one example, Jelte wants to\nbump the protocol version to 3.2, not 3.1, for some reasons that are\nin the commit message for the relevant patch over there.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Aug 2024 16:20:54 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On 15/08/2024 23:20, Robert Haas wrote:\n> On Thu, Aug 15, 2024 at 1:13 PM Heikki Linnakangas <[email protected]> wrote:\n>> Added a \"protocol_version\" libpq option for that. It defaults to \"auto\",\n>> but you can set it to \"3.1\" or \"3.0\" to force the version. It makes it\n>> easier to test that the backwards-compatibility works, too.\n> \n> Over on the \"Add new protocol message to change GUCs for usage with\n> future protocol-only GUCs\" there is a lot of relevant discussion about\n> how bumping the protocol version should work. This thread shouldn't\n> ignore all that discussion. Just to take one example, Jelte wants to\n> bump the protocol version to 3.2, not 3.1, for some reasons that are\n> in the commit message for the relevant patch over there.\n\nOk, I've read through that thread now, and opined there too. One \ndifference is with libpq option name: My patch adds \"protocol_version\", \nwhile Jelte proposes \"max_protocol_version\". I don't have strong \nopinions on that. I hope the ecosystem catches up to support \nNegotiateProtocolVersion quickly, so that only few people will need to \nset this option. In particular, I hope that there will never be need to \nuse \"max_protocol_version=3.2\", because by the time we introduce version \n3.3, all the connection poolers that support 3.2 will also implement \nNegotiateProtocolVersion.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 16 Aug 2024 01:07:25 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On Thu, Aug 15, 2024 at 6:07 PM Heikki Linnakangas <[email protected]> wrote:\n> Ok, I've read through that thread now, and opined there too. One\n> difference is with libpq option name: My patch adds \"protocol_version\",\n> while Jelte proposes \"max_protocol_version\". I don't have strong\n> opinions on that. I hope the ecosystem catches up to support\n> NegotiateProtocolVersion quickly, so that only few people will need to\n> set this option. In particular, I hope that there will never be need to\n> use \"max_protocol_version=3.2\", because by the time we introduce version\n> 3.3, all the connection poolers that support 3.2 will also implement\n> NegotiateProtocolVersion.\n\nIn Jelte's design, there end up being two connection parameters. We\ntell the server we want max_protocol_version, but we accept that it\nmight give us something older. If, however, it tries to degrade us to\nsomething lower than min_protocol_version, we bail out. I see you've\ngone for a simpler design: you ask the server for protocol_version and\nyou get that or you die. To be honest, right up until exactly now, I\nwas assuming we wanted a two-parameter system like that, just because\nbeing able to tolerate a range of protocol versions seems useful.\nHowever, maybe we don't need it. Alternatively, we could do this for\nnow, and then later we could adjust the parameter so that you can say\nprotocol_version=3.2-3.7 and the client will ask for 3.7 but tolerate\nanything >= 3.2. Hmm, I kind of like that idea.\n\nI think it's likely that the ecosystem will catch up with\nNegotiateProtocolVersion once things start breaking. However, I feel\npretty confident that there are going to be glitches. Clients are\ngoing to want to force newer protocol versions to make sure they get\nnew features, or to make sure that security features that they want to\nhave (like this one) are enabled. Some users are going to be running\nold poolers that can't handle 3.2, or there will be weirder things\nwhere the pooler says it supports it but it doesn't actually work\nproperly in all cases. There are also non-PG servers that reimplement\nthe PG wire protocol. I can't really enumerate all the things that go\nwrong, but I think there are a number of wire protocol changes that\nvarious people have been wanting for a long while now, and when we\nstart to get the infrastructure in place to make that practical,\npeople are going to take advantage of it. So I think we can expect a\nnumber of protocol enhancements and changes -- Peter's transparent\ncolumn encryption stuff is another example -- and there will be\nmistakes by us and mistakes by others along the way. Allowing users to\nspecify what protocol version they want is probably an important part\nof coping with that.\n\nThe documentation in the patch you attached still seems to think\nthere's an explicit length field for the cancel key. Also, I think it\nwould be good to split this into two patches, one to bump the protocol\nversion and a second to change the cancel key stuff. It would\nfacilitate review, and I also think that bumping the protocol version\nis a big enough deal that it should have its own entry in the commit\nlog.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 16 Aug 2024 08:31:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On 16/08/2024 15:31, Robert Haas wrote:\n> On Thu, Aug 15, 2024 at 6:07 PM Heikki Linnakangas <[email protected]> wrote:\n>> Ok, I've read through that thread now, and opined there too. One\n>> difference is with libpq option name: My patch adds \"protocol_version\",\n>> while Jelte proposes \"max_protocol_version\". I don't have strong\n>> opinions on that. I hope the ecosystem catches up to support\n>> NegotiateProtocolVersion quickly, so that only few people will need to\n>> set this option. In particular, I hope that there will never be need to\n>> use \"max_protocol_version=3.2\", because by the time we introduce version\n>> 3.3, all the connection poolers that support 3.2 will also implement\n>> NegotiateProtocolVersion.\n> \n> In Jelte's design, there end up being two connection parameters. We\n> tell the server we want max_protocol_version, but we accept that it\n> might give us something older. If, however, it tries to degrade us to\n> something lower than min_protocol_version, we bail out. I see you've\n> gone for a simpler design: you ask the server for protocol_version and\n> you get that or you die. To be honest, right up until exactly now, I\n> was assuming we wanted a two-parameter system like that, just because\n> being able to tolerate a range of protocol versions seems useful.\n> However, maybe we don't need it. Alternatively, we could do this for\n> now, and then later we could adjust the parameter so that you can say\n> protocol_version=3.2-3.7 and the client will ask for 3.7 but tolerate\n> anything >= 3.2. Hmm, I kind of like that idea.\n\nWorks for me.\n\nIf we envision accepting ranges like that in the future, it would be \ngood to do now rather than later. Otherwise, if someone wants to require \nfeatures from protocol 3.2 today, they will have to put \n\"protocol_version=3.2\" in the connection string, and later when 3.3 \nversion is released, their connection string will continue to force the \nthen-old 3.2 version.\n\n> I think it's likely that the ecosystem will catch up with\n> NegotiateProtocolVersion once things start breaking. However, I feel\n> pretty confident that there are going to be glitches. Clients are\n> going to want to force newer protocol versions to make sure they get\n> new features, or to make sure that security features that they want to\n> have (like this one) are enabled. Some users are going to be running\n> old poolers that can't handle 3.2, or there will be weirder things\n> where the pooler says it supports it but it doesn't actually work\n> properly in all cases. There are also non-PG servers that reimplement\n> the PG wire protocol. I can't really enumerate all the things that go\n> wrong, but I think there are a number of wire protocol changes that\n> various people have been wanting for a long while now, and when we\n> start to get the infrastructure in place to make that practical,\n> people are going to take advantage of it. So I think we can expect a\n> number of protocol enhancements and changes -- Peter's transparent\n> column encryption stuff is another example -- and there will be\n> mistakes by us and mistakes by others along the way. 
Allowing users to\n> specify what protocol version they want is probably an important part\n> of coping with that.\n\nYes, it's a good escape hatch to have.\n\n> The documentation in the patch you attached still seems to think\n> there's an explicit length field for the cancel key.\n\nok thanks\n\n> Also, I think it\n> would be good to split this into two patches, one to bump the protocol\n> version and a second to change the cancel key stuff. It would\n> facilitate review, and I also think that bumping the protocol version\n> is a big enough deal that it should have its own entry in the commit\n> log.\n\nRight. That's what Jelte's first patches did too. Those changes are more \nor less the same between this patch and his. These clearly need to be \nmerged into one \"introduce protocol version 3.2\" patch.\n\nI'll split this patch like that, to make it easier to compare and merge \nwith Jelte's corresponding patches.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 16 Aug 2024 17:37:06 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On Fri, Aug 16, 2024 at 10:37 AM Heikki Linnakangas <[email protected]> wrote:\n> If we envision accepting ranges like that in the future, it would be\n> good to do now rather than later. Otherwise, if someone wants to require\n> features from protocol 3.2 today, they will have to put\n> \"protocol_version=3.2\" in the connection string, and later when 3.3\n> version is released, their connection string will continue to force the\n> then-old 3.2 version.\n\nI'm totally cool with doing it now rather than later if you or someone\nelse is willing to do the work. But I don't see why we'd need a\nprotocol bump to change it later. If you write protocol_version=3.7 or\nprotocol_version=3.2-3.7 we send the same thing to the server either\nway. It's only a difference in whether we slam the connection shut if\nthe server comes back and say it can only do 3.0.\n\n> I'll split this patch like that, to make it easier to compare and merge\n> with Jelte's corresponding patches.\n\nThat sounds great. IMHO, comparing and merging the patches is the next\nstep here and would be great to see.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 16 Aug 2024 11:29:17 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
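To sketch how the protocol_version=3.2-3.7 idea could look on the libpq side, the value might be split into a minimum and a maximum before the startup packet is built, roughly as below. This is purely hypothetical; none of these names exist in libpq, and the local PG_PROTO macro merely stands in for the protocol version encoding the real code uses.

    #include <stdio.h>
    #include <stdbool.h>

    #define PG_PROTO(major, minor)  (((major) << 16) | (minor))

    /* Parse "3.7" or "3.2-3.7" into the range of acceptable versions. */
    static bool
    parse_protocol_version_range(const char *value, int *min_ver, int *max_ver)
    {
        int         lo_major, lo_minor, hi_major, hi_minor;

        if (sscanf(value, "%d.%d-%d.%d",
                   &lo_major, &lo_minor, &hi_major, &hi_minor) == 4)
        {
            *min_ver = PG_PROTO(lo_major, lo_minor);
            *max_ver = PG_PROTO(hi_major, hi_minor);
            return *min_ver <= *max_ver;
        }
        if (sscanf(value, "%d.%d", &lo_major, &lo_minor) == 2)
        {
            *min_ver = *max_ver = PG_PROTO(lo_major, lo_minor);
            return true;
        }
        return false;
    }

The client would then request *max_ver in the startup packet and close the connection only if the server's NegotiateProtocolVersion answer falls below *min_ver.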
{
"msg_contents": "On Thu, Aug 15, 2024 at 10:14 AM Heikki Linnakangas <[email protected]> wrote:\n> I'm back to working on the main patch here, to make cancellation keys\n> longer. New rebased version attached, with all the FIXMEs and TODOs from\n> the earlier version fixed. There was a lot of bitrot, too.\n\nI have a couple of questions/comments separate from the protocol changes:\n\nHas there been any work/discussion around not sending the cancel key\nin plaintext from psql? It's not a prerequisite or anything (the\nlonger length is a clear improvement either way), but it seems odd\nthat this longer \"secret\" is still just going to be exposed on the\nwire when you press Ctrl+C.\n\n> The first patch now introduces timingsafe_bcmp(), a function borrowed\n> from OpenBSD to perform a constant-time comparison. There's a configure\n> check to use the function from the OS if it's available, and includes a\n> copy of OpenBSD's implementation otherwise. Similar functions exist with\n> different names in OpenSSL (CRYPTO_memcmp) and NetBSD\n> (consttime_memequal), but it's a pretty simple function so I don't think\n> we need to work too hard to pick up those other native implementations.\n\nOne advantage to using other implementations is that _they're_ on the\nhook for keeping constant-time guarantees, which is getting trickier\ndue to weird architectural optimizations [1]. CRYPTO_memcmp() has\nalmost the same implementation as 0001 here, except they made the\ndecision to mark the pointers volatile, and they also provide\nhand-crafted assembly versions. This patch has OpenBSD's version, but\nthey've also turned on data-independent timing by default across their\nARM64 processors [2]. And Intel may require the same tweak, but it\ndoesn't look like userspace has access to that setting yet, and the\nkernel thread [3] appears to have just withered...\n\nFor the cancel key implementation in particular, I agree with you that\nit's probably not a serious problem. But if other security code starts\nusing timingsafe_bcmp() then it might be something to be concerned\nabout. Are there any platform/architecture combos that don't provide a\nnative timingsafe_bcmp() *and* need a DIT bit for safety?\n\nThanks,\n--Jacob\n\n[1] https://github.com/golang/go/issues/66450\n[2] https://github.com/openbsd/src/commit/cf1440f11c\n[3] https://lore.kernel.org/lkml/[email protected]/\n\n\n",
"msg_date": "Thu, 5 Sep 2024 08:43:31 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "Jacob Champion <[email protected]> writes:\n> Has there been any work/discussion around not sending the cancel key\n> in plaintext from psql? It's not a prerequisite or anything (the\n> longer length is a clear improvement either way), but it seems odd\n> that this longer \"secret\" is still just going to be exposed on the\n> wire when you press Ctrl+C.\n\nWasn't this already addressed in v17, by\n\nAuthor: Alvaro Herrera <[email protected]>\n2024-03-12 [61461a300] libpq: Add encrypted and non-blocking query cancellation\n\n? Perhaps we need to run around and make sure none of our standard\nclients use the old API anymore, but the libpq infrastructure is\nthere already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Sep 2024 12:21:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On Thu, Sep 5, 2024 at 9:21 AM Tom Lane <[email protected]> wrote:\n> Wasn't this already addressed in v17, by\n>\n> Author: Alvaro Herrera <[email protected]>\n> 2024-03-12 [61461a300] libpq: Add encrypted and non-blocking query cancellation\n>\n> ? Perhaps we need to run around and make sure none of our standard\n> clients use the old API anymore, but the libpq infrastructure is\n> there already.\n\nRight. From a quick grep, it looks like we have seven binaries using\nthe signal-based cancel handler.\n\n(For programs that only send a cancel request right before they break\nthe connection, it's probably not worth a huge amount of effort to\nchange it right away, but for psql in particular I think the status\nquo is a little weird.)\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Thu, 5 Sep 2024 09:33:52 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
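For context, the v17 libpq functions referred to above let a cancel request travel over an encrypted connection. A minimal blocking use looks roughly like the sketch below (error handling reduced to a single message). psql's difficulty is that its Ctrl+C handling currently runs inside a signal handler, where the old PQcancel() is documented as safe to call but this newer API is not, hence the refactoring mentioned later in the thread.

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Ask the server to cancel whatever is currently running on conn. */
    static void
    cancel_current_query(PGconn *conn)
    {
        PGcancelConn *cancelConn = PQcancelCreate(conn);

        if (cancelConn == NULL)
        {
            fprintf(stderr, "out of memory\n");
            return;
        }
        if (!PQcancelBlocking(cancelConn))
            fprintf(stderr, "could not send cancel request: %s\n",
                    PQcancelErrorMessage(cancelConn));
        PQcancelFinish(cancelConn);
    }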
{
"msg_contents": "On Thu, 5 Sept 2024 at 17:43, Jacob Champion\n<[email protected]> wrote:\n> Has there been any work/discussion around not sending the cancel key\n> in plaintext from psql? It's not a prerequisite or anything (the\n> longer length is a clear improvement either way), but it seems odd\n> that this longer \"secret\" is still just going to be exposed on the\n> wire when you press Ctrl+C.\n\nTotally agreed that it would be good to update psql to use the new\nmuch more secure libpq function introduced in PG17[1]. This is not a\ntrivial change though because it requires refactoring the way we\nhandle signals (which is why I didn't do it as part of introducing\nthese new APIs). I had hoped that the work in [2] would either do that\nor at least make it a lot easier, but that thread seems to have\nstalled. So +1 for doing this, but I think it's a totally separate\nchange and so should be discussed on a separate thread.\n\n[1]: https://www.postgresql.org/docs/17/libpq-cancel.html#LIBPQ-CANCEL-FUNCTIONS\n[2]: https://www.postgresql.org/message-id/flat/20240331222502.03b5354bc6356bc5c388919d%40sraoss.co.jp#1450c8fee45408acaa5b5a1b9a6f70fc\n\n> For the cancel key implementation in particular, I agree with you that\n> it's probably not a serious problem. But if other security code starts\n> using timingsafe_bcmp() then it might be something to be concerned\n> about. Are there any platform/architecture combos that don't provide a\n> native timingsafe_bcmp() *and* need a DIT bit for safety?\n\nIt sounds to me like we should at least use OpenSSL's CRYPTO_memcmp if\nwe linked against it and the OS doesn't provide a timingsafe_bcmp.\nWould that remove your concerns? I expect anyone that cares about\nsecurity to link against some TLS library. That way our \"fallback\"\nimplementation is only used on the rare systems where that's not the\ncase.\n\n\n",
"msg_date": "Thu, 5 Sep 2024 18:36:10 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
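One possible shape for the fallback selection Jelte suggests, with the configure macro name (HAVE_TIMINGSAFE_BCMP) assumed rather than taken from an actual patch:

    #include <stddef.h>

    #if defined(HAVE_TIMINGSAFE_BCMP)
    /* the OS provides timingsafe_bcmp(); use it directly */
    #elif defined(USE_OPENSSL)
    #include <openssl/crypto.h>

    int
    timingsafe_bcmp(const void *b1, const void *b2, size_t n)
    {
        /* CRYPTO_memcmp() returns 0 on equality, like memcmp() */
        return CRYPTO_memcmp(b1, b2, n) != 0;
    }
    #else
    /* otherwise fall back to the portable byte-by-byte loop */
    #endif

That keeps the home-grown loop as a last resort, used only in builds that have neither an OS-provided function nor OpenSSL.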
{
"msg_contents": "On Thu, Sep 5, 2024 at 9:36 AM Jelte Fennema-Nio <[email protected]> wrote:\n> Totally agreed that it would be good to update psql to use the new\n> much more secure libpq function introduced in PG17[1]. This is not a\n> trivial change though because it requires refactoring the way we\n> handle signals (which is why I didn't do it as part of introducing\n> these new APIs).\n\nYeah, I figured it wasn't a quick fix.\n\n> I had hoped that the work in [2] would either do that\n> or at least make it a lot easier, but that thread seems to have\n> stalled. So +1 for doing this, but I think it's a totally separate\n> change and so should be discussed on a separate thread.\n\nAs long as the new thread doesn't also stall out/get forgotten, I'm happy.\n\n> It sounds to me like we should at least use OpenSSL's CRYPTO_memcmp if\n> we linked against it and the OS doesn't provide a timingsafe_bcmp.\n> Would that remove your concerns?\n\nIf we go that direction, I'd still like to know which platforms we\nexpect to have a suboptimal port, if for no other reason than\ndocumenting that those users should try to get OpenSSL into their\nbuilds. (I agree that they probably will already, if they care.)\n\nAnd if I'm being really picky, I'm not sure we should call our port\n\"timingsafe_bcmp\" (vs. pg_secure_bcmp or something) if we know it's\nnot actually timing-safe for some. But I won't die on that hill.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Thu, 5 Sep 2024 10:08:23 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
},
{
"msg_contents": "On Fri, Aug 16, 2024 at 11:29 AM Robert Haas <[email protected]> wrote:\n> > I'll split this patch like that, to make it easier to compare and merge\n> > with Jelte's corresponding patches.\n>\n> That sounds great. IMHO, comparing and merging the patches is the next\n> step here and would be great to see.\n\nHeikki, do you have any update on this work?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Sep 2024 11:58:34 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make query cancellation keys longer"
}
] |
[
{
"msg_contents": "We are now hours away from starting the last commitfest for v17 and AFAICS\nthere have been no volunteers for the position of Commitfest manager (cfm) yet.\nAs per usual it's likely beneficial if the CFM of the last CF before freeze is\nsomeone with an seasoned eye to what can make it and what can't, but the\nimportant part is that we get someone with the time and energy to invest.\n\nAnyone interested?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 29 Feb 2024 22:26:38 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest Manager for March"
},
{
"msg_contents": "> On 29 Feb 2024, at 22:26, Daniel Gustafsson <[email protected]> wrote:\n\n> We are now hours away from starting the last commitfest for v17\n\nIt is now March 1 in all timezones, so I have switched 202403 to In Progress\nand 202307 to Open. There are a total of 331 patches registered with 286 of\nthose in an open state, 24 of those have been around for 10 CF's or more.\n\nThe call for a CFM volunteer is still open.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 1 Mar 2024 13:29:25 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest Manager for March"
},
{
"msg_contents": "\n\n> On 1 Mar 2024, at 17:29, Daniel Gustafsson <[email protected]> wrote:\n> \n> The call for a CFM volunteer is still open.\n\nI always wanted to try. And most of the stuff I'm interested in is already committed.\n\nBut given importance of last commitfest before feature freeze, we might be interested in more experienced CFM.\nIf I can do something useful - I'm up for it.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 1 Mar 2024 18:57:35 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager for March"
},
{
"msg_contents": "> On 1 Mar 2024, at 14:57, Andrey M. Borodin <[email protected]> wrote:\n> \n>> On 1 Mar 2024, at 17:29, Daniel Gustafsson <[email protected]> wrote:\n>> \n>> The call for a CFM volunteer is still open.\n> \n> I always wanted to try. And most of the stuff I'm interested in is already committed.\n> \n> But given importance of last commitfest before feature freeze, we might be interested in more experienced CFM.\n> If I can do something useful - I'm up for it.\n\nI'm absolutely convinced you have more than enough experience with postgres\nhacking to do an excellent job. I'm happy to give a hand as well.\n\nThanks for volunteering!\n\n(someone from pginfra will give you the required admin permissions on the CF\napp)\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Sat, 2 Mar 2024 18:34:10 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest Manager for March"
},
{
"msg_contents": "Hi,\n\n> >> The call for a CFM volunteer is still open.\n> >\n> > I always wanted to try. And most of the stuff I'm interested in is already committed.\n> >\n> > But given importance of last commitfest before feature freeze, we might be interested in more experienced CFM.\n> > If I can do something useful - I'm up for it.\n>\n> I'm absolutely convinced you have more than enough experience with postgres\n> hacking to do an excellent job. I'm happy to give a hand as well.\n>\n> Thanks for volunteering!\n>\n> (someone from pginfra will give you the required admin permissions on the CF\n> app)\n\nThanks for volunteering, Andrey!\n\nIf you need any help please let me know.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Mar 2024 15:09:53 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager for March"
},
{
"msg_contents": "\n\n> On 4 Mar 2024, at 17:09, Aleksander Alekseev <[email protected]> wrote:\n> \n> If you need any help please let me know.\n\nAleksander, I would greatly appreciate if you join me in managing CF. Together we can move more stuff :)\nCurrently, I'm going through \"SQL Commands\". And so far I had not come to \"Performance\" and \"Server Features\" at all... So if you can handle updating statuses of that sections - that would be great.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 8 Mar 2024 16:09:45 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager for March"
},
{
"msg_contents": "Hi Andrey,\n\n> > If you need any help please let me know.\n>\n> Aleksander, I would greatly appreciate if you join me in managing CF. Together we can move more stuff :)\n> Currently, I'm going through \"SQL Commands\". And so far I had not come to \"Performance\" and \"Server Features\" at all... So if you can handle updating statuses of that sections - that would be great.\n\nOK, I'll take care of the \"Performance\" and \"Server Features\"\nsections. I submitted my summaries of the entries triaged so far to\nthe corresponding thread [1].\n\n[1]: https://www.postgresql.org/message-id/CAJ7c6TN9SnYdq%3DkfP-txgo5AaT%2Bt9YU%2BvQHfLBZqOBiHwoipAg%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 12 Mar 2024 16:40:59 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager for March"
},
{
"msg_contents": "Hi Aleksander Alekseev\n Could you take a look at the patch (\nhttps://commitfest.postgresql.org/47/4284/),How about your opinion\n\nThanks\n\nOn Tue, 12 Mar 2024 at 21:41, Aleksander Alekseev <[email protected]>\nwrote:\n\n> Hi Andrey,\n>\n> > > If you need any help please let me know.\n> >\n> > Aleksander, I would greatly appreciate if you join me in managing CF.\n> Together we can move more stuff :)\n> > Currently, I'm going through \"SQL Commands\". And so far I had not come\n> to \"Performance\" and \"Server Features\" at all... So if you can handle\n> updating statuses of that sections - that would be great.\n>\n> OK, I'll take care of the \"Performance\" and \"Server Features\"\n> sections. I submitted my summaries of the entries triaged so far to\n> the corresponding thread [1].\n>\n> [1]:\n> https://www.postgresql.org/message-id/CAJ7c6TN9SnYdq%3DkfP-txgo5AaT%2Bt9YU%2BvQHfLBZqOBiHwoipAg%40mail.gmail.com\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n>\n>\n\n Hi Aleksander Alekseev Could you take a look at the patch (https://commitfest.postgresql.org/47/4284/),How about your opinionThanks On Tue, 12 Mar 2024 at 21:41, Aleksander Alekseev <[email protected]> wrote:Hi Andrey,\n\n> > If you need any help please let me know.\n>\n> Aleksander, I would greatly appreciate if you join me in managing CF. Together we can move more stuff :)\n> Currently, I'm going through \"SQL Commands\". And so far I had not come to \"Performance\" and \"Server Features\" at all... So if you can handle updating statuses of that sections - that would be great.\n\nOK, I'll take care of the \"Performance\" and \"Server Features\"\nsections. I submitted my summaries of the entries triaged so far to\nthe corresponding thread [1].\n\n[1]: https://www.postgresql.org/message-id/CAJ7c6TN9SnYdq%3DkfP-txgo5AaT%2Bt9YU%2BvQHfLBZqOBiHwoipAg%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 20 Mar 2024 21:13:17 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager for March"
},
{
"msg_contents": "\n\n> On 1 Mar 2024, at 17:29, Daniel Gustafsson <[email protected]> wrote:\n> \n> It is now March 1 in all timezones, so I have switched 202403 to In Progress\n> and 202307 to Open. There are a total of 331 patches registered with 286 of\n> those in an open state, 24 of those have been around for 10 CF's or more.\n\n\nAs of April 1 there are 97 committed patches. Incredible amount of work! Thanks to all committers, patch authors and reviewers.\nStill there are 205 open patches, 19 of them are there for 10+ CFs. I considered reducing this number by rejecting couple of my patches. But that author-part of my brain resisted in an offlist conversation.\n\nI think it makes sense to close this commitfest only after Feature Freeze on April 8, 2024 at 0:00 AoE.\nWhat do you think?\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 1 Apr 2024 11:05:10 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager for March"
},
{
"msg_contents": "\n\n> On 20 Mar 2024, at 18:13, wenhui qiu <[email protected]> wrote:\n> \n> Could you take a look at the patch (https://commitfest.postgresql.org/47/4284/),How about your opinion\n\nThe patch is currently in the \"Ready for Committer\" state. It's up to the committer to decide which one to pick. There are 22 proposed patches and many committers are dealing with the consequences of what has already been committed (build farm failures, open issues, post-commit notifications, etc.).\nIf I was a committer, I wouldn't commit a new join kind at the door of feature freeze. If I was the author of this patch, I'd consider attracting more reviewers and testers to increase chances of the patch in July CF.\nThanks for your work!\n\n\nBest regards, Andrey Borodin.\n\nPS. Plz post the answer below quoted text. That’s common practice here.\n\n",
"msg_date": "Mon, 1 Apr 2024 11:16:50 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager for March"
},
{
"msg_contents": "On 01/04/2024 09:05, Andrey M. Borodin wrote:\n> I think it makes sense to close this commitfest only after Feature Freeze on April 8, 2024 at 0:00 AoE.\n> What do you think?\n\n+1. IIRC that's how it's been done in last commitfest in previous years too.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 1 Apr 2024 11:57:27 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager for March"
},
{
"msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Commitfest Manager for March\" on Mon, 1 Apr 2024 11:57:27 +0300,\n Heikki Linnakangas <[email protected]> wrote:\n\n> On 01/04/2024 09:05, Andrey M. Borodin wrote:\n>> I think it makes sense to close this commitfest only after Feature\n>> Freeze on April 8, 2024 at 0:00 AoE.\n>> What do you think?\n> \n> +1. IIRC that's how it's been done in last commitfest in previous\n> years too.\n\nThanks for extending the period.\n\nCould someone review my patches for this commitfest?\n\n1. https://commitfest.postgresql.org/47/4681/\n Make COPY format extendable: Extract COPY TO format implementations\n Status: Needs review\n\n2. https://commitfest.postgresql.org/47/4791/\n meson: Specify -Wformat as a common warning flag for extensions\n Status: Ready for Committer\n\n\nThanks,\n-- \nkou\n\n\n",
"msg_date": "Wed, 03 Apr 2024 10:36:12 +0900 (JST)",
"msg_from": "Sutou Kouhei <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager for March"
},
{
"msg_contents": "\n\n> On 1 Apr 2024, at 09:05, Andrey M. Borodin <[email protected]> wrote:\n> \n> As of April 1 there are 97 committed patches.\n\nHello everyone!\n\nMarch Commitfest is closed.\n~40 CF entries were committed since April 1st. Despite some drawbacks discussed in nearby threads, its still huge amount of work and significant progress for the project. Thanks to everyone involved!\n\nThe number is approximate, because in some cases I could not clearly determine status. I pinged authors to do so.\n\n26 entries are RWF\\Rejected\\Withdrawn. Michael noted that I moved to next CF some entries that wait for the author more then couple of weeks. I'm going to revise WoA items in 2024-07 and RwF some of them to reduce CF bloat. If authors are still interested in continuing work of returned items, they are free to provide an answer and to change status back to Needs Review.\n\nI've removed all \"target version = 17\" attribute and switched statuses of a lot of entries. I'm afraid there might be some errors. If I determined status of your patch erroneously, please accept my apologise and correct the error.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 9 Apr 2024 10:42:08 +0300",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager for March"
}
] |
[
{
"msg_contents": "Attached please find a patch to adjust the behavior of the pgbench program\nand make it behave like the other programs that connect to a database\n(namely, psql and pg_dump). Specifically, add support for using -d and\n--dbname to specify the name of the database. This means that -d can no\nlonger be used to turn on debugging mode, and the long option --debug must\nbe used instead.\n\nThis removes a long-standing footgun, in which people assume that the -d\noption behaves the same as other programs. Indeed, because it takes no\narguments, and because the first non-option argument is the database name,\nit still appears to work. However, people then wonder why pgbench is so\ndarn verbose all the time! :)\n\nThis is a breaking change, but fixing it this way seems to have the least\ntotal impact, as the number of people using the debug mode of pgbench is\nlikely quite small. Further, those already using the long option are\nunaffected, and those using the short one simply need to replace '-d' with\n'--debug', arguably making their scripts a little more self-documenting in\nthe process.\n\nCheers,\nGreg",
"msg_date": "Thu, 29 Feb 2024 19:05:13 -0500",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoiding inadvertent debugging mode for pgbench"
},
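As a standalone illustration of the proposed option semantics (this is not pgbench's actual code; variable names and the option table are simplified), -d/--dbname consumes a database name and debug output is reachable only through the --debug long option:

    #include <getopt.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(int argc, char **argv)
    {
        static const struct option long_options[] = {
            {"dbname", required_argument, NULL, 'd'},
            {"debug", no_argument, NULL, 1},    /* long option only */
            {NULL, 0, NULL, 0}
        };
        const char *dbname = NULL;
        bool        debug = false;
        int         c;

        while ((c = getopt_long(argc, argv, "d:", long_options, NULL)) != -1)
        {
            switch (c)
            {
                case 'd':       /* -d / --dbname now takes a value */
                    dbname = optarg;
                    break;
                case 1:         /* --debug */
                    debug = true;
                    break;
                default:
                    exit(1);
            }
        }
        printf("dbname=%s debug=%s\n",
               dbname ? dbname : "(none)", debug ? "on" : "off");
        return 0;
    }

With this shape, pgbench -d mydb selects the database, and scripts that relied on -d for verbosity have to spell out --debug instead.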
{
"msg_contents": "On Thu, Feb 29, 2024 at 07:05:13PM -0500, Greg Sabino Mullane wrote:\n> Attached please find a patch to adjust the behavior of the pgbench program\n> and make it behave like the other programs that connect to a database\n> (namely, psql and pg_dump). Specifically, add support for using -d and\n> --dbname to specify the name of the database. This means that -d can no\n> longer be used to turn on debugging mode, and the long option --debug must\n> be used instead.\n> \n> This removes a long-standing footgun, in which people assume that the -d\n> option behaves the same as other programs. Indeed, because it takes no\n> arguments, and because the first non-option argument is the database name,\n> it still appears to work. However, people then wonder why pgbench is so\n> darn verbose all the time! :)\n> \n> This is a breaking change, but fixing it this way seems to have the least\n> total impact, as the number of people using the debug mode of pgbench is\n> likely quite small. Further, those already using the long option are\n> unaffected, and those using the short one simply need to replace '-d' with\n> '--debug', arguably making their scripts a little more self-documenting in\n> the process.\n\nI think this is a generally reasonable proposal, except I don't know\nwhether this breakage is acceptable. AFAICT there are two fundamental\nbehavior changes folks would observe:\n\n* \"-d <database_name>\" would cease to emit the debugging output, and while\n enabling debug mode might've been unintentional in most cases, it might\n actually have been intentional in others.\n\n* \"-d\" with no argument or with a following option would begin failing, and\n users would need to replace \"-d\" with \"--debug\".\n\nNeither of these seems particularly severe to me, especially for a\nbenchmarking program, but I'd be curious to hear what others think.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Mar 2024 16:41:54 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
},
{
"msg_contents": "On 3/1/24 23:41, Nathan Bossart wrote:\n> On Thu, Feb 29, 2024 at 07:05:13PM -0500, Greg Sabino Mullane wrote:\n>> Attached please find a patch to adjust the behavior of the pgbench program\n>> and make it behave like the other programs that connect to a database\n>> (namely, psql and pg_dump). Specifically, add support for using -d and\n>> --dbname to specify the name of the database. This means that -d can no\n>> longer be used to turn on debugging mode, and the long option --debug must\n>> be used instead.\n>>\n>> This removes a long-standing footgun, in which people assume that the -d\n>> option behaves the same as other programs. Indeed, because it takes no\n>> arguments, and because the first non-option argument is the database name,\n>> it still appears to work. However, people then wonder why pgbench is so\n>> darn verbose all the time! :)\n>>\n>> This is a breaking change, but fixing it this way seems to have the least\n>> total impact, as the number of people using the debug mode of pgbench is\n>> likely quite small. Further, those already using the long option are\n>> unaffected, and those using the short one simply need to replace '-d' with\n>> '--debug', arguably making their scripts a little more self-documenting in\n>> the process.\n> \n> I think this is a generally reasonable proposal, except I don't know\n> whether this breakage is acceptable. AFAICT there are two fundamental\n> behavior changes folks would observe:\n> \n> * \"-d <database_name>\" would cease to emit the debugging output, and while\n> enabling debug mode might've been unintentional in most cases, it might\n> actually have been intentional in others.\n> \n\nI think this is the more severe of the two issues, because it's a silent\nchange. Everything will seem to work, but the user won't get the debug\ninfo (if they actually wanted it).\n\n> * \"-d\" with no argument or with a following option would begin failing, and\n> users would need to replace \"-d\" with \"--debug\".\n> \n\nI think this is fine.\n\n> Neither of these seems particularly severe to me, especially for a\n> benchmarking program, but I'd be curious to hear what others think.\n> \n\nI agree the -d option may be confusing, but is it worth it? I don't\nknow, it depends on how often people actually get confused by it, and I\ndon't recall hitting this (nor hearing about others). To be honest I\ndidn't even realize pgbench even has a debug switch ...\n\nBut I'd like to mention this is far from our only tool using \"-d\" to\nenable debug mode. A quick git-grep shows postgres, initdb,\npg_archivecleanup and pg_combinebackup do the same thing. So maybe it's\nnot that inconsistent overall.\n\n(Note: I didn't check if the other cases may lead to the same confusion\nwhere people enable debug accidentally. Maybe not.)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 2 Mar 2024 00:07:49 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
},
{
"msg_contents": "On Fri, Mar 1, 2024, at 8:07 PM, Tomas Vondra wrote:\n> On 3/1/24 23:41, Nathan Bossart wrote:\n> > \n> > I think this is a generally reasonable proposal, except I don't know\n> > whether this breakage is acceptable. AFAICT there are two fundamental\n> > behavior changes folks would observe:\n> > \n> > * \"-d <database_name>\" would cease to emit the debugging output, and while\n> > enabling debug mode might've been unintentional in most cases, it might\n> > actually have been intentional in others.\n> > \n> \n> I think this is the more severe of the two issues, because it's a silent\n> change. Everything will seem to work, but the user won't get the debug\n> info (if they actually wanted it).\n\nIndeed. Hopefully the user will notice soon when inspecting the standard error\noutput.\n\n> > * \"-d\" with no argument or with a following option would begin failing, and\n> > users would need to replace \"-d\" with \"--debug\".\n> > \n> \n> I think this is fine.\n\nYeah. It will force the user to fix it immediately.\n\n> > Neither of these seems particularly severe to me, especially for a\n> > benchmarking program, but I'd be curious to hear what others think.\n> > \n> \n> I agree the -d option may be confusing, but is it worth it? I don't\n> know, it depends on how often people actually get confused by it, and I\n> don't recall hitting this (nor hearing about others). To be honest I\n> didn't even realize pgbench even has a debug switch ...\n\nI'm the one that has a habit to use -d to specify the database name. I\ngenerally include -d for pgbench and then realized that I don't need the debug\ninformation because it is not for database specification.\n\n> But I'd like to mention this is far from our only tool using \"-d\" to\n> enable debug mode. A quick git-grep shows postgres, initdb,\n> pg_archivecleanup and pg_combinebackup do the same thing. So maybe it's\n> not that inconsistent overall.\n\nAs Greg said none of these programs connects to the database.\n\nI don't like to break backward compatibility but in this case I suspect that it\nis ok. I don't recall the last time I saw a script that makes use of -d option.\nHow often do you need a pgbench debug information?\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Fri, Mar 1, 2024, at 8:07 PM, Tomas Vondra wrote:On 3/1/24 23:41, Nathan Bossart wrote:> > I think this is a generally reasonable proposal, except I don't know> whether this breakage is acceptable. AFAICT there are two fundamental> behavior changes folks would observe:> > * \"-d <database_name>\" would cease to emit the debugging output, and while> enabling debug mode might've been unintentional in most cases, it might> actually have been intentional in others.> I think this is the more severe of the two issues, because it's a silentchange. Everything will seem to work, but the user won't get the debuginfo (if they actually wanted it).Indeed. Hopefully the user will notice soon when inspecting the standard erroroutput.> * \"-d\" with no argument or with a following option would begin failing, and> users would need to replace \"-d\" with \"--debug\".> I think this is fine.Yeah. It will force the user to fix it immediately.> Neither of these seems particularly severe to me, especially for a> benchmarking program, but I'd be curious to hear what others think.> I agree the -d option may be confusing, but is it worth it? I don'tknow, it depends on how often people actually get confused by it, and Idon't recall hitting this (nor hearing about others). 
To be honest Ididn't even realize pgbench even has a debug switch ...I'm the one that has a habit to use -d to specify the database name. Igenerally include -d for pgbench and then realized that I don't need the debuginformation because it is not for database specification.But I'd like to mention this is far from our only tool using \"-d\" toenable debug mode. A quick git-grep shows postgres, initdb,pg_archivecleanup and pg_combinebackup do the same thing. So maybe it'snot that inconsistent overall.As Greg said none of these programs connects to the database.I don't like to break backward compatibility but in this case I suspect that itis ok. I don't recall the last time I saw a script that makes use of -d option.How often do you need a pgbench debug information?--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 01 Mar 2024 21:41:41 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
},
{
"msg_contents": "On 2024-Mar-01, Euler Taveira wrote:\n\n> I don't like to break backward compatibility but in this case I suspect that it\n> is ok. I don't recall the last time I saw a script that makes use of -d option.\n> How often do you need a pgbench debug information?\n\nI wondered what the difference actually is, so I checked. In -i mode,\nthe only difference is that if the tables don't exist before hand, we\nreceive the NOTICE that it doesn't. In normal mode, the -d switch emits\nso much junk that I would believe if somebody told me that passing -d\ndistorted the benchmark results; and it's hard to believe that such\noutput is valuable for anything other than debugging pgbench itself.\n\nAll in all, I support the original patch.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I love the Postgres community. It's all about doing things _properly_. :-)\"\n(David Garamond)\n\n\n",
"msg_date": "Mon, 11 Mar 2024 16:59:36 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 04:59:36PM +0100, Alvaro Herrera wrote:\n> All in all, I support the original patch.\n\nI'll commit this in a few days if there are no objections.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Mar 2024 13:47:53 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 01:47:53PM -0500, Nathan Bossart wrote:\n> On Mon, Mar 11, 2024 at 04:59:36PM +0100, Alvaro Herrera wrote:\n>> All in all, I support the original patch.\n> \n> I'll commit this in a few days if there are no objections.\n\nActually, I just took a look at the patch and it appears to need a rebase\nas well as additional documentation updates for the new -d/--dbname option.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Mar 2024 15:22:35 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
},
{
"msg_contents": "Rebased version attached (v2), with another sentence in the sgml to explain\nthe optional use of -d\n\nCheers,\nGreg",
"msg_date": "Tue, 19 Mar 2024 21:15:22 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 09:15:22PM -0400, Greg Sabino Mullane wrote:\n> Rebased version attached (v2), with another sentence in the sgml to explain\n> the optional use of -d\n\ncfbot seems quite unhappy with this:\n\n\thttps://cirrus-ci.com/build/6429518263484416\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Mar 2024 10:16:18 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
},
{
"msg_contents": "My mistake. Attached please find version 3, which should hopefully make\ncfbot happy again.",
"msg_date": "Wed, 20 Mar 2024 11:57:25 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nDid a quick review of this one; CFbot is now happy, local regression tests all pass.\r\n\r\nI think the idea here is sane; it's particularly confusing for some tools included in the main distribution to have different semantics, and this seems low on the breakage risk here, so worth the tradeoffs IMO.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Thu, 21 Mar 2024 19:42:24 +0000",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 11:57:25AM -0400, Greg Sabino Mullane wrote:\n> My mistake. Attached please find version 3, which should hopefully make\n> cfbot happy again.\n\nHere is what I have staged for commit. I plan to commit this within the\nnext few days.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 21 Mar 2024 16:08:49 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
},
{
"msg_contents": "Committed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Mar 2024 11:13:36 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding inadvertent debugging mode for pgbench"
}
] |
[
{
"msg_contents": "Hi,\n\nThis original patch made by Tomas improves the usability of extended statistics, \nso I rebased it on 362de947, and I'd like to re-start developing it.\n \nThe previous thread [1] suggested something to solve. I'll try to solve it as \nbest I can, but I think this feature is worth it with some limitations.\nPlease find the attached file.\n\n[1] https://www.postgresql.org/message-id/flat/8081617b-d80f-ae2b-b79f-ea7e926f9fcf%40enterprisedb.com\n\nRegards,\nTatsuro Yamada\nNTT Open Source Software Center",
"msg_date": "Fri, 1 Mar 2024 00:19:42 +0000",
"msg_from": "Tatsuro Yamada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Showing applied extended statistics in explain Part 2"
},
{
"msg_contents": "On 3/1/24 01:19, Tatsuro Yamada wrote:\n> Hi,\n> \n> This original patch made by Tomas improves the usability of extended statistics, \n> so I rebased it on 362de947, and I'd like to re-start developing it.\n> \n> The previous thread [1] suggested something to solve. I'll try to solve it as \n> best I can, but I think this feature is worth it with some limitations.\n> Please find the attached file.\n> \n\nThank you for the interest in moving this patch forward. And I agree\nit's worth to cut some of the stuff if it's necessary to make it work.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 2 Mar 2024 01:11:26 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Showing applied extended statistics in explain Part 2"
},
{
"msg_contents": "Hello Yamada-san,\n\nI finally got to look at this patch again - apologies for taking so\nlong, I'm well aware it's rather frustrating to wait for feedback. I'll\ntry to pay more attention to this patch, and don't hesitate to ping me\noff-list if immediate input is needed.\n\nI looked at the patch from March 1 [1], which applies mostly with some\nminor bitrot, but no major conflicts. A couple review comments:\n\n\n1) The patch is not added to the CF app, which I think is a mistake. Can\nyou please add it to the 2024-07 commitfest? Otherwise people may not be\naware of it, won't do reviews etc. It'll require posting a rebased\npatch, but should not be a big deal.\n\n\n2) Not having the patch in a CF also means cfbot is not running tests on\nit. Which is unfortunate, because the patch actually has an a bug cfbot\nwould find - I've noticed it after running the tests through the github\nCI, see [2].\n\nFWIW I very much recommend setting up this CI and using it during\ndevelopment, it turned out to be very valuable for me as it tests on a\nrange of systems, and more convenient than the rpi5 machines I used for\nthat purposes before. See src/tools/ci/README for details.\n\n\n3) The bug has this symptom:\n\n ERROR: unrecognized node type: 268\n CONTEXT: PL/pgSQL function check_estimated_rows(text) line 7 ...\n STATEMENT: SELECT * FROM check_estimated_rows('SELECT a, b FROM ...\n\nbut it only shows on the FreeBSD machine (in CI). But that's simply\nbecause that's running tests with \"-DCOPY_PARSE_PLAN_TREES\", which\nalways copies the query plan, to make sure all the nodes can be copied.\nAnd we don't have a copy function for the StatisticExtInfo node (that's\nthe node 268), so it fails.\n\nFWIW you can have this failure even on a regular build, you just need to\ndo explain on a prepared statement (with an extended statistics):\n\n CREATE TABLE t (a int, b int);\n\n INSERT INTO t SELECT (i/100), (i/100)\n FROM generate_series(1,1000) s(i);\n\n CREATE STATISTICS ON a,b FROM t;\n\n ANALYZE t;\n\n PREPARE ps (INT, INT) AS SELECT * FROM t WHERE a = $1 AND b = $2;\n\n EXPLAIN EXECUTE ps(5,5);\n\n ERROR: unrecognized node type: 268\n\n\n4) I can think of two basic ways to fix this issue - either allow\ncopying of the StatisticExtInto node, or represent the information in a\ndifferent way (e.g. add a new node for that purpose, or use existing\nnodes to do that).\n\nI don't think we should do the first thing - the StatisticExtInfo is in\npathnodes.h, is designed to be used during path construction, has a lot\nof fields that we likely don't need to show stuff in explain - but we'd\nneed to copy those too, and that seems useless / expensive.\n\nSo I suggest we invent a new (much simpler) node, tracking only the bits\nwe actually need for in the explain part. Or alternatively, if you think\nadding a separate node is an overkill, maybe we could keep just an OID\nof the statistics we applied, and the explain would lookup the name?\n\nBut I think having a new node might might also make the patch simpler,\nas the new struct could combine the information the patch keeps in three\nseparate lists. Instead, there's be just one new list in Plan, members\nwould be the new node type, and each element would be\n\n (statistics OID, list of clauses, flag is_or)\n\nor something like that.\n\n\n5) In [3] Tom raised two possible issues with doing this - cost of\ncopying the information, and locking. For the performance concerns, I\nthink the first thing we should do is measuring how expensive it is. 
I\nsuggest measuring the overhead for about three basic cases:\n\n - table with no extended stats\n - table with a couple (1-10?) extended stats\n - table with a lot of (100?) extended stats\n\nAnd see the impact of the patch. That is, measure the planning time with\nmaster and with the patch applied. The table size does not matter much,\nI think - this should measure just the planning, not execute the query.\nIn practice the extra costs will get negligible as the execution time\ngrows. But we're measuring the worst case, so that's fine.\n\nFor the locking, I agree with Robert [4] that this probably should not\nbe an issue - I don't see why would this be different from indexes etc.\nBut I haven't thought about that too much, so maybe investigate and test\nthis a bit more (that plans get invalidated if the statistics changes,\nand so on).\n\n\n6) I'm not sure we want to have this under EXPLAIN (VERBOSE). It's what\nI did in the initial PoC patch, but maybe we should invent a new flag\nfor this purpose, otherwise VERBOSE will cover too much stuff? I'm\nthinking about \"STATS\" for example.\n\nThis would probably mean the patch should also add a new auto_explain\n\"log_\" flag to enable/disable this.\n\n\n7) The patch really needs some docs - I'd mention this in the EXPLAIN\ndocs, probably. There's also a chapter about estimates, maybe that\nshould mention this too? Try searching for places in the SGML docs\nmentioning extended stats and/or explain, I guess.\n\nFor tests, I guess stats_ext is the right place to test this. I'm not\nsure what's the best way to do this, though. If it's covered by VERBOSE,\nthat seems it might be unstable - and that would be an issue. But maybe\nwe might add a function similar to check_estimated_rows(), but verifying\n the query used the expected statistics to estimate expected clauses.\n\nBut maybe with the new explain \"STATS\" flag it would be easier, because\nwe could do EXPLAIN (COSTS OFF, STATS ON) and that would be stable.\n\nAs for what needs to be tested, I don't think we need to test how we\nmatch queries/clauses to statistics - that's already tested. It's fine\nto focus just on displaying the expected stuff. I'd take a couple of the\nexisting tests, and check those. And then also add a couple tests for\nprepared statements, and invalidation of a plan after an extended\nstatistics gets dropped, etc.\n\n\nSo there's stuff to do to make this committable, but hopefully this\nreview gives you some guidance regarding what/how ;-)\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/TYYPR01MB82310B308BA8770838F681619E5E2%40TYYPR01MB8231.jpnprd01.prod.outlook.com\n\n[2] https://cirrus-ci.com/build/6436352672137216\n\n[3] https://www.postgresql.org/message-id/459863.1627419001%40sss.pgh.pa.us\n\n[4]\nhttps://www.postgresql.org/message-id/CA%2BTgmoZU34zo4%3DhyqgLH16iGpHQ6%2BQAesp7k5a1cfZB%3D%2B9xtsw%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 12 Jun 2024 13:13:56 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Showing applied extended statistics in explain Part 2"
},
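As a purely illustrative follow-up to the suggestion above of a new, much simpler node carrying (statistics OID, list of clauses, is_or flag): a plannodes.h-style struct for it could look roughly like the sketch below. The node name and field names are assumptions, not part of any posted patch, and the usual postgres.h / nodes headers are assumed.

    /*
     * Hypothetical sketch only: one element per extended statistics object
     * applied during selectivity estimation, kept in a new List in Plan so
     * EXPLAIN can print which statistics matched which clauses.
     */
    typedef struct AppliedExtStats
    {
        NodeTag     type;
        Oid         stat_oid;   /* OID of the applied pg_statistic_ext entry */
        List       *clauses;    /* clauses estimated using this statistics */
        bool        is_or;      /* do the clauses come from an OR clause? */
    } AppliedExtStats;

Keeping the three pieces of information together in one list element avoids maintaining three parallel lists, and such a node (unlike StatisticExtInfo) can easily get copy/equal/out support generated for it.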
{
"msg_contents": "Hi Tomas!\n\nThanks for the comments!\n\n1) The patch is not added to the CF app, which I think is a mistake. Can\n> you please add it to the 2024-07 commitfest? Otherwise people may not be\n> aware of it, won't do reviews etc. It'll require posting a rebased\n> patch, but should not be a big deal.\n>\n\nI added the patch to the 2024-07 commitfest today.\n\n\n2) Not having the patch in a CF also means cfbot is not running tests on\n> it. Which is unfortunate, because the patch actually has an a bug cfbot\n> would find - I've noticed it after running the tests through the github\n> CI, see [2].\n> 3) The bug has this symptom:\n> ERROR: unrecognized node type: 268\n> CONTEXT: PL/pgSQL function check_estimated_rows(text) line 7 ...\n> STATEMENT: SELECT * FROM check_estimated_rows('SELECT a, b FROM ...\n> 4) I can think of two basic ways to fix this issue - either allow\n> copying of the StatisticExtInto node, or represent the information in a\n> different way (e.g. add a new node for that purpose, or use existing\n> nodes to do that).\n>\n\nThanks for the info. I'll investigate using cfbot.\nTo fix the problem, I understand we need to create a new struct like\n(statistics OID, list of clauses, flag is_or).\n\n\n5) In [3] Tom raised two possible issues with doing this - cost of\n> copying the information, and locking. For the performance concerns, I\n> think the first thing we should do is measuring how expensive it is. I\n> suggest measuring the overhead for about three basic cases:\n>\n\nOkay, I'll measure it once the patch is completed and check the overhead.\nI read [3][4] and in my opinion I agree with Robert.\nAs with indexes, there should be a mechanism for determining whether\nextended statistics are used or not. If it were available, users would be\nable to\ntune using extended statistics and get better execution plans.\n\n\n\n> 6) I'm not sure we want to have this under EXPLAIN (VERBOSE). It's what\n> I did in the initial PoC patch, but maybe we should invent a new flag\n> for this purpose, otherwise VERBOSE will cover too much stuff? I'm\n> thinking about \"STATS\" for example.\n>\n> This would probably mean the patch should also add a new auto_explain\n> \"log_\" flag to enable/disable this.\n>\n\nI thought it might be better to do this, so I'll fix it.\n\n\n\n> 7) The patch really needs some docs - I'd mention this in the EXPLAIN\n> docs, probably. There's also a chapter about estimates, maybe that\n> should mention this too? Try searching for places in the SGML docs\n> mentioning extended stats and/or explain, I guess.\n>\n\nI plan to create documentation after the specifications are finalized.\n\n\n\n> For tests, I guess stats_ext is the right place to test this. I'm not\n> sure what's the best way to do this, though. If it's covered by VERBOSE,\n> that seems it might be unstable - and that would be an issue. But maybe\n> we might add a function similar to check_estimated_rows(), but verifying\n> the query used the expected statistics to estimate expected clauses.\n>\n\nAs for testing, I think it's more convenient for reviewers to include it in\nthe patch,\nso I'm thinking of including it in the next patch.\n\n\n\nSo there's stuff to do to make this committable, but hopefully this\n> review gives you some guidance regarding what/how ;-)\n>\n\nThank you! 
It helps me a lot!\n\nThe attached patch does not correspond to the above comment.\nBut it does solve some of the issues mentioned in previous threads.\n\nThe next patch is planned to include:\n6) Add stats option to explain command\n8) Add regression test (stats_ext.sql)\n4) Add new node (resolve errors in cfbot and prepared statement)\n\nRegards,\nTatsuro Yamada\n\n\n\n> [1]\n>\n> https://www.postgresql.org/message-id/TYYPR01MB82310B308BA8770838F681619E5E2%40TYYPR01MB8231.jpnprd01.prod.outlook.com\n>\n> [2] https://cirrus-ci.com/build/6436352672137216\n>\n> [3]\n> https://www.postgresql.org/message-id/459863.1627419001%40sss.pgh.pa.us\n>\n> [4]\n>\n> https://www.postgresql.org/message-id/CA%2BTgmoZU34zo4%3DhyqgLH16iGpHQ6%2BQAesp7k5a1cfZB%3D%2B9xtsw%40mail.gmail.com\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>",
"msg_date": "Wed, 26 Jun 2024 18:06:43 +0900",
"msg_from": "Tatsuro Yamada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Showing applied extended statistics in explain Part 2"
},
{
"msg_contents": "Hi Tomas,\n\nThe attached patch does not correspond to the above comment.\n> But it does solve some of the issues mentioned in previous threads.\n>\n\nOops, I made a mistake sending a patch on my previous email.\nAttached patch is the right patch.\n\nRegards,\nTatsuro Yamada",
"msg_date": "Fri, 28 Jun 2024 15:45:35 +0900",
"msg_from": "Tatsuro Yamada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Showing applied extended statistics in explain Part 2"
},
{
"msg_contents": "Hi Tomas and All,\n\nAttached file is a new patch including:\n 6) Add stats option to explain command\n 7) The patch really needs some docs (partly)\n\n >4) Add new node (resolve errors in cfbot and prepared statement)\n\nI tried adding a new node in pathnode.h, but it doesn't work well.\nSo, it needs more time to implement it successfully because this is\nthe first time to add a new node in it.\n\n\n> 8) Add regression test (stats_ext.sql)\n\n\nActually, I am not yet able to add new test cases to stats_ext.sql.\nInstead, I created a simple test (test.sql) and have attached it.\nAlso, output.txt is the test result.\n\nTo add new test cases to stats_ext.sql,\nI'd like to decide on a strategy for modifying it. In particular, there are\n381 places where the check_estimated_rows function is used, so should I\ninclude the same number of tests, or should we include the bare minimum\nof tests that cover the code path? I think only the latter would be fine.\nAny advice is appreciated. :-D\n\nP.S.\nI'm going to investigate how to use CI this weekend hopefully.\n\nRegards,\nTatsuro Yamada",
"msg_date": "Fri, 28 Jun 2024 20:16:03 +0900",
"msg_from": "Tatsuro Yamada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Showing applied extended statistics in explain Part 2"
},
{
"msg_contents": "Hi,\n\nThanks for working the feature. As a user, I find it useful, and I'd like to use\nit in v18! Although I've just started start looking into it, I have a few questions.\n\n\n(1)\n\nIs it better to make the order of output consistent? For example, even\nthough there are three clauses shown in the below case, the order does not\nmatch.\n* \"Filter\" shows that \"id1\" is first.\n* \"Ext Stats\" shows that \"id2\" is first.\n\n-- An example\nDROP TABLE IF EXISTS test;\nCREATE TABLE test (id1 int2, id2 int4, id3 int8, value varchar(32));\nINSERT INTO test (SELECT i%11, i%103, i%1009, 'hello' FROM generate_series(1,1000000) s(i));\ncreate statistics test_s1 on id1, id2 from test; analyze;\n\n=# EXPLAIN (STATS) SELECT * FROM test WHERE id1 = 1 AND (id2 = 2 OR id2 > 10);\n QUERY PLAN \n---------------------------------------------------------------------------------------\n Gather (cost=1000.00..23092.77 rows=84311 width=20)\n Workers Planned: 2\n -> Parallel Seq Scan on test (cost=0.00..13661.67 rows=35130 width=20)\n Filter: ((id1 = 1) AND ((id2 = 2) OR (id2 > 10))) -- here\n Ext Stats: public.test_s1 Clauses: (((id2 = 2) OR (id2 > 10)) AND (id1 = 1)) -- here\n(5 rows)\n\n\n\n(2)\n\nDo we really need the schema names without VERBOSE option? As in the above case,\n\"Ext Stats\" shows schema name \"public\", even though the table name \"test\" isn't\nshown with its schema name.\n\nAdditionally, if the VERBOSE option is specified, should the column names also be\nprinted with namespace?\n\n=# EXPLAIN (VERBOSE, STATS) SELECT * FROM test WHERE id1 = 1 AND (id2 = 2 OR id2 > 10);\n QUERY PLAN \n---------------------------------------------------------------------------------------\n Gather (cost=1000.00..22947.37 rows=82857 width=20)\n Output: id1, id2, id3, value\n Workers Planned: 2\n -> Parallel Seq Scan on public.test (cost=0.00..13661.67 rows=34524 width=20)\n Output: id1, id2, id3, value\n Filter: ((test.id1 = 1) AND ((test.id2 = 2) OR (test.id2 > 10)))\n Ext Stats: public.test_s1 Clauses: (((id2 = 2) OR (id2 > 10)) AND (id1 = 1)) -- here\n(7 rows)\n\n\n\n(3)\n\nI might be misunderstanding something, but do we need the clauses? Is there any\ncase where users would want to know the clauses? For example, wouldn't the\nfollowing be sufficient?\n\n> Ext Stats: id1, id2 using test_s1\n\n\n\n(4)\n\nThe extended statistics with \"dependencies\" or \"ndistinct\" option don't seem to\nbe shown in EXPLAIN output. Am I missing something? (Is this expected?)\n\nI tested the examples in the documentation. 
Although it might work with\n\"mcv\" option, I can't confirm that it works because \"unrecognized node type\"\nerror occurred in my environment.\nhttps://www.postgresql.org/docs/current/sql-createstatistics.html\n\n(It might be wrong since I'm beginner with extended stats codes.)\nIIUC, the reason is that the patch only handles statext_mcv_clauselist_selectivity(),\nand doesn't handle dependencies_clauselist_selectivity() and estimate_multivariate_ndistinct().\n\n\n-- doesn't work with \"dependencies\" option?\n=# \\dX\n List of extended statistics\n Schema | Name | Definition | Ndistinct | Dependencies | MCV \n--------+---------+--------------------+-----------+--------------+---------\n public | s1 | a, b FROM t1 | (null) | defined | (null)\n(2 rows)\n\n=# EXPLAIN (STATS, ANALYZE) SELECT * FROM t1 WHERE (a = 1) AND (b = 0);\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------\n Gather (cost=1000.00..11685.00 rows=100 width=8) (actual time=0.214..50.327 rows=100 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on t1 (cost=0.00..10675.00 rows=42 width=8) (actual time=30.300..46.610 rows=33 loops=3)\n Filter: ((a = 1) AND (b = 0))\n Rows Removed by Filter: 333300\n Planning Time: 0.246 ms\n Execution Time: 50.361 ms\n(8 rows)\n\n-- doesn't work with \"ndistinct\"?\n=# \\dX\n List of extended statistics\n Schema | Name | Definition | Ndistinct | Dependencies | MCV \n--------+------+------------------------------------------------------------------+-----------+--------------+--------\n public | s3 | date_trunc('month'::text, a), date_trunc('day'::text, a) FROM t3 | defined | (null) | (null)\n(1 row)\n\npostgres(437635)=# EXPLAIN (STATS, ANALYZE) SELECT * FROM t3\n WHERE date_trunc('month', a) = '2020-01-01'::timestamp;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------\n Seq Scan on t3 (cost=0.00..10210.01 rows=45710 width=8) (actual time=0.027..143.199 rows=44640 loops=1)\n Filter: (date_trunc('month'::text, a) = '2020-01-01 00:00:00'::timestamp without time zone)\n Rows Removed by Filter: 480961\n Planning Time: 0.088 ms\n Execution Time: 144.590 ms\n(5 rows)\n\n-- doesn't work with \"mvc\". It might work, but the error happens in my environments\n=# \\dX\n List of extended statistics\n Schema | Name | Definition | Ndistinct | Dependencies | MCV \n--------+------+--------------+-----------+--------------+---------\n public | s2 | a, b FROM t2 | (null) | (null) | defined\n(1 row)\n\n-- I encountered the error with the query.\n=# EXPLAIN (STATS, ANALYZE) SELECT * FROM t2 WHERE (a = 1) AND (b = 1);\nERROR: unrecognized node type: 268\n\n\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Fri, 12 Jul 2024 10:09:42 +0000",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Showing applied extended statistics in explain Part 2"
},
{
"msg_contents": "\n\nOn 6/26/24 11:06, Tatsuro Yamada wrote:\n> Hi Tomas!\n> \n> Thanks for the comments!\n> \n> 1) The patch is not added to the CF app, which I think is a mistake. Can\n>> you please add it to the 2024-07 commitfest? Otherwise people may not be\n>> aware of it, won't do reviews etc. It'll require posting a rebased\n>> patch, but should not be a big deal.\n>>\n> \n> I added the patch to the 2024-07 commitfest today.\n> \n> \n> 2) Not having the patch in a CF also means cfbot is not running tests on\n>> it. Which is unfortunate, because the patch actually has an a bug cfbot\n>> would find - I've noticed it after running the tests through the github\n>> CI, see [2].\n>> 3) The bug has this symptom:\n>> ERROR: unrecognized node type: 268\n>> CONTEXT: PL/pgSQL function check_estimated_rows(text) line 7 ...\n>> STATEMENT: SELECT * FROM check_estimated_rows('SELECT a, b FROM ...\n>> 4) I can think of two basic ways to fix this issue - either allow\n>> copying of the StatisticExtInto node, or represent the information in a\n>> different way (e.g. add a new node for that purpose, or use existing\n>> nodes to do that).\n>>\n> \n> Thanks for the info. I'll investigate using cfbot.\n> To fix the problem, I understand we need to create a new struct like\n> (statistics OID, list of clauses, flag is_or).\n> \n\nYes, something like that, in the plannodes.h.\n\n> \n> 5) In [3] Tom raised two possible issues with doing this - cost of\n>> copying the information, and locking. For the performance concerns, I\n>> think the first thing we should do is measuring how expensive it is. I\n>> suggest measuring the overhead for about three basic cases:\n>>\n> \n> Okay, I'll measure it once the patch is completed and check the overhead.\n> I read [3][4] and in my opinion I agree with Robert.\n> As with indexes, there should be a mechanism for determining whether\n> extended statistics are used or not. If it were available, users would be\n> able to\n> tune using extended statistics and get better execution plans.\n> \n\nI do agree with that, but I also understand Tom's concerns about the\ncosts. His concern is that to make this work, we have to keep/copy the\ninformation for all queries, even if that user never does explain.\n\nYes, we do the same thing (copy of some pieces) for indexes, and from\nthis point of view it's equally reasonable. But there's the difference\nthat for indexes it's always been done this way, hence it's considered\n\"the baseline\", while for extended stats we've not copied the data until\nthis patch, so it'd be seen as a regression.\n\nI think there are two ways to deal with this - ideally, we'd show that\nthe overhead is negligible (~noise). And if it's measurable, we'd need\nto argue that it's worth it - but that's much harder, IMHO.\n\nSo I'd suggest you try to measure the overhead on a couple cases (simple\nquery with 0 or more statistics applied).\n\n> \n> \n>> 6) I'm not sure we want to have this under EXPLAIN (VERBOSE). It's what\n>> I did in the initial PoC patch, but maybe we should invent a new flag\n>> for this purpose, otherwise VERBOSE will cover too much stuff? I'm\n>> thinking about \"STATS\" for example.\n>>\n>> This would probably mean the patch should also add a new auto_explain\n>> \"log_\" flag to enable/disable this.\n>>\n> \n> I thought it might be better to do this, so I'll fix it.\n> \n\nOK\n\n> \n> \n>> 7) The patch really needs some docs - I'd mention this in the EXPLAIN\n>> docs, probably. 
There's also a chapter about estimates, maybe that\n>> should mention this too? Try searching for places in the SGML docs\n>> mentioning extended stats and/or explain, I guess.\n>>\n> \n> I plan to create documentation after the specifications are finalized.\n> \n\nI'm, not sure that's a good approach. Maybe it doesn't need to be\nmentioned in the section explaining how estimates work, but it'd be good\nto have it at least in the EXPLAIN command docs. The thing is - docs are\na nice way for reviewers to learn about how the feature is expected to\nwork / be used. Yes, it may need to be adjusted if the patch changes,\nbut it's likely much easier than changing the code.\n\n> \n> \n>> For tests, I guess stats_ext is the right place to test this. I'm not\n>> sure what's the best way to do this, though. If it's covered by VERBOSE,\n>> that seems it might be unstable - and that would be an issue. But maybe\n>> we might add a function similar to check_estimated_rows(), but verifying\n>> the query used the expected statistics to estimate expected clauses.\n>>\n> \n> As for testing, I think it's more convenient for reviewers to include it in\n> the patch,\n> so I'm thinking of including it in the next patch.\n> \n\nI'm not sure I understand what you mean - what is more convenient to\ninclude in the patch & you plan to include in the next patch version?\n\nMy opinion is that there clearly need to be some regression tests, be it\nin stats_ext.sql or in some other script. But to make it easier, we\nmight have a function similar to check_estimated_rows() which would\nextract just the interesting part of the plan.\n\n> \n> So there's stuff to do to make this committable, but hopefully this\n>> review gives you some guidance regarding what/how ;-)\n>>\n> \n> Thank you! It helps me a lot!\n> \n> The attached patch does not correspond to the above comment.\n> But it does solve some of the issues mentioned in previous threads.\n> \n> The next patch is planned to include:\n> 6) Add stats option to explain command\n> 8) Add regression test (stats_ext.sql)\n> 4) Add new node (resolve errors in cfbot and prepared statement)\n> \n\nSounds good.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Jul 2024 16:03:34 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Showing applied extended statistics in explain Part 2"
},
{
"msg_contents": "On 6/28/24 13:16, Tatsuro Yamada wrote:\n> Hi Tomas and All,\n> \n> Attached file is a new patch including:\n> 6) Add stats option to explain command\n> 7) The patch really needs some docs (partly)\n> \n> >4) Add new node (resolve errors in cfbot and prepared statement)\n> \n> I tried adding a new node in pathnode.h, but it doesn't work well.\n> So, it needs more time to implement it successfully because this is\n> the first time to add a new node in it.\n> \n\nI'm not sure why it didn't work well, and I haven't tried adding the\nstruct myself so I might be missing something important, but m\nassumption was the new struct would go to plannodes.h. The planning\nworks in phases:\n\n parse -> build Path nodes -> pick cheapest Path -> create Plan\n\nand it's the Plan that is printed by EXPLAIN. The pathnodes.h and\nplannodes.h match this, so if it's expected to be in Plan it should go\nto plannodes.h I think.\n\n> \n>> 8) Add regression test (stats_ext.sql)\n> \n> \n> Actually, I am not yet able to add new test cases to stats_ext.sql.\n\nWhy is that not possible? Can you explain?\n\n> Instead, I created a simple test (test.sql) and have attached it.\n> Also, output.txt is the test result.\n> \n> To add new test cases to stats_ext.sql,\n> I'd like to decide on a strategy for modifying it. In particular, there are\n> 381 places where the check_estimated_rows function is used, so should I\n> include the same number of tests, or should we include the bare minimum\n> of tests that cover the code path? I think only the latter would be fine.\n> Any advice is appreciated. :-D\n> \n\nI don't understand. My suggestion was to create a new function, similar\nto check_estimated_rows(), that's get a query, do EXPLAIN and extract\nthe list of applied statistics. Similar to what check_estimated_rows()\ndoes for number of rows.\n\nI did not mean to suggest you modify check_estimated_rows() to extract\nboth the number of rows and statistics, nor to modify the existing tests\n(that's not very useful, because there's only one extended statistics in\neach of those tests, and by testing the estimate we implicitly test that\nit's applied).\n\nMy suggestion is to add a couple new queries, with multiple statistics\nand multiple clauses etc. And then test the patch on those. You could do\nsimple EXPLAIN (COSTS OFF), or add the new function to make it a bit\nmore stable (but maybe it's not worth it).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Jul 2024 16:15:36 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Showing applied extended statistics in explain Part 2"
},
{
"msg_contents": "Hi,\n\nLet me share my opinion on those questions ...\n\n\nOn 7/12/24 12:09, [email protected] wrote:\n> Hi,\n> \n> Thanks for working the feature. As a user, I find it useful, and I'd like to use\n> it in v18! Although I've just started start looking into it, I have a few questions.\n> \n> \n> (1)\n> \n> Is it better to make the order of output consistent? For example, even\n> though there are three clauses shown in the below case, the order does not\n> match.\n> * \"Filter\" shows that \"id1\" is first.\n> * \"Ext Stats\" shows that \"id2\" is first.\n> \n> -- An example\n> DROP TABLE IF EXISTS test;\n> CREATE TABLE test (id1 int2, id2 int4, id3 int8, value varchar(32));\n> INSERT INTO test (SELECT i%11, i%103, i%1009, 'hello' FROM generate_series(1,1000000) s(i));\n> create statistics test_s1 on id1, id2 from test; analyze;\n> \n> =# EXPLAIN (STATS) SELECT * FROM test WHERE id1 = 1 AND (id2 = 2 OR id2 > 10);\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------\n> Gather (cost=1000.00..23092.77 rows=84311 width=20)\n> Workers Planned: 2\n> -> Parallel Seq Scan on test (cost=0.00..13661.67 rows=35130 width=20)\n> Filter: ((id1 = 1) AND ((id2 = 2) OR (id2 > 10))) -- here\n> Ext Stats: public.test_s1 Clauses: (((id2 = 2) OR (id2 > 10)) AND (id1 = 1)) -- here\n> (5 rows)\n> \n\nI don't think we need to make the order consistent. It probably wouldn't\nhurt, but I'm not sure it's even possible for all scan types - for\nexample in an index scan, the clauses might be split between index\nconditions and filters, etc.\n\n> \n> \n> (2)\n> \n> Do we really need the schema names without VERBOSE option? As in the above case,\n> \"Ext Stats\" shows schema name \"public\", even though the table name \"test\" isn't\n> shown with its schema name.\n> \n> Additionally, if the VERBOSE option is specified, should the column names also be\n> printed with namespace?\n> \n> =# EXPLAIN (VERBOSE, STATS) SELECT * FROM test WHERE id1 = 1 AND (id2 = 2 OR id2 > 10);\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------\n> Gather (cost=1000.00..22947.37 rows=82857 width=20)\n> Output: id1, id2, id3, value\n> Workers Planned: 2\n> -> Parallel Seq Scan on public.test (cost=0.00..13661.67 rows=34524 width=20)\n> Output: id1, id2, id3, value\n> Filter: ((test.id1 = 1) AND ((test.id2 = 2) OR (test.id2 > 10)))\n> Ext Stats: public.test_s1 Clauses: (((id2 = 2) OR (id2 > 10)) AND (id1 = 1)) -- here\n> (7 rows)\n> \n\nYeah, I don't think there's a good reason to force printing schema for\nthe statistics, if it's not needed for the table. The rules should be\nthe same, I think.\n\n> \n> \n> (3)\n> \n> I might be misunderstanding something, but do we need the clauses? Is there any\n> case where users would want to know the clauses? For example, wouldn't the\n> following be sufficient?\n> \n>> Ext Stats: id1, id2 using test_s1\n> \n\nThe stats may overlap, and some clauses may be matching multiple of\nthem. And some statistics do not support all clause types (e.g.\nfunctional dependencies work only with equality conditions). 
Yes, you\nmight deduce which statistics are used for which clause, but it's not\ntrivial - interpreting explain is already not trivial, let's not make it\nharder.\n\n(If tracking the exact clauses turns out to be expensive, we might\nrevisit this - it might make it cheaper).\n\n> \n> \n> (4)\n> \n> The extended statistics with \"dependencies\" or \"ndistinct\" option don't seem to\n> be shown in EXPLAIN output. Am I missing something? (Is this expected?)\n> \n> I tested the examples in the documentation. Although it might work with\n> \"mcv\" option, I can't confirm that it works because \"unrecognized node type\"\n> error occurred in my environment.\n> https://www.postgresql.org/docs/current/sql-createstatistics.html\n> \n> (It might be wrong since I'm beginner with extended stats codes.)\n> IIUC, the reason is that the patch only handles statext_mcv_clauselist_selectivity(),\n> and doesn't handle dependencies_clauselist_selectivity() and estimate_multivariate_ndistinct().\n> \n> \n> -- doesn't work with \"dependencies\" option?\n> =# \\dX\n> List of extended statistics\n> Schema | Name | Definition | Ndistinct | Dependencies | MCV \n> --------+---------+--------------------+-----------+--------------+---------\n> public | s1 | a, b FROM t1 | (null) | defined | (null)\n> (2 rows)\n> \n> =# EXPLAIN (STATS, ANALYZE) SELECT * FROM t1 WHERE (a = 1) AND (b = 0);\n> QUERY PLAN \n> -------------------------------------------------------------------------------------------------------------------\n> Gather (cost=1000.00..11685.00 rows=100 width=8) (actual time=0.214..50.327 rows=100 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Parallel Seq Scan on t1 (cost=0.00..10675.00 rows=42 width=8) (actual time=30.300..46.610 rows=33 loops=3)\n> Filter: ((a = 1) AND (b = 0))\n> Rows Removed by Filter: 333300\n> Planning Time: 0.246 ms\n> Execution Time: 50.361 ms\n> (8 rows)\n> \n> -- doesn't work with \"ndistinct\"?\n> =# \\dX\n> List of extended statistics\n> Schema | Name | Definition | Ndistinct | Dependencies | MCV \n> --------+------+------------------------------------------------------------------+-----------+--------------+--------\n> public | s3 | date_trunc('month'::text, a), date_trunc('day'::text, a) FROM t3 | defined | (null) | (null)\n> (1 row)\n> \n> postgres(437635)=# EXPLAIN (STATS, ANALYZE) SELECT * FROM t3\n> WHERE date_trunc('month', a) = '2020-01-01'::timestamp;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------\n> Seq Scan on t3 (cost=0.00..10210.01 rows=45710 width=8) (actual time=0.027..143.199 rows=44640 loops=1)\n> Filter: (date_trunc('month'::text, a) = '2020-01-01 00:00:00'::timestamp without time zone)\n> Rows Removed by Filter: 480961\n> Planning Time: 0.088 ms\n> Execution Time: 144.590 ms\n> (5 rows)\n> \n> -- doesn't work with \"mvc\". It might work, but the error happens in my environments\n> =# \\dX\n> List of extended statistics\n> Schema | Name | Definition | Ndistinct | Dependencies | MCV \n> --------+------+--------------+-----------+--------------+---------\n> public | s2 | a, b FROM t2 | (null) | (null) | defined\n> (1 row)\n> \n> -- I encountered the error with the query.\n> =# EXPLAIN (STATS, ANALYZE) SELECT * FROM t2 WHERE (a = 1) AND (b = 1);\n> ERROR: unrecognized node type: 268\n> \n> \n\nYes, you're right we don't show some stats. For dependencies there's the\nproblem that we don't apply them individually, so it's not really\npossible to map clauses to individual stats. 
I wonder if we might have a\nspecial \"entry\" to show clauses estimated by the functional dependencies\ncombined from all stats (instead of a particular statistics).\n\nFor ndistinct, I think we don't show this because it doesn't go through\nclauselist_selectivity, which is the only thing I modified in the PoC.\nBut I guess we might improve estimate_num_groups() to track the stats in\na similar way, I guess.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Jul 2024 16:40:19 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Showing applied extended statistics in explain Part 2"
},
{
"msg_contents": "> Let me share my opinion on those questions ...\n\nThanks! I could understand the patch well thanks to your comments.\n\n \n> On 7/12/24 12:09, [email protected] wrote:\n> > Is it better to make the order of output consistent? For example, even\n> > though there are three clauses shown in the below case, the order does\n> > not match.\n> > * \"Filter\" shows that \"id1\" is first.\n> > * \"Ext Stats\" shows that \"id2\" is first.\n> >\n> > -- An example\n> > DROP TABLE IF EXISTS test;\n> > CREATE TABLE test (id1 int2, id2 int4, id3 int8, value varchar(32));\n> > INSERT INTO test (SELECT i%11, i%103, i%1009, 'hello' FROM\n> > generate_series(1,1000000) s(i)); create statistics test_s1 on id1,\n> > id2 from test; analyze;\n> >\n> > =# EXPLAIN (STATS) SELECT * FROM test WHERE id1 = 1 AND (id2 = 2 OR id2 > 10);\n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > ----------------- Gather (cost=1000.00..23092.77 rows=84311\n> > width=20)\n> > Workers Planned: 2\n> > -> Parallel Seq Scan on test (cost=0.00..13661.67 rows=35130 width=20)\n> > Filter: ((id1 = 1) AND ((id2 = 2) OR (id2 > 10)))\n> -- here\n> > Ext Stats: public.test_s1 Clauses: (((id2 = 2) OR (id2 > 10)) AND (id1 = 1))\n> -- here\n> > (5 rows)\n> >\n> \n> I don't think we need to make the order consistent. It probably wouldn't hurt, but I'm\n> not sure it's even possible for all scan types - for example in an index scan, the clauses\n> might be split between index conditions and filters, etc.\n\nOK, I understand it isn't unexpected behavior.\n\n\n> > (3)\n> >\n> > I might be misunderstanding something, but do we need the clauses? Is\n> > there any case where users would want to know the clauses? For\n> > example, wouldn't the following be sufficient?\n> >\n> >> Ext Stats: id1, id2 using test_s1\n> >\n> \n> The stats may overlap, and some clauses may be matching multiple of them. And some\n> statistics do not support all clause types (e.g.\n> functional dependencies work only with equality conditions). Yes, you might deduce\n> which statistics are used for which clause, but it's not trivial - interpreting explain is\n> already not trivial, let's not make it harder.\n> \n> (If tracking the exact clauses turns out to be expensive, we might revisit this - it might\n> make it cheaper).\n\nThanks. I agree that we need to show the clauses.\n\n\n> > (4)\n> >\n> > The extended statistics with \"dependencies\" or \"ndistinct\" option\n> > don't seem to be shown in EXPLAIN output. Am I missing something? (Is\n> > this expected?)\n> >\n> > I tested the examples in the documentation. 
Although it might work\n> > with \"mcv\" option, I can't confirm that it works because \"unrecognized node type\"\n> > error occurred in my environment.\n> > https://urldefense.com/v3/__https://www.postgresql.org/docs/current/sq\n> > l-createstatistics.html__;!!GCTRfqYYOYGmgK_z!9H-FTXrhg7cr0U2r4PoKEeWM1\n> > v9feP8I8zlNyhf-801n-KI8bIMAxOQgaetSTpek3ECk2_FKWEsuApVZ-ys-ka7rfjX8ANB\n> > 9zQ$\n> >\n> > (It might be wrong since I'm beginner with extended stats codes.)\n> > IIUC, the reason is that the patch only handles\n> > statext_mcv_clauselist_selectivity(),\n> > and doesn't handle dependencies_clauselist_selectivity() and\n> estimate_multivariate_ndistinct().\n> >\n> >\n> > -- doesn't work with \"dependencies\" option?\n> > =# \\dX\n> > List of extended statistics\n> > Schema | Name | Definition | Ndistinct | Dependencies | MCV\n> > --------+---------+--------------------+-----------+--------------+---\n> > --------+---------+--------------------+-----------+--------------+---\n> > --------+---------+--------------------+-----------+--------------+---\n> > public | s1 | a, b FROM t1 | (null) | defined | (null)\n> > (2 rows)\n> >\n> > =# EXPLAIN (STATS, ANALYZE) SELECT * FROM t1 WHERE (a = 1) AND (b = 0);\n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > ---------------------------------------------\n> > Gather (cost=1000.00..11685.00 rows=100 width=8) (actual\n> time=0.214..50.327 rows=100 loops=1)\n> > Workers Planned: 2\n> > Workers Launched: 2\n> > -> Parallel Seq Scan on t1 (cost=0.00..10675.00 rows=42 width=8) (actual\n> time=30.300..46.610 rows=33 loops=3)\n> > Filter: ((a = 1) AND (b = 0))\n> > Rows Removed by Filter: 333300 Planning Time: 0.246 ms\n> > Execution Time: 50.361 ms\n> > (8 rows)\n> >\n> > -- doesn't work with \"ndistinct\"?\n> > =# \\dX\n> > List of extended statistics\n> > Schema | Name | Definition |\n> Ndistinct | Dependencies | MCV\n> >\n> --------+------+------------------------------------------------------------------+------\n> -----+--------------+--------\n> > public | s3 | date_trunc('month'::text, a), date_trunc('day'::text, a) FROM t3 |\n> defined | (null) | (null)\n> > (1 row)\n> >\n> > postgres(437635)=# EXPLAIN (STATS, ANALYZE) SELECT * FROM t3\n> > WHERE date_trunc('month', a) = '2020-01-01'::timestamp;\n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > ------------------------------------\n> > Seq Scan on t3 (cost=0.00..10210.01 rows=45710 width=8) (actual\n> time=0.027..143.199 rows=44640 loops=1)\n> > Filter: (date_trunc('month'::text, a) = '2020-01-01 00:00:00'::timestamp\n> without time zone)\n> > Rows Removed by Filter: 480961\n> > Planning Time: 0.088 ms\n> > Execution Time: 144.590 ms\n> > (5 rows)\n> >\n> > -- doesn't work with \"mvc\". It might work, but the error happens in my\n> > environments =# \\dX\n> > List of extended statistics\n> > Schema | Name | Definition | Ndistinct | Dependencies | MCV\n> > --------+------+--------------+-----------+--------------+---------\n> > public | s2 | a, b FROM t2 | (null) | (null) | defined\n> > (1 row)\n> >\n> > -- I encountered the error with the query.\n> > =# EXPLAIN (STATS, ANALYZE) SELECT * FROM t2 WHERE (a = 1) AND (b =\n> > 1);\n> > ERROR: unrecognized node type: 268\n> >\n> >\n> \n> Yes, you're right we don't show some stats. For dependencies there's the problem that\n> we don't apply them individually, so it's not really possible to map clauses to individual\n> stats. 
I wonder if we might have a special \"entry\" to show clauses estimated by the\n> functional dependencies combined from all stats (instead of a particular statistics).\n\nOK, I understand it's intended behavior for \"dependencies\" and we need to consider how to\nshow them in EXPLAIN output in future.\n\n\n> For ndistinct, I think we don't show this because it doesn't go through\n> clauselist_selectivity, which is the only thing I modified in the PoC.\n> But I guess we might improve estimate_num_groups() to track the stats in a similar way,\n> I guess.\n\nThanks. IIUC, the reason is that it doesn't go through statext_clauselist_selectivity() because\nthe number of clauses is one though it goes through clauselist_selectivity().\n\n\n> > ERROR: unrecognized node type: 268\n\nRegarding the above error, do \"applied_stats\" need have the list of \"StatisticExtInfo\"\nbecause it's enough to have the list of Oid(stat->statOid) for EXPLAIN output in the current patch?\nchange_to_applied_stats_has_list_of_oids.diff is the change I assumed. Do you have any plan to\nshow extra information for example \"kind\" of \"StatisticExtInfo\"?\n\nThe above is just one idea came up with while I read the following comments of header\nof pathnodes.h, and to support copy \"StatisticExtInfo\" will leads many other nodes to support copy.\n * We don't support copying RelOptInfo, IndexOptInfo, or Path nodes.\n * There are some subsidiary structs that are useful to copy, though.\n\n\nBy the way, I found curios result while I tested with the above patch. It shows same \"Ext Stats\" twice.\nI think it's expected behavior because the stat is used when estimate the cost of \"Partial HashAggregate\" and \"Group\".\nI've shared the result because I could not understand soon when I saw it first time. I think it's better to let users understand\nwhen the stats are used, but I don't have any idea now.\n\n-- I tested with the example of CREATE STATISTICS documentation.\npsql=# EXPLAIN (STATS, ANALYZE) SELECT date_trunc('month', a), date_trunc('day', a) FROM t3 GROUP BY 1, 2;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n Group (cost=9530.56..9576.18 rows=365 width=16) (actual time=286.908..287.909 rows=366 loops=1)\n Group Key: (date_trunc('month'::text, a)), (date_trunc('day'::text, a))\n -> Gather Merge (cost=9530.56..9572.53 rows=365 width=16) (actual time=286.904..287.822 rows=498 loops=1)\n Workers Planned: 1\n Workers Launched: 1\n -> Sort (cost=8530.55..8531.46 rows=365 width=16) (actual time=282.905..282.919 rows=249 loops=2)\n Sort Key: (date_trunc('month'::text, a)), (date_trunc('day'::text, a))\n Sort Method: quicksort Memory: 32kB\n Worker 0: Sort Method: quicksort Memory: 32kB\n -> Partial HashAggregate (cost=8509.54..8515.02 rows=365 width=16) (actual time=282.716..282.768 rows=249 loops=2)\n Group Key: date_trunc('month'::text, a), date_trunc('day'::text, a)\n Batches: 1 Memory Usage: 45kB\n Worker 0: Batches: 1 Memory Usage: 45kB\n -> Parallel Seq Scan on t3 (cost=0.00..6963.66 rows=309177 width=16) (actual time=0.021..171.214 rows=262800 loops=2)\n Ext Stats: public.s3 Clauses: date_trunc('month'::text, a), date_trunc('day'::text, a) -- here\n Ext Stats: public.s3 Clauses: date_trunc('month'::text, a), date_trunc('day'::text, a) -- here\n Planning Time: 114327.206 ms\n Execution Time: 288.007 ms\n(18 rows)\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Thu, 18 Jul 2024 10:37:42 +0000",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Showing applied extended statistics in explain Part 2"
},
{
"msg_contents": "On 7/18/24 12:37, [email protected] wrote:\n>> Let me share my opinion on those questions ...\n> ...>\n>> For ndistinct, I think we don't show this because it doesn't go through\n>> clauselist_selectivity, which is the only thing I modified in the PoC.\n>> But I guess we might improve estimate_num_groups() to track the stats in a similar way,\n>> I guess.\n> \n> Thanks. IIUC, the reason is that it doesn't go through statext_clauselist_selectivity() because\n> the number of clauses is one though it goes through clauselist_selectivity().\n> \n\nAh, I see I misunderstood the original report. The query used was\n\n EXPLAIN (STATS, ANALYZE) SELECT * FROM t3\n WHERE date_trunc('month', a) = '2020-01-01'::timestamp;\n\nAnd it has nothing to do with the number of clauses being one neither.\n\nThe problem is this estimate is handled by examine_variable() matching\nthe expression to the \"expression\" stats, and injecting it into the\nvariable, so that the clauselist_selectivity() sees these stats.\n\nThis would happen even if you build just expression statistics on each\nof the date_trunc() calls, and then tried a query with two clauses:\n\n CREATE STATISTICS s4 ON date_trunc('day', a) FROM t3;\n CREATE STATISTICS s3 ON date_trunc('month', a) FROM t3;\n\n EXPLAIN SELECT * FROM t3\n WHERE date_trunc('month', a) = '2020-01-01'::timestamp\n AND date_trunc('day', 'a') = '2020-01-01'::timestamp;\n\nNot sure how to handle this - we could remember when explain_variable()\ninjects statistics like this, I guess. But do we know that each call to\nexamine_variable() is for estimation? And do we know for which clause?\n\n> \n>>> ERROR: unrecognized node type: 268\n> \n> Regarding the above error, do \"applied_stats\" need have the list of \"StatisticExtInfo\"\n> because it's enough to have the list of Oid(stat->statOid) for EXPLAIN output in the current patch?\n> change_to_applied_stats_has_list_of_oids.diff is the change I assumed. Do you have any plan to\n> show extra information for example \"kind\" of \"StatisticExtInfo\"?\n> \n> The above is just one idea came up with while I read the following comments of header\n> of pathnodes.h, and to support copy \"StatisticExtInfo\" will leads many other nodes to support copy.\n> * We don't support copying RelOptInfo, IndexOptInfo, or Path nodes.\n> * There are some subsidiary structs that are useful to copy, though.\n> \n\nI do think tracking just the OID would work, because we already know how\nto copy List objects. But if we want to also track the clauses, we'd\nhave to keep multiple lists, right? That seems a bit inconvenient.\n\n> \n> By the way, I found curios result while I tested with the above patch. It shows same \"Ext Stats\" twice.\n> I think it's expected behavior because the stat is used when estimate the cost of \"Partial HashAggregate\" and \"Group\".\n> I've shared the result because I could not understand soon when I saw it first time. 
I think it's better to let users understand\n> when the stats are used, but I don't have any idea now.\n> \n> -- I tested with the example of CREATE STATISTICS documentation.\n> psql=# EXPLAIN (STATS, ANALYZE) SELECT date_trunc('month', a), date_trunc('day', a) FROM t3 GROUP BY 1, 2;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Group (cost=9530.56..9576.18 rows=365 width=16) (actual time=286.908..287.909 rows=366 loops=1)\n> Group Key: (date_trunc('month'::text, a)), (date_trunc('day'::text, a))\n> -> Gather Merge (cost=9530.56..9572.53 rows=365 width=16) (actual time=286.904..287.822 rows=498 loops=1)\n> Workers Planned: 1\n> Workers Launched: 1\n> -> Sort (cost=8530.55..8531.46 rows=365 width=16) (actual time=282.905..282.919 rows=249 loops=2)\n> Sort Key: (date_trunc('month'::text, a)), (date_trunc('day'::text, a))\n> Sort Method: quicksort Memory: 32kB\n> Worker 0: Sort Method: quicksort Memory: 32kB\n> -> Partial HashAggregate (cost=8509.54..8515.02 rows=365 width=16) (actual time=282.716..282.768 rows=249 loops=2)\n> Group Key: date_trunc('month'::text, a), date_trunc('day'::text, a)\n> Batches: 1 Memory Usage: 45kB\n> Worker 0: Batches: 1 Memory Usage: 45kB\n> -> Parallel Seq Scan on t3 (cost=0.00..6963.66 rows=309177 width=16) (actual time=0.021..171.214 rows=262800 loops=2)\n> Ext Stats: public.s3 Clauses: date_trunc('month'::text, a), date_trunc('day'::text, a) -- here\n> Ext Stats: public.s3 Clauses: date_trunc('month'::text, a), date_trunc('day'::text, a) -- here\n> Planning Time: 114327.206 ms\n> Execution Time: 288.007 ms\n> (18 rows)\n> \n\nI haven't looked into this, but my guess would be this is somehow\nrelated to the parallelism - there's one parallel worker, which means we\nhave 2 processes to report stats for (leader + worker). And you get two\ncopies of the \"Ext Stats\" line. Could be a coincidence, ofc, but maybe\nthere's a loop to print some worker info, and you print the statistics\ninfo in it?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jul 2024 23:36:11 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Showing applied extended statistics in explain Part 2"
},
{
"msg_contents": "> On 7/18/24 12:37, [email protected] wrote:\n> >> Let me share my opinion on those questions ...\n> > ...>\n> >> For ndistinct, I think we don't show this because it doesn't go\n> >> through clauselist_selectivity, which is the only thing I modified in the PoC.\n> >> But I guess we might improve estimate_num_groups() to track the stats\n> >> in a similar way, I guess.\n> >\n> > Thanks. IIUC, the reason is that it doesn't go through\n> > statext_clauselist_selectivity() because the number of clauses is one though it goes\n> through clauselist_selectivity().\n> >\n> \n> Ah, I see I misunderstood the original report. The query used was\n> \n> EXPLAIN (STATS, ANALYZE) SELECT * FROM t3\n> WHERE date_trunc('month', a) = '2020-01-01'::timestamp;\n> \n> And it has nothing to do with the number of clauses being one neither.\n> \n> The problem is this estimate is handled by examine_variable() matching the expression\n> to the \"expression\" stats, and injecting it into the variable, so that the\n> clauselist_selectivity() sees these stats.\n> \n> This would happen even if you build just expression statistics on each of the\n> date_trunc() calls, and then tried a query with two clauses:\n> \n> CREATE STATISTICS s4 ON date_trunc('day', a) FROM t3;\n> CREATE STATISTICS s3 ON date_trunc('month', a) FROM t3;\n> \n> EXPLAIN SELECT * FROM t3\n> WHERE date_trunc('month', a) = '2020-01-01'::timestamp\n> AND date_trunc('day', 'a') = '2020-01-01'::timestamp;\n> \n> Not sure how to handle this - we could remember when explain_variable() injects\n> statistics like this, I guess. But do we know that each call to\n> examine_variable() is for estimation? And do we know for which clause?\n\nI see. The issue is related to extended statistics for single expression. As a\nfirst step, it's ok for me that we don't support it.\n\nThe below is just an idea to know clauses... \nAlthough I'm missing something, can callers of examine_variable()\nfor estimation to rebuild the clauses from partial information of \"OpExpr\"?\n\nOnly clause_selectivity_ext() knows the information of actual full clauses.\nBut we don't need full information. 
It's enough to know the information\nto show \"OpExpr\" for EXPLAIN.\n\nget_oper_expr() deparse \"OpExpr\" using only the operator oid and arguments\nin get_oper_expr().\n\nIf so, the caller to estimate, for example eqsel_internal(), scalarineqsel_wrapper()\nand so on, seems to be able to know the \"OpExpr\" information, which are operator\noid and arguments, and used extended statistics easily to show for EXPLAIN.\n\n# Memo: the call path of the estimation function\n caller to estimate selectivity (eqsel_internal()/scalargtjoinsel_wrappter()/...)\n -> get_restriction_variable()/get_join_valiables()\n -> examine_variable()\n\n\n> >>> ERROR: unrecognized node type: 268\n> >\n> > Regarding the above error, do \"applied_stats\" need have the list of \"StatisticExtInfo\"\n> > because it's enough to have the list of Oid(stat->statOid) for EXPLAIN output in the\n> current patch?\n> > change_to_applied_stats_has_list_of_oids.diff is the change I assumed.\n> > Do you have any plan to show extra information for example \"kind\" of\n> \"StatisticExtInfo\"?\n> >\n> > The above is just one idea came up with while I read the following\n> > comments of header of pathnodes.h, and to support copy \"StatisticExtInfo\" will leads\n> many other nodes to support copy.\n> > * We don't support copying RelOptInfo, IndexOptInfo, or Path nodes.\n> > * There are some subsidiary structs that are useful to copy, though.\n> >\n> \n> I do think tracking just the OID would work, because we already know how to copy List\n> objects. But if we want to also track the clauses, we'd have to keep multiple lists, right?\n> That seems a bit inconvenient.\n\nUnderstood. In future, we might show not only the applied_clauses but also the clauses of\nits extended statistics (StatisticExtInfo->exprs). \n\n\n> > By the way, I found curios result while I tested with the above patch. It shows same\n> \"Ext Stats\" twice.\n> > I think it's expected behavior because the stat is used when estimate the cost of\n> \"Partial HashAggregate\" and \"Group\".\n> > I've shared the result because I could not understand soon when I saw\n> > it first time. 
I think it's better to let users understand when the stats are used, but I\n> don't have any idea now.\n> >\n> > -- I tested with the example of CREATE STATISTICS documentation.\n> > psql=# EXPLAIN (STATS, ANALYZE) SELECT date_trunc('month', a), date_trunc('day',\n> a) FROM t3 GROUP BY 1, 2;\n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > ----------------------------------------------------------------------\n> > - Group (cost=9530.56..9576.18 rows=365 width=16) (actual\n> > time=286.908..287.909 rows=366 loops=1)\n> > Group Key: (date_trunc('month'::text, a)), (date_trunc('day'::text, a))\n> > -> Gather Merge (cost=9530.56..9572.53 rows=365 width=16) (actual\n> time=286.904..287.822 rows=498 loops=1)\n> > Workers Planned: 1\n> > Workers Launched: 1\n> > -> Sort (cost=8530.55..8531.46 rows=365 width=16) (actual\n> time=282.905..282.919 rows=249 loops=2)\n> > Sort Key: (date_trunc('month'::text, a)), (date_trunc('day'::text, a))\n> > Sort Method: quicksort Memory: 32kB\n> > Worker 0: Sort Method: quicksort Memory: 32kB\n> > -> Partial HashAggregate (cost=8509.54..8515.02 rows=365\n> width=16) (actual time=282.716..282.768 rows=249 loops=2)\n> > Group Key: date_trunc('month'::text, a), date_trunc('day'::text,\n> a)\n> > Batches: 1 Memory Usage: 45kB\n> > Worker 0: Batches: 1 Memory Usage: 45kB\n> > -> Parallel Seq Scan on t3 (cost=0.00..6963.66 rows=309177\n> width=16) (actual time=0.021..171.214 rows=262800 loops=2)\n> > Ext Stats: public.s3 Clauses: date_trunc('month'::text,\n> a), date_trunc('day'::text, a) -- here\n> > Ext Stats: public.s3 Clauses:\n> > date_trunc('month'::text, a), date_trunc('day'::text, a) -- here\n> > Planning Time: 114327.206 ms Execution Time: 288.007 ms\n> > (18 rows)\n> >\n> \n> I haven't looked into this, but my guess would be this is somehow related to the\n> parallelism - there's one parallel worker, which means we have 2 processes to report\n> stats for (leader + worker). And you get two copies of the \"Ext Stats\" line. Could be a\n> coincidence, ofc, but maybe there's a loop to print some worker info, and you print the\n> statistics info in it?\n\nI think yes and no. In the above case, it relates to parallelism, but it doesn't print the\ninformation per each worker.\n\n-- Make the number of workers is 5 and EXPLAIN without ANALYZE option.\n-- But \"Ext Stats\" is printed only twice.\n=# EXPLAIN (STATS) SELECT date_trunc('month', a), date_trunc('day', a) FROM t3 GROUP BY 1, 2;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------\n Group (cost=4449.49..4489.50 rows=365 width=16)\n Group Key: (date_trunc('month'::text, a)), (date_trunc('day'::text, a))\n -> Gather Merge (cost=4449.49..4478.55 rows=1825 width=16)\n Workers Planned: 5\n -> Sort (cost=4449.41..4450.32 rows=365 width=16)\n Sort Key: (date_trunc('month'::text, a)), (date_trunc('day'::text, a))\n -> Partial HashAggregate (cost=4428.40..4433.88 rows=365 width=16)\n Group Key: date_trunc('month'::text, a), date_trunc('day'::text, a)\n -> Parallel Seq Scan on t3 (cost=0.00..3902.80 rows=105120 width=16)\n Ext Stats: public.s3 Clauses: date_trunc('month'::text, a), date_trunc('day'::text, a)\n Ext Stats: public.s3 Clauses: date_trunc('month'::text, a), date_trunc('day'::text, a)\n(11 rows)\n\nWhen creating a group path, it creates partial grouping paths if possible, and then\ncreates the final grouping path. 
At this time, both the partial grouping path and\nthe final grouping path use the same RelOptInfo to repeatedly use the extended\nstatistics to know how many groups there will be. That's why it outputs only twice. \nThere may be other similar calculation for partial paths.\n\n# The call path of the above query\n create_grouping_paths\n create_ordinary_grouping_paths\n create_partial_grouping_paths\n get_number_of_groups\n estimate_num_groups\n estimate_multivariate_ndistinct -- first time to estimate the number of groups for partial grouping path\n get_number_of_groups\n estimate_num_groups\n estimate_multivariate_ndistinct -- second time to estimate the number of groups for final grouping path\n\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Jul 2024 10:17:23 +0000",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Showing applied extended statistics in explain Part 2"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI think that pgstat_reset_replslot() is missing LWLock protection. Indeed, we\ndon't have any guarantee that the slot is active (then preventing it to be\ndropped/recreated) when this function is executed.\n\nAttached a patch to add the missing protection.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 1 Mar 2024 10:15:48 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Missing LWLock protection in pgstat_reset_replslot()"
},
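
For readers who do not have the tree open, the function in question (pgstat_reset_replslot() in src/backend/utils/activity/pgstat_replslot.c) currently has roughly the shape sketched below. This is reconstructed from the fragments quoted later in the thread, with the error path simplified; the comments mark the window the report is about, so treat it as an illustration rather than a copy of the actual source.

void
pgstat_reset_replslot(const char *name)
{
    ReplicationSlot *slot;

    Assert(name != NULL);

    /*
     * Look up the slot by name.  Nothing prevents a concurrent drop or
     * recreation of the slot here: the slot (and its index) can go stale as
     * soon as SearchNamedReplicationSlot() has released
     * ReplicationSlotControlLock internally.
     */
    slot = SearchNamedReplicationSlot(name, true);

    if (!slot)
        elog(ERROR, "replication slot \"%s\" does not exist", name);   /* simplified error path */

    /* Nothing to do for physical slots; stats are only kept for logical ones. */
    if (SlotIsPhysical(slot))
        return;

    /*
     * By the time we get here the slot may have been dropped or recreated,
     * so this index may no longer belong to the slot the caller named.
     */
    pgstat_reset(PGSTAT_KIND_REPLSLOT, InvalidOid,
                 ReplicationSlotIndex(slot));
}
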
{
"msg_contents": "On 01/03/2024 12:15, Bertrand Drouvot wrote:\n> Hi hackers,\n> \n> I think that pgstat_reset_replslot() is missing LWLock protection. Indeed, we\n> don't have any guarantee that the slot is active (then preventing it to be\n> dropped/recreated) when this function is executed.\n\nYes, so it seems at quick glance. We have a similar issue in \npgstat_fetch_replslot(); it might return stats for wrong slot, if the \nslot is dropped/recreated concurrently. Do we care?\n\n> --- a/src/backend/utils/activity/pgstat_replslot.c\n> +++ b/src/backend/utils/activity/pgstat_replslot.c\n> @@ -46,6 +46,8 @@ pgstat_reset_replslot(const char *name)\n> \n> \tAssert(name != NULL);\n> \n> +\tLWLockAcquire(ReplicationSlotControlLock, LW_SHARED);\n> +\n> \t/* Check if the slot exits with the given name. */\n> \tslot = SearchNamedReplicationSlot(name, true);\n\nSearchNamedReplicationSlot() will also acquire the lock in LW_SHARED \nmode, when you pass need_lock=true. So that at least should be changed \nto false.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 5 Mar 2024 09:55:32 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 1:25 PM Heikki Linnakangas <[email protected]> wrote:\n\n> SearchNamedReplicationSlot() will also acquire the lock in LW_SHARED\n> mode, when you pass need_lock=true. So that at least should be changed\n> to false.\n>\n\nAlso don't we need to release the lock when we return here:\n\n/*\n* Nothing to do for physical slots as we collect stats only for logical\n* slots.\n*/\nif (SlotIsPhysical(slot))\nreturn;\n\nthanks\nShveta\n\n\n",
"msg_date": "Tue, 5 Mar 2024 14:19:19 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "Hi,\n\nOn Tue, Mar 05, 2024 at 09:55:32AM +0200, Heikki Linnakangas wrote:\n> On 01/03/2024 12:15, Bertrand Drouvot wrote:\n> > Hi hackers,\n> > \n> > I think that pgstat_reset_replslot() is missing LWLock protection. Indeed, we\n> > don't have any guarantee that the slot is active (then preventing it to be\n> > dropped/recreated) when this function is executed.\n> \n> Yes, so it seems at quick glance.\n\nThanks for looking at it!\n\n> We have a similar issue in\n> pgstat_fetch_replslot(); it might return stats for wrong slot, if the slot\n> is dropped/recreated concurrently.\n\nGood catch! \n\n> Do we care?\n\nYeah, I think we should: done in v2 attached.\n\n> > --- a/src/backend/utils/activity/pgstat_replslot.c\n> > +++ b/src/backend/utils/activity/pgstat_replslot.c\n> > @@ -46,6 +46,8 @@ pgstat_reset_replslot(const char *name)\n> > \tAssert(name != NULL);\n> > +\tLWLockAcquire(ReplicationSlotControlLock, LW_SHARED);\n> > +\n> > \t/* Check if the slot exits with the given name. */\n> > \tslot = SearchNamedReplicationSlot(name, true);\n> \n> SearchNamedReplicationSlot() will also acquire the lock in LW_SHARED mode,\n> when you pass need_lock=true. So that at least should be changed to false.\n>\n\nRight, done in v2. Also had to add an extra \"need_lock\" argument to\nget_replslot_index() for the same reason while taking care of pgstat_fetch_replslot().\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 5 Mar 2024 13:20:54 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
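
To make the pgstat_fetch_replslot() side of the change concrete, a lock-protected version along the lines described above could look roughly like the sketch below. It assumes the existing helper get_replslot_index() with the extra need_lock argument mentioned in the message, and the signatures are my best reading of pgstat_replslot.c; the actual v2 patch may differ in detail.

PgStat_StatReplSlotEntry *
pgstat_fetch_replslot(NameData slotname)
{
    int         idx;
    PgStat_StatReplSlotEntry *slotentry = NULL;

    /* Keep the slot from being dropped/recreated while we look at it. */
    LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);

    /* need_lock = false: we already hold ReplicationSlotControlLock */
    idx = get_replslot_index(NameStr(slotname), false);

    if (idx != -1)
        slotentry = (PgStat_StatReplSlotEntry *)
            pgstat_fetch_entry(PGSTAT_KIND_REPLSLOT, InvalidOid, idx);

    LWLockRelease(ReplicationSlotControlLock);

    return slotentry;
}
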
{
"msg_contents": "Hi,\n\nOn Tue, Mar 05, 2024 at 02:19:19PM +0530, shveta malik wrote:\n> On Tue, Mar 5, 2024 at 1:25 PM Heikki Linnakangas <[email protected]> wrote:\n> \n> > SearchNamedReplicationSlot() will also acquire the lock in LW_SHARED\n> > mode, when you pass need_lock=true. So that at least should be changed\n> > to false.\n> >\n> \n> Also don't we need to release the lock when we return here:\n> \n> /*\n> * Nothing to do for physical slots as we collect stats only for logical\n> * slots.\n> */\n> if (SlotIsPhysical(slot))\n> return;\n\nD'oh! Thanks! Fixed in v2 shared up-thread.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 5 Mar 2024 13:22:23 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 6:52 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > /*\n> > * Nothing to do for physical slots as we collect stats only for logical\n> > * slots.\n> > */\n> > if (SlotIsPhysical(slot))\n> > return;\n>\n> D'oh! Thanks! Fixed in v2 shared up-thread.\n\nThanks. Can we try to get rid of multiple LwLockRelease in\npgstat_reset_replslot(). Is this any better?\n\n /*\n- * Nothing to do for physical slots as we collect stats only for logical\n- * slots.\n+ * Reset stats if it is a logical slot. Nothing to do for physical slots\n+ * as we collect stats only for logical slots.\n */\n- if (SlotIsPhysical(slot))\n- {\n- LWLockRelease(ReplicationSlotControlLock);\n- return;\n- }\n-\n- /* reset this one entry */\n- pgstat_reset(PGSTAT_KIND_REPLSLOT, InvalidOid,\n- ReplicationSlotIndex(slot));\n+ if (SlotIsLogical(slot))\n+ pgstat_reset(PGSTAT_KIND_REPLSLOT, InvalidOid,\n+ ReplicationSlotIndex(slot));\n\n LWLockRelease(ReplicationSlotControlLock);\n\n\nSomething similar in pgstat_fetch_replslot() perhaps?\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 6 Mar 2024 10:24:46 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 06, 2024 at 10:24:46AM +0530, shveta malik wrote:\n> On Tue, Mar 5, 2024 at 6:52 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> Thanks. Can we try to get rid of multiple LwLockRelease in\n> pgstat_reset_replslot(). Is this any better?\n> \n> /*\n> - * Nothing to do for physical slots as we collect stats only for logical\n> - * slots.\n> + * Reset stats if it is a logical slot. Nothing to do for physical slots\n> + * as we collect stats only for logical slots.\n> */\n> - if (SlotIsPhysical(slot))\n> - {\n> - LWLockRelease(ReplicationSlotControlLock);\n> - return;\n> - }\n> -\n> - /* reset this one entry */\n> - pgstat_reset(PGSTAT_KIND_REPLSLOT, InvalidOid,\n> - ReplicationSlotIndex(slot));\n> + if (SlotIsLogical(slot))\n> + pgstat_reset(PGSTAT_KIND_REPLSLOT, InvalidOid,\n> + ReplicationSlotIndex(slot));\n> \n> LWLockRelease(ReplicationSlotControlLock);\n> \n\nYeah, it's easier to read and probably reduce the pgstat_replslot.o object file\nsize a bit for non optimized build.\n\n> Something similar in pgstat_fetch_replslot() perhaps?\n\nYeah, all of the above done in v3 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 6 Mar 2024 09:05:59 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
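
Putting the pieces of this sub-thread together, the reset path in v3 presumably ends up shaped like the sketch below: the shared ReplicationSlotControlLock is held across both the lookup and the reset, need_lock = false is passed down, and there is a single release point. This is assembled from the diffs quoted above rather than copied from the patch, and the error path is simplified.

void
pgstat_reset_replslot(const char *name)
{
    ReplicationSlot *slot;

    Assert(name != NULL);

    /* Prevent the slot from being dropped or recreated under us. */
    LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);

    /* Check if the slot exists with the given name. */
    slot = SearchNamedReplicationSlot(name, false);

    if (!slot)
        elog(ERROR, "replication slot \"%s\" does not exist", name);   /* simplified error path */

    /*
     * Reset stats if it is a logical slot.  Nothing to do for physical
     * slots as we collect stats only for logical slots.
     */
    if (SlotIsLogical(slot))
        pgstat_reset(PGSTAT_KIND_REPLSLOT, InvalidOid,
                     ReplicationSlotIndex(slot));

    LWLockRelease(ReplicationSlotControlLock);
}
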
{
"msg_contents": "On Wed, Mar 06, 2024 at 09:05:59AM +0000, Bertrand Drouvot wrote:\n> Yeah, all of the above done in v3 attached.\n\nInteresting, so this relies on the slot index to ensure the unicity of\nthe stat entries. And if the entry pointing to this ID is updated\nwe may refer to just incorrect data.\n\nThe inconsistency you could get for the fetch and reset cases are\nnarrow, but at least what you are proposing here would protect the \nindex lookup until the entry is copied from shmem, so this offers a\nbetter consistency protection when querying\npg_stat_get_replication_slot() on a fetch, while avoiding a reset of\nincorrect data under concurrent activity.\n\nIn passing.. pgstat_create_replslot() and pgstat_drop_replslot() rely\non the assumption that the LWLock ReplicationSlotAllocationLock is\ntaken while calling these routines. Perhaps that would be worth some\nextra Assert(LWLockHeldByMeInMode()) in pgstat_replslot.c for these\ntwo? Not directly related to this problem.\n--\nMichael",
"msg_date": "Thu, 7 Mar 2024 14:17:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
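
The extra assertions suggested here would presumably be one-liners at the top of the two functions, along the lines of the sketch below (existing bodies elided). Slot creation and removal take ReplicationSlotAllocationLock exclusively, so LW_EXCLUSIVE is the mode assumed; that, and the exact placement, would need to be double-checked against the callers.

void
pgstat_create_replslot(ReplicationSlot *slot)
{
    /* Callers must hold ReplicationSlotAllocationLock while creating slots. */
    Assert(LWLockHeldByMeInMode(ReplicationSlotAllocationLock,
                                LW_EXCLUSIVE));

    /* ... existing body unchanged ... */
}

void
pgstat_drop_replslot(ReplicationSlot *slot)
{
    /* Likewise for dropping them. */
    Assert(LWLockHeldByMeInMode(ReplicationSlotAllocationLock,
                                LW_EXCLUSIVE));

    /* ... existing body unchanged ... */
}
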
{
"msg_contents": "On Wed, Mar 6, 2024 at 2:36 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Wed, Mar 06, 2024 at 10:24:46AM +0530, shveta malik wrote:\n> > On Tue, Mar 5, 2024 at 6:52 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > Thanks. Can we try to get rid of multiple LwLockRelease in\n> > pgstat_reset_replslot(). Is this any better?\n> >\n> > /*\n> > - * Nothing to do for physical slots as we collect stats only for logical\n> > - * slots.\n> > + * Reset stats if it is a logical slot. Nothing to do for physical slots\n> > + * as we collect stats only for logical slots.\n> > */\n> > - if (SlotIsPhysical(slot))\n> > - {\n> > - LWLockRelease(ReplicationSlotControlLock);\n> > - return;\n> > - }\n> > -\n> > - /* reset this one entry */\n> > - pgstat_reset(PGSTAT_KIND_REPLSLOT, InvalidOid,\n> > - ReplicationSlotIndex(slot));\n> > + if (SlotIsLogical(slot))\n> > + pgstat_reset(PGSTAT_KIND_REPLSLOT, InvalidOid,\n> > + ReplicationSlotIndex(slot));\n> >\n> > LWLockRelease(ReplicationSlotControlLock);\n> >\n>\n> Yeah, it's easier to read and probably reduce the pgstat_replslot.o object file\n> size a bit for non optimized build.\n>\n> > Something similar in pgstat_fetch_replslot() perhaps?\n>\n> Yeah, all of the above done in v3 attached.\n>\n\nThanks for the patch.\n\nFor the fix in pgstat_fetch_replslot(), even with the lock in fetch\ncall, there are chances that the concerned slot can be dropped and\nrecreated.\n\n--It can happen in a small window in pg_stat_get_replication_slot()\nwhen we are consuming the return values of pgstat_fetch_replslot\n(using slotent).\n\n--Also it can happen at a later stage when we have switched to\nfetching the next slot (obtained from 'pg_replication_slots' through\nview' pg_stat_replication_slots'), the previous one can be dropped.\n\nUltimately the results of system view 'pg_replication_slots' can still\ngive us non-existing or re-created slots. But yes, I do not deny that\nit gives us better consistency protection.\n\nDo you feel that the lock in pgstat_fetch_replslot() should be moved\nto its caller when we are done copying the results of that slot? This\nwill improve the protection slightly.\n\nthanks\nShveta\n\n\n",
"msg_date": "Thu, 7 Mar 2024 10:57:28 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "On Thu, Mar 07, 2024 at 10:57:28AM +0530, shveta malik wrote:\n> --It can happen in a small window in pg_stat_get_replication_slot()\n> when we are consuming the return values of pgstat_fetch_replslot\n> (using slotent).\n\nYeah, it is possible that what you retrieve from\npgstat_fetch_replslot() does not refer exactly to the slot's content\nunder concurrent activity, but you cannot protect a single scan of\npg_stat_replication_slots as of an effect of its design:\npg_stat_get_replication_slot() has to be called multiple times. The\npatch at least makes sure that the copy of the slot's stats retrieved\nby pgstat_fetch_entry() is slightly more consistent, but you cannot do\nbetter than that except if the data retrieved from\npg_replication_slots and its stats are fetched in the same context\nfunction call, holding the replslot LWLock for the whole scan\nduration.\n\n> Do you feel that the lock in pgstat_fetch_replslot() should be moved\n> to its caller when we are done copying the results of that slot? This\n> will improve the protection slightly.\n\nI don't see what extra protection this would offer, as\npg_stat_get_replication_slot() is called once for each slot. Feel\nfree to correct me if I'm missing something.\n--\nMichael",
"msg_date": "Thu, 7 Mar 2024 14:41:49 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 11:12 AM Michael Paquier <[email protected]> wrote:\n>\n\n> Yeah, it is possible that what you retrieve from\n> pgstat_fetch_replslot() does not refer exactly to the slot's content\n> under concurrent activity, but you cannot protect a single scan of\n> pg_stat_replication_slots as of an effect of its design:\n> pg_stat_get_replication_slot() has to be called multiple times. The\n> patch at least makes sure that the copy of the slot's stats retrieved\n> by pgstat_fetch_entry() is slightly more consistent\n\nRight, I understand that.\n\n, but you cannot do\n> better than that except if the data retrieved from\n> pg_replication_slots and its stats are fetched in the same context\n> function call, holding the replslot LWLock for the whole scan\n> duration.\n\nYes, agreed.\n\n>\n> > Do you feel that the lock in pgstat_fetch_replslot() should be moved\n> > to its caller when we are done copying the results of that slot? This\n> > will improve the protection slightly.\n>\n> I don't see what extra protection this would offer, as\n> pg_stat_get_replication_slot() is called once for each slot. Feel\n> free to correct me if I'm missing something.\n\nIt slightly improves the chances. pgstat_fetch_replslot is only\ncalled from pg_stat_get_replication_slot(), I thought it might be\nbetter to acquire lock before we call pgstat_fetch_replslot and\nrelease once we are done copying the results for that particular slot.\nBut I also understand that it will not prevent someone from dropping\nthat slot at a later stage when the rest of the calls of\npg_stat_get_replication_slot() are still pending. So I am okay with\nthe current one.\n\nthanks\nShveta\n\n\n",
"msg_date": "Thu, 7 Mar 2024 11:30:55 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "On Thu, Mar 07, 2024 at 11:30:55AM +0530, shveta malik wrote:\n> It slightly improves the chances. pgstat_fetch_replslot is only\n> called from pg_stat_get_replication_slot(), I thought it might be\n> better to acquire lock before we call pgstat_fetch_replslot and\n> release once we are done copying the results for that particular slot.\n> But I also understand that it will not prevent someone from dropping\n> that slot at a later stage when the rest of the calls of\n> pg_stat_get_replication_slot() are still pending.\n\nI doubt that there will be more callers of pgstat_fetch_replslot() in\nthe near future, but at least we would be a bit safer with these\ninternals IDs when manipulating the slots, when considered in\nisolation of this API call\n\n> So I am okay with the current one.\n\nOkay, noted.\n\nLet's give a couple of days to others, in case there are more\ncomments. (Patch looked OK here after a second look this afternoon.)\n--\nMichael",
"msg_date": "Fri, 8 Mar 2024 14:12:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "Hi,\n\nOn Thu, Mar 07, 2024 at 02:17:53PM +0900, Michael Paquier wrote:\n> On Wed, Mar 06, 2024 at 09:05:59AM +0000, Bertrand Drouvot wrote:\n> > Yeah, all of the above done in v3 attached.\n> \n> In passing.. pgstat_create_replslot() and pgstat_drop_replslot() rely\n> on the assumption that the LWLock ReplicationSlotAllocationLock is\n> taken while calling these routines. Perhaps that would be worth some\n> extra Assert(LWLockHeldByMeInMode()) in pgstat_replslot.c for these\n> two? Not directly related to this problem.\n\nYeah, good point: I'll create a dedicated patch for that.\n\nNote that currently pgstat_drop_replslot() would not satisfy this new Assert\nwhen being called from InvalidatePossiblyObsoleteSlot(). I think this call\nshould be removed and created a dedicated thread for that [1].\n\n[1]: https://www.postgresql.org/message-id/ZermH08Eq6YydHpO%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 8 Mar 2024 10:26:21 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "On Fri, Mar 08, 2024 at 10:26:21AM +0000, Bertrand Drouvot wrote:\n> Yeah, good point: I'll create a dedicated patch for that.\n\nSounds good to me.\n\n> Note that currently pgstat_drop_replslot() would not satisfy this new Assert\n> when being called from InvalidatePossiblyObsoleteSlot(). I think this call\n> should be removed and created a dedicated thread for that [1].\n> \n> [1]: https://www.postgresql.org/message-id/ZermH08Eq6YydHpO%40ip-10-97-1-34.eu-west-3.compute.internal\n\nThanks.\n--\nMichael",
"msg_date": "Fri, 8 Mar 2024 19:46:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "On Fri, Mar 08, 2024 at 07:46:26PM +0900, Michael Paquier wrote:\n> Sounds good to me.\n\nI've applied the patch of this thread as b36fbd9f8da1, though I did\nnot see a huge point in backpatching as at the end this is just a\nconsistency improvement.\n--\nMichael",
"msg_date": "Mon, 11 Mar 2024 12:33:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Fri, Mar 08, 2024 at 07:46:26PM +0900, Michael Paquier wrote:\n>> Sounds good to me.\n\n> I've applied the patch of this thread as b36fbd9f8da1, though I did\n> not see a huge point in backpatching as at the end this is just a\n> consistency improvement.\n\nI've closed the CF entry for this [1] as committed. Please re-open\nit if there was something left to do here.\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/47/4878/\n\n\n",
"msg_date": "Tue, 02 Apr 2024 15:18:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
},
{
"msg_contents": "On Tue, Apr 02, 2024 at 03:18:24PM -0400, Tom Lane wrote:\n> I've closed the CF entry for this [1] as committed. Please re-open\n> it if there was something left to do here.\n> \n> [1] https://commitfest.postgresql.org/47/4878/\n\nThanks, I was not aware of this one.\n--\nMichael",
"msg_date": "Wed, 3 Apr 2024 15:02:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing LWLock protection in pgstat_reset_replslot()"
}
] |
[
{
"msg_contents": "hi.\n\n/*****************************************************************************\n * globals.h -- *\n *****************************************************************************/\n\nThe above comment src/include/miscadmin.h is not accurate?\nwe don't have globals.h file?\n\n\n",
"msg_date": "Fri, 1 Mar 2024 19:19:35 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "src/include/miscadmin.h outdated comments"
},
{
"msg_contents": "On Fri, 1 Mar 2024 at 12:19, jian he <[email protected]> wrote:\n>\n> hi.\n>\n> /*****************************************************************************\n> * globals.h -- *\n> *****************************************************************************/\n>\n> The above comment src/include/miscadmin.h is not accurate?\n> we don't have globals.h file?\n\nThe header of the file describes the following:\n\n * miscadmin.h\n * This file contains general postgres administration and initialization\n * stuff that used to be spread out between the following files:\n * globals.h global variables\n * pdir.h directory path crud\n * pinit.h postgres initialization\n * pmod.h processing modes\n * Over time, this has also become the preferred place for widely known\n * resource-limitation stuff, such as work_mem and check_stack_depth().\n\nSo, presumably that section is what once was globals.h.\n\nAs to whether the comment should remain now that it's been 17 years\nsince those files were merged, I'm not sure: while the comment has\nmostly historic value, there is something to be said about it\ndelineating sections of the file.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 1 Mar 2024 12:26:53 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: src/include/miscadmin.h outdated comments"
}
] |
[
{
"msg_contents": "Hello,\r\n\r\npostgres [1264904]=# select 123456789.123456789123456::double precision;\r\n┌────────────────────┐\r\n│ float8 │\r\n├────────────────────┤\r\n│ 123456789.12345679 │\r\n└────────────────────┘\r\n(1 row)\r\n\r\nI do not understand why this number is truncated at 123456789.12345679 that\r\nis 17 digits and not 15 digits\r\n\r\nAny idea\r\n\r\nFabrice\r\n\r\nDocumentation says:\r\ndouble precision 8 bytes variable-precision, inexact 15 decimal digits\r\nprecision\r\n\nHello,postgres [1264904]=# select 123456789.123456789123456::double precision;┌────────────────────┐│ float8 │├────────────────────┤│ 123456789.12345679 │└────────────────────┘(1 row)I do not understand why this number is truncated at 123456789.12345679 that is 17 digits and not 15 digitsAny ideaFabriceDocumentation says:double precision8 bytesvariable-precision, inexact15 decimal digits precision",
"msg_date": "Fri, 1 Mar 2024 15:46:45 +0100",
"msg_from": "Fabrice Chapuis <[email protected]>",
"msg_from_op": true,
"msg_subject": "double precisoin type"
},
{
"msg_contents": "Fabrice Chapuis <[email protected]> writes:\n> Documentation says:\n> double precision 8 bytes variable-precision, inexact 15 decimal digits\n> precision\n\nThe documentation is stating the minimum number of decimal digits\nthat will be accurately reproduced. You got 16 reproduced correctly\nin this example, but you were lucky.\n\nfloat8out has a different rule, which is to emit enough digits to\ndescribe the actually-stored binary value unambiguously, so that\ndump and reload will not change the stored value.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Mar 2024 11:12:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: double precisoin type"
}
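
The behaviour Tom describes is easy to reproduce outside the database with a few lines of plain C (this example is an illustration added here, not code from the thread): 15 significant digits is the minimum guaranteed to be reproduced accurately by a double, while 17 digits is what the output function needs to emit so that the stored binary value can be reconstructed exactly.

#include <stdio.h>

int
main(void)
{
    double      d = 123456789.123456789123456;

    /* 15 significant digits: the precision the documentation promises */
    printf("%.15g\n", d);       /* prints 123456789.123457 */

    /* 17 significant digits: enough to round-trip any IEEE-754 double */
    printf("%.17g\n", d);       /* prints 123456789.12345679 */

    return 0;
}

The second line matches what psql showed above; since v12 the server emits the shortest representation that round-trips (extra_float_digits defaults to 1), which in this case happens to need all 17 digits.
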
] |
[
{
"msg_contents": "Hi hackers!\n\nIn this thread, I want to promote entries from CommitFest that require review. I have scanned through the bugs, clients, and documentation sections, and here is my take on the current situation. All of these threads are currently in the \"Needs review\" state and were marked by the patch author as targeting version 17.\n\nBugs:\n* LockAcquireExtended improvement\n Not really a bug, but rather a limitation. Thread might be of interest for a reviewer who wants to dive into heavy locks. Some input from Robert. I doubt this improvement will land into 17.\n* Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY column(s)\n 3-liner fix, with input from Alvaro.\n* Avoid deadlock and concurrency during orphan temp table removal\n A real deadlock from production. 0 prior external review, hurry up to be first reviewer :) The patch is small, yet need a deep understanding of lock protocols of different combination of objects.\n* Explicitly show dependent types as extension members\n Some issues on the tip of FDW and types subsystems. Fix by Tom, not backpatchable. If you want better understanding of extendebility and type dependencies - take this for review. IMO most probably will be in 17, just need some extra eyes.\n* initdb's -c option behaves wrong way\n Fundamental debate, might seems much like tabs vs spaces (strncmp vs strncasecmp). Patch by Tom, perfect as usual, needs agreement from someone who's already involved.\n\nClients:\n* vacuumdb/clusterdb/reindexdb: allow specifying objects to process in all databases\n Nice feature, some review from Kyotaro Horiguchi, but need more.\n* Support for named parsed statement in psql\n Some review by Jelte was done, but seem to require more attention.\n* Extend pgbench partitioning to pgbench_history\n Tomas Vondra explressed some backward comparability concerns, Melanie questioned implementation details. I doubt this will land into 17, but eventually might be part of a famous \"sort of TPC-B\".\n* psql: Allow editing query results with \\gedit\n There's a possible new nice psql feature, but as of January architectural discussion was still going on. It does not seem like it will be in 17, unless some agreement emerge.\n\nDocumentation:\n* SET ROLE documentation improvement\n Thin matters of superuser documentation. Nathan signed up as a committer, but the thread has no updates for some months.\n* Add detail regarding resource consumption wrt max_connections\n Cary Huang reviewd the patch, but the result is inconclusive.\n* Quick Start Guide to PL/pgSQL and PL/Python Documentation\n Some comments from Pavel and Li seem like were not addressed.\n* Simplify documentation related to Windows builds\n Andres had a couple of notes that were addressed by the author.\n* PG DOCS - protocol specifying boolean parameters without values.\n Small leftover in a relatevely old documentation thread. Needs a bump from someone in the thread.\n* Documentation: warn about two_phase when altering a subscription\n Amit LGTMed, I've added him to cc of the followup.\n\nStay tuned for other CF sections.\n\n\nBest regards, Andrey Borodin, learning how to be a CFM.\n\n",
"msg_date": "Sat, 2 Mar 2024 23:32:15 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "CF entries for 17 to be reviewed"
},
{
"msg_contents": "On Sat, Mar 2, 2024 at 1:32 PM Andrey M. Borodin <[email protected]> wrote:\n>\n> Hi hackers!\n>\n> In this thread, I want to promote entries from CommitFest that require review. I have scanned through the bugs, clients, and documentation sections, and here is my take on the current situation. All of these threads are currently in the \"Needs review\" state and were marked by the patch author as targeting version 17.\n\nHi Andrey, thanks for volunteering. I at least had forgotten to update\nthe target version for all of my registered patches. I suspect others\nmay be in the same situation. Anyway, I've done that now. But, I'm not\nsure if the ones that do have a target version = 17 are actually all\nthe patches targeting 17.\n\n- Melanie\n\n\n",
"msg_date": "Sat, 2 Mar 2024 15:19:51 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CF entries for 17 to be reviewed"
},
{
"msg_contents": "\n\n> On 3 Mar 2024, at 01:19, Melanie Plageman <[email protected]> wrote:\n> \n> I'm not\n> sure if the ones that do have a target version = 17 are actually all\n> the patches targeting 17.\n\nYes, for me it’s only a hint where to bump things up. I will extend scope on other versions when I fill finish a pass though entries with version 17.\n\n\nHere’s my take on “Miscellaneous” section.\n\n\n* Permute underscore separated components of columns before fuzzy matching\n\tThe patch received some review, but not for latest version. I pinged the reviewer for an update.\n* Add pg_wait_for_lockers() function\n\tCitus-related patch, but may be of use to other distributed systems. This patch worth attention at least because author replied to himself 10+ times. Interesting addition, some feedback from Andres and Laurenz.\n* Improve the log message output of basic_archive when basic_archive.archive_directory parameter is not set\n\tRelatively simple change to improve user-friendliness. Daniel Gustafsson expressed interest recently.\n* Fix log_line_prefix to display the transaction id (%x) for statements not in a transaction block\n\tReasonable log improvement, code is simple but in a tricky place. There was some feedback, I've asked if respondent can be a reviewer.\n* Add Index-level REINDEX with multiple jobs\n\tAn addition to DBA toolset. Some unaddressed feedback, pinged authors.\n* Add LSN <-> time conversion facility\n\tThere's ongoing discussion between Melanie and Tomas. Relatively heavyweight patchset, but given current solid technical level of the discussion this might land into 17. Take your chance to chime-in with review! :)\n* date_trunc function in interval version\n\tSome time tricks. There are review notes by Tomas. I pinged authors.\n* Adding comments to help understand psql hidden queries\n\tA couple of patches for psql --echo-hidden. Seems useful and simple. No reviews at all though. I moved the patch to \"Clients\" to reflect actual patch purpose and lighten generic “Miscellaneous\".\n* Improving EXPLAIN's display of SubPlan nodes\n\tSome EXPLAIN changes, Alexander Alekseev was looking into this. I've asked him if he would be the reviewer.\n* Should we remove -Wdeclaration-after-statement?\n\tNot really a patch, kind of an opinion poll. The result is now kind of -BigNumber, I see no chances for this to get into 17, but maybe in future.\n* Add to_regtypemod() SQL function\n\tCool nice function, some reviewers approved the patch. I've took a glance on the patch, seems nice, switched to \"Ready for Committer\". Some unrelated changes to genbki.pl, but according to thread it was needed for something.\n* un-revert MAINTAIN privilege and pg_maintain predefined role\n\tThe work seems to be going on.\n* Checkpoint extension hook\n\tThe patch is not provided yet. I've pinged the thread.\n\nStay tuned, I hope everyone interested in reviewing will find themself a cool interesting patch or two.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 4 Mar 2024 13:42:51 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CF entries for 17 to be reviewed"
},
{
"msg_contents": "\n\n> On 4 Mar 2024, at 13:42, Andrey M. Borodin <[email protected]> wrote:\n> \n> Here’s my take on “Miscellaneous” section.\n\nI’ve read other small sections.\n\nMonitoring & Control\n* Logging parallel worker draught\n\tThe patchset on improving loggin os resource starvation when parallel workers are spawned. Some major refactoring proposed. I've switched to WoA.\n* System username in pg_stat_activity\n\tThere's active discussion on extending or not pg_stat_activity.\n\nTesting\n* Add basic tests for the low-level backup method\n\tMichael Paquier provided feedback, so I switched to WoA.\n\nSystem Administration\n* recovery modules\n\tThere was a very active discussion, but after April 2023 it stalled, and only some rebases are there. Maybe a fresh look could revive the thread.\n* Possibility to disable `ALTER SYSTEM`\n\tThe discussion seems active, but inconclusive.\n* Add checkpoint/redo LSNs to recovery errors.\n\tMichael Paquier provided feedback, so I switched to WoA.\n\nSecurity\n* add not_before and not_after timestamps to sslinfo extension and pg_stat_ssl\n\tMost recent version provided by Daniel Gustafsson, but the thread stalled in September 2023. Maybe Jacob or some other reviewer could refresh it, IMO this might land in 17.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 4 Mar 2024 16:51:46 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CF entries for 17 to be reviewed"
},
{
"msg_contents": "\n\n> On 4 Mar 2024, at 14:51, Andrey M. Borodin <[email protected]> wrote:\n> \n> I’ve read other small sections.\n\nHere are statuses for \"Refactoring\" section:\n* New [relation] options engine\n\tRelatively heavy refactoring. Author keeps interest to the patch for some years now. As I understood the main problem is that big refactoring cannot be split into incremental steps. Definitely worth reviewing, but I think not for 17 already...\n* Confine vacuum skip logic to lazy_scan_skip\n\tThere was a discussion at the end of 2023, but no recent review activity. Author actively improves the patchset.\n* Change prefetch and read strategies to use range in pg_prewarm\n\tSome discussion is happening. Changed to WoA to reflect actual status.\n* Potential issue in ecpg-informix decimal converting functions\n\tOn Daniel's TODO list.\n* BitmapHeapScan table AM violation removal (and use streaming read API)\n\tActive discussion with reviewers is going on.\n* Streaming read sequential and TID range scan\n\tSeems like discussion on this patch is going on in nearby threads. In this thread I observe only improved patch versions posted.\n\nAll in all \"Refactoring\" section seemed to me more complex and demanding in-depth knowledge. It's difficult to judge why new approaches are an improvement. So for newcomer reviewers I'd recommend to look to other sections.\n\nThanks.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 6 Mar 2024 16:49:48 +0300",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CF entries for 17 to be reviewed"
},
{
"msg_contents": "\n\n> On 6 Mar 2024, at 18:49, Andrey M. Borodin <[email protected]> wrote:\n> \n> Here are statuses for \"Refactoring\" section:\n\nI've made a pass through \"Replication and Recovery\" and \"SQL commands\" sections.\n\"SQL commands\" section seems to me most interesting stuff on the commitfest (but I'm far from inspecting every thing thread yet). But reviewing patches from this section is not a work for one CF definitely. Be prepared that this is a long run, with many iterations and occasional architectural pivots. Typically reviewers work on these items for much more than one month.\n\nReplication and Recovery\n* Synchronizing slots from primary to standby\n Titanic work. A lot of stuff already pushed, v108 is now in the discussion. ISTM that entry barrier of jumping into discussion with something useful is really high.\n* CREATE SUBSCRIPTION ... SERVER\n The discussion stopped in January. Authors posted new version recently.\n* speed up a logical replication setup (pg_createsubscriber)\n Newest version posted recently, but it fails CFbot's tests. Pinged authors.\n* Have pg_basebackup write \"dbname\" in \"primary_conninfo\"?\n Some feedback and descussin provided. Switched to WoA.\n\nSQL Commands\n* Add SPLIT PARTITION/MERGE PARTITIONS commands\n Cool new commands, very useful for sharding. CF item was RfC recently, need review update after rebase.\n* Add CANONICAL option to xmlserialize\n Vignesh C and Chapman Flack provided some feedback back in October 2023, but the patch still needs review.\n* Incremental View Maintenance\n This is a super cool feature. IMO at this point is too late for 17, but you should consider reviewing this not because it's easy, but because it's hard. It's real rocket science. Fine-grained 11-step patchset, which can change a lot of workloads if committed. CFbot finds some failures, but this should not stop anyone frome reviewing in this case.\n* Implement row pattern recognition feature\n SQL:2016 feature, carefully split into 8 steps. Needs review, probably review in a long run. The feature seems big. 3 reviewers are working on this, but no recent input for some months.\n* COPY TO json\n Active thread with many different authors proposing different patches. I could not summarize it, asked CF entry author for help.\n* Add new error_action COPY ON_ERROR \"log\"\n There's an active discussion in the thread.\n* add COPY option RECECT_LIMIT\n While the patch seems much similar to previous, there's no discussion in this thread...\n \nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 8 Mar 2024 22:59:51 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CF entries for 17 to be reviewed"
},
{
"msg_contents": "Hi,\n\n> > On 6 Mar 2024, at 18:49, Andrey M. Borodin <[email protected]> wrote:\n> >\n> > Here are statuses for \"Refactoring\" section:\n>\n> [...]\n\n> Aleksander, I would greatly appreciate if you join me in managing CF. Together we can move more stuff :)\n> Currently, I'm going through \"SQL Commands\". And so far I had not come to \"Performance\" and \"Server Features\" at all... So if you can handle updating statuses of that sections - that would be great.\n\nServer Features:\n\n* Fix partitionwise join with partially-redundant join clauses\n I see there is a good discussion in the progress here. Doesn't\nseem to be needing more reviewers at the moment.\n* ALTER TABLE SET ACCESS METHOD on partitioned tables\n Ditto.\n* Patch to implement missing join selectivity estimation for range types\n The patch doesn't apply and has been \"Waiting on Author\" for a few\nmonths now. Could be a candidate for closing with RwF.\n* logical decoding and replication of sequences, take 2\n According to Tomas Vondra the patch is not ready for PG17. The\npatch is marked as \"Waiting on Author\". Although it could be withrowed\nfor now, personally I see no problem with keeping it WoA until the\nPG18 cycle begins.\n* Add the ability to limit the amount of memory that can be allocated\nto backends\n v20231226 doesn't apply. The patch needs love from somebody interested in it.\n* Multi-version ICU\n By a quick look the patch doesn't apply and was moved between\nseveral commitfests, last time in \"Waiting on Author\" state. Could be\na candidate for closing with RwF.\n* Post-special Page Storage TDE support\n A large patch divided into 28 (!) parts. Currently needs a rebase.\nWhich shouldn't necessarily stop a reviewer looking for a challenging\ntask.\n* Built-in collation provider for \"C\" and \"C.UTF-8\"\n Peter E left some feedback today, so I changed the status to\n\"Waiting on Author\"\n* ltree hash functions\n Marked as RfC and cfbot seems to be happy with the patch. Could\nuse some attention from a committer?\n* UUID v7\n The patch is in good shape. Michael argued that the patch should\nbe merged when RFC is approved. No action seems to be needed until\nthen.\n* (to be continued)\n\n\n--\nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 12 Mar 2024 16:33:20 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CF entries for 17 to be reviewed"
},
{
"msg_contents": "Hi,\n\n> > Aleksander, I would greatly appreciate if you join me in managing CF. Together we can move more stuff :)\n> > Currently, I'm going through \"SQL Commands\". And so far I had not come to \"Performance\" and \"Server Features\" at all... So if you can handle updating statuses of that sections - that would be great.\n>\n> Server Features:\n>\n> [...]\n\nI noticed that \"Avoid mixing custom and OpenSSL BIO functions\" had two\nentries [1][2]. I closed [1] and marked it as \"Withdrawn\" due to lack\nof a better status. Maybe we need an additional \"Duplicate\" status.\n\n[1]: https://commitfest.postgresql.org/47/4834/\n[2]: https://commitfest.postgresql.org/47/4835/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 12 Mar 2024 16:49:32 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CF entries for 17 to be reviewed"
},
{
"msg_contents": "Hi,\n\n> > Aleksander, I would greatly appreciate if you join me in managing CF. Together we can move more stuff :)\n> > Currently, I'm going through \"SQL Commands\". And so far I had not come to \"Performance\" and \"Server Features\" at all... So if you can handle updating statuses of that sections - that would be great.\n>\n> Server Features:\n>\n> [...]\n>\n> * (to be continued)\n\nServer Features:\n\n* Custom storage managers (SMGR), redux\n The patch needs a rebase. I notified the authors.\n* pg_tracing\n Ditto. The patch IMO is promising. I encourage anyone interested in\nthe topic to take a look.\n* Support run-time partition pruning for hash join\n The patch needs (more) review. It doesn't look extremely complex.\n* Support prepared statement invalidation when result or argument types change\n I changed the status to \"Waiting on Author\". The patch needs a\nrebase since January.\n* Allow INSTEAD OF DELETE triggers to modify the tuple for RETURNING\n Needs rebase.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 15 Mar 2024 17:28:32 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CF entries for 17 to be reviewed"
}
] |
[
{
"msg_contents": "These two animals seem to have got mixed up about about the size of\nthis relation in the same place:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2024-02-28%2017%3A34%3A30\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2024-03-01%2006%3A47%3A53\n\n+++ /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/regress/results/constraints.out\n2024-03-01 08:22:11.624897033 +0100\n@@ -573,42 +573,38 @@\n UNIQUE (i) DEFERRABLE INITIALLY DEFERRED;\n BEGIN;\n INSERT INTO unique_tbl VALUES (1, 'five');\n+ERROR: could not read blocks 0..0 in file \"base/16384/21437\": read\nonly 0 of 8192 bytes\n\nThat error message changed slightly in my smgrreadv() commit a couple\nof months ago (it would have been \"block 0\" and now it's \"blocks 0..0\"\nbecause now we can read more than one block at a time) but I don't\nimmediately see how anything at that low level could be responsible\nfor this.\n\n\n",
"msg_date": "Sun, 3 Mar 2024 10:39:57 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Failures in constraints regression test, \"read only 0 of 8192 bytes\""
},
{
"msg_contents": "These are \"my\" animals (running at a local university). There's a couple\ninteresting details:\n\n1) the animals run on the same machine (one with gcc, one with clang)\n\n2) I did upgrade the OS and restarted the machine on 2024/02/26, i.e.\nright before the failures started\n\nThese might be just coincidences, but maybe something got broken by the\nupgrade ... OTOH it's weird it'd affect just HEAD and none of the other\nbranches, and on two difference compilers.\n\nJust to be sure I removed the buildroot, in case there's something wrong\nwith ccache. It's a wild guess, but I don't have any other idea.\n\nregards\n\nOn 3/2/24 22:39, Thomas Munro wrote:\n> These two animals seem to have got mixed up about about the size of\n> this relation in the same place:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2024-02-28%2017%3A34%3A30\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2024-03-01%2006%3A47%3A53\n> \n> +++ /home/buildfarm/trilobite/buildroot/HEAD/pgsql.build/src/test/regress/results/constraints.out\n> 2024-03-01 08:22:11.624897033 +0100\n> @@ -573,42 +573,38 @@\n> UNIQUE (i) DEFERRABLE INITIALLY DEFERRED;\n> BEGIN;\n> INSERT INTO unique_tbl VALUES (1, 'five');\n> +ERROR: could not read blocks 0..0 in file \"base/16384/21437\": read\n> only 0 of 8192 bytes\n> \n> That error message changed slightly in my smgrreadv() commit a couple\n> of months ago (it would have been \"block 0\" and now it's \"blocks 0..0\"\n> because now we can read more than one block at a time) but I don't\n> immediately see how anything at that low level could be responsible\n> for this.\n> \n> \n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 2 Mar 2024 23:29:52 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
{
"msg_contents": "Le samedi 2 mars 2024, 23:29:52 CET Tomas Vondra a écrit :\n> These are \"my\" animals (running at a local university). There's a couple\n> interesting details:\n\nHi Tomas,\ndo you still have the failing cluster data ? \n\nNoah pointed me to this thread, and it looks a bit similar to the FSM \ncorruption issue I'm facing: https://www.postgresql.org/message-id/\n1925490.taCxCBeP46%40aivenlaptop\n\nSo if you still have the data, it would be nice to see if you indeed have a \ncorrupted FSM, and if you have indications when it happened.\n\nBest regards,\n\n--\nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 04 Mar 2024 14:16:28 +0100",
"msg_from": "Ronan Dunklau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test,\n \"read only 0 of 8192 bytes\""
},
{
"msg_contents": "\n\nOn 3/4/24 14:16, Ronan Dunklau wrote:\n> Le samedi 2 mars 2024, 23:29:52 CET Tomas Vondra a écrit :\n>> These are \"my\" animals (running at a local university). There's a couple\n>> interesting details:\n> \n> Hi Tomas,\n> do you still have the failing cluster data ? \n> \n> Noah pointed me to this thread, and it looks a bit similar to the FSM \n> corruption issue I'm facing: https://www.postgresql.org/message-id/\n> 1925490.taCxCBeP46%40aivenlaptop\n> \n> So if you still have the data, it would be nice to see if you indeed have a \n> corrupted FSM, and if you have indications when it happened.\n> \n\nSorry, I nuked the buildroot so I don't have the data anymore. Let's see\nif it fails again.\n\nregards\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 4 Mar 2024 14:29:51 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
{
"msg_contents": "Happened again. I see this is OpenSUSE. Does that mean the file\nsystem is Btrfs?\n\n\n",
"msg_date": "Fri, 8 Mar 2024 21:33:56 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Failures in constraints regression test,\n \"read only 0 of 8192 bytes\""
},
{
"msg_contents": "On 3/8/24 09:33, Thomas Munro wrote:\n> Happened again. I see this is OpenSUSE. Does that mean the file\n> system is Btrfs?\n\n\nIt is, but I don't think that matters - I've been able to reproduce this\nlocally on my laptop using ext4 filesystem. I'd bet the important piece\nhere is -DCLOBBER_CACHE_ALWAYS (and it seems avocet/trilobite are the\nonly animals running with this).\n\nAlso, if this really is a filesystem (or environment) issue, it seems\nvery strange it'd only affect HEAD and not the other branches. So it\nseems quite likely this is actually triggered by a commit.\n\nLooking at the commits from the good/bad runs, I see this:\n\navocet: good=4c2369a bad=f5a465f\ntrilobite: good=d13ff82 bad=def0ce3\n\nThat means the commit would have to be somewhere here:\n\nf5a465f1a07 Promote assertion about !ReindexIsProcessingIndex to ...\n57b28c08305 Doc: fix minor typos in two ECPG function descriptions.\n28e858c0f95 Improve documentation and GUC description for ...\na661bf7b0f5 Remove flaky isolation tests for timeouts\n874d817baa1 Multiple revisions to the GROUP BY reordering tests\n466979ef031 Replace lateral references to removed rels in subqueries\na6b2a51e16d Avoid dangling-pointer problem with partitionwise ...\nd360e3cc60e Fix compiler warning on typedef redeclaration\n8af25652489 Introduce a new smgr bulk loading facility.\ne612384fc78 Fix mistake in SQL features list\nd13ff82319c Fix BF failure in commit 93db6cbda0.\n\nMy guess would be 8af25652489, as it's the only storage-related commit.\n\nI'm currently running tests to verify this.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 8 Mar 2024 13:21:09 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
{
"msg_contents": "\n\nOn 3/8/24 13:21, Tomas Vondra wrote:\n> On 3/8/24 09:33, Thomas Munro wrote:\n>> Happened again. I see this is OpenSUSE. Does that mean the file\n>> system is Btrfs?\n> \n> \n> It is, but I don't think that matters - I've been able to reproduce this\n> locally on my laptop using ext4 filesystem. I'd bet the important piece\n> here is -DCLOBBER_CACHE_ALWAYS (and it seems avocet/trilobite are the\n> only animals running with this).\n> \n> Also, if this really is a filesystem (or environment) issue, it seems\n> very strange it'd only affect HEAD and not the other branches. So it\n> seems quite likely this is actually triggered by a commit.\n> \n> Looking at the commits from the good/bad runs, I see this:\n> \n> avocet: good=4c2369a bad=f5a465f\n> trilobite: good=d13ff82 bad=def0ce3\n> \n> That means the commit would have to be somewhere here:\n> \n> f5a465f1a07 Promote assertion about !ReindexIsProcessingIndex to ...\n> 57b28c08305 Doc: fix minor typos in two ECPG function descriptions.\n> 28e858c0f95 Improve documentation and GUC description for ...\n> a661bf7b0f5 Remove flaky isolation tests for timeouts\n> 874d817baa1 Multiple revisions to the GROUP BY reordering tests\n> 466979ef031 Replace lateral references to removed rels in subqueries\n> a6b2a51e16d Avoid dangling-pointer problem with partitionwise ...\n> d360e3cc60e Fix compiler warning on typedef redeclaration\n> 8af25652489 Introduce a new smgr bulk loading facility.\n> e612384fc78 Fix mistake in SQL features list\n> d13ff82319c Fix BF failure in commit 93db6cbda0.\n> \n> My guess would be 8af25652489, as it's the only storage-related commit.\n> \n> I'm currently running tests to verify this.\n> \n\nYup, the breakage starts with this commit. I haven't looked into the\nroot cause, or whether the commit maybe just made some pre-existing\nissue easier to hit. Also, I haven't followed the discussion on the\npgsql-bugs thread [1], maybe there are some interesting findings.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/1878547.tdWV9SEqCh%40aivenlaptop\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 8 Mar 2024 14:36:48 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
{
"msg_contents": "Le vendredi 8 mars 2024, 14:36:48 CET Tomas Vondra a écrit :\n> > My guess would be 8af25652489, as it's the only storage-related commit.\n> > \n> > I'm currently running tests to verify this.\n> \n> Yup, the breakage starts with this commit. I haven't looked into the\n> root cause, or whether the commit maybe just made some pre-existing\n> issue easier to hit. Also, I haven't followed the discussion on the\n> pgsql-bugs thread [1], maybe there are some interesting findings.\n> \n\nIf that happens only on HEAD and not on 16, and doesn't involve WAL replay, \nthen it's not the same bug. \n\n--\nRonan Dunklau\n\n\n\n\n",
"msg_date": "Fri, 08 Mar 2024 14:40:00 +0100",
"msg_from": "Ronan Dunklau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test,\n \"read only 0 of 8192 bytes\""
},
{
"msg_contents": "On Sat, Mar 9, 2024 at 2:36 AM Tomas Vondra\n<[email protected]> wrote:\n> On 3/8/24 13:21, Tomas Vondra wrote:\n> > My guess would be 8af25652489, as it's the only storage-related commit.\n> >\n> > I'm currently running tests to verify this.\n> >\n>\n> Yup, the breakage starts with this commit. I haven't looked into the\n> root cause, or whether the commit maybe just made some pre-existing\n> issue easier to hit. Also, I haven't followed the discussion on the\n> pgsql-bugs thread [1], maybe there are some interesting findings.\n\nAdding Heikki.\n\n\n",
"msg_date": "Sat, 9 Mar 2024 09:29:52 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Failures in constraints regression test,\n \"read only 0 of 8192 bytes\""
},
{
"msg_contents": "\n\nOn 3/8/24 21:29, Thomas Munro wrote:\n> On Sat, Mar 9, 2024 at 2:36 AM Tomas Vondra\n> <[email protected]> wrote:\n>> On 3/8/24 13:21, Tomas Vondra wrote:\n>>> My guess would be 8af25652489, as it's the only storage-related commit.\n>>>\n>>> I'm currently running tests to verify this.\n>>>\n>>\n>> Yup, the breakage starts with this commit. I haven't looked into the\n>> root cause, or whether the commit maybe just made some pre-existing\n>> issue easier to hit. Also, I haven't followed the discussion on the\n>> pgsql-bugs thread [1], maybe there are some interesting findings.\n> \n> Adding Heikki.\n\nI spent a bit of time investigating this today, but haven't made much\nprogress due to (a) my unfamiliarity with the smgr code in general and\nthe patch in particular, and (b) CLOBBER_CACHE_ALWAYS making it quite\ntime consuming to iterate and experiment.\n\nHowever, the smallest schedule that still reproduces the issue is:\n\n-------------------\ntest: test_setup\n\ntest: create_aggregate create_function_sql create_cast constraints\ntriggers select inherit typed_table vacuum drop_if_exists\nupdatable_views roleattributes create_am hash_func errors infinite_recurse\n-------------------\n\nI tried to reduce the second step to just \"constraints\", but that does\nnot fail. Clearly there's some concurrency at play, and having just a\nsingle backend makes that go away.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 8 Mar 2024 21:48:45 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
{
"msg_contents": "On Sat, Mar 9, 2024 at 9:48 AM Tomas Vondra\n<[email protected]> wrote:\n> I spent a bit of time investigating this today, but haven't made much\n> progress due to (a) my unfamiliarity with the smgr code in general and\n> the patch in particular, and (b) CLOBBER_CACHE_ALWAYS making it quite\n> time consuming to iterate and experiment.\n>\n> However, the smallest schedule that still reproduces the issue is:\n>\n> -------------------\n> test: test_setup\n>\n> test: create_aggregate create_function_sql create_cast constraints\n> triggers select inherit typed_table vacuum drop_if_exists\n> updatable_views roleattributes create_am hash_func errors infinite_recurse\n> -------------------\n\nThanks, reproduced here (painfully slowly). Looking...\n\nHuh, also look at these extra problems later in the logs of the latest\ntrilobite and avocet runs, further down after the short read errors:\n\nTRAP: failed Assert(\"j > attnum\"), File: \"heaptuple.c\", Line: 640, PID: 15753\npostgres: autovacuum worker regression(ExceptionalCondition+0x67)[0x9c5f37]\npostgres: autovacuum worker regression[0x4b60c8]\npostgres: autovacuum worker regression[0x5ff735]\npostgres: autovacuum worker regression[0x5fe468]\npostgres: autovacuum worker regression(analyze_rel+0x133)[0x5fd5e3]\npostgres: autovacuum worker regression(vacuum+0x6b6)[0x683926]\npostgres: autovacuum worker regression[0x7ce5e3]\npostgres: autovacuum worker regression[0x7cc4f0]\npostgres: autovacuum worker regression(StartAutoVacWorker+0x22)[0x7cc152]\npostgres: autovacuum worker regression[0x7d57d1]\npostgres: autovacuum worker regression[0x7d37bf]\npostgres: autovacuum worker regression[0x6f5a4f]\n\nThen crash recovery fails, in one case with:\n\n2024-03-07 20:28:18.632 CET [15860:4] WARNING: will not overwrite a used ItemId\n2024-03-07 20:28:18.632 CET [15860:5] CONTEXT: WAL redo at 0/FB07A48\nfor Heap/INSERT: off: 9, flags: 0x00; blkref #0: rel 1663/16384/2610,\nblk 12\n2024-03-07 20:28:18.632 CET [15860:6] PANIC: failed to add tuple\n2024-03-07 20:28:18.632 CET [15860:7] CONTEXT: WAL redo at 0/FB07A48\nfor Heap/INSERT: off: 9, flags: 0x00; blkref #0: rel 1663/16384/2610,\nblk 12\n\n... and in another with:\n\n2024-03-05 11:51:27.992 CET [25510:4] PANIC: invalid lp\n2024-03-05 11:51:27.992 CET [25510:5] CONTEXT: WAL redo at 0/F87A8D0\nfor Heap/INPLACE: off: 29; blkref #0: rel 1663/16384/20581, blk 0\n\n\n",
"msg_date": "Sun, 10 Mar 2024 17:02:39 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Failures in constraints regression test,\n \"read only 0 of 8192 bytes\""
},
{
"msg_contents": "On Sun, Mar 10, 2024 at 5:02 PM Thomas Munro <[email protected]> wrote:\n> Thanks, reproduced here (painfully slowly). Looking...\n\nI changed that ERROR to a PANIC and now I can see that\n_bt_metaversion() is failing to read a meta page (block 0), and the\nfile is indeed of size 0 in my filesystem. Which is not cool, for a\nbtree. Looking at btbuildempty(), we have this sequence:\n\n bulkstate = smgr_bulk_start_rel(index, INIT_FORKNUM);\n\n /* Construct metapage. */\n metabuf = smgr_bulk_get_buf(bulkstate);\n _bt_initmetapage((Page) metabuf, P_NONE, 0, allequalimage);\n smgr_bulk_write(bulkstate, BTREE_METAPAGE, metabuf, true);\n\n smgr_bulk_finish(bulkstate);\n\nOoh. One idea would be that the smgr lifetime stuff is b0rked,\nintroducing corruption. Bulk write itself isn't pinning the smgr\nrelation, it's relying purely on the object not being invalidated,\nwhich the theory of 21d9c3ee's commit message allowed for but ... here\nit's destroyed (HASH_REMOVE'd) sooner under CACHE_CLOBBER_ALWAYS,\nwhich I think we failed to grok. If that's it, I'm surprised that\nthings don't implode more spectacularly. Perhaps HASH_REMOVE should\nclobber objects in debug builds, similar to pfree?\n\nFor that hypothesis, the corruption might not be happening in the\nabove-quoted code itself, because it doesn't seem to have an\ninvalidation acceptance point (unless I'm missing it). Some other\nbulk write got mixed up? Not sure yet.\n\nI won't be surprised if the answer is: if you're holding a reference,\nyou have to get a pin (referring to bulk_write.c).\n\n\n",
"msg_date": "Sun, 10 Mar 2024 18:48:22 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Failures in constraints regression test,\n \"read only 0 of 8192 bytes\""
},
{
"msg_contents": "On Sun, Mar 10, 2024 at 6:48 PM Thomas Munro <[email protected]> wrote:\n> I won't be surprised if the answer is: if you're holding a reference,\n> you have to get a pin (referring to bulk_write.c).\n\nAhhh, on second thoughts, I take that back, I think the original\ntheory still actually works just fine. It's just that somewhere in\nour refactoring of that commit, when we were vacillating between\ndifferent semantics for 'destroy' and 'release', I think we made a\nmistake: in RelationCacheInvalidate() I think we should now call\nsmgrreleaseall(), not smgrdestroyall(). That satisfies the\nrequirements for sinval queue overflow: we close md.c segments (and\nmost importantly virtual file descriptors), so our lack of sinval\nrecords can't hurt us, we'll reopen all files as required. That's\nwhat CLOBBER_CACHE_ALWAYS is effectively testing (and more). But the\nsmgr pointer remains valid, and retains only its \"identity\", eg hash\ntable key, and that's also fine. It won't be destroyed until after\nthe end of the transaction. Which was the point, and it allows things\nlike bulk_write.c (and streaming_read.c) to hold an smgr reference.\nRight?\n\n\n",
"msg_date": "Sun, 10 Mar 2024 19:23:35 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Failures in constraints regression test,\n \"read only 0 of 8192 bytes\""
},
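To make the change Thomas describes above concrete, here is a minimal, hedged sketch of the call swap in RelationCacheInvalidate(); the two smgr function names are real, but the comment is paraphrased and the rest of the function is elided, so treat it as an illustration rather than the committed patch:

	/* near the end of RelationCacheInvalidate() in relcache.c (sketch) */

	/*
	 * Close the underlying files and md.c segments so that missed sinval
	 * messages cannot leave us holding descriptors for recycled
	 * relfilenodes, but keep the SMgrRelation objects themselves alive:
	 * unpinned holders such as bulk_write.c rely on them staying valid
	 * until end of transaction.
	 */
	smgrreleaseall();		/* previously: smgrdestroyall() */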
{
"msg_contents": "Thanks for diagnosing this!\n\nOn 10/03/2024 08:23, Thomas Munro wrote:\n> On Sun, Mar 10, 2024 at 6:48 PM Thomas Munro <[email protected]> wrote:\n>> I won't be surprised if the answer is: if you're holding a reference,\n>> you have to get a pin (referring to bulk_write.c).\n> \n> Ahhh, on second thoughts, I take that back, I think the original\n> theory still actually works just fine. It's just that somewhere in\n> our refactoring of that commit, when we were vacillating between\n> different semantics for 'destroy' and 'release', I think we made a\n> mistake: in RelationCacheInvalidate() I think we should now call\n> smgrreleaseall(), not smgrdestroyall().\n\nYes, I ran the reproducer with 'rr', and came to the same conclusion. \nThat smgrdestroyall() call closes destroys the SmgrRelation, breaking \nthe new assumption that an unpinned SmgrRelation is valid until end of \ntransaction.\n\n> That satisfies the\n> requirements for sinval queue overflow: we close md.c segments (and\n> most importantly virtual file descriptors), so our lack of sinval\n> records can't hurt us, we'll reopen all files as required. That's\n> what CLOBBER_CACHE_ALWAYS is effectively testing (and more). But the\n> smgr pointer remains valid, and retains only its \"identity\", eg hash\n> table key, and that's also fine. It won't be destroyed until after\n> the end of the transaction. Which was the point, and it allows things\n> like bulk_write.c (and streaming_read.c) to hold an smgr reference.\n> Right\n\n+1.\n\nI wonder if we can now simplify RelationCacheInvalidate() further. In \nthe loop above that smgrdestroyall():\n\n> \twhile ((idhentry = (RelIdCacheEnt *) hash_seq_search(&status)) != NULL)\n> \t{\n> \t\trelation = idhentry->reldesc;\n> \n> \t\t/* Must close all smgr references to avoid leaving dangling ptrs */\n> \t\tRelationCloseSmgr(relation);\n> \n> \t\t/*\n> \t\t * Ignore new relations; no other backend will manipulate them before\n> \t\t * we commit. Likewise, before replacing a relation's relfilelocator,\n> \t\t * we shall have acquired AccessExclusiveLock and drained any\n> \t\t * applicable pending invalidations.\n> \t\t */\n> \t\tif (relation->rd_createSubid != InvalidSubTransactionId ||\n> \t\t\trelation->rd_firstRelfilelocatorSubid != InvalidSubTransactionId)\n> \t\t\tcontinue;\n> \n\nI don't think we need to call RelationCloseSmgr() for relations created \nin the same transaction anymore.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sun, 10 Mar 2024 11:20:19 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
{
"msg_contents": "On 10/03/2024 11:20, Heikki Linnakangas wrote:\n> On 10/03/2024 08:23, Thomas Munro wrote:\n>> On Sun, Mar 10, 2024 at 6:48 PM Thomas Munro <[email protected]> wrote:\n>>> I won't be surprised if the answer is: if you're holding a reference,\n>>> you have to get a pin (referring to bulk_write.c).\n>>\n>> Ahhh, on second thoughts, I take that back, I think the original\n>> theory still actually works just fine. It's just that somewhere in\n>> our refactoring of that commit, when we were vacillating between\n>> different semantics for 'destroy' and 'release', I think we made a\n>> mistake: in RelationCacheInvalidate() I think we should now call\n>> smgrreleaseall(), not smgrdestroyall().\n> \n> Yes, I ran the reproducer with 'rr', and came to the same conclusion.\n> That smgrdestroyall() call closes destroys the SmgrRelation, breaking\n> the new assumption that an unpinned SmgrRelation is valid until end of\n> transaction.\n> \n>> That satisfies the\n>> requirements for sinval queue overflow: we close md.c segments (and\n>> most importantly virtual file descriptors), so our lack of sinval\n>> records can't hurt us, we'll reopen all files as required. That's\n>> what CLOBBER_CACHE_ALWAYS is effectively testing (and more). But the\n>> smgr pointer remains valid, and retains only its \"identity\", eg hash\n>> table key, and that's also fine. It won't be destroyed until after\n>> the end of the transaction. Which was the point, and it allows things\n>> like bulk_write.c (and streaming_read.c) to hold an smgr reference.\n>> Right\n> \n> +1.\n> \n> I wonder if we can now simplify RelationCacheInvalidate() further. In\n> the loop above that smgrdestroyall():\n> \n>> \twhile ((idhentry = (RelIdCacheEnt *) hash_seq_search(&status)) != NULL)\n>> \t{\n>> \t\trelation = idhentry->reldesc;\n>>\n>> \t\t/* Must close all smgr references to avoid leaving dangling ptrs */\n>> \t\tRelationCloseSmgr(relation);\n>>\n>> \t\t/*\n>> \t\t * Ignore new relations; no other backend will manipulate them before\n>> \t\t * we commit. Likewise, before replacing a relation's relfilelocator,\n>> \t\t * we shall have acquired AccessExclusiveLock and drained any\n>> \t\t * applicable pending invalidations.\n>> \t\t */\n>> \t\tif (relation->rd_createSubid != InvalidSubTransactionId ||\n>> \t\t\trelation->rd_firstRelfilelocatorSubid != InvalidSubTransactionId)\n>> \t\t\tcontinue;\n>>\n> \n> I don't think we need to call RelationCloseSmgr() for relations created\n> in the same transaction anymore.\n\nBarring objections, I'll commit the attached.\n\nHmm, I'm not sure if we need even smgrreleaseall() here anymore. It's \nnot required for correctness AFAICS. We don't do it in single-rel \ninvalidation in RelationCacheInvalidateEntry() either.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Sun, 10 Mar 2024 22:30:54 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 9:30 AM Heikki Linnakangas <[email protected]> wrote:\n> Barring objections, I'll commit the attached.\n\n+1\n\nI guess the comment for smgrreleaseall() could also be updated. It\nmentions only PROCSIGNAL_BARRIER_SMGRRELEASE, but I think sinval\noverflow (InvalidateSystemCaches()) should also be mentioned?\n\n> Hmm, I'm not sure if we need even smgrreleaseall() here anymore. It's\n> not required for correctness AFAICS. We don't do it in single-rel\n> invalidation in RelationCacheInvalidateEntry() either.\n\nI think we do, because we have missed sinval messages. It's unlikely\nbut a relfilenode might have been recycled, and we might have file\ndescriptors that point to the unlinked files. That is, there are new\nfiles with the same names and we need to open those ones.\n\n\n",
"msg_date": "Mon, 11 Mar 2024 09:59:52 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Failures in constraints regression test,\n \"read only 0 of 8192 bytes\""
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 9:59 AM Thomas Munro <[email protected]> wrote:\n> On Mon, Mar 11, 2024 at 9:30 AM Heikki Linnakangas <[email protected]> wrote:\n> > Hmm, I'm not sure if we need even smgrreleaseall() here anymore. It's\n> > not required for correctness AFAICS. We don't do it in single-rel\n> > invalidation in RelationCacheInvalidateEntry() either.\n>\n> I think we do, because we have missed sinval messages. It's unlikely\n> but a relfilenode might have been recycled, and we might have file\n> descriptors that point to the unlinked files. That is, there are new\n> files with the same names and we need to open those ones.\n\n... though I think you would be right if Dilip and Robert had\nsucceeded in their quest to introduce 56-bit non-cycling relfilenodes.\nAnd for the record, we can also shoot ourselves in the foot in another\nknown case without sinval[1], so more work is needed here, but that\ndoesn't mean this sinval code path should also aim footwards.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGLs554tQFCUjv_vn7ft9Xv5LNjPoAd--3Df%2BJJKJ7A8kw%40mail.gmail.com#f099d68e95edcfe408818447d9da04a7\n\n\n",
"msg_date": "Mon, 11 Mar 2024 10:32:09 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Failures in constraints regression test,\n \"read only 0 of 8192 bytes\""
},
{
"msg_contents": "On 10/03/2024 22:59, Thomas Munro wrote:\n> On Mon, Mar 11, 2024 at 9:30 AM Heikki Linnakangas <[email protected]> wrote:\n>> Barring objections, I'll commit the attached.\n> \n> +1\n\nPushed, thanks!\n\n> I guess the comment for smgrreleaseall() could also be updated. It\n> mentions only PROCSIGNAL_BARRIER_SMGRRELEASE, but I think sinval\n> overflow (InvalidateSystemCaches()) should also be mentioned?\n\nI removed that comment; people can grep to find the callers.\n\n>> Hmm, I'm not sure if we need even smgrreleaseall() here anymore. It's\n>> not required for correctness AFAICS. We don't do it in single-rel\n>> invalidation in RelationCacheInvalidateEntry() either.\n> \n> I think we do, because we have missed sinval messages. It's unlikely\n> but a relfilenode might have been recycled, and we might have file\n> descriptors that point to the unlinked files. That is, there are new\n> files with the same names and we need to open those ones.\n\nGotcha.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 11 Mar 2024 09:09:39 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
{
"msg_contents": "Hello Heikki,\n\n11.03.2024 10:09, Heikki Linnakangas wrote:\n> On 10/03/2024 22:59, Thomas Munro wrote:\n>> On Mon, Mar 11, 2024 at 9:30 AM Heikki Linnakangas <[email protected]> wrote:\n>>> Barring objections, I'll commit the attached.\n>>\n>> +1\n>\n> Pushed, thanks!\n\nPlease look at a new anomaly, that I've discovered in master.\n\nStarting from af0e7deb4, the following script:\nnumjobs=80\nfor ((i=1;i<=50;i++)); do\necho \"I $i\"\n\nfor ((j=1;j<=numjobs;j++)); do createdb db$j; done\n\nfor ((j=1;j<=numjobs;j++)); do\necho \"\nVACUUM FULL pg_class;\nREINDEX INDEX pg_class_oid_index;\n\" | psql -d db$j >/dev/null 2>&1 &\n\necho \"\nCREATE TABLE tbl1 (t text);\nDROP TABLE tbl1;\n\" | psql -d db$j >/dev/null 2>&1 &\ndone\nwait\n\ngrep 'was terminated' server.log && break;\nfor ((j=1;j<=numjobs;j++)); do dropdb db$j; done\n\ndone\n\ntriggers a segfault:\n2024-06-19 19:22:49.009 UTC [1607210:6] LOG: server process (PID 1607671) was terminated by signal 11: Segmentation fault\n\nwith the following stack trace:\nCore was generated by `postgres: law db50 [local] CREATE TABLE '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n\nwarning: Section `.reg-xstate/1607671' in core file too small.\n#0 0x000055d04cb8232e in RelationReloadNailed (relation=0x7f7d0a1b1fd8) at relcache.c:2415\n2415 relp = (Form_pg_class) GETSTRUCT(pg_class_tuple);\n(gdb) bt\n#0 0x000055d04cb8232e in RelationReloadNailed (relation=0x7f7d0a1b1fd8) at relcache.c:2415\n#1 0x000055d04cb8278c in RelationClearRelation (relation=0x7f7d0a1b1fd8, rebuild=true) at relcache.c:2560\n#2 0x000055d04cb834e9 in RelationCacheInvalidate (debug_discard=false) at relcache.c:3048\n#3 0x000055d04cb72e3c in InvalidateSystemCachesExtended (debug_discard=false) at inval.c:680\n#4 0x000055d04cb73190 in InvalidateSystemCaches () at inval.c:794\n#5 0x000055d04c9754ad in ReceiveSharedInvalidMessages (\n invalFunction=0x55d04cb72eee <LocalExecuteInvalidationMessage>,\n resetFunction=0x55d04cb7317e <InvalidateSystemCaches>) at sinval.c:105\n#6 0x000055d04cb731b4 in AcceptInvalidationMessages () at inval.c:808\n#7 0x000055d04c97d2ed in LockRelationOid (relid=2662, lockmode=1) at lmgr.c:136\n#8 0x000055d04c404b1f in relation_open (relationId=2662, lockmode=1) at relation.c:55\n#9 0x000055d04c47941e in index_open (relationId=2662, lockmode=1) at indexam.c:137\n#10 0x000055d04c4787c2 in systable_beginscan (heapRelation=0x7f7d0a1b1fd8, indexId=2662, indexOK=true, snapshot=0x0,\n nkeys=1, key=0x7ffd456a8570) at genam.c:396\n#11 0x000055d04cb7e93d in ScanPgRelation (targetRelId=3466, indexOK=true, force_non_historic=false) at relcache.c:381\n#12 0x000055d04cb7fe15 in RelationBuildDesc (targetRelId=3466, insertIt=true) at relcache.c:1093\n#13 0x000055d04cb81c93 in RelationIdGetRelation (relationId=3466) at relcache.c:2108\n#14 0x000055d04c404b29 in relation_open (relationId=3466, lockmode=1) at relation.c:58\n#15 0x000055d04cb720a6 in BuildEventTriggerCache () at evtcache.c:129\n#16 0x000055d04cb71f6a in EventCacheLookup (event=EVT_SQLDrop) at evtcache.c:68\n#17 0x000055d04c61c037 in trackDroppedObjectsNeeded () at event_trigger.c:1342\n#18 0x000055d04c61bf02 in EventTriggerBeginCompleteQuery () at event_trigger.c:1284\n#19 0x000055d04c9ac744 in ProcessUtilitySlow (pstate=0x55d04d04aca0, pstmt=0x55d04d021540,\n queryString=0x55d04d020830 \"CREATE TABLE tbl1 (t text);\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n queryEnv=0x0, dest=0x55d04d021800, qc=0x7ffd456a8fb0) at utility.c:1107\n#20 0x000055d04c9ac64d in 
standard_ProcessUtility (pstmt=0x55d04d021540,\n queryString=0x55d04d020830 \"CREATE TABLE tbl1 (t text);\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL,\n params=0x0, queryEnv=0x0, dest=0x55d04d021800, qc=0x7ffd456a8fb0) at utility.c:1067\n#21 0x000055d04c9ab54d in ProcessUtility (pstmt=0x55d04d021540,\n queryString=0x55d04d020830 \"CREATE TABLE tbl1 (t text);\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL,\n params=0x0, queryEnv=0x0, dest=0x55d04d021800, qc=0x7ffd456a8fb0) at utility.c:523\n#22 0x000055d04c9a9dc6 in PortalRunUtility (portal=0x55d04d0c1020, pstmt=0x55d04d021540, isTopLevel=true,\n setHoldSnapshot=false, dest=0x55d04d021800, qc=0x7ffd456a8fb0) at pquery.c:1158\n#23 0x000055d04c9aa03d in PortalRunMulti (portal=0x55d04d0c1020, isTopLevel=true, setHoldSnapshot=false,\n dest=0x55d04d021800, altdest=0x55d04d021800, qc=0x7ffd456a8fb0) at pquery.c:1315\n#24 0x000055d04c9a9487 in PortalRun (portal=0x55d04d0c1020, count=9223372036854775807, isTopLevel=true, run_once=true,\n dest=0x55d04d021800, altdest=0x55d04d021800, qc=0x7ffd456a8fb0) at pquery.c:791\n#25 0x000055d04c9a2150 in exec_simple_query (query_string=0x55d04d020830 \"CREATE TABLE tbl1 (t text);\")\n at postgres.c:1273\n#26 0x000055d04c9a71f5 in PostgresMain (dbname=0x55d04d05dbf0 \"db50\", username=0x55d04d05dbd8 \"law\") at postgres.c:4675\n#27 0x000055d04c8bd8c8 in BackendRun (port=0x55d04d0540c0) at postmaster.c:4475\n#28 0x000055d04c8bcedd in BackendStartup (port=0x55d04d0540c0) at postmaster.c:4151\n#29 0x000055d04c8b93a9 in ServerLoop () at postmaster.c:1769\n#30 0x000055d04c8b8ca4 in PostmasterMain (argc=3, argv=0x55d04d01a770) at postmaster.c:1468\n#31 0x000055d04c76c577 in main (argc=3, argv=0x55d04d01a770) at main.c:197\n\n(gdb) p pg_class_tuple\n$1 = (HeapTuple) 0x0\n\nserver.log might also contain:\n2024-06-19 19:25:38.060 UTC [1618682:5] psql ERROR: could not read blocks 3..3 in file \"base/28531/28840\": read only 0 \nof 8192 bytes\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 19 Jun 2024 23:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
{
"msg_contents": "On Wed, Jun 19, 2024 at 11:00:00PM +0300, Alexander Lakhin wrote:\n> Starting from af0e7deb4, the following script:\n> [...]\n> triggers a segfault:\n> 2024-06-19 19:22:49.009 UTC [1607210:6] LOG: server process (PID\n> 1607671) was terminated by signal 11: Segmentation fault\n\nOpen item added for this issue.\n--\nMichael",
"msg_date": "Thu, 20 Jun 2024 13:15:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
{
"msg_contents": "On 19/06/2024 23:00, Alexander Lakhin wrote:\n> Please look at a new anomaly, that I've discovered in master.\n> \n> ...\n> \n> triggers a segfault:\n> 2024-06-19 19:22:49.009 UTC [1607210:6] LOG: server process (PID 1607671) was terminated by signal 11: Segmentation fault\n> \n> ...\n> \n> server.log might also contain:\n> 2024-06-19 19:25:38.060 UTC [1618682:5] psql ERROR: could not read blocks 3..3 in file \"base/28531/28840\": read only 0\n> of 8192 bytes\n\nThanks for the report! I was not able to reproduce the segfault, but I \ndo see the \"could not read blocks\" error very quickly with the script.\n\nIn commit af0e7deb4a, I removed the call to RelationCloseSmgr() from \nRelationCacheInvalidate(). I thought it was no longer needed, because we \nno longer free the underlying SmgrRelation.\n\nHowever, it meant that if the relfilenode of the relation was changed, \nthe relation keeps pointing to the SMgrRelation of the old relfilenode. \nSo we still need the RelationCloseSmgr() call, in case the relfilenode \nhas changed.\n\nPer attached patch.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Fri, 21 Jun 2024 01:52:37 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> In commit af0e7deb4a, I removed the call to RelationCloseSmgr() from \n> RelationCacheInvalidate(). I thought it was no longer needed, because we \n> no longer free the underlying SmgrRelation.\n\n> However, it meant that if the relfilenode of the relation was changed, \n> the relation keeps pointing to the SMgrRelation of the old relfilenode. \n> So we still need the RelationCloseSmgr() call, in case the relfilenode \n> has changed.\n\nOuch. How come we did not see this immediately in testing? I'd have\nthought surely such a bug would be exposed by any command that\nrewrites a heap.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 Jun 2024 19:12:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test,\n \"read only 0 of 8192 bytes\""
},
{
"msg_contents": "On 21/06/2024 02:12, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> In commit af0e7deb4a, I removed the call to RelationCloseSmgr() from\n>> RelationCacheInvalidate(). I thought it was no longer needed, because we\n>> no longer free the underlying SmgrRelation.\n> \n>> However, it meant that if the relfilenode of the relation was changed,\n>> the relation keeps pointing to the SMgrRelation of the old relfilenode.\n>> So we still need the RelationCloseSmgr() call, in case the relfilenode\n>> has changed.\n> \n> Ouch. How come we did not see this immediately in testing? I'd have\n> thought surely such a bug would be exposed by any command that\n> rewrites a heap.\n\nThere is a RelationCloseSmgr() call in RelationClearRelation(), which \ncovers the common cases. This only occurs during \nRelationCacheInvalidate(), when pg_class's relfilenumber was changed.\n\nHmm, looking closer, I think this might be a more appropriate place for \nthe RelationCloseSmgr() call:\n\n> \t\t\t/*\n> \t\t\t * If it's a mapped relation, immediately update its rd_locator in\n> \t\t\t * case its relfilenumber changed. We must do this during phase 1\n> \t\t\t * in case the relation is consulted during rebuild of other\n> \t\t\t * relcache entries in phase 2. It's safe since consulting the\n> \t\t\t * map doesn't involve any access to relcache entries.\n> \t\t\t */\n> \t\t\tif (RelationIsMapped(relation))\n> \t\t\t\tRelationInitPhysicalAddr(relation);\n\nThat's where we change the relfilenumber, before the \nRelationClearRelation() call.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 21 Jun 2024 02:25:02 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
},
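A hedged sketch of the placement Heikki describes above (not the pushed commit itself; the comment is paraphrased): the smgr reference is dropped just before the mapped relation's physical address is recomputed, so a changed relfilenumber cannot leave rd_smgr pointing at the old storage.

	if (RelationIsMapped(relation))
	{
		/*
		 * The relfilenumber may have changed; drop any smgr reference to
		 * the old storage so it is reopened lazily against the new
		 * rd_locator.
		 */
		RelationCloseSmgr(relation);
		RelationInitPhysicalAddr(relation);
	}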
{
"msg_contents": "On 21/06/2024 02:25, Heikki Linnakangas wrote:\n> Hmm, looking closer, I think this might be a more appropriate place for\n> the RelationCloseSmgr() call:\n> \n>> \t\t\t/*\n>> \t\t\t * If it's a mapped relation, immediately update its rd_locator in\n>> \t\t\t * case its relfilenumber changed. We must do this during phase 1\n>> \t\t\t * in case the relation is consulted during rebuild of other\n>> \t\t\t * relcache entries in phase 2. It's safe since consulting the\n>> \t\t\t * map doesn't involve any access to relcache entries.\n>> \t\t\t */\n>> \t\t\tif (RelationIsMapped(relation))\n>> \t\t\t\tRelationInitPhysicalAddr(relation);\n> \n> That's where we change the relfilenumber, before the\n> RelationClearRelation() call.\n\nPushed a fix that way.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 21 Jun 2024 17:14:32 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Failures in constraints regression test, \"read only 0 of 8192\n bytes\""
}
] |
[
{
"msg_contents": "Hello hackers,\n\npsql has the :{?name} syntax for testing a psql variable existence.\n\nBut currently doing \\echo :{?VERB<Tab> doesn't trigger tab completion.\n\nThis patch fixes it. I've also included a TAP test.\n\nBest regards,\nSteve Chavez",
"msg_date": "Sat, 2 Mar 2024 21:00:30 -0500",
"msg_from": "Steve Chavez <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql: fix variable existence tab completion"
},
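The patch itself is attached above rather than inlined, so purely as a rough illustration, the change would live in the variable-completion branch of psql_completion() in src/bin/psql/tab-complete.c, roughly along these lines; the exact committed code, including the final boolean argument for the new case, may differ:

	/* Complete :variable, :'variable', :"variable" and now :{?variable} */
	else if (text[0] == ':' && text[1] != ':')
	{
		if (text[1] == '\'')
			matches = complete_from_variables(text, ":'", "'", true);
		else if (text[1] == '"')
			matches = complete_from_variables(text, ":\"", "\"", true);
		else if (text[1] == '{' && text[2] == '?')
			matches = complete_from_variables(text, ":{?", "}", true);
		else
			matches = complete_from_variables(text, ":", "", true);
	}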
{
"msg_contents": "On 2024-03-03 03:00 +0100, Steve Chavez wrote:\n> psql has the :{?name} syntax for testing a psql variable existence.\n> \n> But currently doing \\echo :{?VERB<Tab> doesn't trigger tab completion.\n> \n> This patch fixes it. I've also included a TAP test.\n\nThanks. The code looks good, all tests pass, and the tab completion\nworks as expected when testing manually.\n\n-- \nErik\n\n\n",
"msg_date": "Sun, 3 Mar 2024 16:37:28 +0100",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: fix variable existence tab completion"
},
{
"msg_contents": "On Sun, Mar 3, 2024 at 5:37 PM Erik Wienhold <[email protected]> wrote:\n> On 2024-03-03 03:00 +0100, Steve Chavez wrote:\n> > psql has the :{?name} syntax for testing a psql variable existence.\n> >\n> > But currently doing \\echo :{?VERB<Tab> doesn't trigger tab completion.\n> >\n> > This patch fixes it. I've also included a TAP test.\n>\n> Thanks. The code looks good, all tests pass, and the tab completion\n> works as expected when testing manually.\n\nA nice improvement. I've checked why we have at all the '{' at\nWORD_BREAKS and if we're going to break anything by removing that. It\nseems that '{' here from the very beginning and it comes from the\ndefault value of rl_basic_word_break_characters [1]. It seems that\n:{?name} is the only usage of '{' sign in psql. So, removing it from\nWORD_BREAKS shouldn't break anything.\n\nI'm going to push this patch if no objections.\n\nLinks.\n1. https://tiswww.case.edu/php/chet/readline/readline.html#index-rl_005fbasic_005fword_005fbreak_005fcharacters\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 14 Mar 2024 16:57:01 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: fix variable existence tab completion"
},
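For reference, the macro being discussed sits near the top of src/bin/psql/tab-complete.c; a sketch of the removal is below, although the exact character list is quoted from memory and may not match the tree byte-for-byte:

	/* '{' removed from the break set so readline does not split :{?name */
	#define WORD_BREAKS		"\t\n@$><=;|&() "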
{
"msg_contents": "Hello!\n\nOn 14.03.2024 17:57, Alexander Korotkov wrote:\n> On Sun, Mar 3, 2024 at 5:37 PM Erik Wienhold <[email protected]> wrote:\n>> On 2024-03-03 03:00 +0100, Steve Chavez wrote:\n>>> psql has the :{?name} syntax for testing a psql variable existence.\n>>>\n>>> But currently doing \\echo :{?VERB<Tab> doesn't trigger tab completion.\n>>>\n>>> This patch fixes it. I've also included a TAP test.\n>>\n>> Thanks. The code looks good, all tests pass, and the tab completion\n>> works as expected when testing manually.\n\nI'm not sure if Debian 10 is actual for the current master. But, if this is the case,\ni suggest a patch, since the test will not work under this OS.\nThe thing is that, Debian 10 will backslash curly braces and the question mark and\nTAB completion will lead to the string like that:\n\n\\echo :\\{\\?VERBOSITY\\}\n\ninstead of expected:\n\n\\echo :{?VERBOSITY}\n\nThe patch attached fix the 010_tab_completion.pl test in the same way like [1].\n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n[1] https://www.postgresql.org/message-id/[email protected]",
"msg_date": "Mon, 6 May 2024 09:05:38 +0300",
"msg_from": "\"Anton A. Melnikov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: fix variable existence tab completion"
},
{
"msg_contents": "Hi, Anton!\n\nOn Mon, May 6, 2024 at 9:05 AM Anton A. Melnikov\n<[email protected]> wrote:\n> On 14.03.2024 17:57, Alexander Korotkov wrote:\n> > On Sun, Mar 3, 2024 at 5:37 PM Erik Wienhold <[email protected]> wrote:\n> >> On 2024-03-03 03:00 +0100, Steve Chavez wrote:\n> >>> psql has the :{?name} syntax for testing a psql variable existence.\n> >>>\n> >>> But currently doing \\echo :{?VERB<Tab> doesn't trigger tab completion.\n> >>>\n> >>> This patch fixes it. I've also included a TAP test.\n> >>\n> >> Thanks. The code looks good, all tests pass, and the tab completion\n> >> works as expected when testing manually.\n>\n> I'm not sure if Debian 10 is actual for the current master. But, if this is the case,\n> i suggest a patch, since the test will not work under this OS.\n> The thing is that, Debian 10 will backslash curly braces and the question mark and\n> TAB completion will lead to the string like that:\n>\n> \\echo :\\{\\?VERBOSITY\\}\n>\n> instead of expected:\n>\n> \\echo :{?VERBOSITY}\n>\n> The patch attached fix the 010_tab_completion.pl test in the same way like [1].\n\nThank you for the fix. As I get, the fix teaches\n010_tab_completion.pl to tolerate the invalid result of tab\ncompletion. Do you think we could fix it another way to make the\nresult of tab completion correct?\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Mon, 6 May 2024 13:19:25 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: fix variable existence tab completion"
},
{
"msg_contents": "Hi, Alexander!\n\nOn 06.05.2024 13:19, Alexander Korotkov wrote:\n>> The patch attached fix the 010_tab_completion.pl test in the same way like [1].\n> \n> Thank you for the fix. As I get, the fix teaches\n> 010_tab_completion.pl to tolerate the invalid result of tab\n> completion. Do you think we could fix it another way to make the\n> result of tab completion correct?\n\nRight now i don't see any straight way to fix this to the correct tab completion.\nThere are several similar cases in this test.\nE.g., for such a commands:\n \n CREATE TABLE \"mixedName\" (f1 int, f2 text);\n select * from \"mi<TAB> ;\n\ngives with debian 10:\npostgres=# select * from \\\"mixedName\\\" ;\n\nresulting in an error.\n \nNow there is a similar workaround in the 010_tab_completion.pl with regex: qr/\"mixedName\\\\?\" /\n\nI think if there were or will be complaints from users about this behavior in Debian 10,\nthen it makes sense to look for more complex solutions that will fix a backslash substitutions.\nIf no such complaints, then it is better to make a workaround in test.\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 7 May 2024 10:37:27 +0300",
"msg_from": "\"Anton A. Melnikov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: fix variable existence tab completion"
},
{
"msg_contents": "\"Anton A. Melnikov\" <[email protected]> writes:\n> On 06.05.2024 13:19, Alexander Korotkov wrote:\n>> Now there is a similar workaround in the 010_tab_completion.pl with regex: qr/\"mixedName\\\\?\" /\n\n> I think if there were or will be complaints from users about this behavior in Debian 10,\n> then it makes sense to look for more complex solutions that will fix a backslash substitutions.\n> If no such complaints, then it is better to make a workaround in test.\n\nActually, I think we ought to just reject this change. Debian 10\nwill be two years past EOL before PG 17 ships. So I don't see a\nreason to support it in the tests anymore. One of the points of\nsuch testing is to expose broken platforms, not mask them.\n\nObviously, if anyone can point to a still-in-support platform\nwith the same bug, that calculus might change.\n\nWith respect to the other hacks Alexander mentions, maybe we\ncould clean some of those out too? I don't recall what platform\nwe had in mind there, but we've moved our goalposts on what\nwe support pretty far in the last couple years.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2024 18:10:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: fix variable existence tab completion"
},
{
"msg_contents": "On 19.07.2024 01:10, Tom Lane wrote:\n> Actually, I think we ought to just reject this change. Debian 10\n> will be two years past EOL before PG 17 ships. So I don't see a\n> reason to support it in the tests anymore. One of the points of\n> such testing is to expose broken platforms, not mask them.\n> \n> Obviously, if anyone can point to a still-in-support platform\n> with the same bug, that calculus might change.\n\nThe bug when broken version of libedit want to backslash some symbols\n(e.g. double quotas, curly braces, the question mark)\ni only encountered on Debian 10 (buster).\n\nIf anyone has encountered a similar error on some other system,\nplease share such information.\n\n\n> With respect to the other hacks Alexander mentions, maybe we\n> could clean some of those out too? I don't recall what platform\n> we had in mind there, but we've moved our goalposts on what\n> we support pretty far in the last couple years.\n\nAgreed that no reason to save workarounds for non-supported systems.\nHere is the patch that removes fixes for Buster bug mentioned above.\n\n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 21 Jul 2024 02:24:25 +0300",
"msg_from": "\"Anton A. Melnikov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: fix variable existence tab completion"
},
{
"msg_contents": "\"Anton A. Melnikov\" <[email protected]> writes:\n> On 19.07.2024 01:10, Tom Lane wrote:\n>> With respect to the other hacks Alexander mentions, maybe we\n>> could clean some of those out too? I don't recall what platform\n>> we had in mind there, but we've moved our goalposts on what\n>> we support pretty far in the last couple years.\n\n> Agreed that no reason to save workarounds for non-supported systems.\n> Here is the patch that removes fixes for Buster bug mentioned above.\n\nPushed. I shall now watch the buildfarm from a safe distance.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Sep 2024 16:26:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: fix variable existence tab completion"
},
{
"msg_contents": "\nOn 04.09.2024 23:26, Tom Lane wrote:\n> \n> Pushed. I shall now watch the buildfarm from a safe distance.\n> \n\nThanks! I'll be ready to fix possible falls.\n\nWith the best regards, \n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 5 Sep 2024 08:07:26 +0300",
"msg_from": "\"Anton A. Melnikov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql: fix variable existence tab completion"
}
] |
[
{
"msg_contents": "I recently encountered some odd behavior with a query both selecting and\nsorting by `random()`. When I posted about it on pgsql-bugs ^1, David\nJohnston and Tom Lane provided some very detailed explanations as to\nwhat was happening, but weren't sure whether or where information about\nit could live comfortably in the docs. I think it's a useful addition;\nit's not an everyday occurrence but I'm very much not the first person\nto run into it. After a bit of looking, I think I've found a reasonable\nlocation.\n\nThis patch revises\nhttps://www.postgresql.org/docs/current/queries-order.html to discuss\nsort expressions and options separately, and fits a caveat based on\nTom's suggested language (with an example) into the former section.\n\nThere are a few other minor tweaks included here:\n\n- note that `*` is not an expression\n- consolidate output column examples\n- mention non-column sort expressions\n\nI did write a query demonstrating the `group by` case Tom mentioned, but\nexpect that one's a lot less common.\n\n1: https://www.postgresql.org/message-id/CZHAF947QQQO.27MAUK2SVMBXW%40nmfay.com",
"msg_date": "Sat, 02 Mar 2024 23:13:12 -0500",
"msg_from": "\"Dian Fay\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[docs] revise ORDER BY documentation"
}
] |
[
{
"msg_contents": "\nThe issue can be reproduced with the following steps:\n\ncreate table x_events (.., created_at timestamp, a int, b int); \n\ncreate index idx_1 on t(created_at, a);\ncreate index idx_2 on t(created_at, b);\n\nquery:\nselect * from t where create_at = current_timestamp and b = 1;\n\nindex (created_at, a) rather than (created_at, b) may be chosen for the\nabove query if the statistics think \"create_at = current_timestamp\" has\nno rows, then both index are OK, actually it is true just because\nstatistics is out of date.\n\nI just run into this again recently and have two new idea this time,\nI'd like gather some feedback on this.\n\n1. We can let the user define the column as the value is increased day by\n day. the syntax may be:\n\n ALTER TABLE x_events ALTER COLUMN created_at ALWAYS_INCREASED.\n\n then when a query like 'create_at op const', the statistics module can\n treat it as 'created_at = $1'. so the missing statistics doesn't make\n difference. Then I think the above issue can be avoided. \n\n This is different from letting user using a PreparedStmt directly\n because it is possible that we always choose a custom plan, the\n easiest way to make this happen is we do a planning time partition \n prune.\n\n2. Use some AI approach to forecast the data it doesn't gather yet. The\n training stage may happen at analyze stage, take the above case for\n example, it may get a model like 'there are 100 rows per second for\n the time during 9:00 to 18:00 and there are 2 rows per seconds for\n other time range.\n \nFor now, I think option 1 may be easier to happen.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Sun, 03 Mar 2024 15:01:23 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "a wrong index choose when statistics is out of date"
},
{
"msg_contents": "On 3/3/2024 14:01, Andy Fan wrote:\n> 1. We can let the user define the column as the value is increased day by\n> day. the syntax may be:\n> \n> ALTER TABLE x_events ALTER COLUMN created_at ALWAYS_INCREASED.\n> \n> then when a query like 'create_at op const', the statistics module can\n> treat it as 'created_at = $1'. so the missing statistics doesn't make\n> difference. Then I think the above issue can be avoided.\nLet me write some words to support your efforts in that way.\nI also have some user cases where they periodically insert data in large \nchunks. These chunks contain 'always increased' values, and it causes \ntrouble each time they start an analytic query over this new data before \nthe analyze command.\nI have thought about that issue before but invented nothing special \nexcept a more aggressive analysis of such tables.\nYour trick can work, but it needs a new parameter in pg_type and a lot \nof additional code for such a rare case.\nI'm looking forward to the demo patch.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 4 Mar 2024 11:57:56 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "On Sun, 3 Mar 2024 at 20:08, Andy Fan <[email protected]> wrote:\n> The issue can be reproduced with the following steps:\n>\n> create table x_events (.., created_at timestamp, a int, b int);\n>\n> create index idx_1 on t(created_at, a);\n> create index idx_2 on t(created_at, b);\n>\n> query:\n> select * from t where create_at = current_timestamp and b = 1;\n>\n> index (created_at, a) rather than (created_at, b) may be chosen for the\n> above query if the statistics think \"create_at = current_timestamp\" has\n> no rows, then both index are OK, actually it is true just because\n> statistics is out of date.\n\nI don't think there's really anything too special about the fact that\nthe created_at column is always increasing. We commonly get 1-row\nestimates after multiplying the selectivities from individual stats.\nYour example just seems like yet another reason that this could\nhappen.\n\nI've been periodically talking about introducing \"risk\" as a factor\nthat the planner should consider. I did provide some detail in [1]\nabout the design that was in my head at that time. I'd not previously\nthought that it could also solve this problem, but after reading your\nemail, I think it can.\n\nI don't think it would be right to fudge the costs in any way, but I\nthink the risk factor for IndexPaths could take into account the\nnumber of unmatched index clauses and increment the risk factor, or\n\"certainty_factor\" as it is currently in my brain-based design. That\nway add_path() would be more likely to prefer the index that matches\nthe most conditions.\n\nThe exact maths to calculate the certainty_factor for this case I\ndon't quite have worked out yet. I plan to work on documenting the\ndesign of this and try and get a prototype patch out sometime during\nthis coming southern hemisphere winter so that there's at least a full\ncycle of feedback opportunity before the PG18 freeze.\n\nWe should do anything like add column options in the meantime. Those\nare hard to remove once added.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvo2sMPF9m%3Di%2BYPPUssfTV1GB%3DZ8nMVa%2B9Uq4RZJ8sULeQ%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 18:33:06 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "On 4/3/2024 12:33, David Rowley wrote:\n> [1] https://www.postgresql.org/message-id/CAApHDvo2sMPF9m%3Di%2BYPPUssfTV1GB%3DZ8nMVa%2B9Uq4RZJ8sULeQ%40mail.gmail.com\nThanks for the link!\nCould we use the trick with the get_actual_variable_range() to find some \nreason and extrapolate histogram data out of the boundaries when an \nindex shows us that we have min/max outside known statistics?\nBecause it would be used for the values out of the histogram, it should \nonly add an overhead with a reason.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 4 Mar 2024 13:20:22 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "On Mon, 4 Mar 2024 at 19:20, Andrei Lepikhov <[email protected]> wrote:\n> Could we use the trick with the get_actual_variable_range() to find some\n> reason and extrapolate histogram data out of the boundaries when an\n> index shows us that we have min/max outside known statistics?\n> Because it would be used for the values out of the histogram, it should\n> only add an overhead with a reason.\n\nI think, in theory, it would be possible to add a function similar to\nget_actual_variable_range() for equality clauses, but I'd be worried\nabout the overheads of doing so. I imagine it would be much more\ncommon to find an equality condition with a value that does not fit in\nany histogram/MCV bucket. get_actual_variable_range() can be quite\ncostly when there are a large number of tuples ready to be vacuumed,\nand having an equivalent function for equality conditions could appear\nto make the planner \"randomly\" slow without much of an explanation as\nto why.\n\nI think we still do get some complaints about\nget_actual_variable_range() despite it now using\nSnapshotNonVacuumable. It used to be much worse with the snapshot\ntype it used previous to what it uses today. IIRC it took a few\niterations to get the performance of the function to a level that\nseems \"mostly acceptable\".\n\nDavid\n\n\n",
"msg_date": "Mon, 4 Mar 2024 20:04:18 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "\nDavid Rowley <[email protected]> writes:\n\n> On Sun, 3 Mar 2024 at 20:08, Andy Fan <[email protected]> wrote:\n>> The issue can be reproduced with the following steps:\n>>\n>> create table x_events (.., created_at timestamp, a int, b int);\n>>\n>> create index idx_1 on t(created_at, a);\n>> create index idx_2 on t(created_at, b);\n>>\n>> query:\n>> select * from t where create_at = current_timestamp and b = 1;\n>>\n>> index (created_at, a) rather than (created_at, b) may be chosen for the\n>> above query if the statistics think \"create_at = current_timestamp\" has\n>> no rows, then both index are OK, actually it is true just because\n>> statistics is out of date.\n>\n> I don't think there's really anything too special about the fact that\n> the created_at column is always increasing. We commonly get 1-row\n> estimates after multiplying the selectivities from individual stats.\n> Your example just seems like yet another reason that this could\n> happen.\n\nYou are right about there are more cases which lead this happen. However\nthis is the only case where the created_at = $1 trick can works, which\nwas the problem I wanted to resove when I was writing. \n\n> I've been periodically talking about introducing \"risk\" as a factor\n> that the planner should consider. I did provide some detail in [1]\n> about the design that was in my head at that time. I'd not previously\n> thought that it could also solve this problem, but after reading your\n> email, I think it can.\n\nHaha, I remeber you were against \"risk factor\" before at [1], and at\nthat time we are talking about the exact same topic as here, and I\nproposaled another risk factor. Without an agreement, I did it in my\nown internal version and get hurted then, something like I didn't pay\nenough attention to Bitmap Index Scan and Index scan. Then I forget the\n\"risk factor\".\n\n>\n> I don't think it would be right to fudge the costs in any way, but I\n> think the risk factor for IndexPaths could take into account the\n> number of unmatched index clauses and increment the risk factor, or\n> \"certainty_factor\" as it is currently in my brain-based design. That\n> way add_path() would be more likely to prefer the index that matches\n> the most conditions.\n\nThis is somehow similar with my proposal at [1]? What do you think\nabout the treat 'col op const' as 'col op $1' for the marked column?\nThis could just resolve a subset of questions in your mind, but the\nmethod looks have a solid reason.\n\nCurrently I treat the risk factor as what you did before, but this maybe\nanother time for me to switch my mind again.\n\n[1] https://www.postgresql.org/message-id/CAApHDvovVWCbeR4v%2BA4Dkwb%3DYS_GuJG9OyCm8jZu%2B%2BcP2xsY_A%40mail.gmail.com\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Mon, 04 Mar 2024 19:20:30 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "\nAndrei Lepikhov <[email protected]> writes:\n\n> On 3/3/2024 14:01, Andy Fan wrote:\n>> 1. We can let the user define the column as the value is increased day by\n>> day. the syntax may be:\n>> ALTER TABLE x_events ALTER COLUMN created_at ALWAYS_INCREASED.\n>> then when a query like 'create_at op const', the statistics module\n>> can\n>> treat it as 'created_at = $1'. so the missing statistics doesn't make\n>> difference. Then I think the above issue can be avoided.\n> Let me write some words to support your efforts in that way.\n> I also have some user cases where they periodically insert data in large\n> chunks. These chunks contain 'always increased' values, and it causes\n> trouble each time they start an analytic query over this new data before\n> the analyze command.\n> I have thought about that issue before but invented nothing special\n> except a more aggressive analysis of such tables.\n\nI have to say we run into a exactly same sistuation and use the same\ntrick to solve the problem, and we know no matter how aggressive it is,\nthe problem may still happen.\n\n> Your trick can work, but it needs a new parameter in pg_type and a lot\n> of additional code for such a rare case.\n> I'm looking forward to the demo patch.\n\nMaybe my word \"auto_increased\" is too like a type, but actually what I\nwant to is adding a new attribute for pg_attribute which ties with one\ncolumn in one relation. When we figure out a selective on this\n*column*, we do such trick. This probably doesn't need much code.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Mon, 04 Mar 2024 19:37:54 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "On 3/4/24 06:33, David Rowley wrote:\n> On Sun, 3 Mar 2024 at 20:08, Andy Fan <[email protected]> wrote:\n>> The issue can be reproduced with the following steps:\n>>\n>> create table x_events (.., created_at timestamp, a int, b int);\n>>\n>> create index idx_1 on t(created_at, a);\n>> create index idx_2 on t(created_at, b);\n>>\n>> query:\n>> select * from t where create_at = current_timestamp and b = 1;\n>>\n>> index (created_at, a) rather than (created_at, b) may be chosen for the\n>> above query if the statistics think \"create_at = current_timestamp\" has\n>> no rows, then both index are OK, actually it is true just because\n>> statistics is out of date.\n> \n> I don't think there's really anything too special about the fact that\n> the created_at column is always increasing. We commonly get 1-row\n> estimates after multiplying the selectivities from individual stats.\n> Your example just seems like yet another reason that this could\n> happen.\n> \n> I've been periodically talking about introducing \"risk\" as a factor\n> that the planner should consider. I did provide some detail in [1]\n> about the design that was in my head at that time. I'd not previously\n> thought that it could also solve this problem, but after reading your\n> email, I think it can.\n> \n> I don't think it would be right to fudge the costs in any way, but I\n> think the risk factor for IndexPaths could take into account the\n> number of unmatched index clauses and increment the risk factor, or\n> \"certainty_factor\" as it is currently in my brain-based design. That\n> way add_path() would be more likely to prefer the index that matches\n> the most conditions.\n> \n> The exact maths to calculate the certainty_factor for this case I\n> don't quite have worked out yet. I plan to work on documenting the\n> design of this and try and get a prototype patch out sometime during\n> this coming southern hemisphere winter so that there's at least a full\n> cycle of feedback opportunity before the PG18 freeze.\n> \n\nI've been thinking about this stuff too, so I'm curious to hear what\nkind of plan you come up with. Every time I tried to formalize a more\nconcrete plan, I ended up with a very complex (and possible yet more\nfragile) approach.\n\nI think we'd need to consider a couple things:\n\n\n1) reliability of cardinality estimates\n\nI think this is pretty much the same concept as confidence intervals,\ni.e. knowing not just the regular estimate, but also a range where the\nactual value lies with high confidence (e.g. 90%).\n\nFor a single clauses this might not be terribly difficult, but for more\ncomplex cases (multiple conditions, ...) it seems far more complex :-(\nFor example, let's say we know confidence intervals for two conditions.\nWhat's the confidence interval when combined using AND or OR?\n\n\n2) robustness of the paths\n\nKnowing just the confidence intervals does not seem sufficient, though.\nThe other thing that matters is how this affects the paths, how robust\nthe paths are. I mean, if we have alternative paths with costs that flip\nsomewhere in the confidence interval - which one to pick? Surely one\nthing to consider is how fast the costs change for each path.\n\n\n3) complexity of the model\n\nI suppose we'd prefer a systematic approach (and not some ad hoc\nsolution for one particular path/plan type). So this would be somehow\nintegrated into the cost model, making it yet more complex. 
I'm quite\nworried about this (not necessarily because of performance reasons).\n\nI wonder if trying to improve the robustness solely by changes in the\nplanning phase is a very promising approach. I mean - sure, we should\nimprove that, but by definition it relies on a priori information. And\nnot only the stats may be stale - it's a very lossy approximation of the\nactual data. Even if the stats are perfectly up to date / very detailed,\nthere's still going to be a lot of uncertainty.\n\n\nI wonder if we'd be better off if we experimented with more robust\nplans, like SmoothScan [1] or g-join [2].\n\n\nregards\n\n[1]\nhttps://stratos.seas.harvard.edu/sites/scholar.harvard.edu/files/stratos/files/smooth_vldbj.pdf\n\n[2]\nhttp://wwwlgis.informatik.uni-kl.de/cms/fileadmin/users/haerder/2011/JoinAndGrouping.pdf\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 4 Mar 2024 18:03:52 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
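One naive way to make the AND case above concrete, purely as an illustration and assuming the two clauses are independent, with selectivities s1 in [l1, u1] and s2 in [l2, u2] (each a 90% interval): both the point estimates and the interval endpoints combine monotonically,

	s_AND = s1 * s2           lies in [l1*l2, u1*u2]
	s_OR  = s1 + s2 - s1*s2   lies in [l1 + l2 - l1*l2, u1 + u2 - u1*u2]

but the nominal coverage of the combined interval is no longer 90% (roughly 0.9^2 if the two intervals err independently), which is part of why the multi-clause case is so much harder than the single-clause one.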
{
"msg_contents": "On Tue, 5 Mar 2024 at 00:37, Andy Fan <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > I don't think it would be right to fudge the costs in any way, but I\n> > think the risk factor for IndexPaths could take into account the\n> > number of unmatched index clauses and increment the risk factor, or\n> > \"certainty_factor\" as it is currently in my brain-based design. That\n> > way add_path() would be more likely to prefer the index that matches\n> > the most conditions.\n>\n> This is somehow similar with my proposal at [1]? What do you think\n> about the treat 'col op const' as 'col op $1' for the marked column?\n> This could just resolve a subset of questions in your mind, but the\n> method looks have a solid reason.\n\nDo you mean this?\n\n> + /*\n> + * To make the planner more robust to handle some inaccurate statistics\n> + * issue, we will add a extra cost to qpquals so that the less qpquals\n> + * the lower cost it has.\n> + */\n> + cpu_run_cost += 0.01 * list_length(qpquals);\n\nI don't think it's a good idea to add cost penalties like you proposed\nthere. This is what I meant by \"I don't think it would be right to\nfudge the costs in any way\".\n\nIf you modify the costs to add some small penalty so that the planner\nis more likely to favour some other plan, what happens if we then\ndecide the other plan has some issue and we want to penalise that for\nsome other reason? Adding the 2nd penalty might result in the original\nplan choice again. Which one should be penalised more? I think the\nuncertainty needs to be tracked separately.\n\nFudging the costs like this is also unlikely to play nicely with\nadd_path's use of STD_FUZZ_FACTOR. There'd be an incentive to do\nthings like total_cost *= STD_FUZZ_FACTOR; to ensure we get a large\nenough penalty.\n\nDavid\n\n> [1] https://www.postgresql.org/message-id/CAApHDvovVWCbeR4v%2BA4Dkwb%3DYS_GuJG9OyCm8jZu%2B%2BcP2xsY_A%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 5 Mar 2024 12:13:38 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "\nDavid Rowley <[email protected]> writes:\n\n> On Tue, 5 Mar 2024 at 00:37, Andy Fan <[email protected]> wrote:\n>>\n>> David Rowley <[email protected]> writes:\n>> > I don't think it would be right to fudge the costs in any way, but I\n>> > think the risk factor for IndexPaths could take into account the\n>> > number of unmatched index clauses and increment the risk factor, or\n>> > \"certainty_factor\" as it is currently in my brain-based design. That\n>> > way add_path() would be more likely to prefer the index that matches\n>> > the most conditions.\n>>\n>> This is somehow similar with my proposal at [1]? What do you think\n>> about the treat 'col op const' as 'col op $1' for the marked column?\n>> This could just resolve a subset of questions in your mind, but the\n>> method looks have a solid reason.\n>\n> Do you mean this?\n\nYes, it is not cautious enough to say \"similar\" too quick.\n\nAfter reading your opinion again, I think what you are trying to do is\nadding one more dimension to Path compared with the existing cost and\npathkey information and it would take effects on add_path stage. That is\nimpressive, and I'm pretty willing to do more testing once the v1 is\ndone.\n\nI just noted you have expressed your idea about my proposal 1,\n\n> We should do anything like add column options in the meantime. Those\n> are hard to remove once added.\n\nI will try it very soon. and I'm a bit of upset no one care about my\nproposal 2 which is the AI method, I see many companies want to\nintroduce AI to planner even I don't seen any impressive success, but\nthis user case looks like a candidate. \n\n>> + /*\n>> + * To make the planner more robust to handle some inaccurate statistics\n>> + * issue, we will add a extra cost to qpquals so that the less qpquals\n>> + * the lower cost it has.\n>> + */\n>> + cpu_run_cost += 0.01 * list_length(qpquals);\n>\n> I don't think it's a good idea to add cost penalties like you proposed\n> there. This is what I meant by \"I don't think it would be right to\n> fudge the costs in any way\".\n>\n> If you modify the costs to add some small penalty so that the planner\n> is more likely to favour some other plan, what happens if we then\n> decide the other plan has some issue and we want to penalise that for\n> some other reason? Adding the 2nd penalty might result in the original\n> plan choice again. Which one should be penalised more? I think the\n> uncertainty needs to be tracked separately.\n>\n> Fudging the costs like this is also unlikely to play nicely with\n> add_path's use of STD_FUZZ_FACTOR. There'd be an incentive to do\n> things like total_cost *= STD_FUZZ_FACTOR; to ensure we get a large\n> enough penalty.\n\nI agree and I just misunderstood your proposal yesterday. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Tue, 05 Mar 2024 13:24:29 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "Hi,\n\n>\n>> We should do anything like add column options in the meantime. Those\n>> are hard to remove once added.\n>\n> I will try it very soon.\n\nAttached is a PoC version. and here is the test case.\n\ncreate table t(a int, b int, c int) with (autovacuum_enabled=off);\ncreate index on t(a, b);\ncreate index on t(a, c);\n\ninsert into t select floor(random() * 100 + 1)::int, i, i\nfrom generate_series(1, 100000) i;\n\nanalyze t;\n\ninsert into t\nselect floor(random() * 10 + 1)::int + 100 , i, i\nfrom generate_series(1, 1000) i;\n\n-- one of the below queries would choose a wrong index.\n-- here is the result from my test.\nexplain (costs off) select * from t where a = 109 and c = 8;\n QUERY PLAN \n---------------------------------------\n Index Scan using t_a_c_idx on t\n Index Cond: ((a = 109) AND (c = 8))\n(2 rows)\n\nexplain (costs off) select * from t where a = 109 and b = 8;\n QUERY PLAN \n---------------------------------\n Index Scan using t_a_c_idx on t\n Index Cond: (a = 109)\n Filter: (b = 8)\n(3 rows)\n\nWrong index is chosen for the second case!\n\n-- After applying the new API.\n\nalter table t alter column a set (force_generic=on);\n\nexplain (costs off) select * from t where a = 109 and c = 8;\n QUERY PLAN \n---------------------------------------\n Index Scan using t_a_c_idx on t\n Index Cond: ((a = 109) AND (c = 8))\n(2 rows)\n\nexplain (costs off) select * from t where a = 109 and b = 8;\n QUERY PLAN \n---------------------------------------\n Index Scan using t_a_b_idx on t\n Index Cond: ((a = 109) AND (b = 8))\n(2 rows)\n\nThen both cases can choose a correct index.\n\ncommit f8cca472479c50ba73479ec387882db43d203522 (HEAD -> shared_detoast_value)\nAuthor: yizhi.fzh <[email protected]>\nDate: Tue Mar 5 18:27:48 2024 +0800\n\n Add a \"force_generic\" attoptions for selfunc.c\n \n Sometime user just care about the recent data and the optimizer\n statistics for such data is not gathered, then some bad decision may\n happen. Before this patch, we have to make the autoanalyze often and\n often, but it is not only expensive but also may be too late.\n \n This patch introduces a new attoptions like this:\n \n ALTER TABLE t ALTER COLUMN col set (force_generic=true);\n \n Then selfunc.c realizes this and ignore the special Const value, then\n average selectivity is chosen. This fall into the weakness of generic\n plan, but this patch doesn't introduce any new weakness and we leave the\n decision to user which could resolve some problem. Also this logic only\n apply to eqsel since the ineqsel have the get_actual_variable_range\n mechanism which is helpful for index choose case at least.\n\nI think it is OK for a design review, for the implementaion side, the\nknown issue includes:\n\n1. Support grap such infromation from its parent for partitioned table\nif the child doesn't have such information.\n2. builtin document and testing. \n\nAny feedback is welcome.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Tue, 05 Mar 2024 20:56:29 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "On 5/3/2024 19:56, Andy Fan wrote:\n> I think it is OK for a design review, for the implementaion side, the\n> known issue includes:\n> \n> 1. Support grap such infromation from its parent for partitioned table\n> if the child doesn't have such information.\n> 2. builtin document and testing.\n> \n> Any feedback is welcome.\nThanks for your efforts.\nI was confused when you showed the problem connected to clauses like \n\"Var op Const\" and \"Var op Param\".\nAs far as I know, the estimation logic of such clauses uses MCV and \nnumber-distinct statistics. So, being out of MCV values, it becomes \ntotally insensitive to any internal skew in data and any data outside \nthe statistics boundaries.\nHaving studied the example you provided with the patch, I think it is \nnot a correct example:\nDifference between var_eq_const and var_eq_non_const quite obvious:\nIn the second routine, you don't have information about the const value \nand can't use MCV for estimation. Also, you can't exclude MCV values \nfrom the estimation. And it is just luck that you've got the right \nanswer. I think if you increased the weight of the unknown part, you \nwould get a bad result, too.\nI would like to ask David why the var_eq_const estimator doesn't have an \noption for estimation with a histogram. Having that would relieve a \nproblem with skewed data. Detecting the situation with incoming const \nthat is out of the covered area would allow us to fall back to ndistinct \nestimation or something else. At least, histogram usage can be \nrestricted by the reltuples value and ratio between the total number of \nMCV values and the total number of distinct values in the table.\n\nJust for demo: demonstration of data skew issue:\n\nCREATE EXTENSION tablefunc;\nCREATE TABLE norm_test AS\n SELECT abs(r::integer) AS val\n FROM normal_rand(1E7::integer, 5.::float8, 300.::float8) AS r;\nANALYZE norm_test;\n\n-- First query is estimated with MCV quite precisely:\nEXPLAIN ANALYZE SELECT * FROM norm_test WHERE val = 100;\n-- result: planned rows=25669, actual rows=25139\n\n-- Here we have numdistinct estimation, mostly arbitrary:\nEXPLAIN ANALYZE SELECT * FROM norm_test WHERE val = 200;\n-- result: planned rows=8604, actual rows=21239\nEXPLAIN ANALYZE SELECT * FROM norm_test WHERE val = 500;\n-- result: planned rows=8604, actual rows=6748\nEXPLAIN ANALYZE SELECT * FROM norm_test WHERE val = 600;\n-- result: planned rows=8604, actual rows=3501\nEXPLAIN ANALYZE SELECT * FROM norm_test WHERE val = 700;\n-- result: planned rows=8604, actual rows=1705\nEXPLAIN ANALYZE SELECT * FROM norm_test WHERE val = 1000;\n-- result: planned rows=8604, actual rows=91\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Thu, 7 Mar 2024 15:17:10 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "On Wed, 6 Mar 2024 at 02:09, Andy Fan <[email protected]> wrote:\n> This patch introduces a new attoptions like this:\n>\n> ALTER TABLE t ALTER COLUMN col set (force_generic=true);\n>\n> Then selfunc.c realizes this and ignore the special Const value, then\n> average selectivity is chosen. This fall into the weakness of generic\n> plan, but this patch doesn't introduce any new weakness and we leave the\n> decision to user which could resolve some problem. Also this logic only\n> apply to eqsel since the ineqsel have the get_actual_variable_range\n> mechanism which is helpful for index choose case at least.\n\nIf you don't want the planner to use the statistics for the column why\nnot just do the following?\n\nALTER TABLE t ALTER COLUMN col SET STATISTICS 0;\n\nANALYZE won't delete any existing statistics, so that might need to be\ndone manually.\n\nDavid\n\n\n",
"msg_date": "Thu, 7 Mar 2024 23:06:22 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "\nDavid Rowley <[email protected]> writes:\n\n> On Wed, 6 Mar 2024 at 02:09, Andy Fan <[email protected]> wrote:\n>> This patch introduces a new attoptions like this:\n>>\n>> ALTER TABLE t ALTER COLUMN col set (force_generic=true);\n>>\n>> Then selfunc.c realizes this and ignore the special Const value, then\n>> average selectivity is chosen. This fall into the weakness of generic\n>> plan, but this patch doesn't introduce any new weakness and we leave the\n>> decision to user which could resolve some problem. Also this logic only\n>> apply to eqsel since the ineqsel have the get_actual_variable_range\n>> mechanism which is helpful for index choose case at least.\n>\n> If you don't want the planner to use the statistics for the column why\n> not just do the following?\n\nAcutally I didn't want the planner to ignore the statistics totally, I\nwant the planner to treat the \"Const\" which probably miss optimizer part\naverage, which is just like what we did for generic plan for the blow\nquery. \n\nprepare s as SELECT * FROM t WHERE a = $1 and b = $2;\nexplain (costs off) execute s(109, 8);\n QUERY PLAN \n---------------------------------\n Index Scan using t_a_c_idx on t\n Index Cond: (a = 109)\n Filter: (b = 8) \n\n(3 rows)\n\ncustom plan, Wrong index due to we have a bad estimation for a = 109.\n\n\nset plan_cache_mode to force_generic_plan ;\nexplain (costs off) execute s(109, 8);\n QUERY PLAN \n---------------------------------------\n Index Scan using t_a_b_idx on t\n Index Cond: ((a = $1) AND (b = $2)) -- Correct index.\n(2 rows)\n\nGeneric plan - we use the average estimation for the missed optimizer\nstatistics part and *if the new value is not so different from existing\nones*, we can get a disired result. \n\nIt is true that the \"generic\" way is not as exactly accurate as the\n\"custom\" way since the later one can use the data in MCV, but that is\nthe cost we have to pay to make the missed optimizer statistics less\nimporant and generic plan has the same issue as well. As for this\naspect, I think the way you proposed probably have a wider use case.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Thu, 07 Mar 2024 18:16:52 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "On Thu, 7 Mar 2024 at 21:17, Andrei Lepikhov <[email protected]> wrote:\n> I would like to ask David why the var_eq_const estimator doesn't have an\n> option for estimation with a histogram. Having that would relieve a\n> problem with skewed data. Detecting the situation with incoming const\n> that is out of the covered area would allow us to fall back to ndistinct\n> estimation or something else. At least, histogram usage can be\n> restricted by the reltuples value and ratio between the total number of\n> MCV values and the total number of distinct values in the table.\n\nIf you can think of a way how to calculate it, you should propose a patch.\n\nIIRC, we try to make the histogram buckets evenly sized based on the\nnumber of occurrences. I've not followed the code in default, I'd\nguess that doing that allows us to just subtract off the MCV\nfrequencies and assume the remainder is evenly split over each\nhistogram bucket, so unless we had an n_distinct per histogram bucket,\nor at the very least n_distinct_for_histogram_values, then how would\nthe calculation look for what we currently record?\n\nDavid\n\n\n",
"msg_date": "Thu, 7 Mar 2024 23:32:26 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "\nAndrei Lepikhov <[email protected]> writes:\n\n> On 5/3/2024 19:56, Andy Fan wrote:\n>> I think it is OK for a design review, for the implementaion side, the\n>> known issue includes:\n>> 1. Support grap such infromation from its parent for partitioned table\n>> if the child doesn't have such information.\n>> 2. builtin document and testing.\n>> Any feedback is welcome.\n> Thanks for your efforts.\n> I was confused when you showed the problem connected to clauses like\n> \"Var op Const\" and \"Var op Param\".\n\nhmm, then what is the soluation in your mind when you say the \"ticky\" in\n[1]? I am thinking we have some communication gap here.\n\n> As far as I know, the estimation logic of such clauses uses MCV and\n> number-distinct statistics. So, being out of MCV values, it becomes\n> totally insensitive to any internal skew in data and any data outside\n> the statistics boundaries.\n> Having studied the example you provided with the patch, I think it is\n> not a correct example:\n> Difference between var_eq_const and var_eq_non_const quite obvious:\n\nThe response should be same as what I did in [2], let's see if we can\nmake the gap between us smaller.\n\n> In the second routine, you don't have information about the const value\n> and can't use MCV for estimation. Also, you can't exclude MCV values\n> from the estimation. And it is just luck that you've got the right\n> answer. I think if you increased the weight of the unknown part, you\n> would get a bad result, too.\n\n> I would like to ask David why the var_eq_const estimator doesn't have an\n> option for estimation with a histogram. Having that would relieve a\n> problem with skewed data. Detecting the situation with incoming const\n> that is out of the covered area would allow us to fall back to ndistinct\n> estimation or something else. At least, histogram usage can be\n> restricted by the reltuples value and ratio between the total number of\n> MCV values and the total number of distinct values in the table.\n\nI think an example which show your algorithm is better would be pretty\nhelpful for communication. \n\n[1] https://www.postgresql.org/message-id/15381eea-cbc3-4087-9d90-ab752292bd54%40postgrespro.ru\n[2] https://www.postgresql.org/message-id/87msra9vgo.fsf%40163.com\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Thu, 07 Mar 2024 18:42:31 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "On Thu, 7 Mar 2024 at 23:40, Andy Fan <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > If you don't want the planner to use the statistics for the column why\n> > not just do the following?\n>\n> Acutally I didn't want the planner to ignore the statistics totally, I\n> want the planner to treat the \"Const\" which probably miss optimizer part\n> average, which is just like what we did for generic plan for the blow\n> query.\n\nI'm with Andrei on this one and agree with his \"And it is just luck\nthat you've got the right answer\".\n\nI think we should fix the general problem of the planner not choosing\nthe better index. I understand you've had a go at that before, but I\ndidn't think fudging the costs was the right fix to coax the planner\ninto the safer choice.\n\nI'm not personally interested in any bandaid fixes for this. I'd\nrather see us come up with a long-term solution that just makes things\nbetter.\n\nI also understand you're probably frustrated and just want to make\nsomething better. However, it's not like it's a new problem. The more\ngeneral problem of the planner making risky choices outdates both of\nour time spent working on PostgreSQL, so I don't think a hasty\nsolution that fixes some small subset of the problem is that helpful.\n\nDavid\n\n\n",
"msg_date": "Fri, 8 Mar 2024 00:28:49 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "\nDavid Rowley <[email protected]> writes:\n\n> On Thu, 7 Mar 2024 at 23:40, Andy Fan <[email protected]> wrote:\n>>\n>> David Rowley <[email protected]> writes:\n>> > If you don't want the planner to use the statistics for the column why\n>> > not just do the following?\n>>\n>> Acutally I didn't want the planner to ignore the statistics totally, I\n>> want the planner to treat the \"Const\" which probably miss optimizer part\n>> average, which is just like what we did for generic plan for the blow\n>> query.\n>\n> I'm with Andrei on this one and agree with his \"And it is just luck\n> that you've got the right answer\".\n\nAny example to support this conclusion? and what's the new problem after\nthis?\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Fri, 08 Mar 2024 17:45:54 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "On 7/3/2024 17:32, David Rowley wrote:\n> On Thu, 7 Mar 2024 at 21:17, Andrei Lepikhov <[email protected]> wrote:\n>> I would like to ask David why the var_eq_const estimator doesn't have an\n>> option for estimation with a histogram. Having that would relieve a\n>> problem with skewed data. Detecting the situation with incoming const\n>> that is out of the covered area would allow us to fall back to ndistinct\n>> estimation or something else. At least, histogram usage can be\n>> restricted by the reltuples value and ratio between the total number of\n>> MCV values and the total number of distinct values in the table.\n> \n> If you can think of a way how to calculate it, you should propose a patch.\n> \n> IIRC, we try to make the histogram buckets evenly sized based on the\n> number of occurrences. I've not followed the code in default, I'd\n> guess that doing that allows us to just subtract off the MCV\n> frequencies and assume the remainder is evenly split over each\n> histogram bucket, so unless we had an n_distinct per histogram bucket,\n> or at the very least n_distinct_for_histogram_values, then how would\n> the calculation look for what we currently record?\nYeah, It is my mistake; I see nothing special here with such a kind of \nhistogram: in the case of a coarse histogram net, the level of \nuncertainty in one bin is too high to make a better estimation. I am \njust pondering detection situations when estimation constant is just out \nof statistics scope to apply to alternative, more expensive logic \ninvolving the number of index pages out of the boundary, index tuple \nwidth, and distinct value. The Left and right boundaries of the \nhistogram are suitable detectors for such a situation.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Fri, 8 Mar 2024 17:20:39 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "\nAfter some more thoughts about the diference of the two ideas, then I\nfind we are resolving two different issues, just that in the wrong index\nchoose cases, both of them should work generally. \n\nYour idea actually adding some rule based logic named certainty_factor,\njust the implemenation is very grace. for the example in this case, it\ntake effects *when the both indexes has the same cost*. I believe that\ncan resolve the index choose here, but how about the rows estimation?\nissue due to the fact that the design will not fudge the cost anyway, I\nassume you will not fudge the rows or selectivity as well. Then if the\noptimizer statistics is missing, what can we do for both index choosing\nand rows estimation? I think that's where my idea comes out. \n\nDue to the fact that optimizer statistics can't be up to date by design,\nand assume we have a sistuation where the customer's queries needs that\nstatistcs often, how about doing the predication with the history\nstatistics? it can cover for both index choose and rows estimation. Then\nthe following arguments may be arised. a). we can't decide when the\nmissed optimizer statistics is wanted *automatically*, b). if we\npredicate the esitmiation with the history statistics, the value of MCV\ninformation is missed. The answer for them is a). It is controlled by\nhuman with the \"alter table t alter column a set\n(force_generic=on)\". b). it can't be resolved I think, and it only take\neffects when the real Const is so different from the ones in\nhistory. generic plan has the same issue I think.\n\nI just reviewed the bad queries plan for the past half years internally,\nI found many queries used the Nested loop which is the direct cause. now\nI think I find out a new reason for this, because the missed optimizer\nstatistics cause the rows in outer relation to be 1, which make the Nest\nloop is choosed. I'm not sure your idea could help on this or can help\non this than mine at this aspect.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Fri, 08 Mar 2024 19:53:14 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "On 8/3/2024 18:53, Andy Fan wrote:\n> I just reviewed the bad queries plan for the past half years internally,\n> I found many queries used the Nested loop which is the direct cause. now\n> I think I find out a new reason for this, because the missed optimizer\n> statistics cause the rows in outer relation to be 1, which make the Nest\n> loop is choosed. I'm not sure your idea could help on this or can help\n> on this than mine at this aspect.\n\nHaving had the same problem for a long time, I've made an attempt and \ninvented a patch that probes an index to determine whether the estimated \nconstant is within statistics' scope.\nI remember David's remark on the overhead problem, but I don't argue it \nhere. This patch is on the table to have one more solution sketch for \nfurther discussion.\nAlso, Andy, if you have a specific problem with index choosing, you can \ntry a tiny option that makes the index-picking technique less dependent \non the ordering of index lists [1].\n\n[1] \nhttps://www.postgresql.org/message-id/[email protected]\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional",
"msg_date": "Tue, 12 Mar 2024 17:22:15 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "\n>\n> Having had the same problem for a long time, I've made an attempt and\n> invented a patch that probes an index to determine whether the estimated\n> constant is within statistics' scope.\n> I remember David's remark on the overhead problem, but I don't argue it\n> here. This patch is on the table to have one more solution sketch for\n> further discussion.\n\nI think the following code will be really horrendous on peformance\naspect, think about the cases where we have thousands of tuples.\n\n+\t\tindex_rescan(index_scan, scankeys, 1, NULL, 0);\n+\t\twhile (index_getnext_tid(index_scan, ForwardScanDirection) != NULL)\n+\t\t{\n+\t\t\tntuples++;\n+\t\t}\n+\n\n> Also, Andy, if you have a specific problem with index choosing, you can\n> try a tiny option that makes the index-picking technique less dependent\n> on the ordering of index lists [1].\n\nthanks, index choosing issue already not the only issue I want to address now.\n\nYou said the my patch was kind of lucky to work at [1], have you figure\nout an example to prove that?\n\n[1]\nhttps://www.postgresql.org/message-id/701d2097-2c5b-41e2-8629-734e3c8ba613%40postgrespro.ru \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Wed, 13 Mar 2024 15:39:07 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "Hello everyone,\n\n> After some more thoughts about the diference of the two ideas, then I\n> find we are resolving two different issues, just that in the wrong index\n> choose cases, both of them should work generally. \n\nHere is the formal version for the attribute reloptions direction.\n\ncommit 0d842e39275710a544b11033f5eec476147daf06 (HEAD -> force_generic)\nAuthor: yizhi.fzh <[email protected]>\nDate: Sun Mar 31 11:51:28 2024 +0800\n\n Add a attopt to disable MCV when estimating for Var = Const\n \n As of current code, when calculating the selectivity for Var = Const,\n planner first checks if the Const is an most common value and if not, it\n takes out all the portions of MCV's selectivity and num of distinct\n value, and treat the selectivity for Const equal for the rest\n n_distinct.\n \n This logic works great when the optimizer statistic is up to date,\n however if the known most common value has taken up most of the\n selectivity at the last run of analyze, and the new most common value in\n reality has not been gathered, the estimation for the new MCV will be\n pretty bad. A common case for this would be created_at = {current_date};\n \n To overcome this issue, we provides a new syntax:\n \n ALTER TABLE tablename ALTER COLUMN created_at SET (force_generic=on);\n \n After this, planner ignores the value of MCV for this column when\n estimating for Var = Const and treating all the values equally.\n \n This would cause some badness if the values for a column are pretty not\n equal which is what MCV is designed for, however this patch just provide\n one more option to user and let user make the decision.\n\nHere is an example about its user case.\n\ncreate table t(a int, b int, c int) with (autovacuum_enabled=off);\ncreate index on t(a, b);\ncreate index on t(a, c);\ncreate table t2 (id int primary key, a int);\ninsert into t2 select i , i from generate_series(1, 800)i;\n\ninsert into t select floor(random() * 100 + 1)::int, i, i\nfrom generate_series(1, 100000) i;\nanalyze t,t2;\n\ninsert into t\nselect floor(random() * 10 + 1)::int + 100 , i, i\nfrom generate_series(1, 10000) i;\n\nexplain (costs off) select * from t where a = 109 and b = 8;\nexplain (costs off, analyze)\nselect * from t join t2 on t.c = t2.id where t.a = 109;\n\nALTER TABLE t ALTER COLUMN a SET (force_generic=on);\n\n-- We will see some good result now.\nexplain (costs off) select * from t where a = 109 and b = 8;\nexplain (costs off, analyze)\nselect * from t join t2 on t.c = t2.id where t.a = 109;\n\nI will add this to our commitfest application, any feedback is welcome! \n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sun, 31 Mar 2024 11:53:12 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
},
{
"msg_contents": "\nAndy Fan <[email protected]> writes:\n\n> Hello everyone,\n>\n>> After some more thoughts about the diference of the two ideas, then I\n>> find we are resolving two different issues, just that in the wrong index\n>> choose cases, both of them should work generally. \n>\n> Here is the formal version for the attribute reloptions direction.\n\n> commit 0d842e39275710a544b11033f5eec476147daf06 (HEAD -> force_generic)\n> Author: yizhi.fzh <[email protected]>\n> Date: Sun Mar 31 11:51:28 2024 +0800\n>\n> Add a attopt to disable MCV when estimating for Var = Const\n> \n> As of current code, when calculating the selectivity for Var = Const,\n> planner first checks if the Const is an most common value and if not, it\n> takes out all the portions of MCV's selectivity and num of distinct\n> value, and treat the selectivity for Const equal for the rest\n> n_distinct.\n> \n> This logic works great when the optimizer statistic is up to date,\n> however if the known most common value has taken up most of the\n> selectivity at the last run of analyze, and the new most common value in\n> reality has not been gathered, the estimation for the new MCV will be\n> pretty bad. A common case for this would be created_at = {current_date};\n> \n> To overcome this issue, we provides a new syntax:\n> \n> ALTER TABLE tablename ALTER COLUMN created_at SET (force_generic=on);\n> \n> After this, planner ignores the value of MCV for this column when\n> estimating for Var = Const and treating all the values equally.\n> \n> This would cause some badness if the values for a column are pretty not\n> equal which is what MCV is designed for, however this patch just provide\n> one more option to user and let user make the decision.\n>\n> Here is an example about its user case.\n\n...\n\nHere are some add-ups for this feature:\n\n- After the use this feature, we still to gather the MCV on these\n columns because they are still useful for the join case, see\n eqjoinsel_inner function.\n\n- Will this feature make some cases worse since it relies on the fact\n that not using the MCV list for var = Const? That's is true in\n theory. But if user use this feature right, they will not use this\n feature for these columns. The feature is just designed for the user\n case in the commit message and the theory is exactly same as generic\n plan. If user uses it right, they may save the effort of run 'analyze'\n pretty frequently and get some better result on both index choose and\n rows estimation. Plus the patch is pretty not aggressive and it's easy\n to maintain.\n\n- Is the 'force_generic' a good name for attribute option? Probably not,\n we can find out a good name after we agree on this direction. \n\n- Will it be conflicted with David's idea of certainty_factor? Probably\n not,even both of them can handle the index-choose-case. See my point\n on [1]\n\n[1] https://www.postgresql.org/message-id/877cicao6e.fsf%40163.com \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Sun, 28 Apr 2024 10:39:27 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a wrong index choose when statistics is out of date"
}
] |
[
{
"msg_contents": "All,\n\nThe PostgreSQL Contributor Page \n(https://www.postgresql.org/community/contributors/) includes people who \nhave made substantial, long-term contributions of time and effort to the \nPostgreSQL project. The PostgreSQL Contributors Team recognizes the \nfollowing people for their contributions.\n\nNew PostgreSQL Contributors:\n\n* Bertrand Drouvot\n* Gabriele Bartolini\n* Richard Guo\n\nNew PostgreSQL Major Contributors:\n\n* Alexander Lakhin\n* Daniel Gustafsson\n* Dean Rasheed\n* John Naylor\n* Melanie Plageman\n* Nathan Bossart\n\nThank you and congratulations to all!\n\nThanks,\nOn behalf of the PostgreSQL Contributors Team\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 3 Mar 2024 10:57:04 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL Contributors Updates"
},
{
"msg_contents": "On Sun, Mar 3, 2024 at 9:28 PM Joe Conway <[email protected]> wrote:\n>\n> The PostgreSQL Contributor Page\n> (https://www.postgresql.org/community/contributors/) includes people who\n> have made substantial, long-term contributions of time and effort to the\n> PostgreSQL project. The PostgreSQL Contributors Team recognizes the\n> following people for their contributions.\n>\n> New PostgreSQL Contributors:\n>\n> * Bertrand Drouvot\n> * Gabriele Bartolini\n> * Richard Guo\n>\n> New PostgreSQL Major Contributors:\n>\n> * Alexander Lakhin\n> * Daniel Gustafsson\n> * Dean Rasheed\n> * John Naylor\n> * Melanie Plageman\n> * Nathan Bossart\n>\n> Thank you and congratulations to all!\n>\n\nCongratulations to all for their well-deserved recognition.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Mar 2024 08:42:47 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Contributors Updates"
},
{
"msg_contents": "\n\n> On 3 Mar 2024, at 20:57, Joe Conway <[email protected]> wrote:\n> \n> New PostgreSQL Contributors:\n> \n> * Bertrand Drouvot\n> * Gabriele Bartolini\n> * Richard Guo\n> \n> New PostgreSQL Major Contributors:\n> \n> * Alexander Lakhin\n> * Daniel Gustafsson\n> * Dean Rasheed\n> * John Naylor\n> * Melanie Plageman\n> * Nathan Bossart\n\nCongratulations! And thank you for your work on Postgres!\n\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Mon, 4 Mar 2024 10:18:43 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Contributors Updates"
},
{
"msg_contents": "On Sun, Mar 3, 2024 at 9:28 PM Joe Conway <[email protected]> wrote:\n\n> All,\n>\n> The PostgreSQL Contributor Page\n> (https://www.postgresql.org/community/contributors/) includes people who\n> have made substantial, long-term contributions of time and effort to the\n> PostgreSQL project. The PostgreSQL Contributors Team recognizes the\n> following people for their contributions.\n>\n> New PostgreSQL Contributors:\n>\n> * Bertrand Drouvot\n> * Gabriele Bartolini\n> * Richard Guo\n>\n> New PostgreSQL Major Contributors:\n>\n> * Alexander Lakhin\n> * Daniel Gustafsson\n> * Dean Rasheed\n> * John Naylor\n> * Melanie Plageman\n> * Nathan Bossart\n>\n\nThank you and many congratulations to all.\n\nRegards,\nAmul\n\nOn Sun, Mar 3, 2024 at 9:28 PM Joe Conway <[email protected]> wrote:All,\n\nThe PostgreSQL Contributor Page \n(https://www.postgresql.org/community/contributors/) includes people who \nhave made substantial, long-term contributions of time and effort to the \nPostgreSQL project. The PostgreSQL Contributors Team recognizes the \nfollowing people for their contributions.\n\nNew PostgreSQL Contributors:\n\n* Bertrand Drouvot\n* Gabriele Bartolini\n* Richard Guo\n\nNew PostgreSQL Major Contributors:\n\n* Alexander Lakhin\n* Daniel Gustafsson\n* Dean Rasheed\n* John Naylor\n* Melanie Plageman\n* Nathan BossartThank you and many congratulations to all.Regards,Amul",
"msg_date": "Mon, 4 Mar 2024 10:50:23 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Contributors Updates"
},
{
"msg_contents": "On Sun, Mar 03, 2024 at 10:57:04AM -0500, Joe Conway wrote:\n> New PostgreSQL Contributors:\n> \n> * Bertrand Drouvot\n> * Gabriele Bartolini\n> * Richard Guo\n> \n> New PostgreSQL Major Contributors:\n> \n> * Alexander Lakhin\n> * Daniel Gustafsson\n> * Dean Rasheed\n> * John Naylor\n> * Melanie Plageman\n> * Nathan Bossart\n\nCongratulations to all!\n--\nMichael",
"msg_date": "Mon, 4 Mar 2024 14:29:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Contributors Updates"
},
{
"msg_contents": "On Sun, Mar 3, 2024 at 9:28 PM Joe Conway <[email protected]> wrote:\n>\n> All,\n>\n> The PostgreSQL Contributor Page\n> (https://www.postgresql.org/community/contributors/) includes people who\n> have made substantial, long-term contributions of time and effort to the\n> PostgreSQL project. The PostgreSQL Contributors Team recognizes the\n> following people for their contributions.\n>\n> New PostgreSQL Contributors:\n>\n> * Bertrand Drouvot\n> * Gabriele Bartolini\n> * Richard Guo\n>\n> New PostgreSQL Major Contributors:\n>\n> * Alexander Lakhin\n> * Daniel Gustafsson\n> * Dean Rasheed\n> * John Naylor\n> * Melanie Plageman\n> * Nathan Bossart\n>\n\nHearty congratulations. Well deserved.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 4 Mar 2024 16:09:18 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Contributors Updates"
},
{
"msg_contents": "On Sun, Mar 3, 2024 at 9:28 PM Joe Conway <[email protected]> wrote:\n>\n> All,\n>\n> The PostgreSQL Contributor Page\n> (https://www.postgresql.org/community/contributors/) includes people who\n> have made substantial, long-term contributions of time and effort to the\n> PostgreSQL project. The PostgreSQL Contributors Team recognizes the\n> following people for their contributions.\n>\n> New PostgreSQL Contributors:\n>\n> * Bertrand Drouvot\n> * Gabriele Bartolini\n> * Richard Guo\n>\n> New PostgreSQL Major Contributors:\n>\n> * Alexander Lakhin\n> * Daniel Gustafsson\n> * Dean Rasheed\n> * John Naylor\n> * Melanie Plageman\n> * Nathan Bossart\n>\n> Thank you and congratulations to all!\n>\n\n Congratulations to all!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 16:31:43 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Contributors Updates"
},
{
"msg_contents": "On Sun, Mar 3, 2024 at 9:28 PM Joe Conway <[email protected]> wrote:\n>\n> All,\n>\n> The PostgreSQL Contributor Page\n> (https://www.postgresql.org/community/contributors/) includes people who\n> have made substantial, long-term contributions of time and effort to the\n> PostgreSQL project. The PostgreSQL Contributors Team recognizes the\n> following people for their contributions.\n>\n> New PostgreSQL Contributors:\n>\n> * Bertrand Drouvot\n> * Gabriele Bartolini\n> * Richard Guo\n>\n> New PostgreSQL Major Contributors:\n>\n> * Alexander Lakhin\n> * Daniel Gustafsson\n> * Dean Rasheed\n> * John Naylor\n> * Melanie Plageman\n> * Nathan Bossart\n>\n> Thank you and congratulations to all!\n\n+1. Congratulations to all!\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 17:03:15 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Contributors Updates"
},
{
"msg_contents": "> > All,\n> >\n> > The PostgreSQL Contributor Page\n> > (https://www.postgresql.org/community/contributors/) includes people who\n> > have made substantial, long-term contributions of time and effort to the\n> > PostgreSQL project. The PostgreSQL Contributors Team recognizes the\n> > following people for their contributions.\n> >\n> > New PostgreSQL Contributors:\n> >\n> > * Bertrand Drouvot\n> > * Gabriele Bartolini\n> > * Richard Guo\n> >\n> > New PostgreSQL Major Contributors:\n> >\n> > * Alexander Lakhin\n> > * Daniel Gustafsson\n> > * Dean Rasheed\n> > * John Naylor\n> > * Melanie Plageman\n> > * Nathan Bossart\n> >\n> > Thank you and congratulations to all!\n>\n> +1. Congratulations to all!\n\nCongratulations to all!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Mar 2024 15:13:36 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Contributors Updates"
},
{
"msg_contents": "On Mon, 4 Mar 2024 at 17:43, Aleksander Alekseev\n<[email protected]> wrote:\n>\n> > > All,\n> > >\n> > > The PostgreSQL Contributor Page\n> > > (https://www.postgresql.org/community/contributors/) includes people who\n> > > have made substantial, long-term contributions of time and effort to the\n> > > PostgreSQL project. The PostgreSQL Contributors Team recognizes the\n> > > following people for their contributions.\n> > >\n> > > New PostgreSQL Contributors:\n> > >\n> > > * Bertrand Drouvot\n> > > * Gabriele Bartolini\n> > > * Richard Guo\n> > >\n> > > New PostgreSQL Major Contributors:\n> > >\n> > > * Alexander Lakhin\n> > > * Daniel Gustafsson\n> > > * Dean Rasheed\n> > > * John Naylor\n> > > * Melanie Plageman\n> > > * Nathan Bossart\n> > >\n> > > Thank you and congratulations to all!\n> >\n> > +1. Congratulations to all!\n>\n> Congratulations to all!\n\nCongratulations to all!\n\n\n",
"msg_date": "Tue, 5 Mar 2024 08:20:47 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Contributors Updates"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nWhile reading codes, I found that ApplyLauncherShmemInit() and AutoVacuumShmemInit()\nare always called even if they would not be launched.\nIt may be able to reduce the start time to avoid the unnecessary allocation.\nHowever, I know this improvement would be quite small because the allocated chunks are\nquite small.\n\nAnyway, there are several ways to fix:\n\n1)\nSkip calling ShmemInitStruct() if the related process would not be launched.\nI think this approach is the easiest way. E.g.,\n\n```\n--- a/src/backend/replication/logical/launcher.c\n+++ b/src/backend/replication/logical/launcher.c\n@@ -962,6 +962,9 @@ ApplyLauncherShmemInit(void)\n {\n bool found;\n\n+ if (max_logical_replication_workers == 0 || IsBinaryUpgrade)\n+ return;\n+\n```\n\n2)\nDynamically allocate the shared memory. This was allowed by recent commit [1].\nI made a small PoC only for logical launcher to show what I meant. PSA diff file.\nSince some processes (backend, apply worker, parallel apply worker, and tablesync worker)\nrefers the chunk, codes for attachment must be added on the several places.\n\nIf you agree it should be fixed, I will create a patch. Thought?\n\n[1]: https://github.com/postgres/postgres/commit/8b2bcf3f287c79eaebf724cba57e5ff664b01e06\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/",
"msg_date": "Mon, 4 Mar 2024 05:26:25 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some shared memory chunks are allocated even if related processes\n won't start"
},
{
"msg_contents": "\"Hayato Kuroda (Fujitsu)\" <[email protected]> writes:\n> While reading codes, I found that ApplyLauncherShmemInit() and AutoVacuumShmemInit()\n> are always called even if they would not be launched.\n> It may be able to reduce the start time to avoid the unnecessary allocation.\n\nWhy would this be a good idea? It would require preventing the\ndecision not to launch them from being changed later, except via\npostmaster restart. We've generally been trying to move away\nfrom unchangeable-without-restart decisions. In your example,\n\n> + if (max_logical_replication_workers == 0 || IsBinaryUpgrade)\n> + return;\n\nmax_logical_replication_workers is already PGC_POSTMASTER so there's\nnot any immediate loss of flexibility, but I don't think it's a great\nidea to introduce another reason why it has to be PGC_POSTMASTER.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Mar 2024 00:33:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some shared memory chunks are allocated even if related processes\n won't start"
},
{
"msg_contents": "Dear Tom,\n\nThanks for replying!\n\n> \"Hayato Kuroda (Fujitsu)\" <[email protected]> writes:\n> > While reading codes, I found that ApplyLauncherShmemInit() and\n> AutoVacuumShmemInit()\n> > are always called even if they would not be launched.\n> > It may be able to reduce the start time to avoid the unnecessary allocation.\n> \n> Why would this be a good idea? It would require preventing the\n> decision not to launch them from being changed later, except via\n> postmaster restart.\n\nRight. It is important to relax their GucContext.\n\n> We've generally been trying to move away\n> from unchangeable-without-restart decisions. In your example,\n> \n> > + if (max_logical_replication_workers == 0 || IsBinaryUpgrade)\n> > + return;\n> \n> max_logical_replication_workers is already PGC_POSTMASTER so there's\n> not any immediate loss of flexibility, but I don't think it's a great\n> idea to introduce another reason why it has to be PGC_POSTMASTER.\n\nYou are right. The first example implied the max_logical_replication_workers\nwon't be changed. So it is not appropriate.\nSo ... what about second one? The approach allows to allocate a memory after\nstartup, which means that users may able to change the parameter from 0 to\nnatural number in future. (Of course, such an operation is prohibit for now).\nCan it be an initial step to ease the condition?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/\n\n\n\n",
"msg_date": "Mon, 4 Mar 2024 06:10:45 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Some shared memory chunks are allocated even if related processes\n won't start"
},
{
"msg_contents": "On 2024-Mar-04, Hayato Kuroda (Fujitsu) wrote:\n\n> Dear hackers,\n> \n> While reading codes, I found that ApplyLauncherShmemInit() and\n> AutoVacuumShmemInit() are always called even if they would not be\n> launched.\n\nNote that there are situations where the autovacuum launcher is started\neven though autovacuum is nominally turned off, and I suspect your\nproposal would break that. IIRC this occurs when the Xid or multixact\ncounters cross the max_freeze_age threshold.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Porque Kim no hacía nada, pero, eso sí,\ncon extraordinario éxito\" (\"Kim\", Kipling)\n\n\n",
"msg_date": "Mon, 4 Mar 2024 09:09:15 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some shared memory chunks are allocated even if related\n processes won't start"
},
{
"msg_contents": "Dear Alvaro,\r\n\r\nThanks for giving comments!\r\n\r\n> > While reading codes, I found that ApplyLauncherShmemInit() and\r\n> > AutoVacuumShmemInit() are always called even if they would not be\r\n> > launched.\r\n> \r\n> Note that there are situations where the autovacuum launcher is started\r\n> even though autovacuum is nominally turned off, and I suspect your\r\n> proposal would break that. IIRC this occurs when the Xid or multixact\r\n> counters cross the max_freeze_age threshold.\r\n\r\nRight. In GetNewTransactionId(), SetTransactionIdLimit() and some other places,\r\nPMSIGNAL_START_AUTOVAC_LAUNCHER is sent to postmaster when the xid exceeds\r\nautovacuum_freeze_max_age. This work has already been written in the doc [1]:\r\n\r\n```\r\nTo ensure that this does not happen, autovacuum is invoked on any table that\r\nmight contain unfrozen rows with XIDs older than the age specified by the\r\nconfiguration parameter autovacuum_freeze_max_age. (This will happen even\r\nif autovacuum is disabled.)\r\n```\r\n\r\nThis means that my first idea won't work well. Even if the postmaster does not\r\ninitially allocate shared memory, backends may request to start auto vacuum and\r\nuse the region. However, the second idea is still valid, which allows the allocation\r\nof shared memory dynamically. This is a bit efficient for the system which tuples\r\nwon't be frozen. Thought?\r\n\r\n[1]: https://www.postgresql.org/docs/devel/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Mon, 4 Mar 2024 09:50:27 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Some shared memory chunks are allocated even if related processes\n won't start"
},
{
"msg_contents": "On 2024-Mar-04, Hayato Kuroda (Fujitsu) wrote:\n\n> However, the second idea is still valid, which allows the allocation\n> of shared memory dynamically. This is a bit efficient for the system\n> which tuples won't be frozen. Thought?\n\nI think it would be worth allocating AutoVacuumShmem->av_workItems using\ndynamic shmem allocation, particularly to prevent workitems from being\ndiscarded just because the array is full¹; but other than that, the\nstruct is just 64 bytes long so I doubt it's useful to allocate it\ndynamically.\n\n¹ I mean, if the array is full, just allocate another array, point to it\nfrom the original one, and keep going.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The problem with the facetime model is not just that it's demoralizing, but\nthat the people pretending to work interrupt the ones actually working.\"\n -- Paul Graham, http://www.paulgraham.com/opensource.html\n\n\n",
"msg_date": "Mon, 4 Mar 2024 12:52:47 +0100",
"msg_from": "'Alvaro Herrera' <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some shared memory chunks are allocated even if related\n processes won't start"
},
{
"msg_contents": "Dear Alvaro,\r\n\r\nThanks for discussing!\r\n\r\n> \r\n> I think it would be worth allocating AutoVacuumShmem->av_workItems using\r\n> dynamic shmem allocation, particularly to prevent workitems from being\r\n> discarded just because the array is full¹; but other than that, the\r\n> struct is just 64 bytes long so I doubt it's useful to allocate it\r\n> dynamically.\r\n> \r\n> ¹ I mean, if the array is full, just allocate another array, point to it\r\n> from the original one, and keep going.\r\n\r\nOK, I understood that my initial proposal is not so valuable, so I can withdraw it.\r\n\r\nAbout the suggetion, you imagined AutoVacuumRequestWork() and brininsert(),\r\nright? I agreed it sounds good, but I don't think it can be implemented by current\r\ninterface. An interface for dynamically allocating memory is GetNamedDSMSegment(),\r\nand it returns the same shared memory region if input names are the same.\r\nTherefore, there is no way to re-alloc the shared memory.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Mon, 4 Mar 2024 13:11:14 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Some shared memory chunks are allocated even if related processes\n won't start"
},
{
"msg_contents": "Hello Hayato,\n\nOn 2024-Mar-04, Hayato Kuroda (Fujitsu) wrote:\n\n> OK, I understood that my initial proposal is not so valuable, so I can\n> withdraw it.\n\nYeah, that's what it seems to me.\n\n> About the suggetion, you imagined AutoVacuumRequestWork() and\n> brininsert(), right?\n\nCorrect.\n\n> I agreed it sounds good, but I don't think it can be implemented by\n> current interface. An interface for dynamically allocating memory is\n> GetNamedDSMSegment(), and it returns the same shared memory region if\n> input names are the same. Therefore, there is no way to re-alloc the\n> shared memory.\n\nYeah, I was imagining something like this: the workitem-array becomes a\nstruct, which has a name and a \"next\" pointer and a variable number of\nworkitem slots; the AutoVacuumShmem struct has a pointer to the first\nworkitem-struct and the last one; when a workitem is requested by\nbrininsert, we initially allocate via GetNamedDSMSegment(\"workitem-0\") a\nworkitem-struct with a smallish number of elements; if we request\nanother workitem and the array is full, we allocate another array via\nGetNamedDSMSegment(\"workitem-1\") and store a pointer to it in workitem-0\n(so that the list can be followed by an autovacuum worker that's\nprocessing the database), and it's also set as the tail of the list in\nAutoVacuumShmem (so that we know where to store further work items).\nWhen all items in a workitem-struct are processed, we can free it\n(I guess via dsm_unpin_segment), and make AutoVacuumShmem->av_workitems\npoint to the next one in the list.\n\nThis way, the \"array\" can grow arbitrarily.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 4 Mar 2024 14:50:46 +0100",
"msg_from": "'Alvaro Herrera' <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some shared memory chunks are allocated even if related\n processes won't start"
},
{
"msg_contents": "Dear Alvaro,\r\n\r\nThanks for giving comments!\r\n\r\n> > I agreed it sounds good, but I don't think it can be implemented by\r\n> > current interface. An interface for dynamically allocating memory is\r\n> > GetNamedDSMSegment(), and it returns the same shared memory region if\r\n> > input names are the same. Therefore, there is no way to re-alloc the\r\n> > shared memory.\r\n> \r\n> Yeah, I was imagining something like this: the workitem-array becomes a\r\n> struct, which has a name and a \"next\" pointer and a variable number of\r\n> workitem slots; the AutoVacuumShmem struct has a pointer to the first\r\n> workitem-struct and the last one; when a workitem is requested by\r\n> brininsert, we initially allocate via GetNamedDSMSegment(\"workitem-0\") a\r\n> workitem-struct with a smallish number of elements; if we request\r\n> another workitem and the array is full, we allocate another array via\r\n> GetNamedDSMSegment(\"workitem-1\") and store a pointer to it in workitem-0\r\n> (so that the list can be followed by an autovacuum worker that's\r\n> processing the database), and it's also set as the tail of the list in\r\n> AutoVacuumShmem (so that we know where to store further work items).\r\n> When all items in a workitem-struct are processed, we can free it\r\n> (I guess via dsm_unpin_segment), and make AutoVacuumShmem->av_workitems\r\n> point to the next one in the list.\r\n> \r\n> This way, the \"array\" can grow arbitrarily.\r\n>\r\n\r\nBasically sounds good. My concerns are:\r\n\r\n* GetNamedDSMSegment() does not returns a raw pointer to dsm_segment. This means\r\n that it may be difficult to do dsm_unpin_segment on the caller side.\r\n* dynamic shared memory is recorded in dhash (dsm_registry_table) and the entry\r\n won't be deleted. The reference for the chunk might be remained.\r\n\r\nOverall, it may be needed that dsm_registry may be also extended. I do not start\r\nworking yet, so will share results after trying them.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Tue, 5 Mar 2024 02:00:08 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Some shared memory chunks are allocated even if related processes\n won't start"
},
{
"msg_contents": "On 2024-Mar-05, Hayato Kuroda (Fujitsu) wrote:\n\n> Basically sounds good. My concerns are:\n> \n> * GetNamedDSMSegment() does not returns a raw pointer to dsm_segment. This means\n> that it may be difficult to do dsm_unpin_segment on the caller side.\n\nMaybe we don't need a \"named\" DSM segment at all, and instead just use\nbare dsm segments (dsm_create and friends) or a DSA -- not sure. But\nsee commit 31ae1638ce35, which removed use of a DSA in autovacuum/BRIN.\nMaybe fixing this is just a matter of reverting that commit. At the\ntime, there was a belief that DSA wasn't supported everywhere so we\ncouldn't use it for autovacuum workitem stuff, but I think our reliance\non DSA is now past the critical point.\n\nBTW, we should turn BRIN autosummarization to be on by default.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Las mujeres son como hondas: mientras más resistencia tienen,\n más lejos puedes llegar con ellas\" (Jonas Nightingale, Leap of Faith)\n\n\n",
"msg_date": "Tue, 5 Mar 2024 08:34:18 +0100",
"msg_from": "'Alvaro Herrera' <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some shared memory chunks are allocated even if related\n processes won't start"
}
] |
[
{
"msg_contents": "Hi All,\n\nEager aggregation is a query optimization technique that partially\npushes a group-by past a join, and finalizes it once all the relations\nare joined. Eager aggregation reduces the number of input rows to the\njoin and thus may result in a better overall plan. This technique is\nthoroughly described in the 'Eager Aggregation and Lazy Aggregation'\npaper [1].\n\nBack in 2017, a patch set has been proposed by Antonin Houska to\nimplement eager aggregation in thread [2]. However, it was at last\nwithdrawn after entering the pattern of \"please rebase thx\" followed by\nrebasing and getting no feedback until \"please rebase again thx\". A\nsecond attempt in 2022 unfortunately fell into the same pattern about\none year ago and was eventually closed again [3].\n\nThat patch set has included most of the necessary concepts to implement\neager aggregation. However, as far as I can see, it has several weak\npoints that we need to address. It introduces invasive changes to some\ncore planner functions, such as make_join_rel(). And with such changes\njoin_is_legal() would be performed three times for the same proposed\njoin, which is not great. Another weak point is that the complexity of\njoin searching dramatically increases with the growing number of\nrelations to be joined. This occurs because when we generate partially\naggregated paths, each path of the input relation is considered as an\ninput path for the grouped paths. As a result, the number of grouped\npaths we generate increases exponentially, leading to a significant\nexplosion in computational complexity. Other weak points include the\nlack of support for outer joins and partitionwise joins. And during my\nreview of the code, I came across several bugs (planning error or crash)\nthat need to be addressed.\n\nI'd like to give it another take to implement eager aggregation, while\nborrowing lots of concepts and many chunks of codes from the previous\npatch set. Please see attached. I have chosen to use the term 'Eager\nAggregation' from the paper [1] instead of 'Aggregation push-down', to\ndifferentiate the aggregation push-down technique in FDW.\n\nThe patch has been split into small pieces to make the review easier.\n\n0001 introduces the RelInfoList structure, which encapsulates both a\nlist and a hash table, so that we can leverage the hash table for faster\nlookups not only for join relations but also for upper relations. With\neager aggregation, it is possible that we generate so many upper rels of\ntype UPPERREL_PARTIAL_GROUP_AGG that a hash table can help a lot with\nlookups.\n\n0002 introduces the RelAggInfo structure to store information needed to\ncreate grouped paths for base and join rels. It also revises the\nRelInfoList related structures and functions so that they can be used\nwith RelAggInfos.\n\n0003 checks if eager aggregation is applicable, and if so, collects\nsuitable aggregate expressions and grouping expressions in the query,\nand records them in root->agg_clause_list and root->group_expr_list\nrespectively.\n\n0004 implements the functions that check if eager aggregation is\napplicable for a given relation, and if so, create RelAggInfo structure\nfor the relation, using the infos about aggregate expressions and\ngrouping expressions we collected earlier. In this patch, when we check\nif a target expression can act as grouping expression, we need to check\nif this expression can be known equal to other expressions due to ECs\nthat can act as grouping expressions. 
This patch leverages function\nexprs_known_equal() to achieve that, after enhancing this function to\nconsider opfamily if provided.\n\n0005 implements the functions that generate paths for grouped relations\nby adding sorted and hashed partial aggregation paths on top of paths of\nthe plain base or join relations. For sorted partial aggregation paths,\nwe only consider any suitably-sorted input paths as well as sorting the\ncheapest-total path. For hashed partial aggregation paths, we only\nconsider the cheapest-total path as input. By not considering other\npaths we can reduce the number of grouping paths as much as possible\nwhile still achieving reasonable results.\n\n0006 builds grouped relations for each base relation if possible, and\ngenerates aggregation paths for the grouped base relations.\n\n0007 builds grouped relations for each just-processed join relation if\npossible, and generates aggregation paths for the grouped join\nrelations. The changes made to make_join_rel() are relatively minor,\nwith the addition of a new function make_grouped_join_rel(), which finds\nor creates a grouped relation for the just-processed joinrel, and\ngenerates grouped paths by joining a grouped input relation with a\nnon-grouped input relation.\n\nThe other way to generate grouped paths is by adding sorted and hashed\npartial aggregation paths on top of paths of the joinrel. This occurs\nin standard_join_search(), after we've run set_cheapest() for the\njoinrel. The reason for performing this step after set_cheapest() is\nthat we need to know the joinrel's cheapest paths (see 0005).\n\nThis patch also makes the grouped relation for the topmost join rel act\nas the upper rel representing the result of partial aggregation, so that\nwe can add the final aggregation on top of that. Additionally, this\npatch extends the functionality of eager aggregation to work with\npartitionwise join and geqo.\n\nThis patch also makes eager aggregation work with outer joins. With\nouter join, the aggregate cannot be pushed down if any column referenced\nby grouping expressions or aggregate functions is nullable by an outer\njoin above the relation to which we want to apply the partiall\naggregation. Thanks to Tom's outer-join-aware-Var infrastructure, we\ncan easily identify such situations and subsequently refrain from\npushing down the aggregates.\n\nStarting from this patch, you should be able to see plans with eager\naggregation.\n\n0008 adds test cases for eager aggregation.\n\n0009 adds a section in README that describes this feature (copied from\nprevious patch set, with minor tweaks).\n\nThoughts and comments are welcome.\n\n[1] https://www.vldb.org/conf/1995/P345.PDF\n[2] https://www.postgresql.org/message-id/flat/9666.1491295317%40localhost\n[3]\nhttps://www.postgresql.org/message-id/flat/OS3PR01MB66609589B896FBDE190209F495EE9%40OS3PR01MB6660.jpnprd01.prod.outlook.com\n\nThanks\nRichard",
"msg_date": "Mon, 4 Mar 2024 16:27:24 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Eager aggregation, take 3"
},
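A minimal sketch to make the proposal in the message above concrete. The schema and data here are invented for illustration, and enable_eager_aggregate is the GUC added by this patchset (it does not exist in stock PostgreSQL), so treat this as illustrative only rather than output from the patches:

-- Invented fact/dimension pair; the join key "b" has many duplicates in fact,
-- which is what makes a pushed-down partial aggregate worthwhile.
CREATE TABLE dim (a int, b int);
CREATE TABLE fact (b int, c int);
INSERT INTO dim SELECT i % 100, i % 10 FROM generate_series(1, 1000) i;
INSERT INTO fact SELECT i % 10, i FROM generate_series(1, 10000) i;
ANALYZE dim;
ANALYZE fact;

SET enable_eager_aggregate = on;
-- With the patchset applied, the planner may now consider a plan of the shape
-- Finalize Aggregate -> Join -> Partial Aggregate over "fact", instead of
-- aggregating only after the join.
EXPLAIN (VERBOSE, COSTS OFF)
SELECT d.a, sum(f.c)
FROM dim d JOIN fact f ON d.b = f.b
GROUP BY d.a;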
{
"msg_contents": "\nRichard Guo <[email protected]> writes:\n\n> Hi All,\n>\n> Eager aggregation is a query optimization technique that partially\n> pushes a group-by past a join, and finalizes it once all the relations\n> are joined. Eager aggregation reduces the number of input rows to the\n> join and thus may result in a better overall plan. This technique is\n> thoroughly described in the 'Eager Aggregation and Lazy Aggregation'\n> paper [1].\n\nThis is a really helpful but not easy task. Even though I am not sure when I\ncan spend time to study this, I want to say \"Thanks for working on\nthis!\" first, and I hope we can really make progress on this topic. Good luck!\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Mon, 04 Mar 2024 19:45:43 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Mon, Mar 4, 2024 at 7:49 PM Andy Fan <[email protected]> wrote:\n\n> This is a really helpful and not easy task, even I am not sure when I\n> can spend time to study this, I want to say \"Thanks for working on\n> this!\" first and hope we can really progress on this topic. Good luck!\n\n\nThanks. I hope this take can go even further and ultimately find its\nway to be committed.\n\nThis needs a rebase after dbbca2cf29. I also revised the commit message\nfor 0007 and fixed a typo in 0009.\n\nThanks\nRichard",
"msg_date": "Tue, 5 Mar 2024 14:47:27 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 2:47 PM Richard Guo <[email protected]> wrote:\n\n> This needs a rebase after dbbca2cf29. I also revised the commit message\n> for 0007 and fixed a typo in 0009.\n>\n\nHere is another rebase, mainly to make the test cases more stable by\nadding ORDER BY clauses to the test queries. Also fixed more typos in\npassing.\n\nThanks\nRichard",
"msg_date": "Tue, 5 Mar 2024 19:19:41 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 7:19 PM Richard Guo <[email protected]> wrote:\n\n> Here is another rebase, mainly to make the test cases more stable by\n> adding ORDER BY clauses to the test queries. Also fixed more typos in\n> passing.\n>\n\nThis needs another rebase after 97d85be365. I also addressed several\nissues that I identified during self-review, which include:\n\n* In some cases GroupPathExtraData.agg_final_costs, which is the cost of\nfinal aggregation, fails to be calculated. This can lead to bogus cost\nestimation and end up with unexpected plan.\n\n* If the cheapest partially grouped path is generated through eager\naggregation, the number of groups estimated for the final phase will be\ndifferent from the number of groups estimated for non-split aggregation.\nThat is to say, we should not use 'dNumGroups' for the final aggregation\nin add_paths_to_grouping_rel().\n\n* It is possible that we may generate dummy grouped join relations, and\nthat would trigger the Assert in make_grouped_join_rel().\n\n* More typo fixes.\n\nThanks\nRichard",
"msg_date": "Thu, 21 Mar 2024 18:51:55 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "There is a conflict in the parallel_schedule file. So here is another\nrebase. Nothing else has changed.\n\nThanks\nRichard",
"msg_date": "Wed, 10 Apr 2024 17:42:52 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "Here is an update of the patchset with the following changes:\n\n* Fix an 'Aggref found where not expected' error caused by the PVC call\nin is_var_in_aggref_only. This would happen if we have Aggrefs\ncontained in other expressions.\n\n* Use joinrel's relids rather than the union of the relids of its outer\nand inner to search for its grouped rel. This is more correct as we\nneed to take OJs into consideration.\n\n* Remove RelAggInfo.agg_exprs as it is not used anymore.\n\nThanks\nRichard",
"msg_date": "Tue, 30 Apr 2024 12:06:42 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
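The first item in the message above ("Aggrefs contained in other expressions") refers to queries where the aggregate call is nested inside a larger target-list expression. The query below is a hypothetical illustration of that shape, not the actual reproducer from the patch; the tables are invented and enable_eager_aggregate is the GUC added by the patchset:

CREATE TABLE ea_t1 (a int, b int);
CREATE TABLE ea_t2 (b int, c int);

SET enable_eager_aggregate = on;
-- sum(ea_t2.c) is an Aggref wrapped inside the larger expression
-- sum(ea_t2.c) * 2 + 1, which is the kind of case the fix is concerned with.
EXPLAIN (COSTS OFF)
SELECT ea_t1.a, sum(ea_t2.c) * 2 + 1
FROM ea_t1 JOIN ea_t2 ON ea_t1.b = ea_t2.b
GROUP BY ea_t1.a;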
{
"msg_contents": "Another rebase is needed after d1d286d83c. Also I realized that the\npartially_grouped_rel generated by eager aggregation might be dummy,\nsuch as in query:\n\nselect count(t2.c) from t t1 join t t2 on t1.b = t2.b where false group by\nt1.a;\n\nIf somehow we choose this dummy path with a Finalize Agg Path on top of\nit as the final cheapest path (a very rare case), we would encounter the\n\"Aggref found in non-Agg plan node\" error. The v7 patch fixes this\nissue.\n\nThanks\nRichard",
"msg_date": "Mon, 20 May 2024 16:12:49 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
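A self-contained version of the reproducer quoted in the message above. The query itself comes from that message; the table definition is an assumption (the message does not give the DDL), chosen to match the column names used, and enable_eager_aggregate is the GUC added by the patchset:

CREATE TABLE t (a int, b int, c int);

SET enable_eager_aggregate = on;
-- "WHERE false" makes the join relation dummy; the v7 fix concerns the case
-- where the partially grouped rel built for it ends up dummy as well.
EXPLAIN (COSTS OFF)
SELECT count(t2.c) FROM t t1 JOIN t t2 ON t1.b = t2.b WHERE false GROUP BY t1.a;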
{
"msg_contents": "On Mon, May 20, 2024 at 4:12 PM Richard Guo <[email protected]> wrote:\n> Another rebase is needed after d1d286d83c. Also I realized that the\n> partially_grouped_rel generated by eager aggregation might be dummy,\n> such as in query:\n>\n> select count(t2.c) from t t1 join t t2 on t1.b = t2.b where false group by t1.a;\n>\n> If somehow we choose this dummy path with a Finalize Agg Path on top of\n> it as the final cheapest path (a very rare case), we would encounter the\n> \"Aggref found in non-Agg plan node\" error. The v7 patch fixes this\n> issue.\n\nI spent some time testing this patchset and found a few more issues.\n\nOne issue is that partially-grouped partial paths may have already been\ngenerated in the process of building up the grouped join relations by\neager aggregation, in which case the partially_grouped_rel would contain\nvalid partial paths by the time we reach create_partial_grouping_paths.\nIf we subsequently find that parallelism is not possible for\npartially_grouped_rel, we need to drop these partial paths; otherwise we\nrisk encountering Assert(subpath->parallel_safe) when creating gather /\ngather merge path. This issue can be reproduced with the query below on\nv7 patch.\n\ncreate function parallel_restricted_func(a int) returns int as\n $$ begin return a; end; $$ parallel restricted language plpgsql;\ncreate table t (a int, b int, c int) with (parallel_workers = 2);\nset enable_eager_aggregate to on;\n\nexplain (costs off)\nselect parallel_restricted_func(1) * count(t2.c)\n from t t1, t t2 where t1.b = t2.b group by t2.c;\n\n\nAnother issue I found is that when we check to see whether a given Var\nappears only within Aggrefs, we need to account for havingQual in\naddition to targetlist; otherwise there's a risk of omitting this Var\nfrom the targetlist of the partial Agg node, leading to 'ERROR: variable\nnot found in subplan target list'. This error can be reproduced with\nthe query below on v7.\n\ncreate table t (a int primary key, b int, c int);\nset enable_eager_aggregate to on;\n\nexplain (costs off)\nselect count(*) from t t1, t t2 group by t1.a having min(t1.b) < t1.b;\nERROR: variable not found in subplan target list\n\n\nA third issue I found is that with v7 we might push the Partial Agg to\nthe nullable side of an outer join, which is not correct. This happens\nbecause when determining whether a Partial Agg can be pushed down to a\nrelation, the v7 patchset indeed checks if the aggregate expressions can\nbe evaluated at this relation level. However, it overlooks checking the\ngrouping expressions. The grouping expressions can originate from two\nsources: the original GROUP BY clauses, or constructed from join\nconditions. In either case, we must verify that the grouping\nexpressions cannot be nulled by outer joins that are above the current\nrelation, otherwise the Partial Agg cannot be pushed down to this rel.\n\nHence here is the v8 patchset, with fixes for all the above issues.\n\nThanks\nRichard",
"msg_date": "Thu, 13 Jun 2024 16:07:51 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Thu, Jun 13, 2024 at 4:07 PM Richard Guo <[email protected]> wrote:\n> I spent some time testing this patchset and found a few more issues.\n> ...\n\n> Hence here is the v8 patchset, with fixes for all the above issues.\n\nI found an 'ORDER/GROUP BY expression not found in targetlist' error\nwith this patchset, with the query below:\n\ncreate table t (a boolean);\n\nset enable_eager_aggregate to on;\n\nexplain (costs off)\nselect min(1) from t t1 left join t t2 on t1.a group by (not (not\nt1.a)), t1.a order by t1.a;\nERROR: ORDER/GROUP BY expression not found in targetlist\n\nThis happens because the two grouping items are actually the same and\nstandard_qp_callback would remove one of them. The fully-processed\ngroupClause is kept in root->processed_groupClause. However, when\ncollecting grouping expressions in create_grouping_expr_infos, we are\nchecking parse->groupClause, which is incorrect.\n\nThe fix is straightforward: check root->processed_groupClause instead.\n\nHere is a new rebase with this fix.\n\nThanks\nRichard",
"msg_date": "Wed, 3 Jul 2024 16:29:27 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "Richard:\n\nThanks for reviving this patch and for all of your work on it! Eager\naggregation pushdown will be beneficial for my work and I'm hoping to see\nit land.\n\n\nI was playing around with v9 of the patches and was specifically curious\nabout this previous statement...\n\n>This patch also makes eager aggregation work with outer joins. With\n>outer join, the aggregate cannot be pushed down if any column referenced\n>by grouping expressions or aggregate functions is nullable by an outer\n>join above the relation to which we want to apply the partiall\n>aggregation. Thanks to Tom's outer-join-aware-Var infrastructure, we\n>can easily identify such situations and subsequently refrain from\n>pushing down the aggregates.\n\n ...and this related comment in eager_aggregate.out:\n\n>-- Ensure aggregation cannot be pushed down to the nullable side\n\nWhile I'm new to this work and its subtleties, I'm wondering if this is too\nbroad a condition.\n\nI modified the first test query in eager_aggregate.sql to make it a LEFT\nJOIN and eager aggregation indeed did not happen, which is expected based\non the comments upthread.\n\nquery:\nSET enable_eager_aggregate=ON;\nEXPLAIN (VERBOSE, COSTS OFF)\nSELECT t1.a, sum(t2.c) FROM eager_agg_t1 t1 LEFT JOIN eager_agg_t2 t2 ON\nt1.b = t2.b GROUP BY t1.a ORDER BY t1.a;\n\nplan:\n QUERY PLAN\n------------------------------------------------------------\n GroupAggregate\n Output: t1.a, sum(t2.c)\n Group Key: t1.a\n -> Sort\n Output: t1.a, t2.c\n Sort Key: t1.a\n -> Hash Right Join\n Output: t1.a, t2.c\n Hash Cond: (t2.b = t1.b)\n -> Seq Scan on public.eager_agg_t2 t2\n Output: t2.a, t2.b, t2.c\n -> Hash\n Output: t1.a, t1.b\n -> Seq Scan on public.eager_agg_t1 t1\n Output: t1.a, t1.b\n(15 rows)\n\n(NOTE: I changed the aggregate from avg(...) to sum(...) for simplicity)\n\nBut, it seems that eager aggregation for the query above can be\n\"replicated\" as:\n\nquery:\n\nEXPLAIN (VERBOSE, COSTS OFF)\nSELECT t1.a, sum(t2.c)\nFROM eager_agg_t1 t1\nLEFT JOIN (\n SELECT b, sum(c) c\n FROM eager_agg_t2 t2p\n GROUP BY b\n) t2 ON t1.b = t2.b\nGROUP BY t1.a\nORDER BY t1.a;\n\nThe output of both the original query and this one match (and the plans\nwith eager aggregation and the subquery are nearly identical if you restore\nthe LEFT JOIN to a JOIN). I admittedly may be missing a subtlety, but does\nthis mean that there are conditions under which eager aggregation can be\npushed down to the nullable side?\n\n\n-Paul-\n\nOn Sat, Jul 6, 2024 at 4:56 PM Richard Guo <[email protected]> wrote:\n\n> On Thu, Jun 13, 2024 at 4:07 PM Richard Guo <[email protected]>\n> wrote:\n> > I spent some time testing this patchset and found a few more issues.\n> > ...\n>\n> > Hence here is the v8 patchset, with fixes for all the above issues.\n>\n> I found an 'ORDER/GROUP BY expression not found in targetlist' error\n> with this patchset, with the query below:\n>\n> create table t (a boolean);\n>\n> set enable_eager_aggregate to on;\n>\n> explain (costs off)\n> select min(1) from t t1 left join t t2 on t1.a group by (not (not\n> t1.a)), t1.a order by t1.a;\n> ERROR: ORDER/GROUP BY expression not found in targetlist\n>\n> This happens because the two grouping items are actually the same and\n> standard_qp_callback would remove one of them. The fully-processed\n> groupClause is kept in root->processed_groupClause. 
However, when\n> collecting grouping expressions in create_grouping_expr_infos, we are\n> checking parse->groupClause, which is incorrect.\n>\n> The fix is straightforward: check root->processed_groupClause instead.\n>\n> Here is a new rebase with this fix.\n>\n> Thanks\n> Richard\n>",
"msg_date": "Sat, 6 Jul 2024 19:45:32 -0700",
"msg_from": "Paul George <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Sun, Jul 7, 2024 at 10:45 AM Paul George <[email protected]> wrote:\n> Thanks for reviving this patch and for all of your work on it! Eager aggregation pushdown will be beneficial for my work and I'm hoping to see it land.\n\nThanks for looking at this patch!\n\n> The output of both the original query and this one match (and the plans with eager aggregation and the subquery are nearly identical if you restore the LEFT JOIN to a JOIN). I admittedly may be missing a subtlety, but does this mean that there are conditions under which eager aggregation can be pushed down to the nullable side?\n\nI think it's a very risky thing to push a partial aggregation down to\nthe nullable side of an outer join, because the NULL-extended rows\nproduced by the outer join would not be available when we perform the\npartial aggregation, while with a non-eager-aggregation plan these\nrows are available for the top-level aggregation. This may put the\nrows into groups in a different way than expected, or get wrong values\nfrom the aggregate functions. I've managed to compose an example:\n\ncreate table t (a int, b int);\ninsert into t select 1, 1;\n\nselect t2.a, count(*) from t t1 left join t t2 on t2.b > 1 group by\nt2.a having t2.a is null;\n a | count\n---+-------\n | 1\n(1 row)\n\nThis is the expected result, because after the outer join we have got\na NULL-extended row.\n\nBut if we somehow push down the partial aggregation to the nullable\nside of this outer join, we would get a wrong result.\n\nexplain (costs off)\nselect t2.a, count(*) from t t1 left join t t2 on t2.b > 1 group by\nt2.a having t2.a is null;\n QUERY PLAN\n-------------------------------------------\n Finalize HashAggregate\n Group Key: t2.a\n -> Nested Loop Left Join\n Filter: (t2.a IS NULL)\n -> Seq Scan on t t1\n -> Materialize\n -> Partial HashAggregate\n Group Key: t2.a\n -> Seq Scan on t t2\n Filter: (b > 1)\n(10 rows)\n\nselect t2.a, count(*) from t t1 left join t t2 on t2.b > 1 group by\nt2.a having t2.a is null;\n a | count\n---+-------\n | 0\n(1 row)\n\nI believe there are cases where pushing a partial aggregation down to\nthe nullable side of an outer join can be safe, but I doubt that there\nis an easy way to identify these cases and do the push-down for them.\nSo for now I think we'd better refrain from doing that.\n\nThanks\nRichard\n\n\n",
"msg_date": "Wed, 10 Jul 2024 16:27:02 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "Hey Richard,\n\nLooking more closely at this example\n\n>select t2.a, count(*) from t t1 left join t t2 on t2.b > 1 group by t2.a\nhaving t2.a is null;\n\nI wonder if the inability to exploit eager aggregation is more based on the\nfact that COUNT(*) cannot be decomposed into an aggregation of PARTIAL\nCOUNT(*)s (apologies if my terminology is off/made up...I'm new to the\ncodebase). In other words, is it the case that a given aggregate function\nalready has built-in protection against the error case you correctly\npointed out?\n\nTo highlight this, in the simple example below we don't see aggregate\npushdown even with an INNER JOIN when the agg function is COUNT(*) but we\ndo when it's COUNT(t2.*):\n\n-- same setup\ndrop table if exists t;\ncreate table t(a int, b int, c int);\ninsert into t select i % 100, i % 10, i from generate_series(1, 1000) i;\nanalyze t;\n\n-- query 1: COUNT(*) --> no pushdown\n\nset enable_eager_aggregate=on;\nexplain (verbose, costs off) select t1.a, count(*) from t t1 join t t2 on\nt1.a=t2.a group by t1.a;\n\n QUERY PLAN\n-------------------------------------------\n HashAggregate\n Output: t1.a, count(*)\n Group Key: t1.a\n -> Hash Join\n Output: t1.a\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on public.t t1\n Output: t1.a, t1.b, t1.c\n -> Hash\n Output: t2.a\n -> Seq Scan on public.t t2\n Output: t2.a\n(12 rows)\n\n\n-- query 2: COUNT(t2.*) --> agg pushdown\n\nset enable_eager_aggregate=on;\nexplain (verbose, costs off) select t1.a, count(t2.*) from t t1 join t t2\non t1.a=t2.a group by t1.a;\n\n QUERY PLAN\n-------------------------------------------------------\n Finalize HashAggregate\n Output: t1.a, count(t2.*)\n Group Key: t1.a\n -> Hash Join\n Output: t1.a, (PARTIAL count(t2.*))\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on public.t t1\n Output: t1.a, t1.b, t1.c\n -> Hash\n Output: t2.a, (PARTIAL count(t2.*))\n -> Partial HashAggregate\n Output: t2.a, PARTIAL count(t2.*)\n Group Key: t2.a\n -> Seq Scan on public.t t2\n Output: t2.*, t2.a\n(15 rows)\n\n...while it might be true that COUNT(*) ... INNER JOIN should allow eager\nagg pushdown (I haven't thought deeply about it, TBH), I did find this\nresult pretty interesting.\n\n\n-Paul\n\nOn Wed, Jul 10, 2024 at 1:27 AM Richard Guo <[email protected]> wrote:\n\n> On Sun, Jul 7, 2024 at 10:45 AM Paul George <[email protected]>\n> wrote:\n> > Thanks for reviving this patch and for all of your work on it! Eager\n> aggregation pushdown will be beneficial for my work and I'm hoping to see\n> it land.\n>\n> Thanks for looking at this patch!\n>\n> > The output of both the original query and this one match (and the plans\n> with eager aggregation and the subquery are nearly identical if you restore\n> the LEFT JOIN to a JOIN). I admittedly may be missing a subtlety, but does\n> this mean that there are conditions under which eager aggregation can be\n> pushed down to the nullable side?\n>\n> I think it's a very risky thing to push a partial aggregation down to\n> the nullable side of an outer join, because the NULL-extended rows\n> produced by the outer join would not be available when we perform the\n> partial aggregation, while with a non-eager-aggregation plan these\n> rows are available for the top-level aggregation. This may put the\n> rows into groups in a different way than expected, or get wrong values\n> from the aggregate functions. 
I've managed to compose an example:\n>\n> create table t (a int, b int);\n> insert into t select 1, 1;\n>\n> select t2.a, count(*) from t t1 left join t t2 on t2.b > 1 group by\n> t2.a having t2.a is null;\n> a | count\n> ---+-------\n> | 1\n> (1 row)\n>\n> This is the expected result, because after the outer join we have got\n> a NULL-extended row.\n>\n> But if we somehow push down the partial aggregation to the nullable\n> side of this outer join, we would get a wrong result.\n>\n> explain (costs off)\n> select t2.a, count(*) from t t1 left join t t2 on t2.b > 1 group by\n> t2.a having t2.a is null;\n> QUERY PLAN\n> -------------------------------------------\n> Finalize HashAggregate\n> Group Key: t2.a\n> -> Nested Loop Left Join\n> Filter: (t2.a IS NULL)\n> -> Seq Scan on t t1\n> -> Materialize\n> -> Partial HashAggregate\n> Group Key: t2.a\n> -> Seq Scan on t t2\n> Filter: (b > 1)\n> (10 rows)\n>\n> select t2.a, count(*) from t t1 left join t t2 on t2.b > 1 group by\n> t2.a having t2.a is null;\n> a | count\n> ---+-------\n> | 0\n> (1 row)\n>\n> I believe there are cases where pushing a partial aggregation down to\n> the nullable side of an outer join can be safe, but I doubt that there\n> is an easy way to identify these cases and do the push-down for them.\n> So for now I think we'd better refrain from doing that.\n>\n> Thanks\n> Richard\n>\n\nHey Richard,Looking more closely at this example>select t2.a, count(*) from t t1 left join t t2 on t2.b > 1 group by t2.a having t2.a is null;I wonder if the inability to exploit eager aggregation is more based on the fact that COUNT(*) cannot be decomposed into an aggregation of PARTIAL COUNT(*)s (apologies if my terminology is off/made up...I'm new to the codebase). In other words, is it the case that a given aggregate function already has built-in protection against the error case you correctly pointed out?To highlight this, in the simple example below we don't see aggregate pushdown even with an INNER JOIN when the agg function is COUNT(*) but we do when it's COUNT(t2.*):-- same setupdrop table if exists t;create table t(a int, b int, c int);insert into t select i % 100, i % 10, i from generate_series(1, 1000) i;analyze t;-- query 1: COUNT(*) --> no pushdownset enable_eager_aggregate=on;explain (verbose, costs off) select t1.a, count(*) from t t1 join t t2 on t1.a=t2.a group by t1.a; QUERY PLAN ------------------------------------------- HashAggregate Output: t1.a, count(*) Group Key: t1.a -> Hash Join Output: t1.a Hash Cond: (t1.a = t2.a) -> Seq Scan on public.t t1 Output: t1.a, t1.b, t1.c -> Hash Output: t2.a -> Seq Scan on public.t t2 Output: t2.a(12 rows)-- query 2: COUNT(t2.*) --> agg pushdownset enable_eager_aggregate=on;explain (verbose, costs off) select t1.a, count(t2.*) from t t1 join t t2 on t1.a=t2.a group by t1.a; QUERY PLAN ------------------------------------------------------- Finalize HashAggregate Output: t1.a, count(t2.*) Group Key: t1.a -> Hash Join Output: t1.a, (PARTIAL count(t2.*)) Hash Cond: (t1.a = t2.a) -> Seq Scan on public.t t1 Output: t1.a, t1.b, t1.c -> Hash Output: t2.a, (PARTIAL count(t2.*)) -> Partial HashAggregate Output: t2.a, PARTIAL count(t2.*) Group Key: t2.a -> Seq Scan on public.t t2 Output: t2.*, t2.a(15 rows)...while it might be true that COUNT(*) ... 
INNER JOIN should allow eager agg pushdown (I haven't thought deeply about it, TBH), I did find this result pretty interesting.-PaulOn Wed, Jul 10, 2024 at 1:27 AM Richard Guo <[email protected]> wrote:On Sun, Jul 7, 2024 at 10:45 AM Paul George <[email protected]> wrote:\n> Thanks for reviving this patch and for all of your work on it! Eager aggregation pushdown will be beneficial for my work and I'm hoping to see it land.\n\nThanks for looking at this patch!\n\n> The output of both the original query and this one match (and the plans with eager aggregation and the subquery are nearly identical if you restore the LEFT JOIN to a JOIN). I admittedly may be missing a subtlety, but does this mean that there are conditions under which eager aggregation can be pushed down to the nullable side?\n\nI think it's a very risky thing to push a partial aggregation down to\nthe nullable side of an outer join, because the NULL-extended rows\nproduced by the outer join would not be available when we perform the\npartial aggregation, while with a non-eager-aggregation plan these\nrows are available for the top-level aggregation. This may put the\nrows into groups in a different way than expected, or get wrong values\nfrom the aggregate functions. I've managed to compose an example:\n\ncreate table t (a int, b int);\ninsert into t select 1, 1;\n\nselect t2.a, count(*) from t t1 left join t t2 on t2.b > 1 group by\nt2.a having t2.a is null;\n a | count\n---+-------\n | 1\n(1 row)\n\nThis is the expected result, because after the outer join we have got\na NULL-extended row.\n\nBut if we somehow push down the partial aggregation to the nullable\nside of this outer join, we would get a wrong result.\n\nexplain (costs off)\nselect t2.a, count(*) from t t1 left join t t2 on t2.b > 1 group by\nt2.a having t2.a is null;\n QUERY PLAN\n-------------------------------------------\n Finalize HashAggregate\n Group Key: t2.a\n -> Nested Loop Left Join\n Filter: (t2.a IS NULL)\n -> Seq Scan on t t1\n -> Materialize\n -> Partial HashAggregate\n Group Key: t2.a\n -> Seq Scan on t t2\n Filter: (b > 1)\n(10 rows)\n\nselect t2.a, count(*) from t t1 left join t t2 on t2.b > 1 group by\nt2.a having t2.a is null;\n a | count\n---+-------\n | 0\n(1 row)\n\nI believe there are cases where pushing a partial aggregation down to\nthe nullable side of an outer join can be safe, but I doubt that there\nis an easy way to identify these cases and do the push-down for them.\nSo for now I think we'd better refrain from doing that.\n\nThanks\nRichard",
"msg_date": "Thu, 11 Jul 2024 14:50:35 -0700",
"msg_from": "Paul George <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "I had a self-review of this patchset and made some refactoring,\nespecially to the function that creates the RelAggInfo structure for a\ngiven relation. While there were no major changes, the code should\nnow be simpler.\n\nAttached is the updated version of the patchset. Previously, the\npatchset was not well-split, which made it time-consuming to\ndistribute the changes across the patches during the refactoring. So\nI squashed them into two patches to save effort.\n\nThanks\nRichard",
"msg_date": "Fri, 16 Aug 2024 16:14:41 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Fri, Aug 16, 2024 at 4:14 PM Richard Guo <[email protected]> wrote:\n> I had a self-review of this patchset and made some refactoring,\n> especially to the function that creates the RelAggInfo structure for a\n> given relation. While there were no major changes, the code should\n> now be simpler.\n\nI found a bug in v10 patchset: when we generate the GROUP BY clauses\nfor the partial aggregation that is pushed down to a non-aggregated\nrelation, we may produce a clause with a tleSortGroupRef that\nduplicates one already present in the query's groupClause, which would\ncause problems.\n\nAttached is the updated version of the patchset that fixes this bug\nand includes further code refactoring.\n\nThanks\nRichard",
"msg_date": "Wed, 21 Aug 2024 15:10:51 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Wed, Aug 21, 2024 at 3:11 AM Richard Guo <[email protected]> wrote:\n> Attached is the updated version of the patchset that fixes this bug\n> and includes further code refactoring.\n\nHere are some initial, high-level thoughts about this patch set.\n\n1. As far as I can see, there's no real performance testing on this\nthread. I expect that it's possible to show an arbitrarily large gain\nfor the patch by finding a case where partial aggregation is way\nbetter than anything we currently know, but that's not very\ninteresting. What I think would be useful to do is find a corpus of\nexisting queries on an existing data set and try them with and without\nthe patch and see which query plans change and whether they're\nactually better. For example, maybe TPC-H or the subset of TPC-DS that\nwe can actually run would be a useful starting point. One could then\nalso measure how much the planning time increases with the patch to\nget a sense of what the overhead of enabling this feature would be.\nEven if it's disabled by default, people aren't going to want to\nenable it if it causes planning times to become much longer on many\nqueries for which there is no benefit.\n\n2. I think there might be techniques we could use to limit planning\neffort at an earlier stage when the approach doesn't appear promising.\nFor example, if the proposed grouping column is already unique, the\nexercise is pointless (I think). Ideally we'd like to detect that\nwithout even creating the grouped_rel. But the proposed grouping\ncolumn might also be *mostly* unique. For example, consider a table\nwith a million rows and a column 500,000 distinct values. I suspect it\nwill be difficult for partial aggregation to work out to a win in a\ncase like this, because I think that the cost of performing the\npartial aggregation will not reduce the cost either of the final\naggregation or of the intervening join steps by enough to compensate.\nIt would be best to find a way to avoid generating a lot of rels and\npaths in cases where there's really not much hope of a win.\n\nOne could, perhaps, imagine going further with this by postponing\neager aggregation planning until after regular paths have been built,\nso that we have good cardinality estimates. Suppose the query joins a\nsingle fact table to a series of dimension tables. The final plan thus\nuses the fact table as the driving table and joins to the dimension\ntables one by one. Do we really need to consider partial aggregation\nat every level? Perhaps just where there's been a significant row\ncount reduction since the last time we tried it, but at the next level\nthe row count will increase again?\n\nMaybe there are other heuristics we could use in addition or instead.\n\n3. In general, we are quite bad at estimating what will happen to the\nrow count after an aggregation, and we have no real idea what the\ndistribution of values will be. That might be a problem for this\npatch, because it seems like the decisions we will make about where to\nperform the partial aggregation might end up being quite random. At\nthe top of the join tree, I'll need to compare directly aggregating\nthe best join path with various paths that involve a finalize\naggregation step at the top and a partial aggregation step further\ndown. 
But my cost estimates and row counts for the partial aggregate\nsteps seem like they will often be quite poor, which means that the\nplans that use those partial aggregate steps might also be quite poor.\nEven if they're not, I fear that comparing the cost of those\nPartialAggregate-Join(s)-FinalizeAggregate paths to the direct\nAggregate path will look too much like comparing random numbers. We\nneed to know whether the combination of the FinalizeAggregate step and\nthe PartialAggregate step will be more or less expensive than a plain\nold Aggregate, but how can we tell that if we don't have accurate\ncardinality estimates?\n\nThanks for working on this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Aug 2024 11:59:17 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
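One way to construct the "mostly unique grouping column" situation from point 2 of the review above, as a hypothetical test setup; the numbers mirror the example in the message (one million rows with roughly 500,000 distinct values), and the table names and the patch-added enable_eager_aggregate GUC are assumptions for illustration:

-- The grouping/join key g has ~500,000 distinct values over 1,000,000 rows,
-- so a pushed-down partial aggregate would only halve the row count, which
-- is unlikely to pay for the extra aggregation work.
CREATE TABLE ea_fact (g int, payload int);
INSERT INTO ea_fact SELECT i % 500000, i FROM generate_series(1, 1000000) i;
ANALYZE ea_fact;

CREATE TABLE ea_dim (g int PRIMARY KEY);
INSERT INTO ea_dim SELECT g FROM generate_series(0, 499999) g;
ANALYZE ea_dim;

SET enable_eager_aggregate = on;
EXPLAIN
SELECT d.g, sum(f.payload)
FROM ea_fact f JOIN ea_dim d ON d.g = f.g
GROUP BY d.g;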
{
"msg_contents": "Richard Guo <[email protected]> 于2024年8月21日周三 15:11写道:\n\n> On Fri, Aug 16, 2024 at 4:14 PM Richard Guo <[email protected]>\n> wrote:\n> > I had a self-review of this patchset and made some refactoring,\n> > especially to the function that creates the RelAggInfo structure for a\n> > given relation. While there were no major changes, the code should\n> > now be simpler.\n>\n> I found a bug in v10 patchset: when we generate the GROUP BY clauses\n> for the partial aggregation that is pushed down to a non-aggregated\n> relation, we may produce a clause with a tleSortGroupRef that\n> duplicates one already present in the query's groupClause, which would\n> cause problems.\n>\n> Attached is the updated version of the patchset that fixes this bug\n> and includes further code refactoring.\n>\n\nRectenly, I do some benchmark tests, mainly on tpch and tpcds.\ntpch tests have no plan diff, so I do not continue to test on tpch.\ntpcds(10GB) tests have 22 plan diff as below:\n4.sql, 5.sql, 8.sql,11.sql,19.sql,23.sql,31.sql, \n33.sql,39.sql,45.sql,46.sql,47.sql,53.sql,\n56.sql,57.sql,60.sql,63.sql,68.sql,74.sql,77.sql,80.sql,89.sql\n\nI haven't look all of them. I just pick few simple plan test(e.g. 19.sql,\n45.sql).\nFor example, 19.sql, eager agg pushdown doesn't get large gain, but a little\nperformance regress.\n\nI will continue to do benchmark on this feature.\n\n[1] https://github.com/tenderwg/eager_agg\n\n-- \nTender Wang\n",
"msg_date": "Wed, 28 Aug 2024 11:57:12 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 11:57 PM Tender Wang <[email protected]> wrote:\n> Rectenly, I do some benchmark tests, mainly on tpch and tpcds.\n> tpch tests have no plan diff, so I do not continue to test on tpch.\n\nInteresting to know.\n\n> tpcds(10GB) tests have 22 plan diff as below:\n> 4.sql, 5.sql, 8.sql,11.sql,19.sql,23.sql,31.sql, 33.sql,39.sql,45.sql,46.sql,47.sql,53.sql,\n> 56.sql,57.sql,60.sql,63.sql,68.sql,74.sql,77.sql,80.sql,89.sql\n\nOK.\n\n> I haven't look all of them. I just pick few simple plan test(e.g. 19.sql, 45.sql).\n> For example, 19.sql, eager agg pushdown doesn't get large gain, but a little\n> performance regress.\n\nYeah, this is one of the things I was worried about in my previous\nreply to Richard. It would be worth Richard, or someone, probing into\nexactly why that's happening. My fear is that we just don't have good\nenough estimates to make good decisions, but there might well be\nanother explanation.\n\n> I will continue to do benchmark on this feature.\n>\n> [1] https://github.com/tenderwg/eager_agg\n\nThanks!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Aug 2024 09:00:57 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Fri, Aug 23, 2024 at 11:59 PM Robert Haas <[email protected]> wrote:\n> Here are some initial, high-level thoughts about this patch set.\n\nThank you for your review and feedback! It helps a lot in moving this\nwork forward.\n\n> 1. As far as I can see, there's no real performance testing on this\n> thread. I expect that it's possible to show an arbitrarily large gain\n> for the patch by finding a case where partial aggregation is way\n> better than anything we currently know, but that's not very\n> interesting. What I think would be useful to do is find a corpus of\n> existing queries on an existing data set and try them with and without\n> the patch and see which query plans change and whether they're\n> actually better. For example, maybe TPC-H or the subset of TPC-DS that\n> we can actually run would be a useful starting point. One could then\n> also measure how much the planning time increases with the patch to\n> get a sense of what the overhead of enabling this feature would be.\n> Even if it's disabled by default, people aren't going to want to\n> enable it if it causes planning times to become much longer on many\n> queries for which there is no benefit.\n\nRight. I haven’t had time to run any benchmarks yet, but that is\nsomething I need to do.\n\n> 2. I think there might be techniques we could use to limit planning\n> effort at an earlier stage when the approach doesn't appear promising.\n> For example, if the proposed grouping column is already unique, the\n> exercise is pointless (I think). Ideally we'd like to detect that\n> without even creating the grouped_rel. But the proposed grouping\n> column might also be *mostly* unique. For example, consider a table\n> with a million rows and a column 500,000 distinct values. I suspect it\n> will be difficult for partial aggregation to work out to a win in a\n> case like this, because I think that the cost of performing the\n> partial aggregation will not reduce the cost either of the final\n> aggregation or of the intervening join steps by enough to compensate.\n> It would be best to find a way to avoid generating a lot of rels and\n> paths in cases where there's really not much hope of a win.\n>\n> One could, perhaps, imagine going further with this by postponing\n> eager aggregation planning until after regular paths have been built,\n> so that we have good cardinality estimates. Suppose the query joins a\n> single fact table to a series of dimension tables. The final plan thus\n> uses the fact table as the driving table and joins to the dimension\n> tables one by one. Do we really need to consider partial aggregation\n> at every level? Perhaps just where there's been a significant row\n> count reduction since the last time we tried it, but at the next level\n> the row count will increase again?\n>\n> Maybe there are other heuristics we could use in addition or instead.\n\nYeah, one of my concerns with this work is that it can use\nsignificantly more CPU time and memory during planning once enabled.\nIt would be great if we have some efficient heuristics to limit the\neffort. I'll work on that next and see what happens.\n\n> 3. In general, we are quite bad at estimating what will happen to the\n> row count after an aggregation, and we have no real idea what the\n> distribution of values will be. That might be a problem for this\n> patch, because it seems like the decisions we will make about where to\n> perform the partial aggregation might end up being quite random. 
At\n> the top of the join tree, I'll need to compare directly aggregating\n> the best join path with various paths that involve a finalize\n> aggregation step at the top and a partial aggregation step further\n> down. But my cost estimates and row counts for the partial aggregate\n> steps seem like they will often be quite poor, which means that the\n> plans that use those partial aggregate steps might also be quite poor.\n> Even if they're not, I fear that comparing the cost of those\n> PartialAggregate-Join(s)-FinalizeAggregate paths to the direct\n> Aggregate path will look too much like comparing random numbers. We\n> need to know whether the combination of the FinalizeAggregate step and\n> the PartialAggregate step will be more or less expensive than a plain\n> old Aggregate, but how can we tell that if we don't have accurate\n> cardinality estimates?\n\nYeah, I'm concerned about this too. In addition to the inaccuracies\nin aggregation estimates, our estimates for joins are sometimes not\nvery accurate either. All of this is likely to result in regressions\nwith eager aggregation in some cases. Currently I don't have a good\nanswer to this problem. Maybe we can run some benchmarks first and\ninvestigate the regressions discovered on a case-by-case basis to better\nunderstand the specific issues.\n\nThanks\nRichard\n\n\n",
"msg_date": "Thu, 29 Aug 2024 10:26:01 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 11:57 AM Tender Wang <[email protected]> wrote:\n> Rectenly, I do some benchmark tests, mainly on tpch and tpcds.\n> tpch tests have no plan diff, so I do not continue to test on tpch.\n> tpcds(10GB) tests have 22 plan diff as below:\n> 4.sql, 5.sql, 8.sql,11.sql,19.sql,23.sql,31.sql, 33.sql,39.sql,45.sql,46.sql,47.sql,53.sql,\n> 56.sql,57.sql,60.sql,63.sql,68.sql,74.sql,77.sql,80.sql,89.sql\n>\n> I haven't look all of them. I just pick few simple plan test(e.g. 19.sql, 45.sql).\n> For example, 19.sql, eager agg pushdown doesn't get large gain, but a little\n> performance regress.\n>\n> I will continue to do benchmark on this feature.\n\nThank you for running the benchmarks. That really helps a lot.\n\nThanks\nRichard\n\n\n",
"msg_date": "Thu, 29 Aug 2024 10:29:26 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 9:01 PM Robert Haas <[email protected]> wrote:\n> On Tue, Aug 27, 2024 at 11:57 PM Tender Wang <[email protected]> wrote:\n> > I haven't look all of them. I just pick few simple plan test(e.g. 19.sql, 45.sql).\n> > For example, 19.sql, eager agg pushdown doesn't get large gain, but a little\n> > performance regress.\n>\n> Yeah, this is one of the things I was worried about in my previous\n> reply to Richard. It would be worth Richard, or someone, probing into\n> exactly why that's happening. My fear is that we just don't have good\n> enough estimates to make good decisions, but there might well be\n> another explanation.\n\nIt's great that we have a query to probe into. Your guess is likely\ncorrect: it may be caused by poor estimates.\n\nTender, would you please help provide the outputs of\n\nEXPLAIN (COSTS ON, ANALYZE)\n\non 19.sql with and without eager aggregation?\n\n> > I will continue to do benchmark on this feature.\n\nThanks again for running the benchmarks.\n\nThanks\nRichard\n\n\n",
"msg_date": "Thu, 29 Aug 2024 10:45:58 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "Richard Guo <[email protected]> 于2024年8月29日周四 10:46写道:\n\n> On Wed, Aug 28, 2024 at 9:01 PM Robert Haas <[email protected]> wrote:\n> > On Tue, Aug 27, 2024 at 11:57 PM Tender Wang <[email protected]> wrote:\n> > > I haven't look all of them. I just pick few simple plan test(e.g.\n> 19.sql, 45.sql).\n> > > For example, 19.sql, eager agg pushdown doesn't get large gain, but a\n> little\n> > > performance regress.\n> >\n> > Yeah, this is one of the things I was worried about in my previous\n> > reply to Richard. It would be worth Richard, or someone, probing into\n> > exactly why that's happening. My fear is that we just don't have good\n> > enough estimates to make good decisions, but there might well be\n> > another explanation.\n>\n> It's great that we have a query to probe into. Your guess is likely\n> correct: it may be caused by poor estimates.\n>\n> Tender, would you please help provide the outputs of\n>\n> EXPLAIN (COSTS ON, ANALYZE)\n>\n> on 19.sql with and without eager aggregation?\n>\n\nYeah, in [1], 19_off.out and 19_on.out are the output of explain(costs off,\nanalyze).\nI will do EXPLAIN(COSTS ON, ANALYZE) tests and upload them later today.\n\n\n[1] https://github.com/tenderwg/eager_agg\n\n\n-- \nTender Wang\n",
"msg_date": "Thu, 29 Aug 2024 11:22:24 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "Richard Guo <[email protected]> 于2024年8月29日周四 10:46写道:\n\n> On Wed, Aug 28, 2024 at 9:01 PM Robert Haas <[email protected]> wrote:\n> > On Tue, Aug 27, 2024 at 11:57 PM Tender Wang <[email protected]> wrote:\n> > > I haven't look all of them. I just pick few simple plan test(e.g.\n> 19.sql, 45.sql).\n> > > For example, 19.sql, eager agg pushdown doesn't get large gain, but a\n> little\n> > > performance regress.\n> >\n> > Yeah, this is one of the things I was worried about in my previous\n> > reply to Richard. It would be worth Richard, or someone, probing into\n> > exactly why that's happening. My fear is that we just don't have good\n> > enough estimates to make good decisions, but there might well be\n> > another explanation.\n>\n> It's great that we have a query to probe into. Your guess is likely\n> correct: it may be caused by poor estimates.\n>\n> Tender, would you please help provide the outputs of\n>\n> EXPLAIN (COSTS ON, ANALYZE)\n>\n> on 19.sql with and without eager aggregation?\n>\n> I upload EXPLAIN(COSTS ON, ANALYZE) test to [1].\nI ran the same query three times, and I chose the third time result.\nYou can check 19_off_explain.out and 19_on_explain.out.\n\n\n[1] https://github.com/tenderwg/eager_agg\n\n\n-- \nTender Wang\n",
"msg_date": "Thu, 29 Aug 2024 11:38:11 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 10:26 PM Richard Guo <[email protected]> wrote:\n> Yeah, I'm concerned about this too. In addition to the inaccuracies\n> in aggregation estimates, our estimates for joins are sometimes not\n> very accurate either. All this are likely to result in regressions\n> with eager aggregation in some cases. Currently I don't have a good\n> answer to this problem. Maybe we can run some benchmarks first and\n> investigate the regressions discovered on a case-by-case basis to better\n> understand the specific issues.\n\nWhile it's true that we can make mistakes during join estimation, I\nbelieve aggregate estimation tends to be far worse.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Aug 2024 08:40:09 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 11:38 PM Tender Wang <[email protected]> wrote:\n> I upload EXPLAIN(COSTS ON, ANALYZE) test to [1].\n> I ran the same query three times, and I chose the third time result.\n> You can check 19_off_explain.out and 19_on_explain.out.\n\nSo, in 19_off_explain.out, we got this:\n\n -> Finalize GroupAggregate (cost=666986.48..667015.35\nrows=187 width=142) (actual time=272.649..334.318 rows=900 loops=1)\n -> Gather Merge (cost=666986.48..667010.21 rows=187\nwidth=142) (actual time=272.644..333.847 rows=901 loops=1)\n -> Partial GroupAggregate\n(cost=665986.46..665988.60 rows=78 width=142) (actual\ntime=266.379..267.476 rows=300 loops=3)\n -> Sort (cost=665986.46..665986.65\nrows=78 width=116) (actual time=266.367..266.583 rows=5081 loops=3)\n\nAnd in 19_on_explan.out, we got this:\n\n -> Finalize GroupAggregate (cost=666987.03..666989.77\nrows=19 width=142) (actual time=285.018..357.374 rows=900 loops=1)\n -> Gather Merge (cost=666987.03..666989.25 rows=19\nwidth=142) (actual time=285.000..352.793 rows=15242 loops=1)\n -> Sort (cost=665987.01..665987.03 rows=8\nwidth=142) (actual time=273.391..273.580 rows=5081 loops=3)\n -> Nested Loop (cost=665918.00..665986.89\nrows=8 width=142) (actual time=252.667..269.719 rows=5081 loops=3)\n -> Nested Loop\n(cost=665917.85..665985.43 rows=8 width=157) (actual\ntime=252.656..264.755 rows=5413 loops=3)\n -> Partial GroupAggregate\n(cost=665917.43..665920.10 rows=82 width=150) (actual\ntime=252.643..255.627 rows=5413 loops=3)\n -> Sort\n(cost=665917.43..665917.64 rows=82 width=124) (actual\ntime=252.636..252.927 rows=5413 loops=3)\n\nSo, the patch was expected to cause the number of rows passing through\nthe Gather Merge to decrease from 197 to 19, but actually caused the\nnumber of rows passing through the Gather Merge to increase from 901\nto 15242. When the PartialAggregate was positioned at the top of the\njoin tree, it reduced the number of rows from 5081 to 300; but when it\nwas pushed down below two joins, it didn't reduce the row count at\nall, and the subsequent two joins reduced it by less than 10%.\n\nNow, you could complain about the fact that the Parallel Hash Join\nisn't well-estimated here, but my question is: why does the planner\nthink that the PartialAggregate should go specifically here? In both\nplans, the PartialAggregate isn't expected to change the row count.\nAnd if that is true, then it's going to be cheapest to do it at the\npoint where the joins have reduced the row count to the minimum value.\nHere, that would be at the top of the plan tree, where we have only\n5081 estimated rows, but instead, the patch chooses to do it as soon\nas we have all of the grouping columns, when we. still have 5413 rows.\nI don't understand why that path wins on cost, unless it's just that\nthe paths compare fuzzily the same, in which case it kind of goes to\nmy earlier point about not really having the statistics to know which\nway is actually going to be better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Aug 2024 09:02:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "Richard Guo <[email protected]> 于2024年8月21日周三 15:11写道:\n\n> On Fri, Aug 16, 2024 at 4:14 PM Richard Guo <[email protected]>\n> wrote:\n> > I had a self-review of this patchset and made some refactoring,\n> > especially to the function that creates the RelAggInfo structure for a\n> > given relation. While there were no major changes, the code should\n> > now be simpler.\n>\n> I found a bug in v10 patchset: when we generate the GROUP BY clauses\n> for the partial aggregation that is pushed down to a non-aggregated\n> relation, we may produce a clause with a tleSortGroupRef that\n> duplicates one already present in the query's groupClause, which would\n> cause problems.\n>\n> Attached is the updated version of the patchset that fixes this bug\n> and includes further code refactoring.\n>\n\nThe v11-0002 git am failed on HEAD(6c2b5edecc).\n\ntender@iZ2ze6la2dizi7df9q3xheZ:/workspace/postgres$ git am\nv11-0002-Implement-Eager-Aggregation.patch\nApplying: Implement Eager Aggregation\nerror: patch failed: src/test/regress/parallel_schedule:119\nerror: src/test/regress/parallel_schedule: patch does not apply\nPatch failed at 0001 Implement Eager Aggregation\nhint: Use 'git am --show-current-patch=diff' to see the failed patch\nWhen you have resolved this problem, run \"git am --continue\".\nIf you prefer to skip this patch, run \"git am --skip\" instead.\nTo restore the original branch and stop patching, run \"git am --abort\".\n\n\n\n-- \nThanks,\nTender Wang\n\nRichard Guo <[email protected]> 于2024年8月21日周三 15:11写道:On Fri, Aug 16, 2024 at 4:14 PM Richard Guo <[email protected]> wrote:\n> I had a self-review of this patchset and made some refactoring,\n> especially to the function that creates the RelAggInfo structure for a\n> given relation. While there were no major changes, the code should\n> now be simpler.\n\nI found a bug in v10 patchset: when we generate the GROUP BY clauses\nfor the partial aggregation that is pushed down to a non-aggregated\nrelation, we may produce a clause with a tleSortGroupRef that\nduplicates one already present in the query's groupClause, which would\ncause problems.\n\nAttached is the updated version of the patchset that fixes this bug\nand includes further code refactoring.The v11-0002 git am failed on HEAD(6c2b5edecc).tender@iZ2ze6la2dizi7df9q3xheZ:/workspace/postgres$ git am v11-0002-Implement-Eager-Aggregation.patchApplying: Implement Eager Aggregationerror: patch failed: src/test/regress/parallel_schedule:119error: src/test/regress/parallel_schedule: patch does not applyPatch failed at 0001 Implement Eager Aggregationhint: Use 'git am --show-current-patch=diff' to see the failed patchWhen you have resolved this problem, run \"git am --continue\".If you prefer to skip this patch, run \"git am --skip\" instead.To restore the original branch and stop patching, run \"git am --abort\". -- Thanks,Tender Wang",
"msg_date": "Wed, 4 Sep 2024 11:48:30 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "Richard Guo <[email protected]> 于2024年8月21日周三 15:11写道:\n\n> On Fri, Aug 16, 2024 at 4:14 PM Richard Guo <[email protected]>\n> wrote:\n> > I had a self-review of this patchset and made some refactoring,\n> > especially to the function that creates the RelAggInfo structure for a\n> > given relation. While there were no major changes, the code should\n> > now be simpler.\n>\n> I found a bug in v10 patchset: when we generate the GROUP BY clauses\n> for the partial aggregation that is pushed down to a non-aggregated\n> relation, we may produce a clause with a tleSortGroupRef that\n> duplicates one already present in the query's groupClause, which would\n> cause problems.\n>\n> Attached is the updated version of the patchset that fixes this bug\n> and includes further code refactoring.\n\n\n I review the v11 patch set, and here are a few of my thoughts:\n\n1. in setup_eager_aggregation(), before calling create_agg_clause_infos(),\nit does\nsome checks if eager aggregation is available. Can we move those checks\ninto a function,\nfor example, can_eager_agg(), like can_partial_agg() does?\n\n2. I found that outside of joinrel.c we all use IS_DUMMY_REL, but in\njoinrel.c, Tom always uses\nis_dummy_rel(). Other commiters use IS_DUMMY_REL.\n\n3. The attached patch does not consider FDW when creating a path for\ngrouped_rel or grouped_join.\nDo we need to think about FDW?\n\nI haven't finished reviewing the patch set. I will continue to learn this\nfeature.\n\n-- \nThanks,\nTender Wang\n\nRichard Guo <[email protected]> 于2024年8月21日周三 15:11写道:On Fri, Aug 16, 2024 at 4:14 PM Richard Guo <[email protected]> wrote:\n> I had a self-review of this patchset and made some refactoring,\n> especially to the function that creates the RelAggInfo structure for a\n> given relation. While there were no major changes, the code should\n> now be simpler.\n\nI found a bug in v10 patchset: when we generate the GROUP BY clauses\nfor the partial aggregation that is pushed down to a non-aggregated\nrelation, we may produce a clause with a tleSortGroupRef that\nduplicates one already present in the query's groupClause, which would\ncause problems.\n\nAttached is the updated version of the patchset that fixes this bug\nand includes further code refactoring. I review the v11 patch set, and here are a few of my thoughts: 1. in setup_eager_aggregation(), before calling create_agg_clause_infos(), it doessome checks if eager aggregation is available. Can we move those checks into a function,for example, can_eager_agg(), like can_partial_agg() does?2. I found that outside of joinrel.c we all use IS_DUMMY_REL, but in joinrel.c, Tom always usesis_dummy_rel(). Other commiters use IS_DUMMY_REL. 3. The attached patch does not consider FDW when creating a path for grouped_rel or grouped_join.Do we need to think about FDW?I haven't finished reviewing the patch set. I will continue to learn this feature.-- Thanks,Tender Wang",
"msg_date": "Thu, 5 Sep 2024 09:40:18 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "Richard Guo <[email protected]> 于2024年8月21日周三 15:11写道:\n\n> On Fri, Aug 16, 2024 at 4:14 PM Richard Guo <[email protected]>\n> wrote:\n> > I had a self-review of this patchset and made some refactoring,\n> > especially to the function that creates the RelAggInfo structure for a\n> > given relation. While there were no major changes, the code should\n> > now be simpler.\n>\n> I found a bug in v10 patchset: when we generate the GROUP BY clauses\n> for the partial aggregation that is pushed down to a non-aggregated\n> relation, we may produce a clause with a tleSortGroupRef that\n> duplicates one already present in the query's groupClause, which would\n> cause problems.\n>\n> Attached is the updated version of the patchset that fixes this bug\n> and includes further code refactoring.\n>\n>\nI continue to review the v11 version patches. Here are some my thoughts.\n\n1. In make_one_rel(), we have the below codes:\n/*\n* Build grouped base relations for each base rel if possible.\n*/\nsetup_base_grouped_rels(root);\n\nAs far as I know, each base rel only has one grouped base relation, if\npossible.\nThe comments may be changed to \"Build a grouped base relation for each base\nrel if possible.\"\n\n2. According to the comments of generate_grouped_paths(), we may generate\npaths for a grouped\nrelation on top of paths of join relation. So the ”rel_plain\" argument in\ngenerate_grouped_paths() may be\nconfused. \"plain\" usually means \"base rel\" . How about Re-naming rel_plain\nto input_rel?\n\n3. In create_partial_grouping_paths(), The partially_grouped_rel could have\nbeen already created due to eager\naggregation. If partially_grouped_rel exists, its reltarget has been\ncreated. So do we need below logic?\n\n/*\n* Build target list for partial aggregate paths. These paths cannot just\n* emit the same tlist as regular aggregate paths, because (1) we must\n* include Vars and Aggrefs needed in HAVING, which might not appear in\n* the result tlist, and (2) the Aggrefs must be set in partial mode.\n*/\npartially_grouped_rel->reltarget =\n make_partial_grouping_target(root, grouped_rel->reltarget,\n extra->havingQual);\n\n\n--\nThanks,\nTender Wang\n\nRichard Guo <[email protected]> 于2024年8月21日周三 15:11写道:On Fri, Aug 16, 2024 at 4:14 PM Richard Guo <[email protected]> wrote:\n> I had a self-review of this patchset and made some refactoring,\n> especially to the function that creates the RelAggInfo structure for a\n> given relation. While there were no major changes, the code should\n> now be simpler.\n\nI found a bug in v10 patchset: when we generate the GROUP BY clauses\nfor the partial aggregation that is pushed down to a non-aggregated\nrelation, we may produce a clause with a tleSortGroupRef that\nduplicates one already present in the query's groupClause, which would\ncause problems.\n\nAttached is the updated version of the patchset that fixes this bug\nand includes further code refactoring.\nI continue to review the v11 version patches. Here are some my thoughts.1. In make_one_rel(), we have the below codes:/*\t * Build grouped base relations for each base rel if possible.\t */\tsetup_base_grouped_rels(root);As far as I know, each base rel only has one grouped base relation, if possible.The comments may be changed to \"Build a grouped base relation for each base rel if possible.\"2. According to the comments of generate_grouped_paths(), we may generate paths for a grouped relation on top of paths of join relation. 
So the ”rel_plain\" argument in generate_grouped_paths() may beconfused. \"plain\" usually means \"base rel\" . How about Re-naming rel_plain to input_rel?3. In create_partial_grouping_paths(), The partially_grouped_rel could have been already created due to eageraggregation. If partially_grouped_rel exists, its reltarget has been created. So do we need below logic?/*\t * Build target list for partial aggregate paths. These paths cannot just\t * emit the same tlist as regular aggregate paths, because (1) we must\t * include Vars and Aggrefs needed in HAVING, which might not appear in\t * the result tlist, and (2) the Aggrefs must be set in partial mode.\t */\tpartially_grouped_rel->reltarget = make_partial_grouping_target(root, grouped_rel->reltarget, extra->havingQual);--Thanks,Tender Wang",
"msg_date": "Wed, 11 Sep 2024 10:52:05 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "Tender Wang <[email protected]> 于2024年9月4日周三 11:48写道:\n\n>\n>\n> Richard Guo <[email protected]> 于2024年8月21日周三 15:11写道:\n>\n>> On Fri, Aug 16, 2024 at 4:14 PM Richard Guo <[email protected]>\n>> wrote:\n>> > I had a self-review of this patchset and made some refactoring,\n>> > especially to the function that creates the RelAggInfo structure for a\n>> > given relation. While there were no major changes, the code should\n>> > now be simpler.\n>>\n>> I found a bug in v10 patchset: when we generate the GROUP BY clauses\n>> for the partial aggregation that is pushed down to a non-aggregated\n>> relation, we may produce a clause with a tleSortGroupRef that\n>> duplicates one already present in the query's groupClause, which would\n>> cause problems.\n>>\n>> Attached is the updated version of the patchset that fixes this bug\n>> and includes further code refactoring.\n>>\n>\n> The v11-0002 git am failed on HEAD(6c2b5edecc).\n>\n> tender@iZ2ze6la2dizi7df9q3xheZ:/workspace/postgres$ git am\n> v11-0002-Implement-Eager-Aggregation.patch\n> Applying: Implement Eager Aggregation\n> error: patch failed: src/test/regress/parallel_schedule:119\n> error: src/test/regress/parallel_schedule: patch does not apply\n> Patch failed at 0001 Implement Eager Aggregation\n> hint: Use 'git am --show-current-patch=diff' to see the failed patch\n> When you have resolved this problem, run \"git am --continue\".\n> If you prefer to skip this patch, run \"git am --skip\" instead.\n> To restore the original branch and stop patching, run \"git am --abort\".\n>\n>\nSince MERGE/SPLIT partition has been reverted, the tests *partition_merge*\nand *partition_split* should be removed\nfrom parallel_schedule. After doing the above, the 0002 patch can be\napplied.\n\n-- \nThanks,\nTender Wang\n\nTender Wang <[email protected]> 于2024年9月4日周三 11:48写道:Richard Guo <[email protected]> 于2024年8月21日周三 15:11写道:On Fri, Aug 16, 2024 at 4:14 PM Richard Guo <[email protected]> wrote:\n> I had a self-review of this patchset and made some refactoring,\n> especially to the function that creates the RelAggInfo structure for a\n> given relation. While there were no major changes, the code should\n> now be simpler.\n\nI found a bug in v10 patchset: when we generate the GROUP BY clauses\nfor the partial aggregation that is pushed down to a non-aggregated\nrelation, we may produce a clause with a tleSortGroupRef that\nduplicates one already present in the query's groupClause, which would\ncause problems.\n\nAttached is the updated version of the patchset that fixes this bug\nand includes further code refactoring.The v11-0002 git am failed on HEAD(6c2b5edecc).tender@iZ2ze6la2dizi7df9q3xheZ:/workspace/postgres$ git am v11-0002-Implement-Eager-Aggregation.patchApplying: Implement Eager Aggregationerror: patch failed: src/test/regress/parallel_schedule:119error: src/test/regress/parallel_schedule: patch does not applyPatch failed at 0001 Implement Eager Aggregationhint: Use 'git am --show-current-patch=diff' to see the failed patchWhen you have resolved this problem, run \"git am --continue\".If you prefer to skip this patch, run \"git am --skip\" instead.To restore the original branch and stop patching, run \"git am --abort\".Since MERGE/SPLIT partition has been reverted, the tests *partition_merge* and *partition_split* should be removedfrom parallel_schedule. After doing the above, the 0002 patch can be applied.-- Thanks,Tender Wang",
"msg_date": "Fri, 13 Sep 2024 15:48:20 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 9:01 PM Robert Haas <[email protected]> wrote:\n> On Tue, Aug 27, 2024 at 11:57 PM Tender Wang <[email protected]> wrote:\n> > I haven't look all of them. I just pick few simple plan test(e.g. 19.sql, 45.sql).\n> > For example, 19.sql, eager agg pushdown doesn't get large gain, but a little\n> > performance regress.\n>\n> Yeah, this is one of the things I was worried about in my previous\n> reply to Richard. It would be worth Richard, or someone, probing into\n> exactly why that's happening. My fear is that we just don't have good\n> enough estimates to make good decisions, but there might well be\n> another explanation.\n\nSorry it takes some time to switch back to this thread.\n\nI revisited the part about cost estimates for grouped paths in this\npatch, and I found a big issue: the row estimate for a join path could\nbe significantly inaccurate if there is a grouped join path beneath\nit.\n\nThe reason is that it is very tricky to set the size estimates for a\ngrouped join relation. For a non-grouped join relation, we know that\nall its paths have the same rowcount estimate (well, in theory). But\nthis is not true for a grouped join relation. Suppose we have a\ngrouped join relation for t1/t2 join. There might be two paths for\nit:\n\nAggregate\n -> Join\n -> Scan on t1\n -> Scan on t2\n\nOr\n\nJoin\n -> Scan on t1\n -> Aggregate\n -> Scan on t2\n\nThese two paths can have very different rowcount estimates, and we\nhave no way of knowing which one to set for this grouped join\nrelation, because we do not know which path would be picked in the\nfinal plan. This issue can be illustrated with the query below.\n\ncreate table t (a int, b int, c int);\ninsert into t select i%10, i%10, i%10 from generate_series(1,1000)i;\nanalyze t;\n\nset enable_eager_aggregate to on;\n\nexplain (costs on)\nselect sum(t2.c) from t t1 join t t2 on t1.a = t2.a join t t3 on t2.b\n= t3.b group by t3.a;\n QUERY PLAN\n---------------------------------------------------------------------------------------\n Finalize HashAggregate (cost=6840.60..6840.70 rows=10 width=12)\n Group Key: t3.a\n -> Nested Loop (cost=1672.00..1840.60 rows=1000000 width=12)\n Join Filter: (t2.b = t3.b)\n -> Partial HashAggregate (cost=1672.00..1672.10 rows=10 width=12)\n Group Key: t2.b\n -> Hash Join (cost=28.50..1172.00 rows=100000 width=8)\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on t t1 (cost=0.00..16.00 rows=1000 width=4)\n -> Hash (cost=16.00..16.00 rows=1000 width=12)\n -> Seq Scan on t t2 (cost=0.00..16.00\nrows=1000 width=12)\n -> Materialize (cost=0.00..21.00 rows=1000 width=8)\n -> Seq Scan on t t3 (cost=0.00..16.00 rows=1000 width=8)\n(13 rows)\n\nLook at the Nested Loop node:\n\n -> Nested Loop (cost=1672.00..1840.60 rows=1000000 width=12)\n\nHow can a 10-row outer path joining a 1000-row inner path generate\n1000000 rows? This is because we are using the plan of the first path\ndescribed above, and the rowcount estimate of the second path. What a\nkluge!\n\nTo address this issue, one solution I’m considering is to recalculate\nthe row count estimate for a grouped join path using its outer and\ninner paths. While this may seem expensive, it might not be that bad\nsince we will cache the results of the selectivity calculation. In\nfact, this is already the approach we take for parameterized join\npaths (see get_parameterized_joinrel_size).\n\nAny thoughts on this?\n\nThanks\nRichard\n\n\n",
"msg_date": "Wed, 25 Sep 2024 11:20:14 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Thu, Sep 5, 2024 at 9:40 AM Tender Wang <[email protected]> wrote:\n> 1. in setup_eager_aggregation(), before calling create_agg_clause_infos(), it does\n> some checks if eager aggregation is available. Can we move those checks into a function,\n> for example, can_eager_agg(), like can_partial_agg() does?\n\nWe can do this, but I'm not sure this would be better.\n\n> 2. I found that outside of joinrel.c we all use IS_DUMMY_REL, but in joinrel.c, Tom always uses\n> is_dummy_rel(). Other commiters use IS_DUMMY_REL.\n\nThey are essentially the same: IS_DUMMY_REL() is a macro that wraps\nis_dummy_rel(). I think they are interchangeable, and I don’t have a\npreference for which one is better.\n\n> 3. The attached patch does not consider FDW when creating a path for grouped_rel or grouped_join.\n> Do we need to think about FDW?\n\nWe may add support for foreign relations in the future, but for now, I\nthink we'd better not expand the scope too much until we ensure that\neverything is working correctly.\n\nThanks\nRichard\n\n\n",
"msg_date": "Wed, 25 Sep 2024 14:55:00 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 10:52 AM Tender Wang <[email protected]> wrote:\n> 1. In make_one_rel(), we have the below codes:\n> /*\n> * Build grouped base relations for each base rel if possible.\n> */\n> setup_base_grouped_rels(root);\n>\n> As far as I know, each base rel only has one grouped base relation, if possible.\n> The comments may be changed to \"Build a grouped base relation for each base rel if possible.\"\n\nYeah, each base rel has only one grouped rel. However, there is a\ncomment nearby stating 'consider_parallel flags for each base rel',\nwhich confuses me about whether it should be singular or plural in\nthis context. Perhaps someone more proficient in English could\nclarify this.\n\n> 2. According to the comments of generate_grouped_paths(), we may generate paths for a grouped\n> relation on top of paths of join relation. So the ”rel_plain\" argument in generate_grouped_paths() may be\n> confused. \"plain\" usually means \"base rel\" . How about Re-naming rel_plain to input_rel?\n\nI don't think 'plain relation' necessarily means 'base relation'. In\nthis context I think it can mean 'non-grouped relation'. But maybe\nI'm wrong.\n\n> 3. In create_partial_grouping_paths(), The partially_grouped_rel could have been already created due to eager\n> aggregation. If partially_grouped_rel exists, its reltarget has been created. So do we need below logic?\n>\n> /*\n> * Build target list for partial aggregate paths. These paths cannot just\n> * emit the same tlist as regular aggregate paths, because (1) we must\n> * include Vars and Aggrefs needed in HAVING, which might not appear in\n> * the result tlist, and (2) the Aggrefs must be set in partial mode.\n> */\n> partially_grouped_rel->reltarget =\n> make_partial_grouping_target(root, grouped_rel->reltarget,\n> extra->havingQual);\n\nYeah, maybe we can avoid building the target list here for\npartially_grouped_rel that is generated by eager aggregation.\n\nThanks\nRichard\n\n\n",
"msg_date": "Wed, 25 Sep 2024 15:02:51 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Fri, Sep 13, 2024 at 3:48 PM Tender Wang <[email protected]> wrote:\n> Since MERGE/SPLIT partition has been reverted, the tests *partition_merge* and *partition_split* should be removed\n> from parallel_schedule. After doing the above, the 0002 patch can be applied.\n\nYeah, that's what I need to do.\n\nThanks\nRichard\n\n\n",
"msg_date": "Wed, 25 Sep 2024 15:12:57 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
},
{
"msg_contents": "On Wed, Sep 25, 2024 at 11:20 AM Richard Guo <[email protected]> wrote:\n> Look at the Nested Loop node:\n>\n> -> Nested Loop (cost=1672.00..1840.60 rows=1000000 width=12)\n>\n> How can a 10-row outer path joining a 1000-row inner path generate\n> 1000000 rows? This is because we are using the plan of the first path\n> described above, and the rowcount estimate of the second path. What a\n> kluge!\n>\n> To address this issue, one solution I’m considering is to recalculate\n> the row count estimate for a grouped join path using its outer and\n> inner paths. While this may seem expensive, it might not be that bad\n> since we will cache the results of the selectivity calculation. In\n> fact, this is already the approach we take for parameterized join\n> paths (see get_parameterized_joinrel_size).\n\nHere is an updated version of this patch that fixes the rowcount\nestimate issue along this routine. (see set_joinpath_size.)\n\nNow the Nested Loop node looks like:\n\n -> Nested Loop (cost=1672.00..1840.60 rows=1000 width=12)\n (actual time=119.685..122.841 rows=1000 loops=1)\n\nIts rowcount estimate looks much more sane now.\n\nBut wait, why are we using nestloop here? My experience suggests that\nhashjoin typically outperforms nestloop with input paths of this size\non this type of dataset.\n\nThe thing is, the first path (join-then-aggregate one) of the t1/t2\ngrouped join relation has a much fewer rowcount but more expensive\ncosts:\n\n :path.rows 10\n :path.disabled_nodes 0\n :path.startup_cost 1672\n :path.total_cost 1672.1\n\nAnd the second path (aggregate-then-join one) has cheaper costs but\nmore rows.\n\n :jpath.path.rows 10000\n :jpath.path.disabled_nodes 0\n :jpath.path.startup_cost 25.75\n :jpath.path.total_cost 156.75\n\nBoth paths have survived the add_path() tournament for this relation,\nand the second one is selected as the cheapest path by set_cheapest,\nwhich mainly uses costs and then pathkeys as the selection criterion.\nThe rowcount estimate is not taken into account, which is reasonable\nbecause unparameterized paths for the same relation usually have the\nsame rowcount estimate. And when creating hashjoins, we only consider\nthe cheapest input paths. This is why we are unable to generate a\nhashjoin with the first path.\n\nHowever, the situation changes with grouped relations, as different\npaths of a grouped relation can have very different row counts. To\ncope with this, I modified set_cheapest() to also find the fewest-row\nunparameterized path if the relation is a grouped relation, and\ninclude it in the cheapest_parameterized_paths list. It could be\nargued that this will increase the overall planning time a lot because\nit adds one more path to cheapest_parameterized_paths. 
But in many\ncases the fewest-row-path is the same path as cheapest_total_path, in\nwhich case we do not need to add it again.\n\nAnd now the plan becomes:\n\nexplain (costs on)\nselect sum(t2.c) from t t1 join t t2 on t1.a = t2.a join t t3 on t2.b\n= t3.b group by t3.a;\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Finalize HashAggregate (cost=1706.97..1707.07 rows=10 width=12)\n Group Key: t3.a\n -> Hash Join (cost=1672.22..1701.97 rows=1000 width=12)\n Hash Cond: (t3.b = t2.b)\n -> Seq Scan on t t3 (cost=0.00..16.00 rows=1000 width=8)\n -> Hash (cost=1672.10..1672.10 rows=10 width=12)\n -> Partial HashAggregate (cost=1672.00..1672.10\nrows=10 width=12)\n Group Key: t2.b\n -> Hash Join (cost=28.50..1172.00 rows=100000 width=8)\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on t t1 (cost=0.00..16.00\nrows=1000 width=4)\n -> Hash (cost=16.00..16.00 rows=1000 width=12)\n -> Seq Scan on t t2\n(cost=0.00..16.00 rows=1000 width=12)\n(13 rows)\n\nI believe this is the most optimal plan we can find for this query on\nthis dataset.\n\nI also made some changes to how grouped relations are stored in this\nversion of the patch.\n\nThanks\nRichard",
"msg_date": "Fri, 27 Sep 2024 11:53:43 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eager aggregation, take 3"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI wanted to hook into the EXPLAIN output for queries and add some extra\ninformation, but since there is no standard_ExplainOneQuery() I had to copy\nthe code and create my own version.\n\nSince the pattern with other hooks for a function WhateverFunction() seems\nto be that there is a standard_WhateverFunction() for each\nWhateverFunction_hook, I created a patch to follow this pattern for your\nconsideration.\n\nI was also considering adding a callback so that you can annotate any node\nwith explanatory information that is not a custom scan node. This could be\nused to propagate and summarize information from custom scan nodes, but I\nhad no immediate use for that so did not add it here. I would still be\ninterested in hearing if you think this is something that would be useful\nto the community.\n\nBest wishes,\nMats Kindahl, Timescale",
"msg_date": "Mon, 4 Mar 2024 12:59:46 +0100",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hooking into ExplainOneQuery() complicated by missing\n standard_ExplainOneQuery"
},
{
"msg_contents": "Hi,\n\n> I wanted to hook into the EXPLAIN output for queries and add some extra information, but since there is no standard_ExplainOneQuery() I had to copy the code and create my own version.\n>\n> Since the pattern with other hooks for a function WhateverFunction() seems to be that there is a standard_WhateverFunction() for each WhateverFunction_hook, I created a patch to follow this pattern for your consideration.\n>\n> I was also considering adding a callback so that you can annotate any node with explanatory information that is not a custom scan node. This could be used to propagate and summarize information from custom scan nodes, but I had no immediate use for that so did not add it here. I would still be interested in hearing if you think this is something that would be useful to the community.\n\nThanks for the patch. LGTM.\n\nI registered the patch on the nearest open CF [1] and marked it as\nRfC. It is a pretty straightforward refactoring.\n\n[1]: https://commitfest.postgresql.org/48/4879/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Mar 2024 15:41:16 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hooking into ExplainOneQuery() complicated by missing\n standard_ExplainOneQuery"
},
{
"msg_contents": "On Mon, Mar 04, 2024 at 03:41:16PM +0300, Aleksander Alekseev wrote:\n>> I wanted to hook into the EXPLAIN output for queries and add some\n>> extra information, but since there is no standard_ExplainOneQuery() I\n>> had to copy the code and create my own version. \n>>\n>> Since the pattern with other hooks for a function\n>> WhateverFunction() seems to be that there is a\n>> standard_WhateverFunction() for each WhateverFunction_hook, I\n>> created a patch to follow this pattern for your consideration.\n\nSo you've wanted to be able to add some custom information at the end\nor the beginning of ExplainState's output buffer, before falling back\nto the in-core path. What was the use case, if I may ask?\n\n>> I was also considering adding a callback so that you can annotate\n>> any node with explanatory information that is not a custom scan\n>> node. This could be used to propagate and summarize information\n>> from custom scan nodes, but I had no immediate use for that so did\n>> not add it here. I would still be interested in hearing if you\n>> think this is something that would be useful to the community.\n\nThat depends.\n\n> I registered the patch on the nearest open CF [1] and marked it as\n> RfC. It is a pretty straightforward refactoring.\n> \n> [1]: https://commitfest.postgresql.org/48/4879/\n\nI know that we're in the middle of commit fest 47 while this is in 48,\nbut I can't really see a reason why we should not do that earlier than\nv18. One point about core is to be flexible for extension code. So I\\\nhave no objections, others are free to comment.\n--\nMichael",
"msg_date": "Tue, 5 Mar 2024 15:31:25 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hooking into ExplainOneQuery() complicated by missing\n standard_ExplainOneQuery"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 7:31 AM Michael Paquier <[email protected]> wrote:\n\n> On Mon, Mar 04, 2024 at 03:41:16PM +0300, Aleksander Alekseev wrote:\n> >> I wanted to hook into the EXPLAIN output for queries and add some\n> >> extra information, but since there is no standard_ExplainOneQuery() I\n> >> had to copy the code and create my own version.\n> >>\n> >> Since the pattern with other hooks for a function\n> >> WhateverFunction() seems to be that there is a\n> >> standard_WhateverFunction() for each WhateverFunction_hook, I\n> >> created a patch to follow this pattern for your consideration.\n>\n> So you've wanted to be able to add some custom information at the end\n> or the beginning of ExplainState's output buffer, before falling back\n> to the in-core path. What was the use case, if I may ask?\n>\n\nYes, that was the use-case. We have some caches added by extending\nExecutorStart and other executor-related functions using the hooks there\nand want to show cache hits and misses in the plan.\n\nI realize that a more advanced system is possible to create where you can\ncustomize the output even more, but in this case I just wanted to add a\nsection with some additional information related to plan execution. Also,\nthe code in explain.c seems to not be written with extensibility in mind,\nso I did not want to make too big a change here before thinking through how\nthis would work.\n\n\n> >> I was also considering adding a callback so that you can annotate\n> >> any node with explanatory information that is not a custom scan\n> >> node. This could be used to propagate and summarize information\n> >> from custom scan nodes, but I had no immediate use for that so did\n> >> not add it here. I would still be interested in hearing if you\n> >> think this is something that would be useful to the community.\n>\n> That depends.\n>\n\nJust to elaborate: the intention was to allow a section to be added to\nevery node in the plan containing information from further down and also\nallow this information to propagate upwards. We happen to have buffer\ninformation right now, but allowing something similar to be added\ndynamically by extending ExplainNode and passing down a callback to\nstandard_ExplainOneQuery.\n\nBest wishes,\nMats Kindahl\n\nOn Tue, Mar 5, 2024 at 7:31 AM Michael Paquier <[email protected]> wrote:On Mon, Mar 04, 2024 at 03:41:16PM +0300, Aleksander Alekseev wrote:\n>> I wanted to hook into the EXPLAIN output for queries and add some\n>> extra information, but since there is no standard_ExplainOneQuery() I\n>> had to copy the code and create my own version. \n>>\n>> Since the pattern with other hooks for a function\n>> WhateverFunction() seems to be that there is a\n>> standard_WhateverFunction() for each WhateverFunction_hook, I\n>> created a patch to follow this pattern for your consideration.\n\nSo you've wanted to be able to add some custom information at the end\nor the beginning of ExplainState's output buffer, before falling back\nto the in-core path. What was the use case, if I may ask?Yes, that was the use-case. We have some caches added by extending ExecutorStart and other executor-related functions using the hooks there and want to show cache hits and misses in the plan. I realize that a more advanced system is possible to create where you can customize the output even more, but in this case I just wanted to add a section with some additional information related to plan execution. 
Also, the code in explain.c seems to not be written with extensibility in mind, so I did not want to make too big a change here before thinking through how this would work.\n>> I was also considering adding a callback so that you can annotate\n>> any node with explanatory information that is not a custom scan\n>> node. This could be used to propagate and summarize information\n>> from custom scan nodes, but I had no immediate use for that so did\n>> not add it here. I would still be interested in hearing if you\n>> think this is something that would be useful to the community.\n\nThat depends.Just to elaborate: the intention was to allow a section to be added to every node in the plan containing information from further down and also allow this information to propagate upwards. We happen to have buffer information right now, but allowing something similar to be added dynamically by extending ExplainNode and passing down a callback to standard_ExplainOneQuery. Best wishes,Mats Kindahl",
"msg_date": "Tue, 5 Mar 2024 08:21:34 +0100",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hooking into ExplainOneQuery() complicated by missing\n standard_ExplainOneQuery"
},
{
"msg_contents": "On Tue, Mar 05, 2024 at 08:21:34AM +0100, Mats Kindahl wrote:\n> I realize that a more advanced system is possible to create where you can\n> customize the output even more, but in this case I just wanted to add a\n> section with some additional information related to plan execution. Also,\n> the code in explain.c seems to not be written with extensibility in mind,\n> so I did not want to make too big a change here before thinking through how\n> this would work.\n\nSure.\n\n> Just to elaborate: the intention was to allow a section to be added to\n> every node in the plan containing information from further down and also\n> allow this information to propagate upwards. We happen to have buffer\n> information right now, but allowing something similar to be added\n> dynamically by extending ExplainNode and passing down a callback to\n> standard_ExplainOneQuery.\n\nOr an extra hook at the end of ExplainNode() to be able to append more\ninformation at node level? Not sure if others would agree with that,\nthough.\n--\nMichael",
"msg_date": "Wed, 6 Mar 2024 08:25:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hooking into ExplainOneQuery() complicated by missing\n standard_ExplainOneQuery"
},
{
"msg_contents": "On 6/3/2024 06:25, Michael Paquier wrote:\n>> Just to elaborate: the intention was to allow a section to be added to\n>> every node in the plan containing information from further down and also\n>> allow this information to propagate upwards. We happen to have buffer\n>> information right now, but allowing something similar to be added\n>> dynamically by extending ExplainNode and passing down a callback to\n>> standard_ExplainOneQuery.\n> \n> Or an extra hook at the end of ExplainNode() to be able to append more\n> information at node level? Not sure if others would agree with that,\n> though.\n\nWe already discussed EXPLAIN hooks, at least in [1]. IMO, extensions \nshould have a chance to add something to the node explain and the \nsummary, if only because they can significantly influence the planner \nand executor's behaviour.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/6cd5caa7-06e1-4460-bf35-00a59da3f677%40garret.ru\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Wed, 6 Mar 2024 09:26:49 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hooking into ExplainOneQuery() complicated by missing\n standard_ExplainOneQuery"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 3:27 AM Andrei Lepikhov <[email protected]>\nwrote:\n\n> On 6/3/2024 06:25, Michael Paquier wrote:\n> >> Just to elaborate: the intention was to allow a section to be added to\n> >> every node in the plan containing information from further down and also\n> >> allow this information to propagate upwards. We happen to have buffer\n> >> information right now, but allowing something similar to be added\n> >> dynamically by extending ExplainNode and passing down a callback to\n> >> standard_ExplainOneQuery.\n> >\n> > Or an extra hook at the end of ExplainNode() to be able to append more\n> > information at node level? Not sure if others would agree with that,\n> > though.\n>\n\nThat is what I had in mind, yes.\n\n\n> We already discussed EXPLAIN hooks, at least in [1]. IMO, extensions\n> should have a chance to add something to the node explain and the\n> summary, if only because they can significantly influence the planner\n> and executor's behaviour.\n>\n> [1]\n>\n> https://www.postgresql.org/message-id/flat/6cd5caa7-06e1-4460-bf35-00a59da3f677%40garret.ru\n\n\nThis is an excellent example of where such a hook would be useful.\n-- \nBest wishes,\nMats Kindahl, Timescale\n\nOn Wed, Mar 6, 2024 at 3:27 AM Andrei Lepikhov <[email protected]> wrote:On 6/3/2024 06:25, Michael Paquier wrote:\n>> Just to elaborate: the intention was to allow a section to be added to\n>> every node in the plan containing information from further down and also\n>> allow this information to propagate upwards. We happen to have buffer\n>> information right now, but allowing something similar to be added\n>> dynamically by extending ExplainNode and passing down a callback to\n>> standard_ExplainOneQuery.\n> \n> Or an extra hook at the end of ExplainNode() to be able to append more\n> information at node level? Not sure if others would agree with that,\n> though.That is what I had in mind, yes. \nWe already discussed EXPLAIN hooks, at least in [1]. IMO, extensions \nshould have a chance to add something to the node explain and the \nsummary, if only because they can significantly influence the planner \nand executor's behaviour.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/6cd5caa7-06e1-4460-bf35-00a59da3f677%40garret.ruThis is an excellent example of where such a hook would be useful.-- Best wishes,Mats Kindahl, Timescale",
"msg_date": "Wed, 6 Mar 2024 08:31:31 +0100",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hooking into ExplainOneQuery() complicated by missing\n standard_ExplainOneQuery"
},
{
"msg_contents": "This patch would definitely be useful for Citus. We indeed currently\ncopy all of that code into our own explain hook. And it seems we\nactually have some bug. Because the es->memory branches were not\ncopied (probably because this code didn't exist when we copied it).\n\nhttps://github.com/citusdata/citus/blob/d59c93bc504ad32621d66583de6b65f936b0ed13/src/backend/distributed/planner/multi_explain.c#L1248-L1289\n\n\n",
"msg_date": "Wed, 6 Mar 2024 10:32:01 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hooking into ExplainOneQuery() complicated by missing\n standard_ExplainOneQuery"
},
{
"msg_contents": "On Wed, Mar 06, 2024 at 10:32:01AM +0100, Jelte Fennema-Nio wrote:\n> This patch would definitely be useful for Citus. We indeed currently\n> copy all of that code into our own explain hook. And it seems we\n> actually have some bug. Because the es->memory branches were not\n> copied (probably because this code didn't exist when we copied it).\n> \n> https://github.com/citusdata/citus/blob/d59c93bc504ad32621d66583de6b65f936b0ed13/src/backend/distributed/planner/multi_explain.c#L1248-L1289\n\nThat's nice. You would be able to shave quite a bit of code. If\nthere are no objections, I propose to apply the change of this thread\nto have this standard explain wrapper at the beginning of next week.\nIf others have any comments, feel free.\n--\nMichael",
"msg_date": "Thu, 7 Mar 2024 07:45:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hooking into ExplainOneQuery() complicated by missing\n standard_ExplainOneQuery"
},
{
"msg_contents": "On Thu, Mar 07, 2024 at 07:45:01AM +0900, Michael Paquier wrote:\n> That's nice. You would be able to shave quite a bit of code. If\n> there are no objections, I propose to apply the change of this thread\n> to have this standard explain wrapper at the beginning of next week.\n> If others have any comments, feel free.\n\nWell, done as of a04ddd077e61.\n--\nMichael",
"msg_date": "Mon, 11 Mar 2024 08:42:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hooking into ExplainOneQuery() complicated by missing\n standard_ExplainOneQuery"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 12:42 AM Michael Paquier <[email protected]>\nwrote:\n\n> On Thu, Mar 07, 2024 at 07:45:01AM +0900, Michael Paquier wrote:\n> > That's nice. You would be able to shave quite a bit of code. If\n> > there are no objections, I propose to apply the change of this thread\n> > to have this standard explain wrapper at the beginning of next week.\n> > If others have any comments, feel free.\n>\n> Well, done as of a04ddd077e61.\n>\n\nThanks Michael!\n-- \nBest wishes,\nMats Kindahl, Timescale\n\nOn Mon, Mar 11, 2024 at 12:42 AM Michael Paquier <[email protected]> wrote:On Thu, Mar 07, 2024 at 07:45:01AM +0900, Michael Paquier wrote:\n> That's nice. You would be able to shave quite a bit of code. If\n> there are no objections, I propose to apply the change of this thread\n> to have this standard explain wrapper at the beginning of next week.\n> If others have any comments, feel free.\n\nWell, done as of a04ddd077e61.Thanks Michael!-- Best wishes,Mats Kindahl, Timescale",
"msg_date": "Mon, 11 Mar 2024 07:59:48 +0100",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hooking into ExplainOneQuery() complicated by missing\n standard_ExplainOneQuery"
}
] |
[
{
"msg_contents": "When a backend is blocked on writing data (such as with a network\nerror or a very slow client), indicated with wait event ClientWrite,\nit appears to not properly notice that it's overrunning\nmax_standby_streaming_delay, and therefore does not cancel the\ntransaction on the backend.\n\nI've reproduced this repeatedly on Ubuntu 20.04 with PostgreSQL 15 out\nof the debian packages. Curiously enough, if I install the debug\nsymbols and restart, in order to get a backtrace, it starts processing\nthe cancellation again and can no longer reproduce. So it sounds like\nsome timing issue around it.\n\nMy simple test was, with session 1 on the standby and session 2 on the primary:\nSession 1: begin transaction isolation level repeatable read;\nSession 1: select count(*) from testtable;\nSession 2: alter table testtable rename to testtable2;\nSession 1: select * from testtable t1 cross join testtable t2;\nkill -STOP <the pid of session 1>\n\nAt this point, replication lag sartgs growing on the standby and it\nnever terminates the session.\n\nIf I then SIGCONT it, it will get terminated by replication conflict.\n\nIf I kill the session hard, the replication lag recovers immediately.\n\nAFAICT if the confliact happens at ClientRead, for example, it's\npicked up immediately, but there's something in ClientWrite that\nprevents it.\n\nMy first thought would be OpenSSL, but this is reproducible both on\ntls-over-tcp and on unix sockets.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 4 Mar 2024 14:12:38 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replication conflicts not processed in ClientWrite"
}
] |
[
{
"msg_contents": "Hello,\n\nAs we are currently experiencing a FSM corruption issue [1], we need to \nrebuild FSM when we detect it. \n\nI noticed we have something to truncate a visibility map, but nothing for the \nfreespace map, so I propose the attached (liberally copied from the VM \ncounterpart) to allow to truncate a FSM without incurring downtime, as \ncurrently our only options are to either VACUUM FULL the table or stop the \ncluster and remove the FSM manually.\n\nDoes that seem correct ?\n\n\n[1] https://www.postgresql.org/message-id/flat/\n1925490.taCxCBeP46%40aivenlaptop#7ace95c95cab17b6d92607e5362984ac\n\n--\nRonan Dunklau",
"msg_date": "Mon, 04 Mar 2024 14:54:25 +0100",
"msg_from": "Ronan Dunklau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Provide a pg_truncate_freespacemap function"
},
{
"msg_contents": "Greetings,\n\n* Ronan Dunklau ([email protected]) wrote:\n> As we are currently experiencing a FSM corruption issue [1], we need to \n> rebuild FSM when we detect it. \n\nIdeally, we'd figure out a way to pick up on this and address it without\nthe user needing to intervene, however ...\n\n> I noticed we have something to truncate a visibility map, but nothing for the \n> freespace map, so I propose the attached (liberally copied from the VM \n> counterpart) to allow to truncate a FSM without incurring downtime, as \n> currently our only options are to either VACUUM FULL the table or stop the \n> cluster and remove the FSM manually.\n\nI agree that this would generally be a useful thing to have.\n\n> Does that seem correct ?\n\nDefinitely needs to have a 'REVOKE ALL ON FUNCTION' at the end of the\nupgrade script, similar to what you'll find at the bottom of\npg_visibility--1.1.sql in the tree today, otherwise anyone could run it.\n\nBeyond that, I'd suggest a function-level comment above the definition\nof the function itself (which is where we tend to put those- not at the\npoint where we declare the function).\n\nThanks!\n\nStephen",
"msg_date": "Wed, 6 Mar 2024 14:28:44 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Provide a pg_truncate_freespacemap function"
},
{
"msg_contents": "Le mercredi 6 mars 2024, 20:28:44 CET Stephen Frost a écrit :\n> I agree that this would generally be a useful thing to have.\n\nThanks !\n\n> \n> > Does that seem correct ?\n> \n> Definitely needs to have a 'REVOKE ALL ON FUNCTION' at the end of the\n> upgrade script, similar to what you'll find at the bottom of\n> pg_visibility--1.1.sql in the tree today, otherwise anyone could run it.\n> \n> Beyond that, I'd suggest a function-level comment above the definition\n> of the function itself (which is where we tend to put those- not at the\n> point where we declare the function).\n\nThank you for the review. Here is an updated patch for both of those.\n\n\nBest regards,\n\n--\nRonan",
"msg_date": "Thu, 07 Mar 2024 08:59:02 +0100",
"msg_from": "Ronan Dunklau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Provide a pg_truncate_freespacemap function"
},
{
"msg_contents": "\n\nOn 2024/03/07 16:59, Ronan Dunklau wrote:\n> Le mercredi 6 mars 2024, 20:28:44 CET Stephen Frost a écrit :\n>> I agree that this would generally be a useful thing to have.\n> \n> Thanks !\n> \n>>\n>>> Does that seem correct ?\n>>\n>> Definitely needs to have a 'REVOKE ALL ON FUNCTION' at the end of the\n>> upgrade script, similar to what you'll find at the bottom of\n>> pg_visibility--1.1.sql in the tree today, otherwise anyone could run it.\n>>\n>> Beyond that, I'd suggest a function-level comment above the definition\n>> of the function itself (which is where we tend to put those- not at the\n>> point where we declare the function).\n> \n> Thank you for the review. Here is an updated patch for both of those.\n\nHere are my review comments:\n\nThe documentation for pg_freespace needs updating.\n\n\nA regression test for pg_truncate_freespace_map() should be added.\n\n\n+\t/* Only some relkinds have a freespace map */\n+\tif (!RELKIND_HAS_TABLE_AM(rel->rd_rel->relkind))\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+\t\t\t\t errmsg(\"relation \\\"%s\\\" is of wrong relation kind\",\n+\t\t\t\t\t\tRelationGetRelationName(rel)),\n+\t\t\t\t errdetail_relkind_not_supported(rel->rd_rel->relkind)));\n\nAn index can have an FSM, but this code doesn't account for that.\n\n\n+\t\tsmgrtruncate(RelationGetSmgr(rel), &fork, 1, &block);\n\nShouldn't truncation be performed after WAL-logging due to the WAL rule?\nI'm not sure if the current order might actually cause significant issues\nin FSM truncation case, though.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sun, 21 Jul 2024 13:51:56 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Provide a pg_truncate_freespacemap function"
},
{
"msg_contents": "Fujii Masao <[email protected]> writes:\n>> Le mercredi 6 mars 2024, 20:28:44 CET Stephen Frost a écrit :\n>>> I agree that this would generally be a useful thing to have.\n\nPersonally, I want to push back on whether this has any legitimate\nuse-case. Even if the FSM is corrupt, it should self-heal over\ntime, and I'm not seeing the argument why truncating it would\nspeed convergence towards correct values. Worse, in the interim\nwhere you don't have any FSM, you will suffer table bloat because\ninsertions will be appended at the end of the table. So this\nlooks like a foot-gun, and the patch's lack of user-visible\ndocumentation surely does nothing to make it safer.\n\n(The analogy to pg_truncate_visibility_map seems forced.\nIf you are in a situation with a trashed visibility map,\nyou are probably getting wrong query answers, and truncating\nthe map will make that better. But a trashed FSM doesn't\nresult in incorrect output, and zeroing it will make things\nworse not better.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Jul 2024 01:39:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Provide a pg_truncate_freespacemap function"
},
{
"msg_contents": "Le dimanche 21 juillet 2024, 07:39:13 UTC+2 Tom Lane a écrit :\n> Fujii Masao <[email protected]> writes:\n> >> Le mercredi 6 mars 2024, 20:28:44 CET Stephen Frost a écrit :\n> >>> I agree that this would generally be a useful thing to have.\n> \n\nSorry for the late reply, as I was not available during the late summer.\n\n> Personally, I want to push back on whether this has any legitimate\n> use-case. Even if the FSM is corrupt, it should self-heal over\n> time, and I'm not seeing the argument why truncating it would\n> speed convergence towards correct values. Worse, in the interim\n> where you don't have any FSM, you will suffer table bloat because\n> insertions will be appended at the end of the table. So this\n> looks like a foot-gun, and the patch's lack of user-visible\n> documentation surely does nothing to make it safer.\n\n> (The analogy to pg_truncate_visibility_map seems forced.\n> If you are in a situation with a trashed visibility map,\n> you are probably getting wrong query answers, and truncating\n> the map will make that better. But a trashed FSM doesn't\n> result in incorrect output, and zeroing it will make things\n> worse not better.)\n\nNow that the other patch for self-healing is in, I agree it may not be that \nuseful. I'm withdrawing the patch and will keep it in mind if we encounter \nother FSM issues in the future.\n\nBest regards,\n\n--\nRonan Dunklau\n\n\n\n\n",
"msg_date": "Tue, 20 Aug 2024 10:03:58 +0200",
"msg_from": "Ronan Dunklau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Provide a pg_truncate_freespacemap function"
}
] |
[
{
"msg_contents": "Hi,\n\nThe function var_strcmp is a critical function.\nInside the function, there is a shortcut condition,\nwhich allows for a quick exit.\n\nUnfortunately, the current code calls a very expensive function beforehand,\nwhich if the test was true, all the call time is wasted.\nSo, IMO, it's better to postpone the function call until when it is\nactually necessary.\n\nbest regards,\nRanier Vilela",
"msg_date": "Mon, 4 Mar 2024 14:39:02 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoid is possible a expensive function call\n (src/backend/utils/adt/varlena.c)"
},
{
"msg_contents": "On Mon, 4 Mar 2024 at 18:39, Ranier Vilela <[email protected]> wrote:\n>\n> Hi,\n>\n> The function var_strcmp is a critical function.\n> Inside the function, there is a shortcut condition,\n> which allows for a quick exit.\n>\n> Unfortunately, the current code calls a very expensive function beforehand, which if the test was true, all the call time is wasted.\n> So, IMO, it's better to postpone the function call until when it is actually necessary.\n\nThank you for your contribution.\n\nI agree it would be better, but your current patch is incorrect,\nbecause we need to check if the user has access to the locale (and\nthrow an error if not) before we return that the two strings are\nequal.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 4 Mar 2024 18:54:32 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid is possible a expensive function call\n (src/backend/utils/adt/varlena.c)"
},
{
"msg_contents": "Em seg., 4 de mar. de 2024 às 14:54, Matthias van de Meent <\[email protected]> escreveu:\n\n> On Mon, 4 Mar 2024 at 18:39, Ranier Vilela <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > The function var_strcmp is a critical function.\n> > Inside the function, there is a shortcut condition,\n> > which allows for a quick exit.\n> >\n> > Unfortunately, the current code calls a very expensive function\n> beforehand, which if the test was true, all the call time is wasted.\n> > So, IMO, it's better to postpone the function call until when it is\n> actually necessary.\n>\n> Thank you for your contribution.\n>\n> I agree it would be better, but your current patch is incorrect,\n> because we need to check if the user has access to the locale (and\n> throw an error if not) before we return that the two strings are\n> equal.\n>\nI can't see any user validation at the function pg_newlocale_from_collation.\n\nmeson test pass all checks.\n\nbest regards,\nRanier Vilela\n\nEm seg., 4 de mar. de 2024 às 14:54, Matthias van de Meent <[email protected]> escreveu:On Mon, 4 Mar 2024 at 18:39, Ranier Vilela <[email protected]> wrote:\n>\n> Hi,\n>\n> The function var_strcmp is a critical function.\n> Inside the function, there is a shortcut condition,\n> which allows for a quick exit.\n>\n> Unfortunately, the current code calls a very expensive function beforehand, which if the test was true, all the call time is wasted.\n> So, IMO, it's better to postpone the function call until when it is actually necessary.\n\nThank you for your contribution.\n\nI agree it would be better, but your current patch is incorrect,\nbecause we need to check if the user has access to the locale (and\nthrow an error if not) before we return that the two strings are\nequal.I can't see any user validation at the function pg_newlocale_from_collation.meson test pass all checks.best regards,Ranier Vilela",
"msg_date": "Mon, 4 Mar 2024 15:08:03 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid is possible a expensive function call\n (src/backend/utils/adt/varlena.c)"
},
{
"msg_contents": "On Mon, Mar 04, 2024 at 03:08:03PM -0300, Ranier Vilela wrote:\n> I can't see any user validation at the function pg_newlocale_from_collation.\n\nMatthias is right, look closer. I can see more than one check,\nespecially note the one related to the collation version mismatch that\nshould not be silently ignored.\n\n> meson test pass all checks.\n\nCollations are harder to test because they depend on the environment\nwhere the test happens, especially with ICU.\n--\nMichael",
"msg_date": "Tue, 5 Mar 2024 07:53:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid is possible a expensive function call\n (src/backend/utils/adt/varlena.c)"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Mar 04, 2024 at 03:08:03PM -0300, Ranier Vilela wrote:\n>> I can't see any user validation at the function pg_newlocale_from_collation.\n\n> Matthias is right, look closer. I can see more than one check,\n> especially note the one related to the collation version mismatch that\n> should not be silently ignored.\n\nThe fast path through that code doesn't include any checks, true,\nbut the point is that finding the entry proves that we made those\nchecks previously. I can't agree with making those semantics\nsquishy in order to save a few cycles in the exact-equality case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Mar 2024 18:28:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid is possible a expensive function call\n (src/backend/utils/adt/varlena.c)"
},
{
"msg_contents": "Em seg., 4 de mar. de 2024 às 20:28, Tom Lane <[email protected]> escreveu:\n\n> Michael Paquier <[email protected]> writes:\n> > On Mon, Mar 04, 2024 at 03:08:03PM -0300, Ranier Vilela wrote:\n> >> I can't see any user validation at the function\n> pg_newlocale_from_collation.\n>\n> > Matthias is right, look closer. I can see more than one check,\n> > especially note the one related to the collation version mismatch that\n> > should not be silently ignored.\n>\n> The fast path through that code doesn't include any checks, true,\n> but the point is that finding the entry proves that we made those\n> checks previously. I can't agree with making those semantics\n> squishy in order to save a few cycles in the exact-equality case.\n>\nRobustness is a fair point.\n\nbest regards,\nRanier Vilela\n\nEm seg., 4 de mar. de 2024 às 20:28, Tom Lane <[email protected]> escreveu:Michael Paquier <[email protected]> writes:\n> On Mon, Mar 04, 2024 at 03:08:03PM -0300, Ranier Vilela wrote:\n>> I can't see any user validation at the function pg_newlocale_from_collation.\n\n> Matthias is right, look closer. I can see more than one check,\n> especially note the one related to the collation version mismatch that\n> should not be silently ignored.\n\nThe fast path through that code doesn't include any checks, true,\nbut the point is that finding the entry proves that we made those\nchecks previously. I can't agree with making those semantics\nsquishy in order to save a few cycles in the exact-equality case.Robustness is a fair point. best regards,Ranier Vilela",
"msg_date": "Tue, 5 Mar 2024 08:44:15 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid is possible a expensive function call\n (src/backend/utils/adt/varlena.c)"
}
] |
[
{
"msg_contents": "Fix search_path to a safe value during maintenance operations.\n\nWhile executing maintenance operations (ANALYZE, CLUSTER, REFRESH\nMATERIALIZED VIEW, REINDEX, or VACUUM), set search_path to\n'pg_catalog, pg_temp' to prevent inconsistent behavior.\n\nFunctions that are used for functional indexes, in index expressions,\nor in materialized views and depend on a different search path must be\ndeclared with CREATE FUNCTION ... SET search_path='...'.\n\nThis change was previously committed as 05e1737351, then reverted in\ncommit 2fcc7ee7af because it was too late in the cycle.\n\nPreparation for the MAINTAIN privilege, which was previously reverted\ndue to search_path manipulation hazards.\n\nDiscussion: https://postgr.es/m/[email protected]\nDiscussion: https://postgr.es/m/E1q7j7Y-000z1H-Hr%40gemulon.postgresql.org\nDiscussion: https://postgr.es/m/e44327179e5c9015c8dda67351c04da552066017.camel%40j-davis.com\nReviewed-by: Greg Stark, Nathan Bossart, Noah Misch\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/2af07e2f749a9208ca1ed84fa1d8fe0e75833288\n\nModified Files\n--------------\ncontrib/amcheck/t/004_verify_nbtree_unique.pl | 33 +++++++++-------\ncontrib/amcheck/verify_nbtree.c | 2 +\ndoc/src/sgml/amcheck.sgml | 3 ++\ndoc/src/sgml/brin.sgml | 4 +-\ndoc/src/sgml/ref/analyze.sgml | 6 +++\ndoc/src/sgml/ref/cluster.sgml | 6 +++\ndoc/src/sgml/ref/create_index.sgml | 6 +++\ndoc/src/sgml/ref/refresh_materialized_view.sgml | 6 +++\ndoc/src/sgml/ref/reindex.sgml | 6 +++\ndoc/src/sgml/ref/vacuum.sgml | 6 +++\nsrc/backend/access/brin/brin.c | 2 +\nsrc/backend/catalog/index.c | 9 +++++\nsrc/backend/catalog/namespace.c | 3 ++\nsrc/backend/commands/analyze.c | 2 +\nsrc/backend/commands/cluster.c | 2 +\nsrc/backend/commands/indexcmds.c | 8 ++++\nsrc/backend/commands/matview.c | 2 +\nsrc/backend/commands/vacuum.c | 2 +\nsrc/bin/scripts/t/100_vacuumdb.pl | 4 --\nsrc/include/utils/guc.h | 6 +++\n.../test_oat_hooks/expected/alter_table.out | 2 +\n.../test_oat_hooks/expected/test_oat_hooks.out | 4 ++\nsrc/test/regress/expected/matview.out | 4 +-\nsrc/test/regress/expected/namespace.out | 44 ++++++++++++++++++++++\nsrc/test/regress/expected/privileges.out | 12 +++---\nsrc/test/regress/expected/vacuum.out | 2 +-\nsrc/test/regress/sql/matview.sql | 4 +-\nsrc/test/regress/sql/namespace.sql | 32 ++++++++++++++++\nsrc/test/regress/sql/privileges.sql | 8 ++--\nsrc/test/regress/sql/vacuum.sql | 2 +-\n30 files changed, 200 insertions(+), 32 deletions(-)",
"msg_date": "Tue, 05 Mar 2024 01:42:46 +0000",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix search_path to a safe value during maintenance operations."
},
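As a concrete illustration of the declaration style the commit message asks for, here is a small, hedged example; the schema, function, table and index names are invented, and only the SET clause is the point. A function used in an index expression pins its own search_path, so it still resolves names the same way once maintenance commands run under the restricted 'pg_catalog, pg_temp' path.

-- Hypothetical names; the SET search_path clause is the relevant part.
CREATE SCHEMA app;

CREATE FUNCTION app.normalize_code(text) RETURNS text
    LANGUAGE sql IMMUTABLE
    SET search_path = app, pg_temp
    AS $$ SELECT upper(btrim($1)) $$;

CREATE TABLE app.items (code text);
CREATE INDEX items_code_idx ON app.items (app.normalize_code(code));

-- ANALYZE, REINDEX, VACUUM etc. now run with search_path = 'pg_catalog, pg_temp',
-- but the function body still resolves under its own pinned path.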
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> Fix search_path to a safe value during maintenance operations.\n\nThe buildfarm seems pretty unhappy with this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Mar 2024 21:15:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, 2024-03-04 at 21:15 -0500, Tom Lane wrote:\n> Jeff Davis <[email protected]> writes:\n> > Fix search_path to a safe value during maintenance operations.\n> \n> The buildfarm seems pretty unhappy with this.\n\nLooks like I need to use GUC_ACTION_SAVE. I will remedy it shortly.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 04 Mar 2024 18:22:55 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On 2024-Mar-05, Jeff Davis wrote:\n\n> Fix search_path to a safe value during maintenance operations.\n> \n> While executing maintenance operations (ANALYZE, CLUSTER, REFRESH\n> MATERIALIZED VIEW, REINDEX, or VACUUM), set search_path to\n> 'pg_catalog, pg_temp' to prevent inconsistent behavior.\n> \n> Functions that are used for functional indexes, in index expressions,\n> or in materialized views and depend on a different search path must be\n> declared with CREATE FUNCTION ... SET search_path='...'.\n\nThis appears to have upset the sepgsql tests. In buildfarm member\nrhinoceros there's now a bunch of errors like this\n\n ALTER TABLE regtest_table_4\n ADD CONSTRAINT regtest_tbl4_con EXCLUDE USING btree (z WITH =);\n+LOG: SELinux: allowed { search } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=unconfined_u:object_r:sepgsql_schema_t:s0 tclass=db_schema name=\"regtest_schema\" permissive=0\n+LOG: SELinux: allowed { search } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=system_u:object_r:sepgsql_schema_t:s0 tclass=db_schema name=\"public\" permissive=0\n\nin its ddl.sql test. I suppose this is just the result of the internal\nchange of search_path. Maybe the thing to do is just accept the new\noutput as expected.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nVoy a acabar con todos los humanos / con los humanos yo acabaré\nvoy a acabar con todos (bis) / con todos los humanos acabaré ¡acabaré! (Bender)\n\n\n",
"msg_date": "Tue, 5 Mar 2024 17:19:35 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Tue, 2024-03-05 at 17:19 +0100, Alvaro Herrera wrote:\n> This appears to have upset the sepgsql tests. In buildfarm member\n> rhinoceros there's now a bunch of errors like this\n\nThank you, pushed, and it appears to have fixed the problem.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 05 Mar 2024 10:02:00 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
}
] |
[
{
"msg_contents": "Hi hackers,\n\nConditionVariableTimedSleep() accepts a timeout parameter, but it\ndoesn't explicitly state the unit for the timeout anywhere. To\ndetermine this, one needs to look into the details of the function to\nfind it out from the comments of the internally called function\nWaitLatch(). It would be beneficial to include a comment in the header\nof ConditionVariableTimedSleep() specifying that the timeout is in\nmilliseconds, similar to what we have for other non-static functions\nlike WaitLatch and WaitEventSetWait. Attached the patch for the same.\n\nthanks\nShveta",
"msg_date": "Tue, 5 Mar 2024 09:39:11 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add comment to specify timeout unit in ConditionVariableTimedSleep()"
},
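For reference, this is a minimal usage sketch, not taken from the patch or from any particular caller, showing where the millisecond unit matters. The shared structure, the loop condition and the wait-event name are placeholders; the ConditionVariable API calls themselves (storage/condition_variable.h) are real.

/* Sketch: wait up to 10 seconds for some shared-state condition. */
ConditionVariablePrepareToSleep(&shared->cv);
while (!shared->work_done)		/* placeholder condition */
{
	/* timeout is in milliseconds; returns true if the timeout expired */
	if (ConditionVariableTimedSleep(&shared->cv, 10 * 1000,
									WAIT_EVENT_EXAMPLE_WAIT))	/* placeholder wait event */
		break;
}
ConditionVariableCancelSleep();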
{
"msg_contents": "On Tue, Mar 05, 2024 at 09:39:11AM +0530, shveta malik wrote:\n> ConditionVariableTimedSleep() accepts a timeout parameter, but it\n> doesn't explicitly state the unit for the timeout anywhere. To\n> determine this, one needs to look into the details of the function to\n> find it out from the comments of the internally called function\n> WaitLatch(). It would be beneficial to include a comment in the header\n> of ConditionVariableTimedSleep() specifying that the timeout is in\n> milliseconds, similar to what we have for other non-static functions\n> like WaitLatch and WaitEventSetWait. Attached the patch for the same.\n\nThat sounds like a good idea to me, so I'm OK with your suggestion.\n--\nMichael",
"msg_date": "Tue, 5 Mar 2024 15:20:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add comment to specify timeout unit in\n ConditionVariableTimedSleep()"
},
{
"msg_contents": "On Tue, Mar 05, 2024 at 03:20:48PM +0900, Michael Paquier wrote:\n> That sounds like a good idea to me, so I'm OK with your suggestion.\n\nApplied this one as f160bf06f72a. Thanks.\n--\nMichael",
"msg_date": "Sat, 9 Mar 2024 15:48:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add comment to specify timeout unit in\n ConditionVariableTimedSleep()"
},
{
"msg_contents": "On Sat, Mar 9, 2024 at 12:19 PM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Mar 05, 2024 at 03:20:48PM +0900, Michael Paquier wrote:\n> > That sounds like a good idea to me, so I'm OK with your suggestion.\n>\n> Applied this one as f160bf06f72a. Thanks.\n\nThanks!\n\nthanks\nShveta\n\n\n",
"msg_date": "Mon, 11 Mar 2024 16:18:35 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add comment to specify timeout unit in\n ConditionVariableTimedSleep()"
}
] |
[
{
"msg_contents": "I think this is a typo introduced in 0452b461b.\n\n + root->processed_groupClause = list_copy(parse->groupClause);;\n\nThe extra empty statement is harmless in most times, but I still think\nit would be better to get rid of it.\n\nAttached is a trivial patch to do that.\n\nThanks\nRichard",
"msg_date": "Tue, 5 Mar 2024 19:43:21 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Get rid of the excess semicolon in planner.c"
},
{
"msg_contents": "On Wed, 6 Mar 2024 at 00:43, Richard Guo <[email protected]> wrote:\n>\n> I think this is a typo introduced in 0452b461b.\n>\n> + root->processed_groupClause = list_copy(parse->groupClause);;\n\n\"git grep -E \";;$\" -- *.c *.h\" tell me it's the only one.\n\nPushed.\n\nDavid\n\n\n",
"msg_date": "Wed, 6 Mar 2024 11:00:10 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Get rid of the excess semicolon in planner.c"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 6:00 AM David Rowley <[email protected]> wrote:\n\n> On Wed, 6 Mar 2024 at 00:43, Richard Guo <[email protected]> wrote:\n> >\n> > I think this is a typo introduced in 0452b461b.\n> >\n> > + root->processed_groupClause = list_copy(parse->groupClause);;\n>\n> \"git grep -E \";;$\" -- *.c *.h\" tell me it's the only one.\n>\n> Pushed.\n\n\nThanks for checking and pushing.\n\nThanks\nRichard\n\nOn Wed, Mar 6, 2024 at 6:00 AM David Rowley <[email protected]> wrote:On Wed, 6 Mar 2024 at 00:43, Richard Guo <[email protected]> wrote:\n>\n> I think this is a typo introduced in 0452b461b.\n>\n> + root->processed_groupClause = list_copy(parse->groupClause);;\n\n\"git grep -E \";;$\" -- *.c *.h\" tell me it's the only one.\n\nPushed.Thanks for checking and pushing.ThanksRichard",
"msg_date": "Thu, 7 Mar 2024 15:14:09 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Get rid of the excess semicolon in planner.c"
}
] |
[
{
"msg_contents": "Inspired by feedback to [1], I thought about how to reduce log spam.\n\nMy experience from the field is that a lot of log spam looks like\n\n database/table/... \"xy\" does not exist\n duplicate key value violates unique constraint \"xy\"\n\nSo what about a parameter \"log_suppress_sqlstates\" that suppresses\nlogging ERROR and FATAL messages with the enumerated SQL states?\n\nMy idea for a default setting would be something like\n\n log_suppress_sqlstates = '23505,3D000,3F000,42601,42704,42883,42P01'\n\nbut that's of course bikeshedding territory.\n\nYours,\nLaurenz Albe\n\n\n\n [1]: https://postgr.es/m/b8b8502915e50f44deb111bc0b43a99e2733e117.camel%40cybertec.at\n\n\n",
"msg_date": "Tue, 05 Mar 2024 13:55:34 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reducing the log spam"
},
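To make the proposal concrete, here is roughly how the parameter would be used. Note that log_suppress_sqlstates is only proposed in this thread and does not exist in any released PostgreSQL; ALTER SYSTEM and pg_reload_conf() are real, the parameter name and table are the assumptions.

-- Proposed parameter, shown for illustration only.
ALTER SYSTEM SET log_suppress_sqlstates = '23505,42P01';
SELECT pg_reload_conf();

-- A duplicate-key INSERT would still raise SQLSTATE 23505 to the client,
-- but the ERROR line would no longer be written to the server log.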
{
"msg_contents": "Hi\n\nút 5. 3. 2024 v 13:55 odesílatel Laurenz Albe <[email protected]>\nnapsal:\n\n> Inspired by feedback to [1], I thought about how to reduce log spam.\n>\n> My experience from the field is that a lot of log spam looks like\n>\n> database/table/... \"xy\" does not exist\n> duplicate key value violates unique constraint \"xy\"\n>\n> So what about a parameter \"log_suppress_sqlstates\" that suppresses\n> logging ERROR and FATAL messages with the enumerated SQL states?\n>\n> My idea for a default setting would be something like\n>\n> log_suppress_sqlstates = '23505,3D000,3F000,42601,42704,42883,42P01'\n>\n\n+1 in this form\n\nthe overhead of this implementation should be small\n\nRegards\n\nPavel\n\n\n> but that's of course bikeshedding territory.\n>\n> Yours,\n> Laurenz Albe\n>\n>\n>\n> [1]:\n> https://postgr.es/m/b8b8502915e50f44deb111bc0b43a99e2733e117.camel%40cybertec.at\n>\n>\n>\n\nHiút 5. 3. 2024 v 13:55 odesílatel Laurenz Albe <[email protected]> napsal:Inspired by feedback to [1], I thought about how to reduce log spam.\n\nMy experience from the field is that a lot of log spam looks like\n\n database/table/... \"xy\" does not exist\n duplicate key value violates unique constraint \"xy\"\n\nSo what about a parameter \"log_suppress_sqlstates\" that suppresses\nlogging ERROR and FATAL messages with the enumerated SQL states?\n\nMy idea for a default setting would be something like\n\n log_suppress_sqlstates = '23505,3D000,3F000,42601,42704,42883,42P01'+1 in this formthe overhead of this implementation should be smallRegardsPavel\n\nbut that's of course bikeshedding territory.\n\nYours,\nLaurenz Albe\n\n\n\n [1]: https://postgr.es/m/b8b8502915e50f44deb111bc0b43a99e2733e117.camel%40cybertec.at",
"msg_date": "Tue, 5 Mar 2024 14:08:25 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "Hi Laurenz\n\nOn 05.03.24 13:55, Laurenz Albe wrote:\n> Inspired by feedback to [1], I thought about how to reduce log spam.\n>\n> My experience from the field is that a lot of log spam looks like\n>\n> database/table/... \"xy\" does not exist\n> duplicate key value violates unique constraint \"xy\"\n>\n> So what about a parameter \"log_suppress_sqlstates\" that suppresses\n> logging ERROR and FATAL messages with the enumerated SQL states?\n>\n> My idea for a default setting would be something like\n>\n> log_suppress_sqlstates = '23505,3D000,3F000,42601,42704,42883,42P01'\n>\n> but that's of course bikeshedding territory.\n>\n> Yours,\n> Laurenz Albe\n>\n>\n>\n> [1]: https://postgr.es/m/b8b8502915e50f44deb111bc0b43a99e2733e117.camel%40cybertec.at\n\nI like this idea, and I could see myself using it a lot in some projects.\n\nAdditionally, it would be nice to also have the possibility suppress a \nwhole class instead of single SQL states, e.g.\n\nlog_suppress_sqlstates = 'class_08' to suppress these all at once:\n\n08000 \tconnection_exception\n08003 \tconnection_does_not_exist\n08006 \tconnection_failure\n08001 \tsqlclient_unable_to_establish_sqlconnection\n08004 \tsqlserver_rejected_establishment_of_sqlconnection\n08007 \ttransaction_resolution_unknown\n08P01 \tprotocol_violation\n\nBest regards,\nJim\n\n\n\n",
"msg_date": "Tue, 5 Mar 2024 14:55:21 +0100",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "Hi\n\nút 5. 3. 2024 v 14:55 odesílatel Jim Jones <[email protected]>\nnapsal:\n\n> Hi Laurenz\n>\n> On 05.03.24 13:55, Laurenz Albe wrote:\n> > Inspired by feedback to [1], I thought about how to reduce log spam.\n> >\n> > My experience from the field is that a lot of log spam looks like\n> >\n> > database/table/... \"xy\" does not exist\n> > duplicate key value violates unique constraint \"xy\"\n> >\n> > So what about a parameter \"log_suppress_sqlstates\" that suppresses\n> > logging ERROR and FATAL messages with the enumerated SQL states?\n> >\n> > My idea for a default setting would be something like\n> >\n> > log_suppress_sqlstates = '23505,3D000,3F000,42601,42704,42883,42P01'\n> >\n> > but that's of course bikeshedding territory.\n> >\n> > Yours,\n> > Laurenz Albe\n> >\n> >\n> >\n> > [1]:\n> https://postgr.es/m/b8b8502915e50f44deb111bc0b43a99e2733e117.camel%40cybertec.at\n>\n> I like this idea, and I could see myself using it a lot in some projects.\n>\n> Additionally, it would be nice to also have the possibility suppress a\n> whole class instead of single SQL states, e.g.\n>\n> log_suppress_sqlstates = 'class_08' to suppress these all at once:\n>\n> 08000 connection_exception\n> 08003 connection_does_not_exist\n> 08006 connection_failure\n> 08001 sqlclient_unable_to_establish_sqlconnection\n> 08004 sqlserver_rejected_establishment_of_sqlconnection\n> 08007 transaction_resolution_unknown\n> 08P01 protocol_violation\n>\n>\nIt can take code from PLpgSQL.\n\nRegards\n\nPavel\n\n\n\n> Best regards,\n> Jim\n>\n>\n>\n>\n\nHiút 5. 3. 2024 v 14:55 odesílatel Jim Jones <[email protected]> napsal:Hi Laurenz\n\nOn 05.03.24 13:55, Laurenz Albe wrote:\n> Inspired by feedback to [1], I thought about how to reduce log spam.\n>\n> My experience from the field is that a lot of log spam looks like\n>\n> database/table/... \"xy\" does not exist\n> duplicate key value violates unique constraint \"xy\"\n>\n> So what about a parameter \"log_suppress_sqlstates\" that suppresses\n> logging ERROR and FATAL messages with the enumerated SQL states?\n>\n> My idea for a default setting would be something like\n>\n> log_suppress_sqlstates = '23505,3D000,3F000,42601,42704,42883,42P01'\n>\n> but that's of course bikeshedding territory.\n>\n> Yours,\n> Laurenz Albe\n>\n>\n>\n> [1]: https://postgr.es/m/b8b8502915e50f44deb111bc0b43a99e2733e117.camel%40cybertec.at\n\nI like this idea, and I could see myself using it a lot in some projects.\n\nAdditionally, it would be nice to also have the possibility suppress a \nwhole class instead of single SQL states, e.g.\n\nlog_suppress_sqlstates = 'class_08' to suppress these all at once:\n\n08000 connection_exception\n08003 connection_does_not_exist\n08006 connection_failure\n08001 sqlclient_unable_to_establish_sqlconnection\n08004 sqlserver_rejected_establishment_of_sqlconnection\n08007 transaction_resolution_unknown\n08P01 protocol_violation\nIt can take code from PLpgSQL.RegardsPavel \nBest regards,\nJim",
"msg_date": "Tue, 5 Mar 2024 14:57:05 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Tue, 5 Mar 2024 at 14:55, Jim Jones <[email protected]> wrote:\n> > So what about a parameter \"log_suppress_sqlstates\" that suppresses\n> > logging ERROR and FATAL messages with the enumerated SQL states?\n\nBig +1 from me for this idea.\n\n\n",
"msg_date": "Tue, 5 Mar 2024 15:08:35 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "Hi,\n\n> So what about a parameter \"log_suppress_sqlstates\" that suppresses\n> logging ERROR and FATAL messages with the enumerated SQL states?\n>\n> My idea for a default setting would be something like\n>\n> log_suppress_sqlstates = '23505,3D000,3F000,42601,42704,42883,42P01'\n>\n> but that's of course bikeshedding territory.\n\nI like the idea of suppressing certain log messages in general, but\nthe particular user interface doesn't strike me as an especially\nconvenient one.\n\nFirstly I don't think many people remember sqlstates and what 3F000\nstands for. IMO most users don't know such a thing exists. Secondly,\nwhether we should list sqlstates to suppress or the opposite - list\nthe states that shouldn't be suppressed, is a debatable question. Last\nbut not least, it's not quite clear whether PostgreSQL core is the\nright place for implementing this functionality. For instance, one\ncould argue that the log message should just contain sqlstate and be\ndirected to |grep instead.\n\nI suspect this could be one of \"there is no one size fits all\"\nsituations. The typical solution in such cases is to form a structure\ncontaining the log message and its attributes and submit this\nstructure to a registered hook of a pluggable logging subsystem. This\nwould be the most flexible approach. It will allow not only filtering\nthe messages but also using binary logging, JSON logging, logging to\nexternal systems like Loki instead of a given text file, etc.\n\nI don't think we currently have this in the core, but maybe I just missed it.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 6 Mar 2024 17:09:47 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 7:55 AM Laurenz Albe <[email protected]>\nwrote:\n\n> My experience from the field is that a lot of log spam looks like\n>\n> database/table/... \"xy\" does not exist\n> duplicate key value violates unique constraint \"xy\"\n\n\nForcibly hiding those at the Postgres level seems a heavy hammer for what\nis ultimately an application problem.\n\nTell me about a system that logs different classes of errors to different\nlog files, and I'm interested again.\n\nCheers,\nGreg\n\nOn Tue, Mar 5, 2024 at 7:55 AM Laurenz Albe <[email protected]> wrote:My experience from the field is that a lot of log spam looks like\n\n database/table/... \"xy\" does not exist\n duplicate key value violates unique constraint \"xy\"Forcibly hiding those at the Postgres level seems a heavy hammer for what is ultimately an application problem.Tell me about a system that logs different classes of errors to different log files, and I'm interested again.Cheers,Greg",
"msg_date": "Wed, 6 Mar 2024 10:50:14 -0500",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Wed, 2024-03-06 at 17:09 +0300, Aleksander Alekseev wrote:\n> I like the idea of suppressing certain log messages in general, but\n> the particular user interface doesn't strike me as an especially\n> convenient one.\n> \n> Firstly I don't think many people remember sqlstates and what 3F000\n> stands for. IMO most users don't know such a thing exists. Secondly,\n> whether we should list sqlstates to suppress or the opposite - list\n> the states that shouldn't be suppressed, is a debatable question. Last\n> but not least, it's not quite clear whether PostgreSQL core is the\n> right place for implementing this functionality. For instance, one\n> could argue that the log message should just contain sqlstate and be\n> directed to |grep instead.\n> \n> I suspect this could be one of \"there is no one size fits all\"\n> situations. The typical solution in such cases is to form a structure\n> containing the log message and its attributes and submit this\n> structure to a registered hook of a pluggable logging subsystem. This\n> would be the most flexible approach. It will allow not only filtering\n> the messages but also using binary logging, JSON logging, logging to\n> external systems like Loki instead of a given text file, etc.\n> \n> I don't think we currently have this in the core, but maybe I just missed it.\n\nThe target would not primarily be installations where people configure\nnifty logging software to filter logs (those people know how to deal\nwith log spam), but installations where people don't even know enough\nto configure \"shared_buffers\". So I'd like something that is part of\ncore and reduces spam without the user needing to configure anything.\n\nI am somewhat worried that people will come up with all kinds of\njustified but complicated wishes for such a feature:\n\n- an option to choose whether to include or to exclude certain errors\n- be able to configure that certain errors be logged on FATAL, but\n not on ERROR\n- allow exception names in addition to SQL states\n- have wildcards for exception names\n- ...\n\nI would like to write a simple patch that covers the basic functionality\nI described, provided enough people find it useful. That does not\nexclude the option for future extensions for this feature.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 06 Mar 2024 17:01:12 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Wed, 2024-03-06 at 10:50 -0500, Greg Sabino Mullane wrote:\n> On Tue, Mar 5, 2024 at 7:55 AM Laurenz Albe <[email protected]> wrote:\n> > My experience from the field is that a lot of log spam looks like\n> > \n> > database/table/... \"xy\" does not exist\n> > duplicate key value violates unique constraint \"xy\"\n> \n> Forcibly hiding those at the Postgres level seems a heavy hammer for what is ultimately an application problem.\n\nYes... or no. Lots of applications violate constraints routinely.\nAs long as the error is caught and handled, that's not a problem.\n\nWhoever cares about the log messages can enable them. My impression\nis that most people don't care about them.\n\nBut thanks for your opinion.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 06 Mar 2024 21:31:17 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Tue, 5 Mar 2024 at 07:55, Laurenz Albe <[email protected]> wrote:\n\n> Inspired by feedback to [1], I thought about how to reduce log spam.\n>\n> My experience from the field is that a lot of log spam looks like\n>\n> database/table/... \"xy\" does not exist\n> duplicate key value violates unique constraint \"xy\"\n>\n> So what about a parameter \"log_suppress_sqlstates\" that suppresses\n> logging ERROR and FATAL messages with the enumerated SQL states?\n>\n> My idea for a default setting would be something like\n>\n> log_suppress_sqlstates = '23505,3D000,3F000,42601,42704,42883,42P01'\n>\n> but that's of course bikeshedding territory.\n>\n\nI like the basic idea and the way of specifying states seems likely to\ncover a lot of typical use cases. Of course in principle the application\nshould be fixed, but in practice we can't always control that.\n\nI have two questions about this:\n\nFirst, can it be done per role? If I have a particular application which is\nconstantly throwing some particular error, I might want to suppress it, but\nnot suppress the same error occasionally coming from another application. I\nsee ALTER DATABASE name SET configuration_parameter … as being useful here,\nbut often multiple applications share a database.\n\nSecond, where can this setting be adjusted? Can any session turn off\nlogging of arbitrary sets of sqlstates resulting from its queries? It feels\nto me like that might allow security problems to be hidden. Specifically,\nthe first thing an SQL injection might do would be to turn off logging of\nimportant error states, then proceed to try various nefarious things.\n\nIt seems to me the above questions interact; an answer to the first might\nbe \"ALTER ROLE role_specification SET configuration_parameter\", but I think\nthat would allow roles to change their own settings, contrary to the\nconcern raised by the second question.\n\nOn Tue, 5 Mar 2024 at 07:55, Laurenz Albe <[email protected]> wrote:Inspired by feedback to [1], I thought about how to reduce log spam.\n\nMy experience from the field is that a lot of log spam looks like\n\n database/table/... \"xy\" does not exist\n duplicate key value violates unique constraint \"xy\"\n\nSo what about a parameter \"log_suppress_sqlstates\" that suppresses\nlogging ERROR and FATAL messages with the enumerated SQL states?\n\nMy idea for a default setting would be something like\n\n log_suppress_sqlstates = '23505,3D000,3F000,42601,42704,42883,42P01'\n\nbut that's of course bikeshedding territory.I like the basic idea and the way of specifying states seems likely to cover a lot of typical use cases. Of course in principle the application should be fixed, but in practice we can't always control that.I have two questions about this:First, can it be done per role? If I have a particular application which is constantly throwing some particular error, I might want to suppress it, but not suppress the same error occasionally coming from another application. I see ALTER DATABASE name SET configuration_parameter … as being useful here, but often multiple applications share a database.Second, where can this setting be adjusted? Can any session turn off logging of arbitrary sets of sqlstates resulting from its queries? It feels to me like that might allow security problems to be hidden. 
Specifically, the first thing an SQL injection might do would be to turn off logging of important error states, then proceed to try various nefarious things.It seems to me the above questions interact; an answer to the first might be \"ALTER ROLE role_specification SET configuration_parameter\", but I think that would allow roles to change their own settings, contrary to the concern raised by the second question.",
"msg_date": "Wed, 6 Mar 2024 17:33:33 -0500",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Wed, 2024-03-06 at 17:33 -0500, Isaac Morland wrote:\n> I have two questions about this:\n> \n> First, can it be done per role? If I have a particular application which is\n> constantly throwing some particular error, I might want to suppress it, but\n> not suppress the same error occasionally coming from another application.\n> I see ALTER DATABASE name SET configuration_parameter … as being useful here,\n> but often multiple applications share a database.\n>\n> Second, where can this setting be adjusted? Can any session turn off logging\n> of arbitrary sets of sqlstates resulting from its queries? It feels to me\n> like that might allow security problems to be hidden. Specifically, the first\n> thing an SQL injection might do would be to turn off logging of important\n> error states, then proceed to try various nefarious things.\n\nI was envisioning the parameter to be like other logging parameters, for\nexample \"log_statement\": only superusers can set the parameter or GRANT\nthat privilege to others. Also, a superuser could use ALTER ROLE to set\nthe parameter for all sessions by that role.\n\n> It seems to me the above questions interact; an answer to the first might be\n> \"ALTER ROLE role_specification SET configuration_parameter\", but I think that\n> would allow roles to change their own settings, contrary to the concern\n> raised by the second question.\n\nIf a superuser sets \"log_statement\" on a role, that role cannot undo or change\nthe setting. That's just how I plan to implement the new parameter.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 07 Mar 2024 08:30:59 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the log spam"
},
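A sketch of the per-role administration pattern described above: log_statement is an existing superuser-only parameter used here for comparison, log_suppress_sqlstates is the proposed (hypothetical) one, and app_user / trusted_admin are invented role names.

-- Existing superuser-only logging parameter, pinned per role by a superuser:
ALTER ROLE app_user SET log_statement = 'none';

-- The proposed parameter would follow the same pattern:
ALTER ROLE app_user SET log_suppress_sqlstates = '23505,42P01';

-- Optionally delegate the ability to change it (PostgreSQL 15+ syntax):
GRANT SET ON PARAMETER log_suppress_sqlstates TO trusted_admin;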
{
"msg_contents": "On Thu, 2024-03-07 at 08:30 +0100, Laurenz Albe wrote:\n> On Wed, 2024-03-06 at 17:33 -0500, Isaac Morland wrote:\n> > I have two questions about this:\n> > \n> > First, can it be done per role? If I have a particular application which is\n> > constantly throwing some particular error, I might want to suppress it, but\n> > not suppress the same error occasionally coming from another application.\n> > I see ALTER DATABASE name SET configuration_parameter … as being useful here,\n> > but often multiple applications share a database.\n> > \n> > Second, where can this setting be adjusted? Can any session turn off logging\n> > of arbitrary sets of sqlstates resulting from its queries? It feels to me\n> > like that might allow security problems to be hidden. Specifically, the first\n> > thing an SQL injection might do would be to turn off logging of important\n> > error states, then proceed to try various nefarious things.\n> \n> I was envisioning the parameter to be like other logging parameters, for\n> example \"log_statement\": only superusers can set the parameter or GRANT\n> that privilege to others. Also, a superuser could use ALTER ROLE to set\n> the parameter for all sessions by that role.\n> \n> > It seems to me the above questions interact; an answer to the first might be\n> > \"ALTER ROLE role_specification SET configuration_parameter\", but I think that\n> > would allow roles to change their own settings, contrary to the concern\n> > raised by the second question.\n> \n> If a superuser sets \"log_statement\" on a role, that role cannot undo or change\n> the setting. That's just how I plan to implement the new parameter.\n\nHere is a patch that implements this.\n\nI went with \"log_suppress_errcodes\", since the term \"error code\" is used\nthroughout our documentation.\n\nThe initial value is 23505,3D000,3F000,42601,42704,42883,42P01,57P03\n\nYours,\nLaurenz Albe",
"msg_date": "Sat, 09 Mar 2024 14:03:55 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Sat, 2024-03-09 at 14:03 +0100, Laurenz Albe wrote:\n> Here is a patch that implements this.\n\nAnd here is patch v2 that fixes a bug and passes the regression tests.\n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 11 Mar 2024 03:43:58 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "- the subscriber's server log.\n+ the subscriber's server log if you remove <literal>23505</literal> from\n+ <xref linkend=\"guc-log-suppress-errcodes\"/>.\n\nThis seems like a pretty big regression. Being able to know why your\nreplication got closed seems pretty critical.\n\nOn Mon, 11 Mar 2024 at 03:44, Laurenz Albe <[email protected]> wrote:\n>\n> On Sat, 2024-03-09 at 14:03 +0100, Laurenz Albe wrote:\n> > Here is a patch that implements this.\n>\n> And here is patch v2 that fixes a bug and passes the regression tests.\n>\n> Yours,\n> Laurenz Albe\n\n\n",
"msg_date": "Mon, 11 Mar 2024 09:33:48 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Mon, 2024-03-11 at 09:33 +0100, Jelte Fennema-Nio wrote:\n> - the subscriber's server log.\n> + the subscriber's server log if you remove <literal>23505</literal> from\n> + <xref linkend=\"guc-log-suppress-errcodes\"/>.\n> \n> This seems like a pretty big regression. Being able to know why your\n> replication got closed seems pretty critical.\n\nThe actual SQLSTATEs that get suppressed are subject to discussion\n(an I have a gut feeling that some people will want the list empty).\n\nAs far as this specific functionality is concerned, I think that the\nactual problem is a deficiency in PostgreSQL. The problem is that\nthe log is the *only* place where you can get this information. That\nwill be a problem for many people, even without \"log_suppress_errcodes\".\n\nI think that this isformation should be available in some statistics\nview.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 11 Mar 2024 12:18:01 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Mon, 2024-03-11 at 09:33 +0100, Jelte Fennema-Nio wrote:\n> - the subscriber's server log.\n> + the subscriber's server log if you remove <literal>23505</literal> from\n> + <xref linkend=\"guc-log-suppress-errcodes\"/>.\n> \n> This seems like a pretty big regression. Being able to know why your\n> replication got closed seems pretty critical.\n\nYes. But I'd argue that that is a shortcoming of logical replication:\nthere should be a ways to get this information via SQL. Having to look into\nthe log file is not a very useful option.\n\nThe feature will become much less useful if unique voilations keep getting logged.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 02 May 2024 12:47:45 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Thu, 2 May 2024 at 12:47, Laurenz Albe <[email protected]> wrote:\n> Yes. But I'd argue that that is a shortcoming of logical replication:\n> there should be a ways to get this information via SQL. Having to look into\n> the log file is not a very useful option.\n\nDefinitely agreed that accessing the error details using SQL would be\nmuch better. But having no way at all (by default) to find the cause\nof the failure is clearly much worse.\n\n> The feature will become much less useful if unique voilations keep getting logged.\n\nAgreed. How about changing the patch so that the GUC is not applied to\nlogical replication apply workers (and document that accordingly). I\ncan think of two ways of achieving that (but there might be\nother/better ones):\n1. Set the GUC to empty string when an apply worker is started.\n2. Change the newly added check in errcode() to only set\noutput_to_server to false when IsLogicalWorker() returns false.\n\n\n",
"msg_date": "Thu, 2 May 2024 13:08:30 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Thu, 2 May 2024 at 13:08, Jelte Fennema-Nio <[email protected]> wrote:\n> 2. Change the newly added check in errcode() to only set\n> output_to_server to false when IsLogicalWorker() returns false.\n\nActually a third, and probably even better solution would be to only\napply this new GUC to non-backgroundworker processes. That seems quite\nreasonable, since often the only way to access background worker\nerrors is often through the logs.\n\n\n",
"msg_date": "Thu, 2 May 2024 13:11:44 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Thu, 2024-05-02 at 13:11 +0200, Jelte Fennema-Nio wrote:\n> On Thu, 2 May 2024 at 13:08, Jelte Fennema-Nio <[email protected]> wrote:\n> > 2. Change the newly added check in errcode() to only set\n> > output_to_server to false when IsLogicalWorker() returns false.\n> \n> Actually a third, and probably even better solution would be to only\n> apply this new GUC to non-backgroundworker processes. That seems quite\n> reasonable, since often the only way to access background worker\n> errors is often through the logs.\n\nThat is a good idea. This version only suppresses error messages\nin ordinary backend processes.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 03 May 2024 14:49:38 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the log spam"
},
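To illustrate where such a restriction would live, here is a rough sketch of the kind of test being discussed in the error-reporting path. It is not the actual patch: errcode_is_suppressed() is a hypothetical helper standing in for the list lookup, while MyBackendType, B_BACKEND and the output_to_server field of ErrorData are real PostgreSQL names (miscadmin.h, elog.h).

/* Sketch only: suppress server-log output in ordinary client backends. */
if ((edata->elevel == ERROR || edata->elevel == FATAL) &&
	MyBackendType == B_BACKEND &&
	errcode_is_suppressed(edata->sqlerrcode))	/* hypothetical list lookup */
	edata->output_to_server = false;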
{
"msg_contents": "On Thu, May 02, 2024 at 12:47:45PM +0200, Laurenz Albe wrote:\n> On Mon, 2024-03-11 at 09:33 +0100, Jelte Fennema-Nio wrote:\n> > -�� the subscriber's server log.\n> > +�� the subscriber's server log if you remove <literal>23505</literal> from\n> > +�� <xref linkend=\"guc-log-suppress-errcodes\"/>.\n> > \n> > This seems like a pretty big regression. Being able to know why your\n> > replication got closed seems pretty critical.\n> \n> Yes. But I'd argue that that is a shortcoming of logical replication:\n> there should be a ways to get this information via SQL. Having to look into\n> the log file is not a very useful option.\n> \n> The feature will become much less useful if unique voilations keep getting logged.\n\nUh, to be clear, your patch is changing the *defaults*, which I found\nsurprising, even after reaading the thread. Evidently, the current\nbehavior is not what you want, and you want to change it, but I'm *sure*\nthat whatever default you want to use at your site/with your application\nis going to make someone else unhappy. I surely want unique violations\nto be logged, for example.\n\n> @@ -6892,6 +6892,41 @@ local0.* /var/log/postgresql\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry id=\"guc-log-suppress-errcodes\" xreflabel=\"log_suppress_errcodes\">\n> + <term><varname>log_suppress_errcodes</varname> (<type>string</type>)\n> + <indexterm>\n> + <primary><varname>log_suppress_errcodes</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Causes <literal>ERROR</literal> and <literal>FATAL</literal> messages\n> + from client backend processes with certain error codes to be excluded\n> + from the log.\n> + The value is a comma-separated list of five-character error codes as\n> + listed in <xref linkend=\"errcodes-appendix\"/>. An error code that\n> + represents a class of errors (ends with three zeros) suppresses logging\n> + of all error codes within that class. For example, the entry\n> + <literal>08000</literal> (<literal>connection_exception</literal>)\n> + would suppress an error with code <literal>08P01</literal>\n> + (<literal>protocol_violation</literal>). The default setting is\n> + <literal>23505,3D000,3F000,42601,42704,42883,42P01,57P03</literal>.\n> + Only superusers and users with the appropriate <literal>SET</literal>\n> + privilege can change this setting.\n> + </para>\n\n> +\n> + <para>\n> + This setting is useful to exclude error messages from the log that are\n> + frequent but irrelevant.\n\nI think you should phrase the feature as \".. *allow* skipping error\nlogging for messages ... that are frequent but irrelevant for a given\nsite/role/DB/etc.\"\n\nI suggest that this patch should not change the default behavior at all:\nits default should be empty. That you, personally, would plan to\nexclude this or that error code is pretty uninteresting. I think the\nidea of changing the default behavior will kill the patch, and even if\nyou want to propose to do that, it should be a separate discussion.\nMaybe you should make it an 002 patch.\n\n> +\t\t{\"log_suppress_errcodes\", PGC_SUSET, LOGGING_WHEN,\n> +\t\t\tgettext_noop(\"ERROR and FATAL messages with these error codes don't get logged.\"),\n> +\t\t\tNULL,\n> +\t\t\tGUC_LIST_INPUT\n> +\t\t},\n> +\t\t&log_suppress_errcodes,\n> +\t\tDEFAULT_LOG_SUPPRESS_ERRCODES,\n> +\t\tcheck_log_suppress_errcodes, assign_log_suppress_errcodes, NULL\n\n> +/*\n> + * Default value for log_suppress_errcodes. 
ERROR or FATAL messages with\n> + * these error codes are never logged. Error classes (error codes ending with\n> + * three zeros) match all error codes in the class. The idea is to suppress\n> + * messages that usually don't indicate a serious problem but tend to pollute\n> + * the log file.\n> + */\n> +\n> +#define DEFAULT_LOG_SUPPRESS_ERRCODES \"23505,3D000,3F000,42601,42704,42883,42P01,57P03\"\n> +\n\n../src/backend/utils/errcodes.txt:23505 E ERRCODE_UNIQUE_VIOLATION unique_violation\n../src/backend/utils/errcodes.txt:3D000 E ERRCODE_INVALID_CATALOG_NAME invalid_catalog_name\n../src/backend/utils/errcodes.txt:3F000 E ERRCODE_INVALID_SCHEMA_NAME invalid_schema_name\n../src/backend/utils/errcodes.txt:42601 E ERRCODE_SYNTAX_ERROR syntax_error\n../src/backend/utils/errcodes.txt:3D000 E ERRCODE_UNDEFINED_DATABASE\n../src/backend/utils/errcodes.txt:42883 E ERRCODE_UNDEFINED_FUNCTION undefined_function\n../src/backend/utils/errcodes.txt:3F000 E ERRCODE_UNDEFINED_SCHEMA\n../src/backend/utils/errcodes.txt:42P01 E ERRCODE_UNDEFINED_TABLE undefined_table\n../src/backend/utils/errcodes.txt:42704 E ERRCODE_UNDEFINED_OBJECT undefined_object\n../src/backend/utils/errcodes.txt:57P03 E ERRCODE_CANNOT_CONNECT_NOW cannot_connect_now\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 17 Jun 2024 16:40:01 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "On Mon, 2024-06-17 at 16:40 -0500, Justin Pryzby wrote:\n> > The feature will become much less useful if unique voilations keep getting logged.\n> \n> Uh, to be clear, your patch is changing the *defaults*, which I found\n> surprising, even after reaading the thread. Evidently, the current\n> behavior is not what you want, and you want to change it, but I'm *sure*\n> that whatever default you want to use at your site/with your application\n> is going to make someone else unhappy. I surely want unique violations\n> to be logged, for example.\n\nI was afraid that setting the default non-empty would cause objections.\n\n> > + <para>\n> > + This setting is useful to exclude error messages from the log that are\n> > + frequent but irrelevant.\n> \n> I think you should phrase the feature as \".. *allow* skipping error\n> logging for messages ... that are frequent but irrelevant for a given\n> site/role/DB/etc.\"\n\nI have reworded that part.\n\n> I suggest that this patch should not change the default behavior at all:\n> its default should be empty. That you, personally, would plan to\n> exclude this or that error code is pretty uninteresting. I think the\n> idea of changing the default behavior will kill the patch, and even if\n> you want to propose to do that, it should be a separate discussion.\n> Maybe you should make it an 002 patch.\n\nI have attached a new version that leaves the parameter empty by default.\n\nThe patch is not motivated by my personal dislike of certain error messages.\nA well-written application would not need that parameter at all.\nThe motivation for me is based on my dealing with customers' log files,\nwhich are often full of messages that are only distracting from serious\nproblems and fill up the disk.\n\nBut you are probably right that it would be hard to find a default setting\nthat nobody has quibbles with, and the default can always be changed with\na future patch.\n\nYours,\nLaurenz Albe",
"msg_date": "Tue, 18 Jun 2024 18:49:36 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "Hello Laurenz,\n\nI liked the idea for this patch. I will also go for the default being\nan empty string.\nI went through this patch and have some comments on the code,\n\n1. In general, I don't like the idea of goto, maybe we can have a\nfree_something function to call here.\n\n2.\nif (!SplitIdentifierString(new_copy, ',', &states))\n{\nGUC_check_errdetail(\"List syntax is invalid.\");\ngoto failed;\n}\nHere, we don't need all that free-ing, we can just return false here.\n\n3.\n/*\n* Check the the values are alphanumeric and convert them to upper case\n* (SplitIdentifierString converted them to lower case).\n*/\nfor (p = state; *p != '\\0'; p++)\nif (*p >= 'a' && *p <= 'z')\n*p += 'A' - 'a';\nelse if (*p < '0' || *p > '9')\n{\nGUC_check_errdetail(\"error codes can only contain digits and ASCII letters.\");\ngoto failed;\n}\nI was thinking, maybe we can use tolower() function here.\n\n4.\nlist_free(states);\npfree(new_copy);\n\n*extra = statelist;\nreturn true;\n\nfailed:\nlist_free(states);\npfree(new_copy);\nguc_free(statelist);\nreturn false;\n\nThis looks like duplication that can be easily avoided.\nYou may have free_somthing function to do all free-ing stuff only and\nits callee can then have a return statement.\ne.g for here,\nfree_states(states, new_copy, statelist);\nreturn true;\n\n5. Also, for alphanumeric check, maybe we can have something like,\nif (isalnum(*state) == 0)\n{\nGUC_check_errdetail(\"error codes can only contain digits and ASCII letters.\");\ngoto failed;\n}\nand we can do this in the beginning after the len check.\n\nOn Tue, 18 Jun 2024 at 18:49, Laurenz Albe <[email protected]> wrote:\n>\n> On Mon, 2024-06-17 at 16:40 -0500, Justin Pryzby wrote:\n> > > The feature will become much less useful if unique voilations keep getting logged.\n> >\n> > Uh, to be clear, your patch is changing the *defaults*, which I found\n> > surprising, even after reaading the thread. Evidently, the current\n> > behavior is not what you want, and you want to change it, but I'm *sure*\n> > that whatever default you want to use at your site/with your application\n> > is going to make someone else unhappy. I surely want unique violations\n> > to be logged, for example.\n>\n> I was afraid that setting the default non-empty would cause objections.\n>\n> > > + <para>\n> > > + This setting is useful to exclude error messages from the log that are\n> > > + frequent but irrelevant.\n> >\n> > I think you should phrase the feature as \".. *allow* skipping error\n> > logging for messages ... that are frequent but irrelevant for a given\n> > site/role/DB/etc.\"\n>\n> I have reworded that part.\n>\n> > I suggest that this patch should not change the default behavior at all:\n> > its default should be empty. That you, personally, would plan to\n> > exclude this or that error code is pretty uninteresting. 
I think the\n> > idea of changing the default behavior will kill the patch, and even if\n> > you want to propose to do that, it should be a separate discussion.\n> > Maybe you should make it an 002 patch.\n>\n> I have attached a new version that leaves the parameter empty by default.\n>\n> The patch is not motivated by my personal dislike of certain error messages.\n> A well-written application would not need that parameter at all.\n> The motivation for me is based on my dealing with customers' log files,\n> which are often full of messages that are only distracting from serious\n> problems and fill up the disk.\n>\n> But you are probably right that it would be hard to find a default setting\n> that nobody has quibbles with, and the default can always be changed with\n> a future patch.\n>\n> Yours,\n> Laurenz Albe\n\n\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Wed, 24 Jul 2024 15:27:15 +0200",
"msg_from": "Rafia Sabih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
},
{
"msg_contents": "Thanks for the review!\n\nOn Wed, 2024-07-24 at 15:27 +0200, Rafia Sabih wrote:\n> I liked the idea for this patch. I will also go for the default being\n> an empty string.\n> I went through this patch and have some comments on the code,\n> \n> 1. In general, I don't like the idea of goto, maybe we can have a\n> free_something function to call here.\n\nThe PostgreSQL code base has over 3000 goto's...\n\nSure, that can be factored out to a function (except the final \"return\"),\nbut I feel that a function for three \"free\" calls is code bloat.\n\nDo you think that avoiding the goto and using a function would make the\ncode simpler and clearer?\n\n> 2.\n> if (!SplitIdentifierString(new_copy, ',', &states))\n> {\n> GUC_check_errdetail(\"List syntax is invalid.\");\n> goto failed;\n> }\n> Here, we don't need all that free-ing, we can just return false here.\n\nI am OK with changing that; I had thought it was more clearer and more\nfoolproof to use the same pattern everywhere.\n\n> 3.\n> /*\n> * Check the the values are alphanumeric and convert them to upper case\n> * (SplitIdentifierString converted them to lower case).\n> */\n> for (p = state; *p != '\\0'; p++)\n> if (*p >= 'a' && *p <= 'z')\n> *p += 'A' - 'a';\n> else if (*p < '0' || *p > '9')\n> {\n> GUC_check_errdetail(\"error codes can only contain digits and ASCII letters.\");\n> goto failed;\n> }\n> I was thinking, maybe we can use tolower() function here.\n\nThat is a good idea, but these C library respect the current locale.\nI would have to explicitly specify the C locale or switch the locale\ntemporarily.\n\nSwitching the locale seems clumsy, and I have no idea what I would have\nto feed as second argument to toupper_l() or isalnum_l().\nDo you have an idea?\n\n> 4.\n> list_free(states);\n> pfree(new_copy);\n> \n> *extra = statelist;\n> return true;\n> \n> failed:\n> list_free(states);\n> pfree(new_copy);\n> guc_free(statelist);\n> return false;\n> \n> This looks like duplication that can be easily avoided.\n> You may have free_somthing function to do all free-ing stuff only and\n> its callee can then have a return statement.\n> e.g for here,\n> free_states(states, new_copy, statelist);\n> return true;\n\nThat free_states() function would just contain two function calls.\nI think that defining a special function for that is somewhat out of\nproportion.\n\n> 5. Also, for alphanumeric check, maybe we can have something like,\n> if (isalnum(*state) == 0)\n> {\n> GUC_check_errdetail(\"error codes can only contain digits and ASCII letters.\");\n> goto failed;\n> }\n> and we can do this in the beginning after the len check.\n\nisalnum() operates on a single character and depends on the current locale.\nSee my comments to 3. above.\n\n\nPlease let me know what you think, particularly about the locale problem.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 25 Jul 2024 18:03:23 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the log spam"
},
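To illustrate the locale concern discussed above: a locale-independent, ASCII-only check and upper-casing can be written without ctype.h at all, which sidesteps tolower()/isalnum() entirely. This is only a sketch with an invented name, not the patch's code.

#include <stdbool.h>

/* Sketch: validate an error-code string and upper-case it, ASCII only. */
static bool
normalize_errcode_sketch(char *code)
{
	for (char *p = code; *p != '\0'; p++)
	{
		if (*p >= 'a' && *p <= 'z')
			*p += 'A' - 'a';	/* ASCII upper-casing, no locale involved */
		else if (!((*p >= 'A' && *p <= 'Z') || (*p >= '0' && *p <= '9')))
			return false;		/* reject anything but ASCII letters and digits */
	}
	return true;
}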
{
"msg_contents": "On Thu, 25 Jul 2024 at 18:03, Laurenz Albe <[email protected]> wrote:\n\n> Thanks for the review!\n>\n> On Wed, 2024-07-24 at 15:27 +0200, Rafia Sabih wrote:\n> > I liked the idea for this patch. I will also go for the default being\n> > an empty string.\n> > I went through this patch and have some comments on the code,\n> >\n> > 1. In general, I don't like the idea of goto, maybe we can have a\n> > free_something function to call here.\n>\n> The PostgreSQL code base has over 3000 goto's...\n>\n> Sure, that can be factored out to a function (except the final \"return\"),\n> but I feel that a function for three \"free\" calls is code bloat.\n>\n> On a detailed look over this, you are right Laurenz about this.\n\n> Do you think that avoiding the goto and using a function would make the\n> code simpler and clearer?\n>\n> > 2.\n> > if (!SplitIdentifierString(new_copy, ',', &states))\n> > {\n> > GUC_check_errdetail(\"List syntax is invalid.\");\n> > goto failed;\n> > }\n> > Here, we don't need all that free-ing, we can just return false here.\n>\n> I am OK with changing that; I had thought it was more clearer and more\n> foolproof to use the same pattern everywhere.\n>\n> > 3.\n> > /*\n> > * Check the the values are alphanumeric and convert them to upper case\n> > * (SplitIdentifierString converted them to lower case).\n> > */\n> > for (p = state; *p != '\\0'; p++)\n> > if (*p >= 'a' && *p <= 'z')\n> > *p += 'A' - 'a';\n> > else if (*p < '0' || *p > '9')\n> > {\n> > GUC_check_errdetail(\"error codes can only contain digits and ASCII\n> letters.\");\n> > goto failed;\n> > }\n> > I was thinking, maybe we can use tolower() function here.\n>\n> That is a good idea, but these C library respect the current locale.\n> I would have to explicitly specify the C locale or switch the locale\n> temporarily.\n>\nHmm. actually I don't have any good answers to this locale issue.\n\n>\n> Switching the locale seems clumsy, and I have no idea what I would have\n> to feed as second argument to toupper_l() or isalnum_l().\n> Do you have an idea?\n>\n> > 4.\n> > list_free(states);\n> > pfree(new_copy);\n> >\n> > *extra = statelist;\n> > return true;\n> >\n> > failed:\n> > list_free(states);\n> > pfree(new_copy);\n> > guc_free(statelist);\n> > return false;\n> >\n> > This looks like duplication that can be easily avoided.\n> > You may have free_somthing function to do all free-ing stuff only and\n> > its callee can then have a return statement.\n> > e.g for here,\n> > free_states(states, new_copy, statelist);\n> > return true;\n>\n> That free_states() function would just contain two function calls.\n> I think that defining a special function for that is somewhat out of\n> proportion.\n>\n> > 5. Also, for alphanumeric check, maybe we can have something like,\n> > if (isalnum(*state) == 0)\n> > {\n> > GUC_check_errdetail(\"error codes can only contain digits and ASCII\n> letters.\");\n> > goto failed;\n> > }\n> > and we can do this in the beginning after the len check.\n>\n> isalnum() operates on a single character and depends on the current locale.\n> See my comments to 3. above.\n>\n>\n> Please let me know what you think, particularly about the locale problem.\n>\n> Yours,\n> Laurenz Albe\n>\n\n\n-- \nRegards,\nRafia Sabih\n\nOn Thu, 25 Jul 2024 at 18:03, Laurenz Albe <[email protected]> wrote:Thanks for the review!\n\nOn Wed, 2024-07-24 at 15:27 +0200, Rafia Sabih wrote:\n> I liked the idea for this patch. 
I will also go for the default being\n> an empty string.\n> I went through this patch and have some comments on the code,\n> \n> 1. In general, I don't like the idea of goto, maybe we can have a\n> free_something function to call here.\n\nThe PostgreSQL code base has over 3000 goto's...\n\nSure, that can be factored out to a function (except the final \"return\"),\nbut I feel that a function for three \"free\" calls is code bloat.\nOn a detailed look over this, you are right Laurenz about this. \nDo you think that avoiding the goto and using a function would make the\ncode simpler and clearer?\n\n> 2.\n> if (!SplitIdentifierString(new_copy, ',', &states))\n> {\n> GUC_check_errdetail(\"List syntax is invalid.\");\n> goto failed;\n> }\n> Here, we don't need all that free-ing, we can just return false here.\n\nI am OK with changing that; I had thought it was more clearer and more\nfoolproof to use the same pattern everywhere.\n\n> 3.\n> /*\n> * Check the the values are alphanumeric and convert them to upper case\n> * (SplitIdentifierString converted them to lower case).\n> */\n> for (p = state; *p != '\\0'; p++)\n> if (*p >= 'a' && *p <= 'z')\n> *p += 'A' - 'a';\n> else if (*p < '0' || *p > '9')\n> {\n> GUC_check_errdetail(\"error codes can only contain digits and ASCII letters.\");\n> goto failed;\n> }\n> I was thinking, maybe we can use tolower() function here.\n\nThat is a good idea, but these C library respect the current locale.\nI would have to explicitly specify the C locale or switch the locale\ntemporarily.Hmm. actually I don't have any good answers to this locale issue. \n\nSwitching the locale seems clumsy, and I have no idea what I would have\nto feed as second argument to toupper_l() or isalnum_l().\nDo you have an idea?\n\n> 4.\n> list_free(states);\n> pfree(new_copy);\n> \n> *extra = statelist;\n> return true;\n> \n> failed:\n> list_free(states);\n> pfree(new_copy);\n> guc_free(statelist);\n> return false;\n> \n> This looks like duplication that can be easily avoided.\n> You may have free_somthing function to do all free-ing stuff only and\n> its callee can then have a return statement.\n> e.g for here,\n> free_states(states, new_copy, statelist);\n> return true;\n\nThat free_states() function would just contain two function calls.\nI think that defining a special function for that is somewhat out of\nproportion.\n\n> 5. Also, for alphanumeric check, maybe we can have something like,\n> if (isalnum(*state) == 0)\n> {\n> GUC_check_errdetail(\"error codes can only contain digits and ASCII letters.\");\n> goto failed;\n> }\n> and we can do this in the beginning after the len check.\n\nisalnum() operates on a single character and depends on the current locale.\nSee my comments to 3. above.\n\n\nPlease let me know what you think, particularly about the locale problem.\n\nYours,\nLaurenz Albe\n-- Regards,Rafia Sabih",
"msg_date": "Thu, 15 Aug 2024 19:52:55 +0200",
"msg_from": "Rafia Sabih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the log spam"
}
] |
[
{
"msg_contents": "Hi,\n\nIn the current code of do_watch(), sigsetjmp is called if WIN32\nis defined, but siglongjmp is not called in the signal handler\nin this condition. On Windows, currently, cancellation is checked\nonly by cancel_pressed, and calling sigsetjmp in do_watch() is\nunnecessary. Therefore, we can remove code around sigsetjmp in\ndo_watch(). I've attached the patch for this fix.\n\nRegards,\nYugo Ngata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Tue, 5 Mar 2024 22:05:52 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove unnecessary code from psql's watch command"
},
{
"msg_contents": "On Tue, Mar 05, 2024 at 10:05:52PM +0900, Yugo NAGATA wrote:\n> In the current code of do_watch(), sigsetjmp is called if WIN32\n> is defined, but siglongjmp is not called in the signal handler\n> in this condition. On Windows, currently, cancellation is checked\n> only by cancel_pressed, and calling sigsetjmp in do_watch() is\n> unnecessary. Therefore, we can remove code around sigsetjmp in\n> do_watch(). I've attached the patch for this fix.\n\nRe-reading the top comment of sigint_interrupt_enabled, it looks like\nyou're right here. As long as we check for cancel_pressed there\nshould be no need for any special cancellation handling here.\n--\nMichael",
"msg_date": "Wed, 6 Mar 2024 08:11:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove unnecessary code from psql's watch command"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Tue, Mar 05, 2024 at 10:05:52PM +0900, Yugo NAGATA wrote:\n>> In the current code of do_watch(), sigsetjmp is called if WIN32\n>> is defined, but siglongjmp is not called in the signal handler\n>> in this condition. On Windows, currently, cancellation is checked\n>> only by cancel_pressed, and calling sigsetjmp in do_watch() is\n>> unnecessary. Therefore, we can remove code around sigsetjmp in\n>> do_watch(). I've attached the patch for this fix.\n\n> Re-reading the top comment of sigint_interrupt_enabled, it looks like\n> you're right here. As long as we check for cancel_pressed there\n> should be no need for any special cancellation handling here.\n\nI don't have Windows here to test on, but does the WIN32 code\npath work at all? It looks to me like cancel_pressed becoming\ntrue doesn't get us to exit the outer loop, only the inner delay\none, meaning that trying to control-C out of a \\watch will just\ncause it to repeat the command even faster. That path isn't\nsetting the \"done\" variable, and it isn't checking it either,\nbecause all of that only happens in the other #ifdef arm.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Mar 2024 13:03:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove unnecessary code from psql's watch command"
},
{
"msg_contents": "On Wed, 06 Mar 2024 13:03:39 -0500\nTom Lane <[email protected]> wrote:\n\n> Michael Paquier <[email protected]> writes:\n> > On Tue, Mar 05, 2024 at 10:05:52PM +0900, Yugo NAGATA wrote:\n> >> In the current code of do_watch(), sigsetjmp is called if WIN32\n> >> is defined, but siglongjmp is not called in the signal handler\n> >> in this condition. On Windows, currently, cancellation is checked\n> >> only by cancel_pressed, and calling sigsetjmp in do_watch() is\n> >> unnecessary. Therefore, we can remove code around sigsetjmp in\n> >> do_watch(). I've attached the patch for this fix.\n> \n> > Re-reading the top comment of sigint_interrupt_enabled, it looks like\n> > you're right here. As long as we check for cancel_pressed there\n> > should be no need for any special cancellation handling here.\n> \n> I don't have Windows here to test on, but does the WIN32 code\n> path work at all? It looks to me like cancel_pressed becoming\n> true doesn't get us to exit the outer loop, only the inner delay\n> one, meaning that trying to control-C out of a \\watch will just\n> cause it to repeat the command even faster. That path isn't\n> setting the \"done\" variable, and it isn't checking it either,\n> because all of that only happens in the other #ifdef arm.\n\nThe outer loop is eventually exited even because PSQLexecWatch returns 0\nwhen cancel_pressed = 0. However, it happens after executing an extra\nquery in this function not just after exit of the inner loop. Therefore,\nit would be better to adding set and check of \"done\" in WIN32, too.\n\nI've attached the updated patch (v2_remove_unnecessary_code_in_psql_watch.patch).\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Fri, 8 Mar 2024 14:22:52 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove unnecessary code from psql's watch command"
},
{
"msg_contents": "Yugo NAGATA <[email protected]> writes:\n> On Wed, 06 Mar 2024 13:03:39 -0500\n> Tom Lane <[email protected]> wrote:\n>> I don't have Windows here to test on, but does the WIN32 code\n>> path work at all?\n\n> The outer loop is eventually exited even because PSQLexecWatch returns 0\n> when cancel_pressed = 0. However, it happens after executing an extra\n> query in this function not just after exit of the inner loop. Therefore,\n> it would be better to adding set and check of \"done\" in WIN32, too.\n\nAh, I see now. Agreed, that could stand improvement.\n\n> I've attached the updated patch (v2_remove_unnecessary_code_in_psql_watch.patch).\n\nPushed with minor tidying.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Mar 2024 12:09:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove unnecessary code from psql's watch command"
},
{
"msg_contents": "On Fri, 08 Mar 2024 12:09:12 -0500\nTom Lane <[email protected]> wrote:\n\n> Yugo NAGATA <[email protected]> writes:\n> > On Wed, 06 Mar 2024 13:03:39 -0500\n> > Tom Lane <[email protected]> wrote:\n> >> I don't have Windows here to test on, but does the WIN32 code\n> >> path work at all?\n> \n> > The outer loop is eventually exited even because PSQLexecWatch returns 0\n> > when cancel_pressed = 0. However, it happens after executing an extra\n> > query in this function not just after exit of the inner loop. Therefore,\n> > it would be better to adding set and check of \"done\" in WIN32, too.\n> \n> Ah, I see now. Agreed, that could stand improvement.\n> \n> > I've attached the updated patch (v2_remove_unnecessary_code_in_psql_watch.patch).\n> \n> Pushed with minor tidying.\n\nThanks!\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Sat, 9 Mar 2024 11:57:36 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove unnecessary code from psql's watch command"
}
] |
[
{
"msg_contents": "Thanks to Jeff's recent work with commits 2af07e2 and 59825d1, the issue\nthat led to the revert of the MAINTAIN privilege and the pg_maintain\npredefined role (commit 151c22d) should now be resolved. Specifically,\nthere was a concern that roles with the MAINTAIN privilege could use\nsearch_path tricks to run arbitrary code as the table owner. Jeff's work\nprevents this by restricting search_path to a known safe value when running\nmaintenance commands. (This approach and others were discussed on the\nlists quite extensively, and it was also brought up at the developer\nmeeting at FOSDEM [0] earlier this year.)\n\nGiven this, I'd like to finally propose un-reverting MAINTAIN and\npg_maintain. I created a commitfest entry for this [1] a few weeks ago and\nattached it to Jeff's search_path thread, but I figured it would be good to\ncreate a dedicated thread for this, too. The attached patch is a straight\nrevert of commit 151c22d except for the following small changes:\n\n* The catversion bump has been removed for now. The catversion will need\n to be bumped appropriately if/when this is committed.\n\n* The OID for the pg_maintain predefined role needed to be changed. The\n original OID has been reused for something else since this feature was\n reverted.\n\n* The change in AdjustUpgrade.pm needed to be updated to check for\n \"$old_version < 17\" instead of \"$old_version < 16\".\n\nThoughts?\n\n[0] https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2024_Developer_Meeting#The_Path_to_un-reverting_the_MAINTAIN_privilege\n[1] https://commitfest.postgresql.org/47/4836/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 5 Mar 2024 10:12:35 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "un-revert the MAINTAIN privilege and the pg_maintain predefined role"
},
{
"msg_contents": "On Tue, Mar 05, 2024 at 10:12:35AM -0600, Nathan Bossart wrote:\n> Thanks to Jeff's recent work with commits 2af07e2 and 59825d1, the issue\n> that led to the revert of the MAINTAIN privilege and the pg_maintain\n> predefined role (commit 151c22d) should now be resolved. Specifically,\n> there was a concern that roles with the MAINTAIN privilege could use\n> search_path tricks to run arbitrary code as the table owner. Jeff's work\n> prevents this by restricting search_path to a known safe value when running\n> maintenance commands. (This approach and others were discussed on the\n> lists quite extensively, and it was also brought up at the developer\n> meeting at FOSDEM [0] earlier this year.)\n> \n> Given this, I'd like to finally propose un-reverting MAINTAIN and\n> pg_maintain. I created a commitfest entry for this [1] a few weeks ago and\n> attached it to Jeff's search_path thread, but I figured it would be good to\n> create a dedicated thread for this, too. The attached patch is a straight\n> revert of commit 151c22d except for the following small changes:\n> \n> * The catversion bump has been removed for now. The catversion will need\n> to be bumped appropriately if/when this is committed.\n> \n> * The OID for the pg_maintain predefined role needed to be changed. The\n> original OID has been reused for something else since this feature was\n> reverted.\n> \n> * The change in AdjustUpgrade.pm needed to be updated to check for\n> \"$old_version < 17\" instead of \"$old_version < 16\".\n\nGiven all of this code was previously reviewed and committed, I am planning\nto forge ahead and commit this early next week, provided no objections or\nadditional feedback materialize.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Mar 2024 10:50:00 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: un-revert the MAINTAIN privilege and the pg_maintain predefined\n role"
},
{
"msg_contents": "On Thu, Mar 07, 2024 at 10:50:00AM -0600, Nathan Bossart wrote:\n> Given all of this code was previously reviewed and committed, I am planning\n> to forge ahead and commit this early next week, provided no objections or\n> additional feedback materialize.\n\nJeff Davis and I spent some additional time looking at this patch. There\nare existing inconsistencies among the privilege checks for the various\nmaintenance commands, and the MAINTAIN privilege just builds on the status\nquo, with one exception. In the v1 patch, I proposed skipping privilege\nchecks when VACUUM recurses to TOAST tables, which means that a user may be\nable to process a TOAST table for which they've concurrent lost privileges\non the main relation (since each table is vacuumed in a separate\ntransaction). It's easy enough to resolve this inconsistency by sending\ndown the parent OID when recursing to a TOAST table and using that for the\nprivilege checks. AFAICT this avoids any kind of cache lookup hazards\nbecause we hold a session lock on the main relation in this case. I've\ndone this in the attached v2.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 12 Mar 2024 16:05:41 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: un-revert the MAINTAIN privilege and the pg_maintain predefined\n role"
},
{
"msg_contents": "On Tue, 2024-03-12 at 16:05 -0500, Nathan Bossart wrote:\n> It's easy enough to resolve this inconsistency by sending\n> down the parent OID when recursing to a TOAST table and using that\n> for the\n> privilege checks. AFAICT this avoids any kind of cache lookup\n> hazards\n> because we hold a session lock on the main relation in this case. \n> I've\n> done this in the attached v2.\n\nLooks good to me. Thank you for expanding on the comment, as well.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 13 Mar 2024 09:49:26 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: un-revert the MAINTAIN privilege and the pg_maintain predefined\n role"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 09:49:26AM -0700, Jeff Davis wrote:\n> Looks good to me. Thank you for expanding on the comment, as well.\n\nThanks for reviewing! Committed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Mar 2024 14:55:25 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: un-revert the MAINTAIN privilege and the pg_maintain predefined\n role"
}
] |
[
{
"msg_contents": "After reading the thread at [1], I could not escape the feeling\nthat contrib/tablefunc's error reporting is very confusing.\nLooking into the source code, I soon found that it is also\nvery inconsistent, with similar error reports being phrased\nquite differently. The terminology for column names doesn't\nmatch the SGML docs either. And there are some places that are\nso confused about whether they are complaining about the calling\nquery or the called query that the output is flat-out backwards.\nSo at that point my nascent OCD wouldn't let me rest without\nfixing it. Here's a quick patch series to do that.\n\nFor review purposes, I split this into two patches. 0001 simply\nadds some more test cases to reach currently-unexercised error\nreports. Then 0002 makes my proposed code changes and shows\nhow the existing error messages change.\n\nI'm not necessarily wedded to the phrasings I used here,\nin case anyone has better ideas.\n\nBTW, while I didn't touch it here, it seems fairly bogus that\nconnectby() checks both type OID and typmod for its output\ncolumns while crosstab() only checks type OID. I think\ncrosstab() is in the wrong and needs to be checking typmod.\nThat might be fit material for a separate patch though.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/DM4PR19MB597886696589C5CE33F5D58AD3222%40DM4PR19MB5978.namprd19.prod.outlook.com",
"msg_date": "Tue, 05 Mar 2024 17:04:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improving contrib/tablefunc's error reporting"
},
{
"msg_contents": "On 3/5/24 17:04, Tom Lane wrote:\n> After reading the thread at [1], I could not escape the feeling\n> that contrib/tablefunc's error reporting is very confusing.\n> Looking into the source code, I soon found that it is also\n> very inconsistent, with similar error reports being phrased\n> quite differently. The terminology for column names doesn't\n> match the SGML docs either. And there are some places that are\n> so confused about whether they are complaining about the calling\n> query or the called query that the output is flat-out backwards.\n> So at that point my nascent OCD wouldn't let me rest without\n> fixing it. Here's a quick patch series to do that.\n> \n> For review purposes, I split this into two patches. 0001 simply\n> adds some more test cases to reach currently-unexercised error\n> reports. Then 0002 makes my proposed code changes and shows\n> how the existing error messages change.\n> \n> I'm not necessarily wedded to the phrasings I used here,\n> in case anyone has better ideas.\n> \n> BTW, while I didn't touch it here, it seems fairly bogus that\n> connectby() checks both type OID and typmod for its output\n> columns while crosstab() only checks type OID. I think\n> crosstab() is in the wrong and needs to be checking typmod.\n> That might be fit material for a separate patch though.\n\nBeen a long time since I gave contrib/tablefunc any love I guess ;-)\n\nI will have a look at your patches, and the other issue you mention, but \nit might be a day or three before I can give it some quality time.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Tue, 5 Mar 2024 17:16:49 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving contrib/tablefunc's error reporting"
},
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> I will have a look at your patches, and the other issue you mention, but \n> it might be a day or three before I can give it some quality time.\n\nNo hurry certainly. Thanks for looking.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Mar 2024 17:31:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving contrib/tablefunc's error reporting"
},
{
"msg_contents": "On 3/5/24 17:04, Tom Lane wrote:\n> After reading the thread at [1], I could not escape the feeling\n> that contrib/tablefunc's error reporting is very confusing.\n> Looking into the source code, I soon found that it is also\n> very inconsistent, with similar error reports being phrased\n> quite differently. The terminology for column names doesn't\n> match the SGML docs either. And there are some places that are\n> so confused about whether they are complaining about the calling\n> query or the called query that the output is flat-out backwards.\n> So at that point my nascent OCD wouldn't let me rest without\n> fixing it. Here's a quick patch series to do that.\n> \n> For review purposes, I split this into two patches. 0001 simply\n> adds some more test cases to reach currently-unexercised error\n> reports. Then 0002 makes my proposed code changes and shows\n> how the existing error messages change.\n> \n> I'm not necessarily wedded to the phrasings I used here,\n> in case anyone has better ideas.\n\nThe changes all look good to me and indeed more consistent with the docs.\n\nDo you want me to push these? If so, development tip only, or backpatch?\n\n> BTW, while I didn't touch it here, it seems fairly bogus that\n> connectby() checks both type OID and typmod for its output\n> columns while crosstab() only checks type OID. I think\n> crosstab() is in the wrong and needs to be checking typmod.\n> That might be fit material for a separate patch though.\n\nI can take a look at this. Presumably this would not be for backpatching.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sat, 9 Mar 2024 12:56:19 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving contrib/tablefunc's error reporting"
},
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> On 3/5/24 17:04, Tom Lane wrote:\n>> After reading the thread at [1], I could not escape the feeling\n>> that contrib/tablefunc's error reporting is very confusing.\n\n> The changes all look good to me and indeed more consistent with the docs.\n> Do you want me to push these? If so, development tip only, or backpatch?\n\nI can push that. I was just thinking HEAD, we aren't big on changing\nerror reporting in back branches.\n\n>> BTW, while I didn't touch it here, it seems fairly bogus that\n>> connectby() checks both type OID and typmod for its output\n>> columns while crosstab() only checks type OID. I think\n>> crosstab() is in the wrong and needs to be checking typmod.\n>> That might be fit material for a separate patch though.\n\n> I can take a look at this. Presumably this would not be for backpatching.\n\nI'm not sure whether that could produce results bad enough to be\ncalled a bug or not. But I too would lean towards not back-patching,\nin view of the lack of field complaints.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 09 Mar 2024 13:07:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving contrib/tablefunc's error reporting"
},
{
"msg_contents": "On 3/9/24 13:07, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> On 3/5/24 17:04, Tom Lane wrote:\n>>> After reading the thread at [1], I could not escape the feeling\n>>> that contrib/tablefunc's error reporting is very confusing.\n> \n>> The changes all look good to me and indeed more consistent with the docs.\n>> Do you want me to push these? If so, development tip only, or backpatch?\n> \n> I can push that. I was just thinking HEAD, we aren't big on changing\n> error reporting in back branches.\n> \n>>> BTW, while I didn't touch it here, it seems fairly bogus that\n>>> connectby() checks both type OID and typmod for its output\n>>> columns while crosstab() only checks type OID. I think\n>>> crosstab() is in the wrong and needs to be checking typmod.\n>>> That might be fit material for a separate patch though.\n> \n>> I can take a look at this. Presumably this would not be for backpatching.\n> \n> I'm not sure whether that could produce results bad enough to be\n> called a bug or not. But I too would lean towards not back-patching,\n> in view of the lack of field complaints.\n\n\nSomething like the attached what you had in mind? (applies on top of \nyour two patches)\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 9 Mar 2024 14:58:09 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving contrib/tablefunc's error reporting"
},
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> On 3/9/24 13:07, Tom Lane wrote:\n>>> BTW, while I didn't touch it here, it seems fairly bogus that\n>>> connectby() checks both type OID and typmod for its output\n>>> columns while crosstab() only checks type OID. I think\n>>> crosstab() is in the wrong and needs to be checking typmod.\n>>> That might be fit material for a separate patch though.\n\n> Something like the attached what you had in mind? (applies on top of \n> your two patches)\n\nYeah, exactly.\n\nAs far as the comment change goes:\n\n- * - attribute [1] of the sql tuple is the category; no need to check it -\n- * attribute [2] of the sql tuple should match attributes [1] to [natts]\n+ * attribute [1] of the sql tuple is the category; no need to check it\n+ * attribute [2] of the sql tuple should match attributes [1] to [natts - 1]\n * of the return tuple\n\nI suspect that this block looked better when originally committed but\ndidn't survive contact with pgindent. You should check whether your\nversion will; if not, some dashes on the /* line will help.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 09 Mar 2024 15:39:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving contrib/tablefunc's error reporting"
},
{
"msg_contents": "On 3/9/24 15:39, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> On 3/9/24 13:07, Tom Lane wrote:\n>>>> BTW, while I didn't touch it here, it seems fairly bogus that\n>>>> connectby() checks both type OID and typmod for its output\n>>>> columns while crosstab() only checks type OID. I think\n>>>> crosstab() is in the wrong and needs to be checking typmod.\n>>>> That might be fit material for a separate patch though.\n> \n>> Something like the attached what you had in mind? (applies on top of \n>> your two patches)\n> \n> Yeah, exactly.\n> \n> As far as the comment change goes:\n> \n> - * - attribute [1] of the sql tuple is the category; no need to check it -\n> - * attribute [2] of the sql tuple should match attributes [1] to [natts]\n> + * attribute [1] of the sql tuple is the category; no need to check it\n> + * attribute [2] of the sql tuple should match attributes [1] to [natts - 1]\n> * of the return tuple\n> \n> I suspect that this block looked better when originally committed but\n> didn't survive contact with pgindent. You should check whether your\n> version will; if not, some dashes on the /* line will help.\n\nThanks for the review and heads up. I fiddled with it a bit to make it \npgindent clean. I saw you commit your patches so I just pushed mine.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sat, 9 Mar 2024 17:34:41 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving contrib/tablefunc's error reporting"
}
] |